hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dc78cb8eacd1e2ba7a7bcac0e6e1bf090076222c | 136 | py | Python | bot/cogs/__init__.py | zd4y/discordbot | 57432b4e577241058e02c609ca36eae4b52911dc | [
"MIT"
] | null | null | null | bot/cogs/__init__.py | zd4y/discordbot | 57432b4e577241058e02c609ca36eae4b52911dc | [
"MIT"
] | null | null | null | bot/cogs/__init__.py | zd4y/discordbot | 57432b4e577241058e02c609ca36eae4b52911dc | [
"MIT"
] | null | null | null | from .loops import Loops
from .listeners import Listeners
def setup(bot):
bot.add_cog(Listeners(bot))
bot.add_cog(Loops(bot))
| 17 | 32 | 0.727941 | 21 | 136 | 4.619048 | 0.428571 | 0.123711 | 0.185567 | 0.247423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161765 | 136 | 7 | 33 | 19.428571 | 0.850877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc7b2db35c01a18588a8b2bb95431a5df601ff78 | 2,960 | py | Python | src/newlist.py | Eandreas1857/dsgrn_acdc | cfbccbd6cc27ffa4b0bd570ffb4f206b2ca9705c | [
"MIT"
] | null | null | null | src/newlist.py | Eandreas1857/dsgrn_acdc | cfbccbd6cc27ffa4b0bd570ffb4f206b2ca9705c | [
"MIT"
] | null | null | null | src/newlist.py | Eandreas1857/dsgrn_acdc | cfbccbd6cc27ffa4b0bd570ffb4f206b2ca9705c | [
"MIT"
] | null | null | null |
import DSGRN
from copy import deepcopy
def Hb_high2low(network, paramslist):
g = deepcopy(paramslist)
pg = DSGRN.ParameterGraph(network)
new_start = []
for i in paramslist[0]:
params = pg.parameter(i[1])
s = params.logic()
b = s[0].stringify()
if b[6:-2] == 'F'*len(b[6:-2]):
new_start.append(i)
if new_start == []:
        print('Abs high not in list, computing next best thing')
for i in paramslist[0]:
params = pg.parameter(i[1])
s = params.logic()
b = s[0].stringify()
if 'F' in b[6:-2]:
new_start.append(i)
new_end = []
for i in paramslist[-1]:
params = pg.parameter(i[1])
s = params.logic()
b = s[0].stringify()
if b[6:-2] == '0'*len(b[6:-2]):
new_end.append(i)
    if new_end == []:
        print('Abs low not in list, computing next best thing')
        for i in paramslist[-1]:
params = pg.parameter(i[1])
s = params.logic()
b = s[0].stringify()
if b[6] == '0' or b[6] == '1':
new_end.append(i)
g[0] = new_start
g[-1] = new_end
return g
def Kni_low2high(network, paramslist):
g = deepcopy(paramslist)
pg = DSGRN.ParameterGraph(network)
new_start = []
for i in paramslist[0]:
params = pg.parameter(i[1])
s = params.logic()
b = s[3].stringify()
if b[6:-2] == '0'*len(b[6:-2]):
new_start.append(i)
if new_start == []:
        print('Abs low not in list, computing next best thing')
for i in paramslist[0]:
params = pg.parameter(i[1])
s = params.logic()
b = s[3].stringify()
if b[6] == '0' or b[6] == '1':
new_start.append(i)
new_end = []
for i in paramslist[-1]:
params = pg.parameter(i[1])
s = params.logic()
b = s[3].stringify()
if b[6:-2] == 'F'*len(b[6:-2]):
new_end.append(i)
    if new_end == []:
        print('Abs high not in list, computing next best thing')
        for i in paramslist[-1]:
params = pg.parameter(i[1])
s = params.logic()
b = s[3].stringify()
if 'F' in b[6:-2]:
new_end.append(i)
g[0] = new_start
g[-1] = new_end
return g
def newlist(network, paramslist):
Redu_Hb = Hb_high2low(network, paramslist)
Redu_Kni = Kni_low2high(network, Redu_Hb)
pg = DSGRN.ParameterGraph(network)
params1 = pg.parameter(((Redu_Kni[0])[0])[-1])
params2 = pg.parameter(((Redu_Kni[-1])[0])[-1])
print("Checking first layer:")
print(params1)
print("Checking last layer:")
print(params2)
return Redu_Kni
| 27.407407 | 68 | 0.486149 | 394 | 2,960 | 3.576142 | 0.142132 | 0.019872 | 0.021292 | 0.090845 | 0.779276 | 0.779276 | 0.779276 | 0.779276 | 0.770759 | 0.770759 | 0 | 0.038462 | 0.367568 | 2,960 | 107 | 69 | 27.663551 | 0.714209 | 0 | 0 | 0.835294 | 0 | 0 | 0.081502 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035294 | false | 0 | 0.023529 | 0 | 0.094118 | 0.094118 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc8c0bc40ee7b1e9ca7b074e86a0c305b7b1eb3d | 29 | py | Python | mmic/components/base/__init__.py | MolSSI/MMComponents | 691a0535d1d3c421bc2d9c38c41864554317bcd0 | [
"BSD-3-Clause"
] | 3 | 2021-02-20T22:29:24.000Z | 2021-08-08T05:40:16.000Z | mmic/components/base/__init__.py | MolSSI/MMComponents | 691a0535d1d3c421bc2d9c38c41864554317bcd0 | [
"BSD-3-Clause"
] | 2 | 2021-09-23T16:17:43.000Z | 2021-11-10T03:30:42.000Z | mmic/components/base/__init__.py | MolSSI/MMComponents | 691a0535d1d3c421bc2d9c38c41864554317bcd0 | [
"BSD-3-Clause"
] | 2 | 2021-04-09T23:05:42.000Z | 2021-10-09T14:27:19.000Z | from . import base_component
| 14.5 | 28 | 0.827586 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4b104ec04332aacfef12e2ea20d5bc68e97910a | 54,469 | py | Python | resources/robot/project_files/robot_drivers/hex_walker_data.py | ramk94/Thief_Policemen | 557701909a20f9a50c9bebed8532873a1910e599 | [
"MIT"
] | 3 | 2018-11-25T02:45:54.000Z | 2019-02-13T04:27:40.000Z | resources/robot/project_files/robot_drivers/hex_walker_data.py | ramk94/Thief_Policemen | 557701909a20f9a50c9bebed8532873a1910e599 | [
"MIT"
] | null | null | null | resources/robot/project_files/robot_drivers/hex_walker_data.py | ramk94/Thief_Policemen | 557701909a20f9a50c9bebed8532873a1910e599 | [
"MIT"
] | null | null | null | from leg_data import *
class Hex_Walker_Position(object): # Class name should be camelcase but I'll let it go
"""
    Object to store the positions of all legs for a desired stance. Also holds a list of all safe
    moves that the hexapod can make from this stance.
"""
def __init__(self, rf_pos, rm_pos, rr_pos, lr_pos, lm_pos, lf_pos, safe_move_list, description):
"""
:param rf_pos: Right front leg position
:param rm_pos: Right mid leg position
:param rr_pos: Right rear leg position
:param lr_pos: Left rear leg position
:param lm_pos: Left mid leg position
:param lf_pos: Left front leg position
:param safe_move_list: List of approved moves that won't damage the robot
:param description: Short description of the current stance
"""
self.rf_pos = rf_pos
self.rm_pos = rm_pos
self.rr_pos = rr_pos
self.lr_pos = lr_pos
self.lm_pos = lm_pos
self.lf_pos = lf_pos
self.safe_moves = safe_move_list
self.description = description
def __str__(self):
"""
        Simple function to assemble a console message when the hex walker position is printed
:return: String with the positions of each leg in clockwise order
"""
start_str = "--------------------------hex_walker position is------------------\n"
rf_str = "rf: " + str(self.rf_pos) + "\n"
rm_str = "rm: " + str(self.rm_pos) + "\n"
rr_str = "rr: " + str(self.rr_pos) + "\n"
lr_str = "lr: " + str(self.lr_pos) + "\n"
lm_str = "lm: " + str(self.lm_pos) + "\n"
lf_str = "lf: " + str(self.lf_pos) + "\n"
return start_str + rf_str + rm_str + rr_str + lr_str + lm_str + lf_str
# NOTE: I have left in repeated steps and simply commented them out.
# It helps for continuity and error checking since you can see the entire process
# Enumerated list of all possible hex_walker positions
# possible hex_walker positions during a tripod "walk" cycle
NORMAL_NEUTRAL = 1
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL = 2
NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD = 3
NORMAL_TRI_RIGHT_BACK_LEFT_FORWARD = 4
NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD = 5
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL = 6
NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK = 7
NORMAL_TRI_RIGHT_FORWARD_LEFT_BACK = 8
NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK = 9
# NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "rotate" cycle
# NORMAL_NEUTRAL
# NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
NORMAL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT = 10
NORMAL_TRI_RIGHT_RIGHT_LEFT_LEFT = 11
NORMAL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT = 12
# NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
NORMAL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT = 13
NORMAL_TRI_RIGHT_LEFT_LEFT_RIGHT = 14
NORMAL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT = 15
# NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "walk" cycle
CROUCH_NEUTRAL = 16
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL = 17
CROUCH_TRI_RIGHT_BACK_LEFT_UP_FORWARD = 18
CROUCH_TRI_RIGHT_BACK_LEFT_FORWARD = 19
CROUCH_TRI_RIGHT_UP_BACK_LEFT_FORWARD = 20
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL = 21
CROUCH_TRI_RIGHT_UP_FORWARD_LEFT_BACK = 22
CROUCH_TRI_RIGHT_FORWARD_LEFT_BACK = 23
CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK = 24
# CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "rotate" cycle
# CROUCH_NEUTRAL
# CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
CROUCH_TRI_RIGHT_RIGHT_LEFT_UP_LEFT = 25
CROUCH_TRI_RIGHT_RIGHT_LEFT_LEFT = 26
CROUCH_TRI_RIGHT_UP_RIGHT_LEFT_LEFT = 27
# CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
CROUCH_TRI_RIGHT_UP_LEFT_LEFT_RIGHT = 28
CROUCH_TRI_RIGHT_LEFT_LEFT_RIGHT = 29
CROUCH_TRI_RIGHT_LEFT_LEFT_UP_RIGHT = 30
# CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "walk" cycle
TALL_NEUTRAL = 31
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL = 32
TALL_TRI_RIGHT_BACK_LEFT_UP_FORWARD = 33
TALL_TRI_RIGHT_BACK_LEFT_FORWARD = 34
TALL_TRI_RIGHT_UP_BACK_LEFT_FORWARD = 35
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL = 36
TALL_TRI_RIGHT_UP_FORWARD_LEFT_BACK = 37
TALL_TRI_RIGHT_FORWARD_LEFT_BACK = 38
TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK = 39
# TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "rotate" cycle
# TALL_NEUTRAL
# TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
TALL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT = 40
TALL_TRI_RIGHT_RIGHT_LEFT_LEFT = 41
TALL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT = 42
# TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
TALL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT = 43
TALL_TRI_RIGHT_LEFT_LEFT_RIGHT = 44
TALL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT = 45
# TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
# possible hex_walker positions during a tripod "side walk" cycle
# "front" doesn't refer to the label on the robot. The front is just the side that the robot is moving towards.
# TALL_NEUTRAL = 46
TALL_TRI_FRONT_CENTER_UP_OUT_BACK_NEUTRAL = 46
TALL_TRI_FRONT_CENTER_OUT_BACK_UP_NEUTRAL = 47
TALL_TRI_FRONT_BACKWARDS_BACK_UP_NEUTRAL = 48
TALL_TRI_FRONT_BACKWARDS_BACK_NEUTRAL = 49
TALL_TRI_FRONT_UP_NEUTRAL_BACK_NEUTRAL = 50
TALL_TRI_FRONT_UP_NEUTRAL_BACK_BACKWARDS = 51
TALL_TRI_FRONT_NEUTRAL_BACK_BACKWARDS = 52
TALL_TRI_FRONT_NEUTRAL_BACK_UP_NEUTRAL = 53
# bounce position
TALL_TRI_BOUNCE_DOWN = 54
# Fine rotations
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_UP_LEFT = 55
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT = 56
TALL_TRI_FINE_RIGHT_UP_RIGHT_LEFT_LEFT = 57
# TALL_TRI_FINE_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
TALL_TRI_FINE_RIGHT_UP_LEFT_LEFT_RIGHT = 58
TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT = 59
TALL_TRI_FINE_RIGHT_LEFT_LEFT_UP_RIGHT = 60
# testing positions
FRONT_LEGS_UP = 1001
# these are all defines as hex_walker_position(rf, rm, rr, lr, lm, lf)
HEX_WALKER_POSITIONS = {
# Normal (standard height) walking positions the order that they need to execute
# 1
NORMAL_NEUTRAL:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[NORMAL_NEUTRAL, CROUCH_NEUTRAL,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_NEUTRAL
],
"normal neutral position",
),
# 2
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
[NORMAL_NEUTRAL,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
NORMAL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
NORMAL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT
],
"right is neutral, left is up",
),
# 3
NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_UP_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_OUT"],
[NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
NORMAL_TRI_RIGHT_BACK_LEFT_FORWARD
],
"right back, left up",
),
# 4
NORMAL_TRI_RIGHT_BACK_LEFT_FORWARD:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
[NORMAL_TRI_RIGHT_BACK_LEFT_FORWARD,
NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD
],
"right is back, left is forward",
),
# 5
NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_UP_LEFT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
[NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
NORMAL_TRI_RIGHT_BACK_LEFT_FORWARD
],
"right is up, left is forward",
),
# 6
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
NORMAL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
NORMAL_NEUTRAL,
NORMAL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
NORMAL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT
],
"right is up, left is neutral",
),
# 7
NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_IN"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_UP_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
[NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
NORMAL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
NORMAL_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is up, left is back",
),
# 8
NORMAL_TRI_RIGHT_FORWARD_LEFT_BACK:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
[NORMAL_TRI_RIGHT_FORWARD_LEFT_BACK,
NORMAL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK
],
"right is forward, left is back",
),
# 9
NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK:
Hex_Walker_Position(NORMAL_TRI_MOVEMENT_TABLE["CORN_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_UP_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_IN"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_OUT"],
NORMAL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
NORMAL_TRI_MOVEMENT_TABLE["CORN_UP_IN"],
[NORMAL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
NORMAL_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
NORMAL_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is forward, left is up",
),
# Normal rotation movements
# 10
NORMAL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
[NORMAL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
NORMAL_TRI_RIGHT_RIGHT_LEFT_LEFT
],
"right is right, left is up",
),
# 11
NORMAL_TRI_RIGHT_RIGHT_LEFT_LEFT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
[NORMAL_TRI_RIGHT_RIGHT_LEFT_LEFT,
NORMAL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
NORMAL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT
],
"right is right, left is left",
),
# 12
NORMAL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
[NORMAL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
NORMAL_TRI_RIGHT_RIGHT_LEFT_LEFT,
],
"right is up, left is left",
),
# 13
NORMAL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["UP_LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
[NORMAL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT,
NORMAL_TRI_RIGHT_LEFT_LEFT_RIGHT,
NORMAL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
],
"right is up, left is right",
),
# 14
NORMAL_TRI_RIGHT_LEFT_LEFT_RIGHT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["RIGHT"],
[NORMAL_TRI_RIGHT_LEFT_LEFT_RIGHT,
NORMAL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
NORMAL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT
],
"Right is left, left is right",
),
# 15
NORMAL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT:
Hex_Walker_Position(NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
NORMAL_TRI_ROTATION_TABLE["LEFT"],
NORMAL_TRI_ROTATION_TABLE["UP_RIGHT"],
[NORMAL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
NORMAL_TRI_RIGHT_LEFT_LEFT_RIGHT,
NORMAL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
],
"right is left, left is up",
),
# Crouch (low height) walking positions the order that they need to execute
# 16
CROUCH_NEUTRAL:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
[CROUCH_NEUTRAL, NORMAL_NEUTRAL,
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
],
"crouch neutral position",
),
# 17
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
[CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
CROUCH_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
CROUCH_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
CROUCH_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
CROUCH_NEUTRAL
],
"right is neutral, left is up",
),
# 18
CROUCH_TRI_RIGHT_BACK_LEFT_UP_FORWARD:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_UP_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_UP_OUT_RIGHT"],
[CROUCH_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
CROUCH_TRI_RIGHT_BACK_LEFT_FORWARD
],
"right neutral, left up",
),
# 19
CROUCH_TRI_RIGHT_BACK_LEFT_FORWARD:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
[CROUCH_TRI_RIGHT_BACK_LEFT_FORWARD,
CROUCH_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
CROUCH_TRI_RIGHT_BACK_LEFT_UP_FORWARD
],
"right is neutral, left is forward",
),
# 20
CROUCH_TRI_RIGHT_UP_BACK_LEFT_FORWARD:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_UP_OUT_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
[CROUCH_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
CROUCH_TRI_RIGHT_BACK_LEFT_FORWARD,
],
"right is up, left is forward",
),
# 21
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
[CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
CROUCH_TRI_RIGHT_UP_LEFT_LEFT_RIGHT,
CROUCH_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
CROUCH_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
CROUCH_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
CROUCH_NEUTRAL
],
"right is up, left is neutral",
),
# 22
CROUCH_TRI_RIGHT_UP_FORWARD_LEFT_BACK:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["CORN_UP_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_UP_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
[CROUCH_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
CROUCH_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is up, left is neutral",
),
# 23
CROUCH_TRI_RIGHT_FORWARD_LEFT_BACK:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
[CROUCH_TRI_RIGHT_FORWARD_LEFT_BACK,
CROUCH_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK
],
"right is forward, left is neutral",
),
# 24
CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK:
Hex_Walker_Position(CROUCH_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["NEUTRAL"],
CROUCH_TRI_MOVEMENT_TABLE["CORN_UP_OUT_LEFT"],
CROUCH_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
CROUCH_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
[CROUCH_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
CROUCH_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is forward, left is up",
),
# crouch rotation movements
# 25
CROUCH_TRI_RIGHT_RIGHT_LEFT_UP_LEFT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
[CROUCH_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
CROUCH_TRI_RIGHT_RIGHT_LEFT_LEFT
],
"right is right, left is up",
),
# 26
CROUCH_TRI_RIGHT_RIGHT_LEFT_LEFT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
[CROUCH_TRI_RIGHT_RIGHT_LEFT_LEFT,
CROUCH_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
CROUCH_TRI_RIGHT_RIGHT_LEFT_UP_LEFT
],
"right is right, left is left",
),
# 27
CROUCH_TRI_RIGHT_UP_RIGHT_LEFT_LEFT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
[CROUCH_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
CROUCH_TRI_RIGHT_RIGHT_LEFT_LEFT,
],
"right is up, left is left",
),
# 28
CROUCH_TRI_RIGHT_UP_LEFT_LEFT_RIGHT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["UP_LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
[CROUCH_TRI_RIGHT_UP_LEFT_LEFT_RIGHT,
CROUCH_TRI_RIGHT_LEFT_LEFT_RIGHT,
CROUCH_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
],
"right is up, left is right",
),
# 29
CROUCH_TRI_RIGHT_LEFT_LEFT_RIGHT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["RIGHT"],
[CROUCH_TRI_RIGHT_LEFT_LEFT_RIGHT,
CROUCH_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
CROUCH_TRI_RIGHT_UP_LEFT_LEFT_RIGHT
],
"Right is left, left is right",
),
# 30
CROUCH_TRI_RIGHT_LEFT_LEFT_UP_RIGHT:
Hex_Walker_Position(CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
CROUCH_TRI_ROTATION_TABLE["LEFT"],
CROUCH_TRI_ROTATION_TABLE["UP_RIGHT"],
[CROUCH_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
CROUCH_TRI_RIGHT_LEFT_LEFT_RIGHT,
CROUCH_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
],
"right is left, left is up",
),
# Tall (tall height) walking positions the order that they need to execute
# 31
TALL_NEUTRAL:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[TALL_NEUTRAL, NORMAL_NEUTRAL,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_FRONT_CENTER_UP_OUT_BACK_NEUTRAL,
TALL_TRI_BOUNCE_DOWN
],
"tall neutral position",
),
# 32
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
[TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
TALL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
TALL_NEUTRAL
],
"right is neutral, left is up",
),
# 33
TALL_TRI_RIGHT_BACK_LEFT_UP_FORWARD:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["SIDE_UP_LEFT"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_UP_OUT_RIGHT"],
[TALL_TRI_RIGHT_BACK_LEFT_UP_FORWARD,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
TALL_TRI_RIGHT_BACK_LEFT_FORWARD
],
"right is back, left is up",
),
# 34
TALL_TRI_RIGHT_BACK_LEFT_FORWARD:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
[TALL_TRI_RIGHT_BACK_LEFT_FORWARD,
TALL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
TALL_TRI_RIGHT_BACK_LEFT_UP_FORWARD
],
"right is back, left is forward",
),
# 35
TALL_TRI_RIGHT_UP_BACK_LEFT_FORWARD:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["SIDE_LEFT"],
TALL_TRI_MOVEMENT_TABLE["CORN_UP_OUT_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_RIGHT"],
[TALL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_RIGHT_BACK_LEFT_FORWARD,
],
"right is up, left is forward",
),
# 36
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
TALL_TRI_RIGHT_UP_BACK_LEFT_FORWARD,
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_UP_LEFT,
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT,
TALL_TRI_FINE_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_FINE_RIGHT_UP_LEFT_LEFT_RIGHT,
TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_FINE_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_NEUTRAL
],
"right is up, left is neutral",
),
# 37
TALL_TRI_RIGHT_UP_FORWARD_LEFT_BACK:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["CORN_UP_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["SIDE_UP_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is up, left is back",
),
# 38
TALL_TRI_RIGHT_FORWARD_LEFT_BACK:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_RIGHT_FORWARD_LEFT_BACK,
TALL_TRI_RIGHT_UP_FORWARD_LEFT_BACK,
TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK
],
"right is forward, left is back",
),
# 39
TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK:
Hex_Walker_Position(TALL_TRI_MOVEMENT_TABLE["CORN_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_MOVEMENT_TABLE["CORN_UP_OUT_LEFT"],
TALL_TRI_MOVEMENT_TABLE["SIDE_RIGHT"],
TALL_TRI_MOVEMENT_TABLE["UP_NEUTRAL"],
[TALL_TRI_RIGHT_FORWARD_LEFT_UP_BACK,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_RIGHT_FORWARD_LEFT_BACK
],
"right is forward, left is up",
),
# tall rotation movements
# 40
TALL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["UP_LEFT"],
[TALL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_RIGHT_RIGHT_LEFT_LEFT
],
"right is right, left is up",
),
# 41
TALL_TRI_RIGHT_RIGHT_LEFT_LEFT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
[TALL_TRI_RIGHT_RIGHT_LEFT_LEFT,
TALL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_RIGHT_RIGHT_LEFT_UP_LEFT
],
"right is right, left is left",
),
# 42
TALL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
[TALL_TRI_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_RIGHT_RIGHT_LEFT_LEFT,
],
"right is up, left is left",
),
# 43
TALL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
[TALL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
],
"right is up, left is right",
),
# 44
TALL_TRI_RIGHT_LEFT_LEFT_RIGHT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["RIGHT"],
[TALL_TRI_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_TRI_RIGHT_UP_LEFT_LEFT_RIGHT
],
"right is left, left is right",
),
# 45
TALL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT:
Hex_Walker_Position(TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_ROTATION_TABLE["LEFT"],
TALL_TRI_ROTATION_TABLE["UP_RIGHT"],
[TALL_TRI_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_TRI_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
],
"right is left, left is up",
),
# 46
TALL_TRI_FRONT_CENTER_UP_OUT_BACK_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["CENTER_UP_OUT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_FRONT_CENTER_OUT_BACK_UP_NEUTRAL
],
"front leg up-out, all others neutral",
),
# 47
TALL_TRI_FRONT_CENTER_OUT_BACK_UP_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["CENTER_OUT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
[TALL_TRI_FRONT_BACKWARDS_BACK_UP_NEUTRAL
],
"front leg out, all others neutral",
),
# 48
TALL_TRI_FRONT_BACKWARDS_BACK_UP_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_RIGHT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_LEFT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
[TALL_TRI_FRONT_BACKWARDS_BACK_NEUTRAL
],
"front legs back, all others up neutral",
),
# 49
TALL_TRI_FRONT_BACKWARDS_BACK_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_RIGHT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_LEFT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_FRONT_UP_NEUTRAL_BACK_NEUTRAL
],
"front legs back, all others neutral",
),
# 50
TALL_TRI_FRONT_UP_NEUTRAL_BACK_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
[TALL_TRI_FRONT_UP_NEUTRAL_BACK_BACKWARDS
],
"front legs up neutral, all others neutral",
),
# 51
TALL_TRI_FRONT_UP_NEUTRAL_BACK_BACKWARDS:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_RIGHT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["CENTER_OUT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_LEFT"],
[TALL_TRI_FRONT_NEUTRAL_BACK_BACKWARDS
],
"front legs up neutral, all others back",
),
# 52
TALL_TRI_FRONT_NEUTRAL_BACK_BACKWARDS:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_RIGHT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["CENTER_OUT"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["SIDE_OUT_LEFT"],
[TALL_TRI_FRONT_NEUTRAL_BACK_UP_NEUTRAL
],
"front legs neutral, all others back",
),
# 53
TALL_TRI_FRONT_NEUTRAL_BACK_UP_NEUTRAL:
Hex_Walker_Position(TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["NEUTRAL"],
TALL_TRI_SIDE_MOVEMENT_TABLE["UP_NEUTRAL"],
[TALL_NEUTRAL
],
"front legs neutral, all others up neutral",
),
# 54
TALL_TRI_BOUNCE_DOWN:
Hex_Walker_Position(MISC_TABLE["BOUNCE"],
MISC_TABLE["BOUNCE"],
MISC_TABLE["BOUNCE"],
MISC_TABLE["BOUNCE"],
MISC_TABLE["BOUNCE"],
MISC_TABLE["BOUNCE"],
[TALL_NEUTRAL
],
"crouched down from tall height",
),
# Fine rotations
# 55
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_UP_LEFT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
[TALL_TRI_FINE_RIGHT_RIGHT_LEFT_UP_LEFT,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL,
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT
],
"right is right, left is up",
),
# 56
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
[TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT,
TALL_TRI_FINE_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_UP_LEFT
],
"right is right, left is left",
),
# 57
TALL_TRI_FINE_RIGHT_UP_RIGHT_LEFT_LEFT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
[TALL_TRI_FINE_RIGHT_UP_RIGHT_LEFT_LEFT,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL,
TALL_TRI_FINE_RIGHT_RIGHT_LEFT_LEFT,
],
"right is up, left is left",
),
# 58
TALL_TRI_FINE_RIGHT_UP_LEFT_LEFT_RIGHT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
[TALL_TRI_FINE_RIGHT_UP_LEFT_LEFT_RIGHT,
TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_UP_NEUTRAL_LEFT_NEUTRAL
],
"right is up, left is right",
),
# 59
TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["RIGHT"],
[TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_FINE_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_TRI_FINE_RIGHT_UP_LEFT_LEFT_RIGHT
],
"right is left, left is right",
),
# 60
TALL_TRI_FINE_RIGHT_LEFT_LEFT_UP_RIGHT:
Hex_Walker_Position(TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
TALL_TRI_FINE_ROTATION_TABLE["LEFT"],
TALL_TRI_FINE_ROTATION_TABLE["UP_RIGHT"],
[TALL_TRI_FINE_RIGHT_LEFT_LEFT_UP_RIGHT,
TALL_TRI_FINE_RIGHT_LEFT_LEFT_RIGHT,
TALL_TRI_RIGHT_NEUTRAL_LEFT_UP_NEUTRAL
],
"right is left, left is up",
),
# past here are just positions that are used for testing.
# They can only be reached by __set_hex_walker_position direct calls
FRONT_LEGS_UP:
Hex_Walker_Position(Leg_Position(180, 180, 90),
NORMAL_TRI_ROTATION_TABLE["NEUTRAL"],
NORMAL_TRI_ROTATION_TABLE["NEUTRAL"],
NORMAL_TRI_ROTATION_TABLE["NEUTRAL"],
NORMAL_TRI_ROTATION_TABLE["NEUTRAL"],
Leg_Position(180, 180, 90),
[],
"front two legs are raised",
)
}
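Each `Hex_Walker_Position` entry above carries a list of positions the walker may legally move to next. A minimal, self-contained sketch of how such a transition table can drive sequence validation (integer stand-ins replace the real position constants, and `is_safe_sequence` is a hypothetical helper, not part of this module):

```python
# Hypothetical sketch: positions are plain ints, and SAFE_MOVES mirrors the
# "reachable positions" list that each Hex_Walker_Position carries above.
SAFE_MOVES = {
    0: [0, 1, 5],  # e.g. a neutral pose can reach itself, 1 and 5
    1: [1, 2, 0],
    2: [2, 1],
    5: [5, 0],
}

def is_safe_sequence(positions):
    """Return True if every consecutive pair of positions is a legal transition."""
    return all(b in SAFE_MOVES.get(a, []) for a, b in zip(positions, positions[1:]))

print(is_safe_sequence([0, 1, 2, 1, 0]))  # True
print(is_safe_sequence([0, 2]))           # False: 2 is not reachable from 0
```

The same stepwise check could gate each call that sets a new position, rejecting a move whose target is absent from the current position's list.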
# File: main.pyw (DavidddM/ReverseLookup)
from PythonGUI import get_form_root
from config import init_gui
form_root = get_form_root()
init_gui(form_root[0])
form_root[1].mainloop()
# File: great_expectations/rule_based_profiler/domain_builder/types/__init__.py (victorcouste/great_expectations)
from .domain import Domain, InferredSemanticDomainType, SemanticDomainTypes
# File: imtools/__init__.py (clavicule/periscope)
from .scissors import Scissors
# File: scripts/necklace/gui/__init__.py (r4inm4ker/neklace)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: Jefri Haryono
# @Email : jefri.yeh@gmail.com
def launch():
from necklace.gui import simple_ui
    simple_ui.launch()
# File: Code Examples/Z3 Examples/z3_quantified_unknown.py (codersguild/Formal-Methods)
from z3 import *
x, y, z = Reals('x y z')
m, n, l = Reals('m n l')
u, v = Ints('u v')
S = SolverFor("NRA")
S.add(x >= 0)
S.add(y >= 30, z <= 50)
S.add(m >= 5, n >= 5)
S.add(m * x + n * y + l > 300)
print(S.check())
print(S.model())
S.add(ForAll((u, v), Implies(m * u + n * v + l > 400, u + v + z <= 100)))
print(S.check())
print(S.reason_unknown())
print(S.sexpr())
S = SolverFor("NRA")
S.add(x >= 0)
S.add(y >= 30, z <= 50)
S.add(m >= 5, n >= 5)
S.add(m * x + n * y + l > 300)
S.add(ForAll([u, v], Implies(m * u + n * v + l > 300, u + v + z <= 50)))
print(S.check())
print(S.sexpr())
print(S.to_smt2())
"""
(set-logic ALL)
(set-option :produce-models true)
(declare-fun x () Real)
(declare-fun y () Real)
(declare-fun z () Real)
(declare-fun m () Real)
(declare-fun n () Real)
(declare-fun l () Real)
(assert (>= x 0.0))
(assert (>= y 30.0))
(assert (<= z 50.0))
(assert (>= m 5.0))
(assert (>= n 5.0))
(assert (not (<= (+ (* m x) (* n y) l) 300.0)))
(assert (forall ((u Int) (v Int))
(let ((a!1 (<= (+ (* m (to_real u)) (* n (to_real v)) l) 300.0)))
(or (<= (+ (to_real u) (to_real v) z) 50.0) a!1))))
(check-sat)
(get-model)
"""
"""
(set-logic ALL)
(set-option :produce-models true)
(declare-fun x () Real)
(declare-fun y () Real)
(declare-fun z () Real)
(declare-fun m () Real)
(declare-fun n () Real)
(declare-fun l () Real)
(assert
(>= x 0.0))
(assert
(>= y 30.0))
(assert
(<= z 50.0))
(assert
(>= m 5.0))
(assert
(>= n 5.0))
(assert
(not (<= (+ (* m x) (* n y) l) 300.0)))
(assert
(forall ((u Int) (v Int) )(or (<= (+ (to_real u) (to_real v) z) 50.0) (<= (+ (* m (to_real u)) (* n (to_real v)) l) 300.0)))
)
(check-sat)
(get-model)
"""
# File: almanak/_helpers/__init__.py (clausjuhl/almanak)
from .response_handlers import response_handler
# File: ku/backend_ext/__init__.py (tonandr/keras_unsupervised)
from .tensorflow_backend import pad
from .tensorflow_backend import transpose
from .tensorflow_backend import multivariate_normal_diag
from .tensorflow_backend import where
from .tensorflow_backend import cond
from .tensorflow_backend import broadcast_to
from .tensorflow_backend import add_n
# File: suites/API/NetworkBroadcastApi/BroadcastTransactionWithCallback.py (echoprotocol/pytests)
# -*- coding: utf-8 -*-
import json
from common.base_test import BaseTest
import lemoncheesecake.api as lcc
from lemoncheesecake.matching import check_that, check_that_in, equal_to, is_none
SUITE = {
"description": "Method 'broadcast_transaction_with_callback'"
}
@lcc.prop("main", "type")
@lcc.prop("negative", "type")
@lcc.tags("api", "network_broadcast_api", "broadcast_transaction_with_callback")
@lcc.suite("Check work of method 'broadcast_transaction_with_callback'", rank=1)
class BroadcastTransactionWithCallback(BaseTest):
def __init__(self):
super().__init__()
self.__database_api_identifier = None
self.__registration_api_identifier = None
self.__network_broadcast_identifier = None
self.echo_acc0 = None
def setup_suite(self):
super().setup_suite()
self._connect_to_echopy_lib()
lcc.set_step("Setup for {}".format(self.__class__.__name__))
self.__database_api_identifier = self.get_identifier("database")
self.__registration_api_identifier = self.get_identifier("registration")
self.__network_broadcast_identifier = self.get_identifier("network_broadcast")
lcc.log_info(
"API identifiers are: database='{}', registration='{}', network_broadcast='{}'".format(
self.__database_api_identifier, self.__registration_api_identifier, self.__network_broadcast_identifier
)
)
self.echo_acc0 = self.get_account_id(
self.accounts[0], self.__database_api_identifier, self.__registration_api_identifier
)
lcc.log_info("Echo account are: '{}'".format(self.echo_acc0))
def setup_test(self, test):
lcc.set_step("Setup for '{}'".format(str(test).split(".")[-1]))
self.utils.cancel_all_subscriptions(self, self.__database_api_identifier)
lcc.log_info("Canceled all subscriptions successfully")
def teardown_test(self, test, status):
lcc.set_step("Teardown for '{}'".format(str(test).split(".")[-1]))
self.utils.cancel_all_subscriptions(self, self.__database_api_identifier)
lcc.log_info("Canceled all subscriptions successfully")
lcc.log_info("Test {}".format(status))
def teardown_suite(self):
self._disconnect_to_echopy_lib()
super().teardown_suite()
@lcc.prop("type", "method")
@lcc.test("Simple work of method 'broadcast_transaction_with_callback'")
def method_main_check(self, get_random_integer, get_random_integer_up_to_ten, get_random_valid_account_name):
subscription_callback_id = get_random_integer
transfer_amount = get_random_integer_up_to_ten
account_names = get_random_valid_account_name
lcc.set_step("Create new account")
account_id = self.get_account_id(
account_names, self.__database_api_identifier, self.__registration_api_identifier
)
lcc.log_info("New Echo account created, account_id='{}'".format(account_id))
lcc.set_step("Create signed transaction of transfer operation")
transfer_operation = self.echo_ops.get_transfer_operation(
echo=self.echo, from_account_id=self.echo_acc0, amount=transfer_amount, to_account_id=account_id
)
collected_operation = self.collect_operations(transfer_operation, self.__database_api_identifier)
signed_tx = self.echo_ops.broadcast(echo=self.echo, list_operations=collected_operation, no_broadcast=True)
lcc.log_info("Signed transaction of 'transfer_operation' created successfully")
lcc.set_step("Get account balance before transfer transaction broadcast")
response_id = self.send_request(
self.get_request("get_account_balances", [account_id, [self.echo_asset]]), self.__database_api_identifier
)
account_balance = self.get_response(response_id)["result"][0]["amount"]
lcc.log_info("'{}' account has '{}' in '{}' assets".format(account_id, account_balance, self.echo_asset))
lcc.set_step("Broadcast transaction by calling method 'broadcast_transaction_with_callback'")
params = [subscription_callback_id, signed_tx]
response_id = self.send_request(
self.get_request("broadcast_transaction_with_callback", params), self.__network_broadcast_identifier
)
response = self.get_response(response_id)
check_that("'broadcast_transaction_with_callback' result", response["result"], is_none(), quiet=True)
lcc.set_step("Get account balance after transfer transaction broadcast")
self.produce_block(self.__database_api_identifier)
response_id = self.send_request(
self.get_request("get_account_balances", [account_id, [self.echo_asset]]), self.__database_api_identifier
)
updated_account_balance = self.get_response(response_id)["result"][0]["amount"]
lcc.log_info(
"'{}' account has '{}' in '{}' assets".format(account_id, updated_account_balance, self.echo_asset)
)
lcc.set_step("Check that transfer operation completed successfully")
check_that(
"account balance increased by transfered amount", updated_account_balance - account_balance,
equal_to(transfer_amount)
)
@lcc.prop("negative", "type")
@lcc.tags("api", "network_broadcast_api", "broadcast_transaction_with_callback")
@lcc.suite("Negative testing of method 'broadcast_transaction_with_callback'", rank=3)
class NegativeTesting(BaseTest):
def __init__(self):
super().__init__()
self.__database_api_identifier = None
self.__registration_api_identifier = None
self.__network_broadcast_identifier = None
self.echo_acc0 = None
def setup_suite(self):
super().setup_suite()
self._connect_to_echopy_lib()
lcc.set_step("Setup for {}".format(self.__class__.__name__))
self.__database_api_identifier = self.get_identifier("database")
self.__registration_api_identifier = self.get_identifier("registration")
self.__network_broadcast_identifier = self.get_identifier("network_broadcast")
lcc.log_info(
"API identifiers are: database='{}', registration='{}', network_broadcast='{}'".format(
self.__database_api_identifier, self.__registration_api_identifier, self.__network_broadcast_identifier
)
)
self.echo_acc0 = self.get_account_id(
self.accounts[0], self.__database_api_identifier, self.__registration_api_identifier
)
lcc.log_info("Echo account are: '{}'".format(self.echo_acc0))
def setup_test(self, test):
lcc.set_step("Setup for '{}'".format(str(test).split(".")[-1]))
self.utils.cancel_all_subscriptions(self, self.__database_api_identifier)
lcc.log_info("Canceled all subscriptions successfully")
def teardown_test(self, test, status):
lcc.set_step("Teardown for '{}'".format(str(test).split(".")[-1]))
self.utils.cancel_all_subscriptions(self, self.__database_api_identifier)
lcc.log_info("Canceled all subscriptions successfully")
lcc.log_info("Test {}".format(status))
def teardown_suite(self):
self._disconnect_to_echopy_lib()
super().teardown_suite()
def get_error_message(self, response_id, debug_mode=False, log_response=False):
try:
response = self.get_response(response_id, debug_mode, log_response)
return response
except Exception as e:
ans = json.loads(str(e)[26:], strict=False)
return ans["error"]["message"]
def get_error_message_callback(self, response_id, debug_mode=False, log_response=False):
try:
null_response = self.get_error_message(response_id, debug_mode, log_response)
error_notice = self.get_notice(None, debug_mode, log_response)
return null_response, error_notice
except Exception as e:
ans = json.loads(str(e)[26:], strict=False)
return ans["error"]["message"]
@lcc.prop("type", "method")
@lcc.test("Negative test 'broadcast_transaction_with_callback' with wrong signature")
@lcc.depends_on(
"API.NetworkBroadcastApi.BroadcastTransactionWithCallback.BroadcastTransactionWithCallback.method_main_check"
)
def check_broadcast_transaction_with_callback_with_wrong_signature(
self, get_random_integer, get_random_integer_up_to_ten, get_random_valid_account_name
):
subscription_callback_id = get_random_integer
transfer_amount = get_random_integer_up_to_ten
expected_message = "irrelevant signature included: Unnecessary signature(s) detected"
account_names = get_random_valid_account_name
lcc.set_step("Create new account")
account_id = self.get_account_id(
account_names, self.__database_api_identifier, self.__registration_api_identifier
)
lcc.log_info("New Echo account created, account_id='{}'".format(account_id))
lcc.set_step("Create signed transaction of transfer operation")
transfer_operation = self.echo_ops.get_transfer_operation(
echo=self.echo,
from_account_id=self.echo_acc0,
amount=transfer_amount,
to_account_id=account_id,
signer=account_id
)
collected_operation = self.collect_operations(transfer_operation, self.__database_api_identifier)
signed_tx = self.echo_ops.broadcast(echo=self.echo, list_operations=collected_operation, no_broadcast=True)
lcc.log_info("Signed transaction of 'transfer_operation' with wrong signer created successfully")
lcc.set_step("Broadcast signed transfer transaction to get error message")
params = [subscription_callback_id, signed_tx]
response_id = self.send_request(
self.get_request("broadcast_transaction_with_callback", params), self.__network_broadcast_identifier
)
error_message = self.get_error_message(response_id)
check_that("message", error_message, equal_to(expected_message))
@lcc.prop("type", "method")
@lcc.test("Negative test 'broadcast_transaction_with_callback' with wrong expiration time")
@lcc.depends_on(
"API.NetworkBroadcastApi.BroadcastTransactionWithCallback.BroadcastTransactionWithCallback.method_main_check"
)
def check_broadcast_transaction_with_callback_with_wrong_expiration_time(
self, get_random_integer, get_random_integer_up_to_ten, get_random_valid_account_name
):
subscription_callback_id = get_random_integer
transfer_amount = get_random_integer_up_to_ten
expiration_time_offset = 500
expected_message = "Assert Exception: now <= trx.expiration: "
account_names = get_random_valid_account_name
lcc.set_step("Create new account")
account_id = self.get_account_id(
account_names, self.__database_api_identifier, self.__registration_api_identifier
)
lcc.log_info("New Echo account created, account_id='{}'".format(account_id))
lcc.set_step("Create signed transaction of transfer operation")
transfer_operation = self.echo_ops.get_transfer_operation(
echo=self.echo, from_account_id=self.echo_acc0, amount=transfer_amount, to_account_id=account_id
)
collected_operation = self.collect_operations(transfer_operation, self.__database_api_identifier)
datetime_str = self.get_datetime(global_datetime=True)
datetime_str = self.subtract_from_datetime(datetime_str, seconds=expiration_time_offset)
signed_tx = self.echo_ops.broadcast(
echo=self.echo, list_operations=collected_operation, expiration=datetime_str, no_broadcast=True
)
lcc.log_info("Signed transaction of 'transfer_operation' with expiration time offset created successfully")
lcc.set_step("Broadcast signed transfer transaction to get error message")
params = [subscription_callback_id, signed_tx]
response_id = self.send_request(
self.get_request("broadcast_transaction_with_callback", params), self.__network_broadcast_identifier
)
null_response, error_notice = self.get_error_message_callback(response_id, False, False)
check_that_in(null_response, "id", equal_to(response_id), "result", is_none(), quiet=False)
error_string = "{}: {}".format(error_notice[1][0]['message'], error_notice[1][0]['stack'][0]['format'])
check_that("broadcast with callback error notice format", error_string, equal_to(expected_message), quiet=False)

# Statistics/StandardDeviation.py (dahliamusa/statsCalculator, MIT)
from math import pow
from Statistics.Variance import variance
def stdev(data):
    return pow(variance(data), 0.5)
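As a quick standalone check, the sketch below inlines a population-variance helper (an assumption — the real `Statistics.Variance.variance` may use a different definition, e.g. the sample variance) so the example is self-contained:

```python
from math import pow

def variance(data):
    # Population variance: mean squared deviation from the mean
    # (assumed to match what Statistics.Variance.variance computes).
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

def stdev(data):
    # Standard deviation is the square root of the variance.
    return pow(variance(data), 0.5)

print(stdev([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```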

# story/story_module.py (chimpdude2/pyrpg, MIT)
class StoryModule:
    # Python has no method overloading: the second `executeModule` definition
    # would silently replace the first, so the two variants are merged into one
    # method with an optional `character` argument.
    def executeModule(self, character=None):
        print()

# tmp.py (skokal01/Interview-Practice, Apache-2.0)
arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in range(10, 4, -1):  # indices 10 down to 5, inclusive
    print(arr[i])
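The same traversal can be written with a reversed slice, which is the more idiomatic form:

```python
arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# arr[10:4:-1] walks indices 10 down to 5 (the stop index 4 is excluded),
# matching the explicit index loop above.
tail = arr[10:4:-1]
print(tail)  # [10, 9, 8, 7, 6, 5]
```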

# allel/stats/diversity.py (yakkoroma/scikit-allel, MIT)
# -*- coding: utf-8 -*-
from __future__ import absolute_import, print_function, division
import logging
import numpy as np
from allel.model.ndarray import SortedIndex, AlleleCountsArray
from allel.model.util import locate_fixed_differences
from allel.util import asarray_ndim, ignore_invalid, check_dim0_aligned, \
ensure_dim1_aligned
from allel.stats.window import windowed_statistic, per_base, moving_statistic
logger = logging.getLogger(__name__)
debug = logger.debug
def mean_pairwise_difference(ac, an=None, fill=np.nan):
"""Calculate for each variant the mean number of pairwise differences
between chromosomes sampled from within a single population.
Parameters
----------
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
an : array_like, int, shape (n_variants,), optional
Allele numbers. If not provided, will be calculated from `ac`.
fill : float
Use this value where there are no pairs to compare (e.g.,
all allele calls are missing).
Returns
-------
mpd : ndarray, float, shape (n_variants,)
Notes
-----
The values returned by this function can be summed over a genome
region and divided by the number of accessible bases to estimate
nucleotide diversity, a.k.a. *pi*.
Examples
--------
>>> import allel
>>> h = allel.HaplotypeArray([[0, 0, 0, 0],
... [0, 0, 0, 1],
... [0, 0, 1, 1],
... [0, 1, 1, 1],
... [1, 1, 1, 1],
... [0, 0, 1, 2],
... [0, 1, 1, 2],
... [0, 1, -1, -1]])
>>> ac = h.count_alleles()
>>> allel.mean_pairwise_difference(ac)
array([0. , 0.5 , 0.66666667, 0.5 , 0. ,
0.83333333, 0.83333333, 1. ])
See Also
--------
sequence_diversity, windowed_diversity
"""
# This function calculates the mean number of pairwise differences
# between haplotypes within a single population, generalising to any number
# of alleles.
# check inputs
ac = asarray_ndim(ac, 2)
# total number of haplotypes
if an is None:
an = np.sum(ac, axis=1)
else:
an = asarray_ndim(an, 1)
check_dim0_aligned(ac, an)
# total number of pairwise comparisons for each variant:
# (an choose 2)
n_pairs = an * (an - 1) / 2
# number of pairwise comparisons where there is no difference:
# sum of (ac choose 2) for each allele (i.e., number of ways to
# choose the same allele twice)
n_same = np.sum(ac * (ac - 1) / 2, axis=1)
# number of pairwise differences
n_diff = n_pairs - n_same
# mean number of pairwise differences, accounting for cases where
# there are no pairs
with ignore_invalid():
mpd = np.where(n_pairs > 0, n_diff / n_pairs, fill)
return mpd
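The (an choose 2) / (ac choose 2) counting above can be sanity-checked against a brute-force comparison of all chromosome pairs at a single variant; the haplotype calls below are taken from the docstring example:

```python
import numpy as np
from itertools import combinations

# Haplotype calls at one variant: alleles carried by 4 chromosomes.
haps = [0, 0, 1, 2]

# Brute force: compare every unordered pair of chromosomes.
pairs = list(combinations(haps, 2))
brute = sum(a != b for a, b in pairs) / len(pairs)

# Closed form used above: (an choose 2) total pairs, minus the sum of
# (ac choose 2) same-allele pairs over alleles.
ac = np.bincount(haps)              # allele counts: [2, 1, 1]
an = ac.sum()                       # 4 chromosomes
n_pairs = an * (an - 1) / 2         # 6 pairwise comparisons
n_same = np.sum(ac * (ac - 1) / 2)  # 1 identical pair (the two 0 alleles)
formula = (n_pairs - n_same) / n_pairs

print(brute, formula)  # both 5/6, as in the docstring output (0.83333333)
```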
def mean_pairwise_difference_between(ac1, ac2, an1=None, an2=None,
fill=np.nan):
"""Calculate for each variant the mean number of pairwise differences
between chromosomes sampled from two different populations.
Parameters
----------
ac1 : array_like, int, shape (n_variants, n_alleles)
Allele counts array from the first population.
ac2 : array_like, int, shape (n_variants, n_alleles)
Allele counts array from the second population.
an1 : array_like, int, shape (n_variants,), optional
Allele numbers for the first population. If not provided, will be
calculated from `ac1`.
an2 : array_like, int, shape (n_variants,), optional
Allele numbers for the second population. If not provided, will be
calculated from `ac2`.
fill : float
Use this value where there are no pairs to compare (e.g.,
all allele calls are missing).
Returns
-------
mpd : ndarray, float, shape (n_variants,)
Notes
-----
The values returned by this function can be summed over a genome
region and divided by the number of accessible bases to estimate
nucleotide divergence between two populations, a.k.a. *Dxy*.
Examples
--------
>>> import allel
>>> h = allel.HaplotypeArray([[0, 0, 0, 0],
... [0, 0, 0, 1],
... [0, 0, 1, 1],
... [0, 1, 1, 1],
... [1, 1, 1, 1],
... [0, 0, 1, 2],
... [0, 1, 1, 2],
... [0, 1, -1, -1]])
>>> ac1 = h.count_alleles(subpop=[0, 1])
>>> ac2 = h.count_alleles(subpop=[2, 3])
>>> allel.mean_pairwise_difference_between(ac1, ac2)
array([0. , 0.5 , 1. , 0.5 , 0. , 1. , 0.75, nan])
See Also
--------
sequence_divergence, windowed_divergence
"""
# This function calculates the mean number of pairwise differences
# between haplotypes from two different populations, generalising to any
# number of alleles.
# check inputs
ac1 = asarray_ndim(ac1, 2)
ac2 = asarray_ndim(ac2, 2)
check_dim0_aligned(ac1, ac2)
ac1, ac2 = ensure_dim1_aligned(ac1, ac2)
# total number of haplotypes sampled from each population
if an1 is None:
an1 = np.sum(ac1, axis=1)
else:
an1 = asarray_ndim(an1, 1)
check_dim0_aligned(ac1, an1)
if an2 is None:
an2 = np.sum(ac2, axis=1)
else:
an2 = asarray_ndim(an2, 1)
check_dim0_aligned(ac2, an2)
# total number of pairwise comparisons for each variant
n_pairs = an1 * an2
# number of pairwise comparisons where there is no difference:
# sum of (ac1 * ac2) for each allele (i.e., number of ways to
# choose the same allele twice)
n_same = np.sum(ac1 * ac2, axis=1)
# number of pairwise differences
n_diff = n_pairs - n_same
# mean number of pairwise differences, accounting for cases where
# there are no pairs
with ignore_invalid():
mpd = np.where(n_pairs > 0, n_diff / n_pairs, fill)
return mpd
def sequence_diversity(pos, ac, start=None, stop=None,
is_accessible=None):
"""Estimate nucleotide diversity within a given region, which is the
average proportion of sites (including monomorphic sites not present in the
data) that differ between randomly chosen pairs of chromosomes.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
start : int, optional
The position at which to start (1-based). Defaults to the first position.
stop : int, optional
The position at which to stop (1-based). Defaults to the last position.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
Returns
-------
pi : ndarray, float, shape (n_windows,)
Nucleotide diversity.
Notes
-----
If start and/or stop are not provided, uses the difference between the last
and the first position as a proxy for the total number of sites, which can
overestimate the sequence diversity.
Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
>>> pi = allel.sequence_diversity(pos, ac, start=1, stop=31)
>>> pi
0.13978494623655915
"""
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
ac = asarray_ndim(ac, 2)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
# deal with subregion
if start is not None or stop is not None:
loc = pos.locate_range(start, stop)
pos = pos[loc]
ac = ac[loc]
if start is None:
start = pos[0]
if stop is None:
stop = pos[-1]
# calculate mean pairwise difference
mpd = mean_pairwise_difference(ac, fill=0)
# sum differences over variants
mpd_sum = np.sum(mpd)
# calculate value per base
if is_accessible is None:
n_bases = stop - start + 1
else:
n_bases = np.count_nonzero(is_accessible[start-1:stop])
pi = mpd_sum / n_bases
return pi
def sequence_divergence(pos, ac1, ac2, an1=None, an2=None, start=None,
stop=None, is_accessible=None):
"""Estimate nucleotide divergence between two populations within a
given region, which is the average proportion of sites (including
monomorphic sites not present in the data) that differ between randomly
chosen pairs of chromosomes, one from each population.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac1 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the first population.
ac2 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the second population.
an1 : array_like, int, shape (n_variants,), optional
Allele numbers for the first population. If not provided, will be
calculated from `ac1`.
an2 : array_like, int, shape (n_variants,), optional
Allele numbers for the second population. If not provided, will be
calculated from `ac2`.
start : int, optional
The position at which to start (1-based). Defaults to the first position.
stop : int, optional
The position at which to stop (1-based). Defaults to the last position.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
Returns
-------
Dxy : ndarray, float, shape (n_windows,)
Nucleotide divergence.
Examples
--------
Simplest case, two haplotypes in each population::
>>> import allel
>>> h = allel.HaplotypeArray([[0, 0, 0, 0],
... [0, 0, 0, 1],
... [0, 0, 1, 1],
... [0, 1, 1, 1],
... [1, 1, 1, 1],
... [0, 0, 1, 2],
... [0, 1, 1, 2],
... [0, 1, -1, -1],
... [-1, -1, -1, -1]])
>>> ac1 = h.count_alleles(subpop=[0, 1])
>>> ac2 = h.count_alleles(subpop=[2, 3])
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
    >>> dxy = allel.sequence_divergence(pos, ac1, ac2, start=1, stop=31)
>>> dxy
0.12096774193548387
"""
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
ac1 = asarray_ndim(ac1, 2)
ac2 = asarray_ndim(ac2, 2)
if an1 is not None:
an1 = asarray_ndim(an1, 1)
if an2 is not None:
an2 = asarray_ndim(an2, 1)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
# handle start/stop
if start is not None or stop is not None:
loc = pos.locate_range(start, stop)
pos = pos[loc]
ac1 = ac1[loc]
ac2 = ac2[loc]
if an1 is not None:
an1 = an1[loc]
if an2 is not None:
an2 = an2[loc]
if start is None:
start = pos[0]
if stop is None:
stop = pos[-1]
# calculate mean pairwise difference between the two populations
mpd = mean_pairwise_difference_between(ac1, ac2, an1=an1, an2=an2, fill=0)
# sum differences over variants
mpd_sum = np.sum(mpd)
# calculate value per base, N.B., expect pos is 1-based
if is_accessible is None:
n_bases = stop - start + 1
else:
n_bases = np.count_nonzero(is_accessible[start-1:stop])
dxy = mpd_sum / n_bases
return dxy
def windowed_diversity(pos, ac, size=None, start=None, stop=None, step=None,
windows=None, is_accessible=None, fill=np.nan):
"""Estimate nucleotide diversity in windows over a single
chromosome/contig.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
size : int, optional
The window size (number of bases).
start : int, optional
The position at which to start (1-based).
stop : int, optional
The position at which to stop (1-based).
step : int, optional
The distance between start positions of windows. If not given,
defaults to the window size, i.e., non-overlapping windows.
windows : array_like, int, shape (n_windows, 2), optional
Manually specify the windows to use as a sequence of (window_start,
window_stop) positions, using 1-based coordinates. Overrides the
size/start/stop/step parameters.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
fill : object, optional
The value to use where a window is completely inaccessible.
Returns
-------
pi : ndarray, float, shape (n_windows,)
Nucleotide diversity in each window.
windows : ndarray, int, shape (n_windows, 2)
The windows used, as an array of (window_start, window_stop) positions,
using 1-based coordinates.
n_bases : ndarray, int, shape (n_windows,)
Number of (accessible) bases in each window.
counts : ndarray, int, shape (n_windows,)
Number of variants in each window.
Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
>>> pi, windows, n_bases, counts = allel.windowed_diversity(
... pos, ac, size=10, start=1, stop=31
... )
>>> pi
array([0.11666667, 0.21666667, 0.09090909])
>>> windows
array([[ 1, 10],
[11, 20],
[21, 31]])
>>> n_bases
array([10, 10, 11])
>>> counts
array([3, 4, 2])
"""
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
# calculate mean pairwise difference
mpd = mean_pairwise_difference(ac, fill=0)
# sum differences in windows
mpd_sum, windows, counts = windowed_statistic(
pos, values=mpd, statistic=np.sum, size=size, start=start, stop=stop,
step=step, windows=windows, fill=0
)
# calculate value per base
pi, n_bases = per_base(mpd_sum, windows, is_accessible=is_accessible,
fill=fill)
return pi, windows, n_bases, counts
def windowed_divergence(pos, ac1, ac2, size=None, start=None, stop=None,
step=None, windows=None, is_accessible=None,
fill=np.nan):
"""Estimate nucleotide divergence between two populations in windows
over a single chromosome/contig.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac1 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the first population.
ac2 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the second population.
size : int, optional
The window size (number of bases).
start : int, optional
The position at which to start (1-based).
stop : int, optional
The position at which to stop (1-based).
step : int, optional
The distance between start positions of windows. If not given,
defaults to the window size, i.e., non-overlapping windows.
windows : array_like, int, shape (n_windows, 2), optional
Manually specify the windows to use as a sequence of (window_start,
window_stop) positions, using 1-based coordinates. Overrides the
size/start/stop/step parameters.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
fill : object, optional
The value to use where a window is completely inaccessible.
Returns
-------
Dxy : ndarray, float, shape (n_windows,)
Nucleotide divergence in each window.
windows : ndarray, int, shape (n_windows, 2)
The windows used, as an array of (window_start, window_stop) positions,
using 1-based coordinates.
n_bases : ndarray, int, shape (n_windows,)
Number of (accessible) bases in each window.
counts : ndarray, int, shape (n_windows,)
Number of variants in each window.
Examples
--------
Simplest case, two haplotypes in each population::
>>> import allel
>>> h = allel.HaplotypeArray([[0, 0, 0, 0],
... [0, 0, 0, 1],
... [0, 0, 1, 1],
... [0, 1, 1, 1],
... [1, 1, 1, 1],
... [0, 0, 1, 2],
... [0, 1, 1, 2],
... [0, 1, -1, -1],
... [-1, -1, -1, -1]])
>>> ac1 = h.count_alleles(subpop=[0, 1])
>>> ac2 = h.count_alleles(subpop=[2, 3])
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
    >>> dxy, windows, n_bases, counts = allel.windowed_divergence(
... pos, ac1, ac2, size=10, start=1, stop=31
... )
>>> dxy
array([0.15 , 0.225, 0. ])
>>> windows
array([[ 1, 10],
[11, 20],
[21, 31]])
>>> n_bases
array([10, 10, 11])
>>> counts
array([3, 4, 2])
"""
# check inputs
pos = SortedIndex(pos, copy=False)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
# calculate mean pairwise divergence
mpd = mean_pairwise_difference_between(ac1, ac2, fill=0)
# sum in windows
mpd_sum, windows, counts = windowed_statistic(
pos, values=mpd, statistic=np.sum, size=size, start=start,
stop=stop, step=step, windows=windows, fill=0
)
# calculate value per base
dxy, n_bases = per_base(mpd_sum, windows, is_accessible=is_accessible,
fill=fill)
return dxy, windows, n_bases, counts
def windowed_df(pos, ac1, ac2, size=None, start=None, stop=None, step=None,
windows=None, is_accessible=None, fill=np.nan):
"""Calculate the density of fixed differences between two populations in
windows over a single chromosome/contig.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac1 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the first population.
ac2 : array_like, int, shape (n_variants, n_alleles)
Allele counts array for the second population.
size : int, optional
The window size (number of bases).
start : int, optional
The position at which to start (1-based).
stop : int, optional
The position at which to stop (1-based).
step : int, optional
The distance between start positions of windows. If not given,
defaults to the window size, i.e., non-overlapping windows.
windows : array_like, int, shape (n_windows, 2), optional
Manually specify the windows to use as a sequence of (window_start,
window_stop) positions, using 1-based coordinates. Overrides the
size/start/stop/step parameters.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
fill : object, optional
The value to use where a window is completely inaccessible.
Returns
-------
df : ndarray, float, shape (n_windows,)
Per-base density of fixed differences in each window.
windows : ndarray, int, shape (n_windows, 2)
The windows used, as an array of (window_start, window_stop) positions,
using 1-based coordinates.
n_bases : ndarray, int, shape (n_windows,)
Number of (accessible) bases in each window.
counts : ndarray, int, shape (n_windows,)
Number of variants in each window.
See Also
--------
allel.model.locate_fixed_differences
"""
# check inputs
pos = SortedIndex(pos, copy=False)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
# locate fixed differences
loc_df = locate_fixed_differences(ac1, ac2)
# count number of fixed differences in windows
n_df, windows, counts = windowed_statistic(
pos, values=loc_df, statistic=np.count_nonzero, size=size, start=start,
stop=stop, step=step, windows=windows, fill=0
)
# calculate value per base
df, n_bases = per_base(n_df, windows, is_accessible=is_accessible,
fill=fill)
return df, windows, n_bases, counts
# noinspection PyPep8Naming
def watterson_theta(pos, ac, start=None, stop=None,
is_accessible=None):
"""Calculate the value of Watterson's estimator over a given region.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
start : int, optional
The position at which to start (1-based). Defaults to the first position.
stop : int, optional
The position at which to stop (1-based). Defaults to the last position.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
Returns
-------
theta_hat_w : float
Watterson's estimator (theta hat per base).
Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
>>> theta_hat_w = allel.watterson_theta(pos, ac, start=1, stop=31)
>>> theta_hat_w
0.10557184750733138
"""
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
if not hasattr(ac, 'count_segregating'):
ac = AlleleCountsArray(ac, copy=False)
# deal with subregion
if start is not None or stop is not None:
loc = pos.locate_range(start, stop)
pos = pos[loc]
ac = ac[loc]
if start is None:
start = pos[0]
if stop is None:
stop = pos[-1]
# count segregating variants
S = ac.count_segregating()
# assume number of chromosomes sampled is constant for all variants
n = ac.sum(axis=1).max()
# (n-1)th harmonic number
a1 = np.sum(1 / np.arange(1, n))
# calculate absolute value
theta_hat_w_abs = S / a1
# calculate value per base
if is_accessible is None:
n_bases = stop - start + 1
else:
n_bases = np.count_nonzero(is_accessible[start-1:stop])
theta_hat_w = theta_hat_w_abs / n_bases
return theta_hat_w
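Plugging the docstring example's numbers into the estimator by hand reproduces the quoted value (S = 6 segregating sites, n = 4 chromosomes, 31 accessible bases):

```python
import numpy as np

S, n, n_bases = 6, 4, 31

# (n-1)th harmonic number, here 1 + 1/2 + 1/3 = 11/6.
a1 = np.sum(1 / np.arange(1, n))

# Watterson's estimator, per base: S / a1 / n_bases.
theta_hat_w = S / a1 / n_bases
print(theta_hat_w)  # approx 0.105572
```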
# noinspection PyPep8Naming
def windowed_watterson_theta(pos, ac, size=None, start=None, stop=None,
step=None, windows=None, is_accessible=None,
fill=np.nan):
"""Calculate the value of Watterson's estimator in windows over a single
chromosome/contig.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
size : int, optional
The window size (number of bases).
start : int, optional
The position at which to start (1-based).
stop : int, optional
The position at which to stop (1-based).
step : int, optional
The distance between start positions of windows. If not given,
defaults to the window size, i.e., non-overlapping windows.
windows : array_like, int, shape (n_windows, 2), optional
Manually specify the windows to use as a sequence of (window_start,
window_stop) positions, using 1-based coordinates. Overrides the
size/start/stop/step parameters.
is_accessible : array_like, bool, shape (len(contig),), optional
Boolean array indicating accessibility status for all positions in the
chromosome/contig.
fill : object, optional
The value to use where a window is completely inaccessible.
Returns
-------
theta_hat_w : ndarray, float, shape (n_windows,)
Watterson's estimator (theta hat per base).
windows : ndarray, int, shape (n_windows, 2)
The windows used, as an array of (window_start, window_stop) positions,
using 1-based coordinates.
n_bases : ndarray, int, shape (n_windows,)
Number of (accessible) bases in each window.
counts : ndarray, int, shape (n_windows,)
Number of variants in each window.
Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
>>> theta_hat_w, windows, n_bases, counts = allel.windowed_watterson_theta(
... pos, ac, size=10, start=1, stop=31
... )
>>> theta_hat_w
array([0.10909091, 0.16363636, 0.04958678])
>>> windows
array([[ 1, 10],
[11, 20],
[21, 31]])
>>> n_bases
array([10, 10, 11])
>>> counts
array([3, 4, 2])
""" # flake8: noqa
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
is_accessible = asarray_ndim(is_accessible, 1, allow_none=True)
if not hasattr(ac, 'count_segregating'):
ac = AlleleCountsArray(ac, copy=False)
# locate segregating variants
is_seg = ac.is_segregating()
# count segregating variants in windows
S, windows, counts = windowed_statistic(pos, is_seg,
statistic=np.count_nonzero,
size=size, start=start,
stop=stop, step=step,
windows=windows, fill=0)
# assume number of chromosomes sampled is constant for all variants
n = ac.sum(axis=1).max()
# (n-1)th harmonic number
a1 = np.sum(1 / np.arange(1, n))
# absolute value of Watterson's theta
theta_hat_w_abs = S / a1
# theta per base
theta_hat_w, n_bases = per_base(theta_hat_w_abs, windows=windows,
is_accessible=is_accessible, fill=fill)
return theta_hat_w, windows, n_bases, counts
# noinspection PyPep8Naming
def tajima_d(ac, pos=None, start=None, stop=None, min_sites=3):
"""Calculate the value of Tajima's D over a given region.
Parameters
----------
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
pos : array_like, int, shape (n_items,), optional
Variant positions, using 1-based coordinates, in ascending order.
start : int, optional
The position at which to start (1-based). Defaults to the first position.
stop : int, optional
The position at which to stop (1-based). Defaults to the last position.
min_sites : int, optional
Minimum number of segregating sites for which to calculate a value. If
there are fewer, np.nan is returned. Defaults to 3.
Returns
-------
D : float
Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> allel.tajima_d(ac)
3.1445848780213814
>>> pos = [2, 4, 7, 14, 15, 18, 19, 25, 27]
>>> allel.tajima_d(ac, pos=pos, start=7, stop=25)
3.8779735196179366
"""
# check inputs
if not hasattr(ac, 'count_segregating'):
ac = AlleleCountsArray(ac, copy=False)
# deal with subregion
if pos is not None and (start is not None or stop is not None):
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
loc = pos.locate_range(start, stop)
ac = ac[loc]
# count segregating variants
S = ac.count_segregating()
if S < min_sites:
return np.nan
# assume number of chromosomes sampled is constant for all variants
n = ac.sum(axis=1).max()
# (n-1)th harmonic number
a1 = np.sum(1 / np.arange(1, n))
# calculate Watterson's theta (absolute value)
theta_hat_w_abs = S / a1
# calculate mean pairwise difference
mpd = mean_pairwise_difference(ac, fill=0)
# calculate theta_hat pi (sum differences over variants)
theta_hat_pi_abs = np.sum(mpd)
# N.B., both theta estimates are usually divided by the number of
# (accessible) bases but here we want the absolute difference
d = theta_hat_pi_abs - theta_hat_w_abs
# calculate the denominator (standard deviation)
a2 = np.sum(1 / (np.arange(1, n)**2))
b1 = (n + 1) / (3 * (n - 1))
b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
c1 = b1 - (1 / a1)
c2 = b2 - ((n + 2) / (a1 * n)) + (a2 / (a1**2))
e1 = c1 / a1
e2 = c2 / (a1**2 + a2)
d_stdev = np.sqrt((e1 * S) + (e2 * S * (S - 1)))
# finally calculate Tajima's D
D = d / d_stdev
return D
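End to end, the statistic can be reproduced with plain NumPy. The allele counts below are those produced by the docstring example's genotypes (via `g.count_alleles()`); no allel objects are used:

```python
import numpy as np

# Allele counts at 9 variants among n=4 chromosomes.
ac = np.array([[4, 0, 0], [3, 1, 0], [2, 2, 0], [1, 3, 0], [0, 4, 0],
               [2, 1, 1], [1, 2, 1], [1, 1, 0], [0, 0, 0]])

# Segregating sites (more than one allele observed) and sample size.
S = np.count_nonzero(np.count_nonzero(ac > 0, axis=1) > 1)
n = ac.sum(axis=1).max()

# Watterson's theta (absolute).
a1 = np.sum(1 / np.arange(1, n))
theta_w = S / a1

# Theta pi (absolute): mean pairwise difference summed over variants.
an = ac.sum(axis=1)
n_pairs = an * (an - 1) / 2
n_same = np.sum(ac * (ac - 1) / 2, axis=1)
with np.errstate(invalid='ignore', divide='ignore'):
    mpd = np.where(n_pairs > 0, (n_pairs - n_same) / n_pairs, 0)
theta_pi = mpd.sum()

# Tajima's D standardisation constants, as in the function above.
a2 = np.sum(1 / (np.arange(1, n) ** 2))
b1 = (n + 1) / (3 * (n - 1))
b2 = 2 * (n ** 2 + n + 3) / (9 * n * (n - 1))
c1 = b1 - 1 / a1
c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
e1 = c1 / a1
e2 = c2 / (a1 ** 2 + a2)
D = (theta_pi - theta_w) / np.sqrt(e1 * S + e2 * S * (S - 1))
print(D)  # approx 3.14458, matching allel.tajima_d(ac) in the docstring
```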
# noinspection PyPep8Naming
def windowed_tajima_d(pos, ac, size=None, start=None, stop=None,
step=None, windows=None, min_sites=3):
"""Calculate the value of Tajima's D in windows over a single
chromosome/contig.
Parameters
----------
pos : array_like, int, shape (n_items,)
Variant positions, using 1-based coordinates, in ascending order.
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
size : int, optional
The window size (number of bases).
start : int, optional
The position at which to start (1-based).
stop : int, optional
The position at which to stop (1-based).
step : int, optional
The distance between start positions of windows. If not given,
defaults to the window size, i.e., non-overlapping windows.
windows : array_like, int, shape (n_windows, 2), optional
Manually specify the windows to use as a sequence of (window_start,
window_stop) positions, using 1-based coordinates. Overrides the
size/start/stop/step parameters.
min_sites : int, optional
Minimum number of segregating sites for which to calculate a value. If
        there are fewer, np.nan is returned. Defaults to 3.

    Returns
-------
D : ndarray, float, shape (n_windows,)
Tajima's D.
windows : ndarray, int, shape (n_windows, 2)
The windows used, as an array of (window_start, window_stop) positions,
using 1-based coordinates.
counts : ndarray, int, shape (n_windows,)
        Number of variants in each window.

    Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> pos = [2, 4, 7, 14, 15, 20, 22, 25, 27]
>>> D, windows, counts = allel.windowed_tajima_d(pos, ac, size=20, step=10, start=1, stop=31)
>>> D
array([1.36521524, 4.22566622])
>>> windows
array([[ 1, 20],
[11, 31]])
>>> counts
array([6, 6])
"""
# check inputs
if not isinstance(pos, SortedIndex):
pos = SortedIndex(pos, copy=False)
if not hasattr(ac, 'count_segregating'):
ac = AlleleCountsArray(ac, copy=False)
# assume number of chromosomes sampled is constant for all variants
n = ac.sum(axis=1).max()
# calculate constants
a1 = np.sum(1 / np.arange(1, n))
a2 = np.sum(1 / (np.arange(1, n)**2))
b1 = (n + 1) / (3 * (n - 1))
b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
c1 = b1 - (1 / a1)
c2 = b2 - ((n + 2) / (a1 * n)) + (a2 / (a1**2))
e1 = c1 / a1
e2 = c2 / (a1**2 + a2)
# locate segregating variants
is_seg = ac.is_segregating()
# calculate mean pairwise difference
mpd = mean_pairwise_difference(ac, fill=0)
# define statistic to compute for each window
# noinspection PyPep8Naming
def statistic(w_is_seg, w_mpd):
S = np.count_nonzero(w_is_seg)
if S < min_sites:
return np.nan
pi = np.sum(w_mpd)
d = pi - (S / a1)
d_stdev = np.sqrt((e1 * S) + (e2 * S * (S - 1)))
wD = d / d_stdev
return wD
D, windows, counts = windowed_statistic(pos, values=(is_seg, mpd),
statistic=statistic, size=size,
start=start, stop=stop, step=step,
windows=windows, fill=np.nan)
return D, windows, counts
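The window bounds shown in the doctest above come from allel's position-based windowing; a simplified plain-Python reading of that layout (assuming `size`, `step`, `start` and `stop` are all given, with the final window clipped at `stop` as the doctest shows):

```python
def position_windows(start, stop, size, step=None):
    """1-based, inclusive (start, stop) window pairs over [start, stop]."""
    step = size if step is None else step
    wins = []
    for w_start in range(start, stop, step):
        w_stop = w_start + size
        if w_stop >= stop:
            # final window is clipped to the requested stop position
            wins.append((w_start, stop))
            break
        wins.append((w_start, w_stop - 1))
    return wins
```

With the doctest's parameters (size=20, step=10, start=1, stop=31) this yields (1, 20) and (11, 31).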


def moving_tajima_d(ac, size, start=0, stop=None, step=None, min_sites=3):
"""Calculate the value of Tajima's D in moving windows of `size` variants.
Parameters
----------
ac : array_like, int, shape (n_variants, n_alleles)
Allele counts array.
size : int
The window size (number of variants).
start : int, optional
The index at which to start.
stop : int, optional
The index at which to stop.
step : int, optional
The number of variants between start positions of windows. If not
given, defaults to the window size, i.e., non-overlapping windows.
min_sites : int, optional
Minimum number of segregating sites for which to calculate a value. If
        there are fewer, np.nan is returned. Defaults to 3.

    Returns
-------
d : ndarray, float, shape (n_windows,)
        Tajima's D.

    Examples
--------
>>> import allel
>>> g = allel.GenotypeArray([[[0, 0], [0, 0]],
... [[0, 0], [0, 1]],
... [[0, 0], [1, 1]],
... [[0, 1], [1, 1]],
... [[1, 1], [1, 1]],
... [[0, 0], [1, 2]],
... [[0, 1], [1, 2]],
... [[0, 1], [-1, -1]],
... [[-1, -1], [-1, -1]]])
>>> ac = g.count_alleles()
>>> D = allel.moving_tajima_d(ac, size=4, step=2)
>>> D
array([0.1676558 , 2.01186954, 5.70029703])
"""
d = moving_statistic(values=ac, statistic=tajima_d, size=size, start=start, stop=stop,
step=step, min_sites=min_sites)
return d
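Unlike the position-based variant, `moving_statistic` works on variant indices; the doctest's three windows of four variants stepping by two can be sketched as follows (simplified — the real helper also forwards extra kwargs such as `min_sites` to the statistic):

```python
def moving_slices(n_variants, size, start=0, stop=None, step=None):
    """Half-open (start, stop) index slices; a trailing partial window is dropped."""
    stop = n_variants if stop is None else stop
    step = size if step is None else step
    return [(i, i + size) for i in range(start, stop - size + 1, step)]
```

Nine variants with size=4 and step=2 give the three index windows behind the three D values in the doctest.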
| 34.116487 | 97 | 0.559017 | 4,902 | 38,074 | 4.247654 | 0.066095 | 0.014024 | 0.014696 | 0.015368 | 0.861445 | 0.834886 | 0.807127 | 0.774517 | 0.752617 | 0.727692 | 0 | 0.045685 | 0.321033 | 38,074 | 1,115 | 98 | 34.147085 | 0.759777 | 0.658533 | 0 | 0.614754 | 0 | 0 | 0.006445 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053279 | false | 0 | 0.028689 | 0 | 0.143443 | 0.004098 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
21b243351c8ab427bb1121e581a44eba454d04d2 | 29 | py | Python | Topsis-101903213/__init__.py | om-guptaa/Topsis-101903213 | a8333d6f90a201e0e4541971c5e13455bfb9a79e | [
"MIT"
] | null | null | null | Topsis-101903213/__init__.py | om-guptaa/Topsis-101903213 | a8333d6f90a201e0e4541971c5e13455bfb9a79e | [
"MIT"
] | null | null | null | Topsis-101903213/__init__.py | om-guptaa/Topsis-101903213 | a8333d6f90a201e0e4541971c5e13455bfb9a79e | [
"MIT"
] | null | null | null | from .101903213 import topsis | 29 | 29 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.346154 | 0.103448 | 29 | 1 | 29 | 29 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
21e97f178a7d1f39d326b61c05bd1ce747f63f2c | 10,966 | py | Python | Software/VGG19.py | gkrish19/SIAM | 1e530d4c070054045fc2e8e7fe4ce82a54755132 | [
"MIT"
] | 4 | 2021-02-02T06:50:43.000Z | 2022-01-29T12:25:32.000Z | Software/VGG19.py | gkrish19/SIAM | 1e530d4c070054045fc2e8e7fe4ce82a54755132 | [
"MIT"
] | null | null | null | Software/VGG19.py | gkrish19/SIAM | 1e530d4c070054045fc2e8e7fe4ce82a54755132 | [
"MIT"
] | 2 | 2021-07-07T19:58:40.000Z | 2022-01-27T22:51:20.000Z | from utils import *
from pact_dorefa import *
import tensorflow as tf
import numpy as np


def build_VGG19(images, n_classes, is_training, keep_prob, wb, ab, quant, rram, xbar_size, adc_bits):
W_conv1_1 = tf.get_variable('conv1_1', shape=[3, 3, 3, 64], initializer=tf.contrib.keras.initializers.he_normal())
b_conv1_1 = bias_variable([64])
if quant:
W_conv1_1 = fw(W_conv1_1, wb)
if rram:
output = RRAM_conv2d(x=images, W=W_conv1_1, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(images, W_conv1_1) + b_conv1_1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
        output = activate(output, ab)
W_conv1_2 = tf.get_variable('conv1_2', shape=[3, 3, 64, 64], initializer=tf.contrib.keras.initializers.he_normal())
b_conv1_2 = bias_variable([64])
if quant:
W_conv1_2 = fw(W_conv1_2, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv1_2, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv1_2) + b_conv1_2
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
output = max_pool(output, 2, 2, "pool1")
W_conv2_1 = tf.get_variable('conv2_1', shape=[3, 3, 64, 128], initializer=tf.contrib.keras.initializers.he_normal())
b_conv2_1 = bias_variable([128])
if quant:
W_conv2_1 = fw(W_conv2_1, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv2_1, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv2_1) + b_conv2_1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv2_2 = tf.get_variable('conv2_2', shape=[3, 3, 128, 128],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv2_2 = bias_variable([128])
if quant:
W_conv2_2 = fw(W_conv2_2, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv2_2, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv2_2) + b_conv2_2
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
output = max_pool(output, 2, 2, "pool2")
W_conv3_1 = tf.get_variable('conv3_1', shape=[3, 3, 128, 256],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv3_1 = bias_variable([256])
if quant:
W_conv3_1 = fw(W_conv3_1, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv3_1, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv3_1) + b_conv3_1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv3_2 = tf.get_variable('conv3_2', shape=[3, 3, 256, 256],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv3_2 = bias_variable([256])
if quant:
W_conv3_2 = fw(W_conv3_2, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv3_2, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv3_2) + b_conv3_2
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv3_3 = tf.get_variable('conv3_3', shape=[3, 3, 256, 256],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv3_3 = bias_variable([256])
if quant:
W_conv3_3 = fw(W_conv3_3, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv3_3, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv3_3) + b_conv3_3
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv3_4 = tf.get_variable('conv3_4', shape=[3, 3, 256, 256],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv3_4 = bias_variable([256])
if quant:
W_conv3_4 = fw(W_conv3_4, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv3_4, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv3_4) + b_conv3_4
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
output = max_pool(output, 2, 2, "pool3")
W_conv4_1 = tf.get_variable('conv4_1', shape=[3, 3, 256, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv4_1 = bias_variable([512])
if quant:
W_conv4_1 = fw(W_conv4_1, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv4_1, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv4_1) + b_conv4_1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv4_2 = tf.get_variable('conv4_2', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv4_2 = bias_variable([512])
if quant:
W_conv4_2 = fw(W_conv4_2, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv4_2, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv4_2) + b_conv4_2
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv4_3 = tf.get_variable('conv4_3', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv4_3 = bias_variable([512])
if quant:
W_conv4_3 = fw(W_conv4_3, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv4_3, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv4_3) + b_conv4_3
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv4_4 = tf.get_variable('conv4_4', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv4_4 = bias_variable([512])
if quant:
W_conv4_4 = fw(W_conv4_4, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv4_4, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv4_4) + b_conv4_4
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
    output = max_pool(output, 2, 2, "pool4")
W_conv5_1 = tf.get_variable('conv5_1', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv5_1 = bias_variable([512])
if quant:
W_conv5_1 = fw(W_conv5_1, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv5_1, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv5_1) + b_conv5_1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv5_2 = tf.get_variable('conv5_2', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv5_2 = bias_variable([512])
if quant:
W_conv5_2 = fw(W_conv5_2, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv5_2, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv5_2) + b_conv5_2
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv5_3 = tf.get_variable('conv5_3', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv5_3 = bias_variable([512])
if quant:
W_conv5_3 = fw(W_conv5_3, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv5_3, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv5_3) + b_conv5_3
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
W_conv5_4 = tf.get_variable('conv5_4', shape=[3, 3, 512, 512],
initializer=tf.contrib.keras.initializers.he_normal())
b_conv5_4 = bias_variable([512])
if quant:
W_conv5_4 = fw(W_conv5_4, wb)
if rram:
output = RRAM_conv2d(x=output, W=W_conv5_4, xbar_size=xbar_size, adc_bits=adc_bits,
strides=[1, 1, 1, 1], padding='SAME')
else:
output = conv2d(output, W_conv5_4) + b_conv5_4
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
output = activate(output, ab)
output = tf.reshape(output, [-1, 2 * 2 * 512])
W_fc1 = tf.get_variable('fc1', shape=[2048, 512], initializer=tf.contrib.keras.initializers.he_normal())
b_fc1 = bias_variable([512])
if quant:
W_fc1 = fw(W_fc1, wb)
if rram:
output = RRAM_fc2d(x=output, W=W_fc1, b=b_fc1, xbar_size=xbar_size, adc_bits=adc_bits)
else:
output = tf.matmul(output, W_fc1) + b_fc1
output = tf.nn.relu(batch_norm(output, is_training))
if quant:
        output = activate(output, ab)
output = tf.nn.dropout(output, keep_prob)
W_fc3 = tf.get_variable('fc3', shape=[512, n_classes], initializer=tf.contrib.keras.initializers.he_normal())
b_fc3 = bias_variable([n_classes])
if quant:
W_fc3 = fw(W_fc3, wb)
if rram:
output = RRAM_fc2d(x=output, W=W_fc3, b=b_fc3, xbar_size=xbar_size, adc_bits=adc_bits)
else:
output = tf.matmul(output, W_fc3) + b_fc3
output = tf.nn.relu(batch_norm(output, is_training))
return output
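Each of the sixteen convolution blocks above repeats the same quantise → conv (or RRAM conv) → batch-norm → activate sequence, differing only in variable name and filter shape, so the whole trunk could be driven by a configuration table. A framework-free sketch (the `conv_block`/`pool_block` callables are illustrative stand-ins for the TensorFlow code above):

```python
# (layer_name, in_channels, out_channels) per conv layer; 'pool' marks max-pooling
VGG19_CFG = [
    ('conv1_1', 3, 64), ('conv1_2', 64, 64), 'pool',
    ('conv2_1', 64, 128), ('conv2_2', 128, 128), 'pool',
    ('conv3_1', 128, 256), ('conv3_2', 256, 256),
    ('conv3_3', 256, 256), ('conv3_4', 256, 256), 'pool',
    ('conv4_1', 256, 512), ('conv4_2', 512, 512),
    ('conv4_3', 512, 512), ('conv4_4', 512, 512), 'pool',
    ('conv5_1', 512, 512), ('conv5_2', 512, 512),
    ('conv5_3', 512, 512), ('conv5_4', 512, 512),
]

def build_trunk(cfg, conv_block, pool_block, x):
    """Apply the configured blocks in order to input x."""
    for entry in cfg:
        if entry == 'pool':
            x = pool_block(x)
        else:
            name, c_in, c_out = entry
            x = conv_block(x, name, c_in, c_out)
    return x
```

The table makes the channel chaining (each layer's output width equals the next layer's input width) checkable at a glance.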
| 42.669261 | 121 | 0.5911 | 1,613 | 10,966 | 3.739616 | 0.049597 | 0.015915 | 0.015915 | 0.047248 | 0.822944 | 0.822944 | 0.819131 | 0.743866 | 0.73193 | 0.717341 | 0 | 0.073211 | 0.287525 | 10,966 | 256 | 122 | 42.835938 | 0.698835 | 0 | 0 | 0.56962 | 0 | 0 | 0.018394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004219 | false | 0 | 0.016878 | 0 | 0.025316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0dfc30f38978efe05aafd51cb066548caeb508e2 | 130 | py | Python | src/lib/Bcfg2/Server/Hostbase/test/test_settings.py | amplify-education/bcfg2 | 02d7f574babfeb2da99e2aad3a92b4e8d6494f07 | [
"mpich2"
] | null | null | null | src/lib/Bcfg2/Server/Hostbase/test/test_settings.py | amplify-education/bcfg2 | 02d7f574babfeb2da99e2aad3a92b4e8d6494f07 | [
"mpich2"
] | null | null | null | src/lib/Bcfg2/Server/Hostbase/test/test_settings.py | amplify-education/bcfg2 | 02d7f574babfeb2da99e2aad3a92b4e8d6494f07 | [
"mpich2"
] | null | null | null | import sys
import os
import Hostbase.settings
def setup():
pass
def teardown():
pass
def test_mcs_settings():
pass
| 10 | 24 | 0.692308 | 18 | 130 | 4.888889 | 0.611111 | 0.159091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 130 | 12 | 25 | 10.833333 | 0.88 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
df008c743835903c2b123a246bcd0e39579e3009 | 18,293 | py | Python | quality/views.py | hisham2k9/IMS-and-CAPA | 9f70988a6411c72ab4f0cbc818b84db58a28076f | [
"MIT"
] | null | null | null | quality/views.py | hisham2k9/IMS-and-CAPA | 9f70988a6411c72ab4f0cbc818b84db58a28076f | [
"MIT"
] | 15 | 2021-03-19T03:43:56.000Z | 2022-03-12T00:30:55.000Z | quality/views.py | hisham2k9/IMS-and-CAPA | 9f70988a6411c72ab4f0cbc818b84db58a28076f | [
"MIT"
] | null | null | null | from django.shortcuts import render
from accounts import models  # ``models`` below resolves here (Locations)
import hicdata.models
import nursequalitydata.models
import datetime
# Create your views here.

def quality(request):
    # location list for the template drop-down
    LocationList = models.Locations.objects.all()
    # indicator label -> count mapping rendered by the template loop
    ContentDict = {}
if request.method=='POST':
fromdate=request.POST['FromDate']
loc=request.POST['locname']
todate=request.POST['ToDate']
Header='You are seeing data of %s from %s to %s'%(loc, fromdate, todate)
        todate = datetime.datetime.strptime(todate, '%Y-%m-%d').date()  # convert submitted strings to date objects
        fromdate = datetime.datetime.strptime(fromdate, '%Y-%m-%d').date()
        difference = todate - fromdate  # timedelta covering the requested range
        if loc != 'All':  # additionally filter each query by location
##Text and count of Tracheostomy
TracheostomyText='Tracheostomy Cases'
Tracheostomycount=len(nursequalitydata.models.Tracheostomy.objects.filter(datetime_tracheostomy__date__lte=todate,
datetime_tracheostomy__date__gt=todate-difference).
filter(pt_location=loc))
ContentDict[TracheostomyText]=Tracheostomycount
##Text and count of Pressure Sore Injury
PressureInjuryText='Pressure Sore Injury'
PressureInjurycount=len(nursequalitydata.models.PressureInjury.objects.filter(dateofobservation__lte=todate,
dateofobservation__gt=todate-difference).
filter(pt_location=loc))
ContentDict[PressureInjuryText]=PressureInjurycount
##Text and count of Reintubation
ReintubationText='Reintubation Cases'
Reintubationcount=len(nursequalitydata.models.Reintubation.objects.filter(datetime_reintubation__date__lte=todate,
datetime_reintubation__date__gt=todate-difference).
filter(pt_location=loc))
ContentDict[ReintubationText]=Reintubationcount
##Text and count of Intubation
IntubationText='Intubation Cases'
Intubationcount=len(nursequalitydata.models.Intubation.objects.filter(datetime_intubation__date__lte=todate,
datetime_intubation__date__gt=todate-difference).
filter(pt_location=loc))
ContentDict[IntubationText]=Intubationcount
##Text and count of Return to ICU in 48 hours
ReturntoICUText='Return to ICU in 48 hours'
ReturntoICUcount=len(nursequalitydata.models.ReturnToICU.objects.filter(datetime_return__date__lte=todate,
datetime_return__date__gt=todate-difference).
filter(pt_location=loc))
ContentDict[ReturntoICUText]=ReturntoICUcount
##Text and count of cauti in request case
CAUTIText='Cauti Cases'
CAUTICount=len(hicdata.models.CAUTI.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference).filter(pt_location=loc))
ContentDict[CAUTIText]=CAUTICount
##Text and count for antibiotic in request case
AntibioticText='Antibiotic resistance cases'
AntibioticCount=len(hicdata.models.Antibiotic.objects.filter(dateofadministration__lte=todate,
dateofadministration__gt=todate-difference).filter(pt_location=loc))
ContentDict[AntibioticText]=AntibioticCount
##Text and count for CLABSI in request case
CLABSIText='CLABSI cases'
CLABSICount=len(hicdata.models.CLABSI.objects.filter(dateofrecognition__lte=todate,
dateofrecognition__gt=todate-difference).filter(pt_location=loc))
ContentDict[CLABSIText]=CLABSICount
##Text and count for BodyFluidExposure in request case
BodyFluidExposureText='BodyFluidExposure cases'
BodyFluidExposureCount=len(hicdata.models.BodyFluidExposure.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference).filter(incident_location=loc))
ContentDict[BodyFluidExposureText]=BodyFluidExposureCount
##Text and count for VAP in request case
VAPText='VAP cases'
VAPCount=len(hicdata.models.VAP.objects.filter(dateofrecognition__lte=todate,
dateofrecognition__gt=todate-difference).filter(pt_location=loc))
ContentDict[VAPText]=VAPCount
##Text and count for VAE in request case
VAEText='VAE cases'
VAECount=len(hicdata.models.VAE.objects.filter(dateofrecognition__lte=todate,
dateofrecognition__gt=todate-difference).filter(pt_location=loc))
ContentDict[VAEText]=VAECount
##Text and count for SSI in request case
SSIText='SSI cases'
SSICount=len(hicdata.models.SSI.objects.filter(dateofnotification__lte=todate,
dateofnotification__gt=todate-difference).filter(pt_location=loc))
ContentDict[SSIText]=SSICount
##Text and count for Thrombophlebitis in request case
ThrombophlebitisText='Thrombophlebitis cases'
ThrombophlebitisCount=len(hicdata.models.Thrombophlebitis.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference).filter(pt_location=loc))
ContentDict[ThrombophlebitisText]=ThrombophlebitisCount
##Text and count for NSI in request case
NSIText='NSI cases'
NSICount=len(hicdata.models.NSI.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference).filter(staff_location=loc))
ContentDict[NSIText]=NSICount
else:
##Text and count of Tracheostomy
TracheostomyText='Tracheostomy Cases'
Tracheostomycount=len(nursequalitydata.models.Tracheostomy.objects.filter(datetime_tracheostomy__date__lte=todate,
datetime_tracheostomy__date__gt=todate-difference))
ContentDict[TracheostomyText]=Tracheostomycount
##Text and count of Pressure Sore Injury
PressureInjuryText='Pressure Sore Injury'
PressureInjurycount=len(nursequalitydata.models.PressureInjury.objects.filter(dateofobservation__lte=todate,
dateofobservation__gt=todate-difference))
ContentDict[PressureInjuryText]=PressureInjurycount
##Text and count of Reintubation
ReintubationText='Reintubation Cases'
Reintubationcount=len(nursequalitydata.models.Reintubation.objects.filter(datetime_reintubation__date__lte=todate,
datetime_reintubation__date__gt=todate-difference))
ContentDict[ReintubationText]=Reintubationcount
##Text and count of Intubation
IntubationText='Intubation Cases'
Intubationcount=len(nursequalitydata.models.Intubation.objects.filter(datetime_intubation__date__lte=todate,
datetime_intubation__date__gt=todate-difference))
ContentDict[IntubationText]=Intubationcount
##Text and count of Return to ICU in 48 hours
ReturntoICUText='Return to ICU in 48 hours'
ReturntoICUcount=len(nursequalitydata.models.ReturnToICU.objects.filter(datetime_return__date__lte=todate,
datetime_return__date__gt=todate-difference))
ContentDict[ReturntoICUText]=ReturntoICUcount
##Text and count of cauti in request case
CAUTIText='Cauti Cases'
CAUTICount=len(hicdata.models.CAUTI.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference))
ContentDict[CAUTIText]=CAUTICount
##Text and count for antibiotic in request case
AntibioticText='Antibiotic resistance cases'
AntibioticCount=len(hicdata.models.Antibiotic.objects.filter(dateofadministration__lte=todate,
dateofadministration__gt=todate-difference))
ContentDict[AntibioticText]=AntibioticCount
##Text and count for CLABSI in request case
CLABSIText='CLABSI cases'
            CLABSICount=len(hicdata.models.CLABSI.objects.filter(dateofrecognition__lte=todate,
                                                                 dateofrecognition__gt=todate-difference))
ContentDict[CLABSIText]=CLABSICount
##Text and count for BodyFluidExposure in request case
BodyFluidExposureText='BodyFluidExposure cases'
BodyFluidExposureCount=len(hicdata.models.BodyFluidExposure.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference))
ContentDict[BodyFluidExposureText]=BodyFluidExposureCount
##Text and count for VAP in request case
VAPText='VAP cases'
            VAPCount=len(hicdata.models.VAP.objects.filter(dateofrecognition__lte=todate,
                                                           dateofrecognition__gt=todate-difference))
ContentDict[VAPText]=VAPCount
##Text and count for VAE in request case
VAEText='VAE cases'
            VAECount=len(hicdata.models.VAE.objects.filter(dateofrecognition__lte=todate,
                                                           dateofrecognition__gt=todate-difference))
ContentDict[VAEText]=VAECount
##Text and count for SSI in request case
SSIText='SSI cases'
SSICount=len(hicdata.models.SSI.objects.filter(dateofnotification__lte=todate,
dateofnotification__gt=todate-difference))
ContentDict[SSIText]=SSICount
##Text and count for Thrombophlebitis in request case
ThrombophlebitisText='Thrombophlebitis cases'
ThrombophlebitisCount=len(hicdata.models.Thrombophlebitis.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference))
ContentDict[ThrombophlebitisText]=ThrombophlebitisCount
##Text and count for NSI in request case
NSIText='NSI cases'
NSICount=len(hicdata.models.NSI.objects.filter(dateofincident__lte=todate,
dateofincident__gt=todate-difference))
ContentDict[NSIText]=NSICount
return render(request, 'quality.html', {"LocationList":LocationList, "ContentDict": ContentDict,'Header':Header} )
##Default header content
else:
Header='You are seeing Data from past 30 days'
##Text and count of Tracheostomy
TracheostomyText='Tracheostomy Cases'
Tracheostomycount=len(nursequalitydata.models.Tracheostomy.objects.filter(
datetime_tracheostomy__date__lte=datetime.datetime.today(),
datetime_tracheostomy__date__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[TracheostomyText]=Tracheostomycount
##Text and count of Pressure Sore Injury
PressureInjuryText='Pressure Sore Injury'
PressureInjurycount=len(nursequalitydata.models.PressureInjury.objects.filter(
dateofobservation__lte=datetime.datetime.today(),
dateofobservation__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[PressureInjuryText]=PressureInjurycount
##Text and count of Reintubation
ReintubationText='Reintubation Cases'
Reintubationcount=len(nursequalitydata.models.Reintubation.objects.filter(
datetime_reintubation__date__lte=datetime.datetime.today(),
datetime_reintubation__date__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[ReintubationText]=Reintubationcount
##Text and count of Intubation
IntubationText='Intubation Cases'
Intubationcount=len(nursequalitydata.models.Intubation.objects.filter(
datetime_intubation__date__lte=datetime.datetime.today(),
datetime_intubation__date__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[IntubationText]=Intubationcount
##Text and count of Return to ICU
ReturntoICUText='Return to ICU in 48 hours'
ReturntoICUcount=len(nursequalitydata.models.ReturnToICU.objects.filter(
datetime_return__date__lte=datetime.datetime.today(),
datetime_return__date__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[ReturntoICUText]=ReturntoICUcount
##Text and count of cauti
CAUTIText='Cauti Cases'
CAUTICount=len(hicdata.models.CAUTI.objects.filter(dateofincident__lte=datetime.datetime.today(),
dateofincident__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[CAUTIText]=CAUTICount
##Text and count for antibiotic
AntibioticText='Antibiotic resistance cases'
AntibioticCount=len(hicdata.models.Antibiotic.objects.filter(dateofadministration__lte=datetime.datetime.today(),
dateofadministration__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[AntibioticText]=AntibioticCount
##Text and count for CLABSI
CLABSIText='CLABSI cases'
CLABSICount=len(hicdata.models.CLABSI.objects.filter(dateofrecognition__lte=datetime.datetime.today(),
dateofrecognition__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[CLABSIText]=CLABSICount
##Text and count for BodyFluidExposure
BodyFluidExposureText='BodyFluidExposure cases'
BodyFluidExposureCount=len(hicdata.models.BodyFluidExposure.objects.filter(dateofincident__lte=datetime.datetime.today(),
dateofincident__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[BodyFluidExposureText]=BodyFluidExposureCount
##Text and count for VAP
VAPText='VAP cases'
VAPCount=len(hicdata.models.VAP.objects.filter(dateofrecognition__lte=datetime.datetime.today(),
dateofrecognition__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[VAPText]=VAPCount
##Text and count for VAE
VAEText='VAE cases'
VAECount=len(hicdata.models.VAE.objects.filter(dateofrecognition__lte=datetime.datetime.today(),
dateofrecognition__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[VAEText]=VAECount
##Text and count for SSI
SSIText='SSI cases'
SSICount=len(hicdata.models.SSI.objects.filter(dateofnotification__lte=datetime.datetime.today(),
dateofnotification__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[SSIText]=SSICount
##Text and count for Thrombophlebitis
ThrombophlebitisText='Thrombophlebitis cases'
ThrombophlebitisCount=len(hicdata.models.Thrombophlebitis.objects.filter(dateofincident__lte=datetime.datetime.today(),
dateofincident__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[ThrombophlebitisText]=ThrombophlebitisCount
##Text and count for NSI
NSIText='NSI cases'
NSICount=len(hicdata.models.NSI.objects.filter(dateofincident__lte=datetime.datetime.today(),
dateofincident__gt=datetime.datetime.today()-datetime.timedelta(days=30)))
ContentDict[NSIText]=NSICount
return render(request, 'quality.html', {"LocationList":LocationList, "ContentDict": ContentDict,'Header':Header} )
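Every indicator block in this view repeats one pattern: filter a model by a date field (and optionally a location field) over the requested range, then store the count under a label. The repetition could be collapsed into a table of indicators plus a small kwargs builder; a sketch with illustrative names (not part of the app):

```python
import datetime

# (label, model name, date field, location field) -- model references elided;
# the real table would hold e.g. hicdata.models.CAUTI instead of a string
INDICATORS = [
    ('Tracheostomy Cases', 'Tracheostomy', 'datetime_tracheostomy__date', 'pt_location'),
    ('Cauti Cases', 'CAUTI', 'dateofincident', 'pt_location'),
    ('NSI cases', 'NSI', 'dateofincident', 'staff_location'),
]

def build_filters(date_field, todate, fromdate, loc=None, loc_field=None):
    """Django-style filter kwargs shared by every indicator query."""
    filters = {date_field + '__lte': todate, date_field + '__gt': fromdate}
    if loc and loc != 'All' and loc_field:
        filters[loc_field] = loc
    return filters
```

Each count then becomes `len(model.objects.filter(**build_filters(...)))`, and adding an indicator means adding one table row instead of three near-identical code blocks.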
| 56.113497 | 136 | 0.603783 | 1,507 | 18,293 | 7.160584 | 0.090909 | 0.031878 | 0.046706 | 0.033361 | 0.909554 | 0.905477 | 0.894727 | 0.894727 | 0.84969 | 0.823186 | 0 | 0.00325 | 0.327174 | 18,293 | 325 | 137 | 56.286154 | 0.873497 | 0.094736 | 0 | 0.660194 | 0 | 0 | 0.054121 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004854 | false | 0 | 0.038835 | 0 | 0.053398 | 0.009709 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df8ff71112795f8e9c85056f797e92b2cb0aab34 | 39 | py | Python | serial_scripts/discovery_regression/__init__.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | 1 | 2017-06-13T04:42:34.000Z | 2017-06-13T04:42:34.000Z | serial_scripts/discovery_regression/__init__.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | serial_scripts/discovery_regression/__init__.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | 'Discovery regression tests in serial'
| 19.5 | 38 | 0.820513 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.941176 | 0.923077 | 0 | 0 | 0 | 0 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10e8f67025f2afac07cb05e115d40401a78922ce | 227 | py | Python | tests/test_compatibility.py | python-pipe/hellp | 51fd7c9143ee8ce6392b9b877036ad4347ad29a5 | [
"MIT"
] | 123 | 2018-07-31T19:17:27.000Z | 2022-03-18T15:29:07.000Z | tests/test_compatibility.py | python-pipe/hellp | 51fd7c9143ee8ce6392b9b877036ad4347ad29a5 | [
"MIT"
] | 11 | 2019-05-01T18:01:59.000Z | 2022-01-01T06:43:36.000Z | tests/test_compatibility.py | python-pipe/hellp | 51fd7c9143ee8ce6392b9b877036ad4347ad29a5 | [
"MIT"
] | 4 | 2019-06-07T12:03:53.000Z | 2021-05-10T20:29:44.000Z | from sspipe import p, px
def test_simple():
assert range(3) | p.select(lambda x: x + 1) | p(list) | (px == [1, 2, 3])
def test_integration_with_px():
assert range(3) | p.select(px + 1) | p(list) | (px == [1, 2, 3])
| 22.7 | 77 | 0.572687 | 41 | 227 | 3.073171 | 0.463415 | 0.071429 | 0.190476 | 0.206349 | 0.47619 | 0.174603 | 0.174603 | 0 | 0 | 0 | 0 | 0.056818 | 0.22467 | 227 | 9 | 78 | 25.222222 | 0.659091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
10fda39d56cb3fe36543943a49a0b04172d4bccd | 8,122 | py | Python | test_life.py | fallonni/game-of-life | 13669c08b7c4d1d27adbdf518d9ad0b00047e7f0 | [
"MIT"
] | null | null | null | test_life.py | fallonni/game-of-life | 13669c08b7c4d1d27adbdf518d9ad0b00047e7f0 | [
"MIT"
] | null | null | null | test_life.py | fallonni/game-of-life | 13669c08b7c4d1d27adbdf518d9ad0b00047e7f0 | [
"MIT"
] | null | null | null | import unittest
import life
import random
from itertools import chain
class TestLife(unittest.TestCase):
def setUp(self):
self.board = life.create_dead_board(3, 3)
def initialise_neighbours(self, board, neighbours):
locations = random.sample(list(chain(range(4), range(5, 9))), neighbours)
for pos in locations:
board[pos//3][pos % 3] = 1
return board
def test_create_dead_board(self):
result = life.create_dead_board(3, 3)
expected = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
self.assertEqual(result, expected)
def test_8_dead_neighbours(self):
"""
// depopulation, 8 dead neighbours
xxx xxx
xxx -> xxx
xxx xxx
"""
print('========= Test 8 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 0)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 8 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 8 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_7_dead_neighbours(self):
"""
// depopulation, 7 dead neighbours
oxx oxx
xxx -> xxx
xxx xxx
"""
print('========= Test 7 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 1)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 7 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 7 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_6_dead_neighbours(self):
"""
// Just right, 6 dead neighbours
oxx oxx
oox -> oox
xxx xxx
"""
print('========= Test 6 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 2)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 6 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(1, next_board[1][1],
"Cell should stay alive when it has 6 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_5_dead_neighbours(self):
"""
// Reproduction, 5 dead neighbours
oxx oxx
oxx -> oox
oxx oxx
"""
print('========= Test 5 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 3)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(1, next_board[1][1],
"Cell reproduces and becomes alive when it has 5 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(1, next_board[1][1],
"Cell stays alive when it has 5 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_4_dead_neighbours(self):
"""
// overpopulation, 4 dead neighbours
ooo ooo
xox -> xxx
oxx oxx
"""
print('========= Test 4 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 4)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 4 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 4 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_3_dead_neighbours(self):
"""
// overpopulation, 3 dead neighbours
ooo ooo
xoo -> xxo
oxx oxx
"""
print('========= Test 3 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 5)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 3 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 3 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_2_dead_neighbours(self):
"""
// overpopulation, 2 dead neighbours
ooo ooo
xoo -> xxo
oxo oxo
"""
print('========= Test 2 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 6)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 2 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 2 dead neighbours\n {} => {}"
.format(self.board, next_board))
def test_1_dead_neighbours(self):
"""
        // overpopulation, 1 dead neighbour
ooo ooo
ooo -> oxo
oxo oxo
"""
        print('========= Test 1 dead neighbour ==========')
self.board = self.initialise_neighbours(self.board, 7)
next_board = life.calculate_next_board_state(self.board)
        self.assertEqual(0, next_board[1][1],
                         "Cell should stay dead when it has 1 dead neighbour\n{} => {}"
                         .format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
        self.assertEqual(0, next_board[1][1],
                         "Cell should die when it has 1 dead neighbour\n {} => {}"
                         .format(self.board, next_board))
def test_0_dead_neighbours(self):
"""
// overpopulation, 0 dead neighbours
ooo ooo
ooo -> oxo
ooo ooo
"""
print('========= Test 0 dead neighbours ==========')
self.board = self.initialise_neighbours(self.board, 8)
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should stay dead when it has 0 dead neighbours\n{} => {}"
.format(self.board, next_board))
self.board[1][1] = 1
next_board = life.calculate_next_board_state(self.board)
self.assertEqual(0, next_board[1][1],
"Cell should die when it has 0 dead neighbours\n {} => {}"
.format(self.board, next_board))
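The two `life` functions exercised above are not included in this chunk. A minimal sketch that is consistent with the assertions (an assumption, not the repository's actual implementation) is:

```python
def create_dead_board(width, height):
    """Return a height x width grid of dead (0) cells."""
    return [[0 for _ in range(width)] for _ in range(height)]


def count_live_neighbours(board, row, col):
    """Count live cells among the up-to-8 in-bounds neighbours of (row, col)."""
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < len(board) and 0 <= c < len(board[0]):
                total += board[r][c]
    return total


def calculate_next_board_state(board):
    """Apply Conway's rules: a live cell survives with 2-3 live neighbours;
    a dead cell becomes alive with exactly 3."""
    rows, cols = len(board), len(board[0])
    next_board = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = count_live_neighbours(board, r, c)
            if board[r][c] == 1:
                next_board[r][c] = 1 if n in (2, 3) else 0
            else:
                next_board[r][c] = 1 if n == 3 else 0
    return next_board
```

Note the tests count *dead* neighbours of the centre cell on a 3x3 board, so "5 dead neighbours" corresponds to the 3 live neighbours that trigger reproduction.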
| 40.40796 | 100 | 0.535705 | 957 | 8,122 | 4.38767 | 0.07419 | 0.154322 | 0.083591 | 0.094308 | 0.840438 | 0.788521 | 0.743749 | 0.741367 | 0.739224 | 0.619195 | 0 | 0.028646 | 0.338094 | 8,122 | 200 | 101 | 40.61 | 0.752418 | 0.082861 | 0 | 0.508065 | 0 | 0 | 0.207859 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 1 | 0.096774 | false | 0 | 0.032258 | 0 | 0.145161 | 0.072581 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33abf566e70c3c653c73683ce7265c7418320d35 | 50,360 | py | Python | macauff/tests/test_matching.py | lsst-uk/macauff | 02ce5caeaa1523957f914155dd433c7d1bf65869 | [
"BSD-3-Clause"
] | 5 | 2021-03-03T22:03:03.000Z | 2022-03-11T05:42:18.000Z | macauff/tests/test_matching.py | lsst-uk/macauff | 02ce5caeaa1523957f914155dd433c7d1bf65869 | [
"BSD-3-Clause"
] | 8 | 2020-07-09T09:26:17.000Z | 2022-03-30T14:24:11.000Z | macauff/tests/test_matching.py | lsst-uk/macauff | 02ce5caeaa1523957f914155dd433c7d1bf65869 | [
"BSD-3-Clause"
] | 1 | 2022-01-24T13:21:37.000Z | 2022-01-24T13:21:37.000Z | # Licensed under a 3-clause BSD style license - see LICENSE
'''
Tests for the "matching" module.
'''
import pytest
import os
from configparser import ConfigParser
from numpy.testing import assert_allclose
import numpy as np
from ..matching import CrossMatch
def _replace_line(file_name, line_num, text, out_file=None):
'''
    Helper function to update a given config file on-the-fly, allowing
    "run" flags to be flipped from run to no-run once a stage has finished.
Parameters
----------
file_name : string
Name of the file to read in and change lines of.
line_num : integer
Line number of line to edit in ``file_name``.
text : string
New line to replace original line in ``file_name`` with.
out_file : string, optional
Name of the file to save new, edited version of ``file_name`` to.
If ``None`` then ``file_name`` is overwritten.
'''
if out_file is None:
out_file = file_name
    with open(file_name, 'r') as in_handle:
        lines = in_handle.readlines()
    lines[line_num] = text
    with open(out_file, 'w') as out_handle:
        out_handle.writelines(lines)
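A quick standalone demonstration of this line-replacement pattern, using a hypothetical scratch file rather than the repository's data files:

```python
import os
import tempfile


def replace_line(file_name, line_num, text, out_file=None):
    """Same pattern as _replace_line above: rewrite one line of a config
    file, optionally writing the result to a separate output file."""
    if out_file is None:
        out_file = file_name
    with open(file_name) as handle:
        lines = handle.readlines()
    lines[line_num] = text
    with open(out_file, 'w') as handle:
        handle.writelines(lines)


# Usage: flip a boolean "run" flag in a scratch config file.
fd, path = tempfile.mkstemp(suffix='.txt')
os.close(fd)
with open(path, 'w') as handle:
    handle.write('run_auf = yes\nrun_cf = no\n')
replace_line(path, 0, 'run_auf = no\n')
with open(path) as handle:
    contents = handle.read()
os.remove(path)
```

This is how the tests below generate the `*_params_.txt` variants from the pristine `*_params.txt` files without mutating them.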
class TestInputs:
def setup_class(self):
joint_config = ConfigParser()
with open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')) as f:
joint_config.read_string('[config]\n' + f.read())
joint_config = joint_config['config']
cat_a_config = ConfigParser()
with open(os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt')) as f:
cat_a_config.read_string('[config]\n' + f.read())
cat_a_config = cat_a_config['config']
cat_b_config = ConfigParser()
with open(os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt')) as f:
cat_b_config.read_string('[config]\n' + f.read())
cat_b_config = cat_b_config['config']
self.a_cat_folder_path = os.path.abspath(cat_a_config['cat_folder_path'])
self.b_cat_folder_path = os.path.abspath(cat_b_config['cat_folder_path'])
os.makedirs(self.a_cat_folder_path, exist_ok=True)
os.makedirs(self.b_cat_folder_path, exist_ok=True)
np.save('{}/con_cat_astro.npy'.format(self.a_cat_folder_path), np.zeros((2, 3), float))
np.save('{}/con_cat_photo.npy'.format(self.a_cat_folder_path), np.zeros((2, 3), float))
np.save('{}/magref.npy'.format(self.a_cat_folder_path), np.zeros(2, float))
np.save('{}/con_cat_astro.npy'.format(self.b_cat_folder_path), np.zeros((2, 3), float))
np.save('{}/con_cat_photo.npy'.format(self.b_cat_folder_path), np.zeros((2, 4), float))
np.save('{}/magref.npy'.format(self.b_cat_folder_path), np.zeros(2, float))
def test_crossmatch_run_input(self):
with pytest.raises(FileNotFoundError):
cm = CrossMatch('./file.txt', './file2.txt', './file3.txt')
with pytest.raises(FileNotFoundError):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
'./file2.txt', './file3.txt')
with pytest.raises(FileNotFoundError):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
'./file3.txt')
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.run_auf is False
assert cm.run_group is False
assert cm.run_cf is True
assert cm.run_source is True
# List of simple one line config file replacements for error message checking
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
for old_line, new_line, match_text in zip(['run_cf = yes', 'run_auf = no', 'run_auf = no'],
['', 'run_auf = aye\n', 'run_auf = yes\n'],
['Missing key', 'Boolean flag key not set',
'Inconsistency between run/no run']):
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line,
out_file=os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
def test_crossmatch_auf_cf_input(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.cf_region_frame == 'equatorial'
assert_allclose(cm.cf_region_points,
np.array([[131, -1], [132, -1], [133, -1], [134, -1],
[131, 0], [132, 0], [133, 0], [134, 0],
[131, 1], [132, 1], [133, 1], [134, 1]]))
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'include_perturb_auf = no'
new_line = 'include_perturb_auf = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.a_auf_region_frame == 'equatorial'
assert_allclose(cm.a_auf_region_points,
np.array([[131, -1], [132, -1], [133, -1], [134, -1],
[131, 0], [132, 0], [133, 0], [134, 0],
[131, 1], [132, 1], [133, 1], [134, 1]]))
assert_allclose(cm.b_auf_region_points,
np.array([[131, -1], [132, -1], [133, -1], [134, -1],
[131, -1/3], [132, -1/3], [133, -1/3], [134, -1/3],
[131, 1/3], [132, 1/3], [133, 1/3], [134, 1/3],
[131, 1], [132, 1], [133, 1], [134, 1]]))
for kind in ['auf_region_', 'cf_region_']:
in_file = 'crossmatch_params' if 'cf' in kind else 'cat_a_params'
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text in zip(
['{}type = rectangle'.format(kind), '{}type = rectangle'.format(kind),
'{}points = 131 134 4 -1 1 3'.format(kind),
'{}points = 131 134 4 -1 1 3'.format(kind),
'{}frame = equatorial'.format(kind), '{}points = 131 134 4 -1 1 3'.format(kind)],
['', '{}type = triangle\n'.format(kind),
'{}points = 131 134 4 -1 1 a\n'.format(kind),
'{}points = 131 134 4 -1 1\n'.format(kind), '{}frame = ecliptic\n'.format(kind),
'{}points = 131 134 4 -1 1 3.4\n'.format(kind)],
['Missing key {}type'.format(kind),
"{}{}type should either be 'rectangle' or".format('' if 'cf' in kind
else 'a_', kind),
'{}{}points should be 6 numbers'.format('' if 'cf' in kind else 'a_', kind),
'{}{}points should be 6 numbers'.format('' if 'cf' in kind else 'a_', kind),
"{}{}frame should either be 'equatorial' or".format(
'' if 'cf' in kind else 'a_', kind),
'start and stop values for {}{}points'.format('' if 'cf' in kind
else 'a_', kind)]):
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line,
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'cf' in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format(
'_' if 'cf' not in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params.txt'))
# Check correct and incorrect *_region_points when *_region_type is 'points'
idx = np.where(['{}type = rectangle'.format(kind) in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, '{}type = points\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)))
idx = np.where(['{}points = 131 134 4 -1 1 3'.format(kind) in line for
line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)), idx,
'{}points = (131, 0), (133, 0), (132, -1)\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_2.txt'.format(in_file)))
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format('_2' if 'cf' in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_2' if 'cf' not in kind else '')),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert_allclose(getattr(cm, '{}{}points'.format('' if 'cf' in kind
else 'a_', kind)), np.array([[131, 0], [133, 0], [132, -1]]))
old_line = '{}points = 131 134 4 -1 1 3'.format(kind)
for new_line in ['{}points = (131, 0), (131, )\n'.format(kind),
'{}points = (131, 0), (131, 1, 2)\n'.format(kind),
'{}points = (131, 0), (131, a)\n'.format(kind)]:
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)), idx, new_line,
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_2.txt'.format(in_file)))
with pytest.raises(ValueError):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_2' if 'cf' in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format(
'_2' if 'cf' not in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params.txt'))
# Check single-length point grids are fine
idx = np.where(['{}points = 131 134 4 -1 1 3'.format(kind) in line
for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx,
'{}points = 131 131 1 0 0 1\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)))
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format('_' if 'cf' in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if 'cf' not in kind else '')),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert_allclose(getattr(cm, '{}{}points'.format('' if 'cf' in kind
else 'a_', kind)), np.array([[131, 0]]))
idx = np.where(['{}type = rectangle'.format(kind) in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx,
'{}type = points\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)))
idx = np.where(['{}points = 131 134 4 -1 1 3'.format(kind) in
line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)), idx,
'{}points = (131, 0)\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_2.txt'.format(in_file)))
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format('_2' if 'cf' in kind else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_2' if 'cf' not in kind else '')),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert_allclose(getattr(cm, '{}{}points'.format('' if 'cf' in kind
else 'a_', kind)), np.array([[131, 0]]))
# Check galactic run is also fine -- here we have to replace all 3 parameter
# options with "galactic", however.
for in_file in ['crossmatch_params', 'cat_a_params', 'cat_b_params']:
kind = 'cf_region_' if 'h_p' in in_file else 'auf_region_'
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where(['{}frame = equatorial'.format(kind) in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, '{}frame = galactic\n'.format(kind),
out_file=os.path.join(os.path.dirname(__file__),
'data/{}_.txt'.format(in_file)))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params_.txt'))
for kind in ['auf_region_', 'cf_region_']:
assert getattr(cm, '{}{}frame'.format('' if 'cf' in kind
else 'a_', kind)) == 'galactic'
assert_allclose(getattr(cm, '{}{}points'.format('' if 'cf' in kind
else 'a_', kind)),
np.array([[131, -1], [132, -1], [133, -1], [134, -1],
[131, 0], [132, 0], [133, 0], [134, 0],
[131, 1], [132, 1], [133, 1], [134, 1]]))
def test_crossmatch_folder_path_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.joint_folder_path == os.path.join(os.getcwd(), 'test_path')
assert os.path.isdir(os.path.join(os.getcwd(), 'test_path'))
assert cm.a_auf_folder_path == os.path.join(os.getcwd(), 'gaia_auf_folder')
assert cm.b_auf_folder_path == os.path.join(os.getcwd(), 'wise_auf_folder')
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text, error, in_file in zip(
['joint_folder_path = test_path', 'joint_folder_path = test_path',
'auf_folder_path = gaia_auf_folder', 'auf_folder_path = wise_auf_folder'],
['', 'joint_folder_path = /User/test/some/path/\n', '',
'auf_folder_path = /User/test/some/path\n'],
['Missing key', 'Error when trying to create temporary',
'Missing key auf_folder_path from catalogue "a"',
'folder for catalogue "b" AUF outputs. Please ensure that b_auf_folder_path'],
[ValueError, OSError, ValueError, OSError],
['crossmatch_params', 'crossmatch_params', 'cat_a_params', 'cat_b_params']):
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(error, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'h_p' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_tri_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert not hasattr(cm, 'a_tri_set_name')
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'include_perturb_auf = no'
new_line = 'include_perturb_auf = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.a_tri_set_name == 'gaiaDR2'
assert np.all(cm.b_tri_filt_names == np.array(['W1', 'W2', 'W3', 'W4']))
assert cm.a_tri_filt_num == 1
assert not cm.b_download_tri
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text, in_file in zip(
['tri_set_name = gaiaDR2', 'tri_filt_num = 11', 'tri_filt_num = 11',
'download_tri = no', 'download_tri = no'],
['', 'tri_filt_num = a\n', 'tri_filt_num = 3.4\n', 'download_tri = aye\n',
'download_tri = yes\n'],
['Missing key tri_set_name from catalogue "a"',
'tri_filt_num should be a single integer number in catalogue "b"',
'tri_filt_num should be a single integer number in catalogue "b"',
'Boolean flag key not set', 'a_download_tri is True and run_auf is False'],
['cat_a_params', 'cat_b_params', 'cat_b_params', 'cat_a_params', 'cat_a_params']):
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_psf_param_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.b_filt_names == np.array(['W1', 'W2', 'W3', 'W4']))
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'include_perturb_auf = no'
new_line = 'include_perturb_auf = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.a_psf_fwhms == np.array([0.12, 0.12, 0.12]))
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text, in_file in zip(
['filt_names = G_BP G G_RP', 'filt_names = G_BP G G_RP',
'psf_fwhms = 6.08 6.84 7.36 11.99', 'psf_fwhms = 6.08 6.84 7.36 11.99'],
['', 'filt_names = G_BP G\n',
'psf_fwhms = 6.08 6.84 7.36\n', 'psf_fwhms = 6.08 6.84 7.36 word\n'],
['Missing key filt_names from catalogue "a"',
'a_tri_filt_names and a_filt_names should contain the same',
'b_psf_fwhms and b_filt_names should contain the same',
'psf_fwhms should be a list of floats in catalogue "b".'],
['cat_a_params', 'cat_a_params', 'cat_b_params', 'cat_b_params']):
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_cat_name_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.b_cat_name == 'WISE'
assert os.path.exists('{}/test_path/WISE'.format(os.getcwd()))
f = open(os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt')).readlines()
old_line = 'cat_name = Gaia'
new_line = ''
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/cat_a_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/cat_a_params_.txt'))
match_text = 'Missing key cat_name from catalogue "a"'
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
def test_crossmatch_search_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.pos_corr_dist == 11
assert not hasattr(cm, 'a_dens_dist')
assert not hasattr(cm, 'b_dens_mags')
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'include_perturb_auf = no'
new_line = 'include_perturb_auf = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.a_dens_mags == np.array([20, 20, 20]))
assert not hasattr(cm, 'b_dens_dist')
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'compute_local_density = no'
new_line = 'compute_local_density = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_2.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_2.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.a_dens_mags == np.array([20, 20, 20]))
assert cm.b_dens_dist == 0.25
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text, in_file in zip(
['pos_corr_dist = 11', 'pos_corr_dist = 11', 'dens_dist = 0.25',
'dens_dist = 0.25', 'dens_mags = 20 20 20 20', 'dens_mags = 20 20 20 20',
'dens_mags = 20 20 20'],
['', 'pos_corr_dist = word\n', '', 'dens_dist = word\n', '',
'dens_mags = 20 20 20\n', 'dens_mags = word word word\n'],
['Missing key pos_corr_dist', 'pos_corr_dist must be a float',
'Missing key dens_dist from catalogue "b"', 'dens_dist in catalogue "a" must',
'Missing key dens_mags from catalogue "b"',
'b_dens_mags and b_filt_names should contain the same number',
'dens_mags should be a list of floats in catalogue "a'],
['crossmatch_params', 'crossmatch_params', 'cat_b_params', 'cat_a_params',
'cat_b_params', 'cat_b_params', 'cat_a_params']):
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}{}.txt'.format(in_file, '_2' if 'h_p' in in_file else '')), idx,
new_line, out_file=os.path.join(os.path.dirname(__file__),
'data/{}_{}.txt'.format(in_file, '3' if 'h_p' in in_file else '')))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_3' if 'h_p' in in_file else '_2')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_perturb_auf_inputs(self):
f = open(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt')).readlines()
old_line = 'include_perturb_auf = no'
new_line = 'include_perturb_auf = yes\n'
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.num_trials == 10000
assert not cm.compute_local_density
assert cm.dm_max == 10
assert cm.d_mag == 0.1
for old_line, new_line, match_text in zip(
['num_trials = 10000', 'num_trials = 10000', 'num_trials = 10000', 'dm_max = 10',
'dm_max = 10', 'd_mag = 0.1', 'd_mag = 0.1', 'compute_local_density = no',
'compute_local_density = no', 'compute_local_density = no'],
['', 'num_trials = word\n', 'num_trials = 10000.1\n', '', 'dm_max = word\n', '',
'd_mag = word\n', '', 'compute_local_density = word\n',
'compute_local_density = 10\n'],
['Missing key num_trials from joint', 'num_trials should be an integer',
'num_trials should be an integer', 'Missing key dm_max from joint',
'dm_max must be a float', 'Missing key d_mag from joint', 'd_mag must be a float',
'Missing key compute_local_density from joint',
'Boolean flag key not set to allowed', 'Boolean flag key not set to allowed']):
# Make sure to keep the first edit of crossmatch_params, adding each
# second change in turn.
f = open(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt')).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_2.txt'))
f = open(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_2.txt')).readlines()
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_2.txt'),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params.txt'))
def test_crossmatch_fourier_inputs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.real_hankel_points == 10000
assert cm.four_hankel_points == 10000
assert cm.four_max_rho == 100
# List of simple one line config file replacements for error message checking
for old_line, new_line, match_text in zip(
['real_hankel_points = 10000', 'four_hankel_points = 10000', 'four_max_rho = 100'],
['', 'four_hankel_points = 10000.1\n', 'four_max_rho = word\n'],
['Missing key real_hankel_points', 'four_hankel_points should be an integer.',
'four_max_rho should be an integer.']):
f = open(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt')).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/crossmatch_params_.txt'))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params_.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
def test_crossmatch_frame_equality(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert cm.a_auf_region_frame == 'equatorial'
assert cm.b_auf_region_frame == 'equatorial'
assert cm.cf_region_frame == 'equatorial'
# List of simple one line config file replacements for error message checking
match_text = 'Region frames for c/f and AUF creation must all be the same.'
for old_line, new_line, in_file in zip(
['cf_region_frame = equatorial', 'auf_region_frame = equatorial',
'auf_region_frame = equatorial'],
['cf_region_frame = galactic\n', 'auf_region_frame = galactic\n',
'auf_region_frame = galactic\n'],
['crossmatch_params', 'cat_a_params', 'cat_b_params']):
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'h_p' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_cross_match_extent(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.cross_match_extent == np.array([131, 138, -3, 3]))
# List of simple one line config file replacements for error message checking
in_file = 'crossmatch_params'
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
old_line = 'cross_match_extent = 131 138 -3 3'
for new_line, match_text in zip(
['', 'cross_match_extent = 131 138 -3 word\n', 'cross_match_extent = 131 138 -3\n',
'cross_match_extent = 131 138 -3 3 1'],
['Missing key cross_match_extent', 'All elements of cross_match_extent should be',
'cross_match_extent should contain.', 'cross_match_extent should contain']):
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'h_p' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_int_fracs(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.int_fracs == np.array([0.63, 0.9, 0.99]))
# List of simple one line config file replacements for error message checking
in_file = 'crossmatch_params'
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
old_line = 'int_fracs = 0.63 0.9 0.99'
for new_line, match_text in zip(
['', 'int_fracs = 0.63 0.9 word\n', 'int_fracs = 0.63 0.9\n'],
['Missing key int_fracs', 'All elements of int_fracs should be',
'int_fracs should contain.']):
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'h_p' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_chunk_num(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.mem_chunk_num == 10)
# List of simple one line config file replacements for error message checking
in_file = 'crossmatch_params'
f = open(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file))).readlines()
old_line = 'mem_chunk_num = 10'
for new_line, match_text in zip(
['', 'mem_chunk_num = word\n', 'mem_chunk_num = 10.1\n'],
['Missing key mem_chunk_num', 'mem_chunk_num should be a single integer',
'mem_chunk_num should be a single integer']):
idx = np.where([old_line in line for line in f])[0][0]
_replace_line(os.path.join(os.path.dirname(__file__),
'data/{}.txt'.format(in_file)), idx, new_line, out_file=os.path.join(
os.path.dirname(__file__), 'data/{}_.txt'.format(in_file)))
with pytest.raises(ValueError, match=match_text):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params{}.txt'.format(
'_' if 'h_p' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_a_params{}.txt'.format('_' if '_a_' in in_file else '')),
os.path.join(os.path.dirname(__file__),
'data/cat_b_params{}.txt'.format('_' if '_b_' in in_file else '')))
def test_crossmatch_shared_data(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert np.all(cm.r == np.linspace(0, 11, 10000))
assert_allclose(cm.dr, np.ones(9999, float) * 11/9999)
assert np.all(cm.rho == np.linspace(0, 100, 10000))
assert_allclose(cm.drho, np.ones(9999, float) * 100/9999)
def test_cat_folder_path(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
assert os.path.exists(self.a_cat_folder_path)
assert os.path.exists(self.b_cat_folder_path)
assert cm.a_cat_folder_path == self.a_cat_folder_path
assert np.all(np.load('{}/con_cat_astro.npy'.format(
self.a_cat_folder_path)).shape == (2, 3))
assert np.all(np.load('{}/con_cat_photo.npy'.format(
self.b_cat_folder_path)).shape == (2, 4))
assert np.all(np.load('{}/magref.npy'.format(
self.b_cat_folder_path)).shape == (2,))
os.system('rm -rf {}'.format(self.a_cat_folder_path))
with pytest.raises(OSError, match="a_cat_folder_path does not exist."):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
self.setup_class()
os.system('rm -rf {}'.format(self.b_cat_folder_path))
with pytest.raises(OSError, match="b_cat_folder_path does not exist."):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
self.setup_class()
for catpath, file in zip([self.a_cat_folder_path, self.b_cat_folder_path],
['con_cat_astro', 'magref']):
os.system('rm {}/{}.npy'.format(catpath, file))
with pytest.raises(FileNotFoundError,
match='{} file not found in catalogue '.format(file)):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
self.setup_class()
for name, data, match in zip(['con_cat_astro', 'con_cat_photo', 'con_cat_astro',
'con_cat_photo', 'magref', 'con_cat_astro', 'con_cat_photo',
'magref'],
[np.zeros((2, 2), float), np.zeros((2, 5), float),
np.zeros((2, 3, 4), float), np.zeros(2, float),
np.zeros((2, 2), float), np.zeros((1, 3), float),
np.zeros((3, 4), float), np.zeros(3, float)],
["Second dimension of con_cat_astro",
"Second dimension of con_cat_photo in",
"Incorrect number of dimensions",
"Incorrect number of dimensions",
"Incorrect number of dimensions",
'Consolidated catalogue arrays for catalogue "b"',
'Consolidated catalogue arrays for catalogue "b"',
'Consolidated catalogue arrays for catalogue "b"']):
np.save('{}/{}.npy'.format(self.b_cat_folder_path, name), data)
with pytest.raises(ValueError, match=match):
cm = CrossMatch(os.path.join(os.path.dirname(__file__),
'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
self.setup_class()
def test_calculate_cf_areas(self):
cm = CrossMatch(os.path.join(os.path.dirname(__file__), 'data/crossmatch_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_a_params.txt'),
os.path.join(os.path.dirname(__file__), 'data/cat_b_params.txt'))
cm.cross_match_extent = np.array([131, 134, -1, 1])
cm.cf_region_points = np.array([[a, b] for a in [131.5, 132.5, 133.5]
for b in [-0.5, 0.5]])
cm._calculate_cf_areas()
assert_allclose(cm.cf_areas, np.ones((6), float), rtol=0.02)
cm.cross_match_extent = np.array([50, 55, 85, 90])
cm.cf_region_points = np.array([[a, b] for a in 0.5+np.arange(50, 55, 1)
for b in 0.5+np.arange(85, 90, 1)])
cm._calculate_cf_areas()
calculated_areas = np.array(
[(c[0]+0.5 - (c[0]-0.5))*180/np.pi * (np.sin(np.radians(c[1]+0.5)) -
np.sin(np.radians(c[1]-0.5))) for c in cm.cf_region_points])
assert_allclose(cm.cf_areas, calculated_areas, rtol=0.025)
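The `calculated_areas` expression is the standard area of a longitude-latitude cell on the sphere in square degrees, delta_lon * (180/pi) * (sin(lat_max) - sin(lat_min)). A standalone check of that formula (the function name is illustrative):

```python
import math

def lonlat_cell_area(lon_min, lon_max, lat_min, lat_max):
    # Area of a longitude-latitude rectangle on the sphere, in square
    # degrees: delta_lon * (180/pi) * (sin(lat_max) - sin(lat_min)).
    return ((lon_max - lon_min) * 180.0 / math.pi *
            (math.sin(math.radians(lat_max)) - math.sin(math.radians(lat_min))))

# A 1x1 degree cell straddling the equator is very nearly 1 square degree,
# matching the rtol=0.02 assertion on cf_areas above...
eq_area = lonlat_cell_area(131.0, 132.0, -0.5, 0.5)
# ...while the same-sized cell near latitude 85-86 deg is far smaller.
polar_area = lonlat_cell_area(50.0, 51.0, 85.0, 86.0)
```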
# File: Algo and DSA/LeetCode-Solutions-master/Python/course-schedule.py
# Repo: Sourav692/FAANG-Interview-Preparation (license: Unlicense)

# Time: O(|V| + |E|)
# Space: O(|E|)
import collections
# bfs solution
class Solution(object):
def canFinish(self, numCourses, prerequisites):
"""
:type numCourses: int
:type prerequisites: List[List[int]]
:rtype: bool
"""
in_degree = collections.defaultdict(set)
out_degree = collections.defaultdict(set)
for i, j in prerequisites:
in_degree[i].add(j)
out_degree[j].add(i)
q = collections.deque([i for i in range(numCourses) if i not in in_degree])
while q:
node = q.popleft()
for i in out_degree[node]:
in_degree[i].remove(node)
if not in_degree[i]:
q.append(i)
del in_degree[i]
del out_degree[node]
return not in_degree and not out_degree
# Time: O(|V| + |E|)
# Space: O(|E|)
# dfs solution
class Solution2(object):
def canFinish(self, numCourses, prerequisites):
"""
:type numCourses: int
:type prerequisites: List[List[int]]
:rtype: bool
"""
in_degree = collections.defaultdict(set)
out_degree = collections.defaultdict(set)
for i, j in prerequisites:
in_degree[i].add(j)
out_degree[j].add(i)
stk = [i for i in range(numCourses) if i not in in_degree]
while stk:
node = stk.pop()
for i in out_degree[node]:
in_degree[i].remove(node)
if not in_degree[i]:
stk.append(i)
del in_degree[i]
del out_degree[node]
return not in_degree and not out_degree
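Both solutions implement Kahn's topological sort over the prerequisite graph: the courses can all be finished iff that graph is acyclic. A quick self-contained sanity check of the idea (a compact restatement, not the repo's code):

```python
import collections

def can_finish(num_courses, prerequisites):
    # Kahn's algorithm: repeatedly take courses with no remaining
    # prerequisites; every course gets taken iff there is no cycle.
    in_degree = [0] * num_courses
    out_edges = collections.defaultdict(list)
    for course, prereq in prerequisites:
        in_degree[course] += 1
        out_edges[prereq].append(course)
    q = collections.deque(i for i in range(num_courses) if in_degree[i] == 0)
    taken = 0
    while q:
        node = q.popleft()
        taken += 1
        for nxt in out_edges[node]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                q.append(nxt)
    return taken == num_courses

ok = can_finish(2, [[1, 0]])            # course 1 needs course 0, no cycle
bad = can_finish(2, [[1, 0], [0, 1]])   # 0 and 1 each require the other
```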
# File: tests/unit_tests/grid/test_cell_properties.py
# Repo: poc11/resqpy (license: MIT)

import numpy as np
import pytest
from resqpy.grid import Grid
import resqpy.grid as grr
from resqpy.model import Model
import resqpy.grid._cell_properties as cp
import resqpy.property.grid_property_collection as gpc
def test_thickness_array_thickness_already_set(basic_regular_grid: Grid):
# Arrange
extent = basic_regular_grid.extent_kji
array_thickness = np.random.random(extent)
basic_regular_grid.array_thickness = array_thickness # type: ignore
# Act
thickness = cp.thickness(basic_regular_grid)
# Assert
np.testing.assert_array_almost_equal(thickness, array_thickness)
def test_thickness_array_thickness_already_set_cell_kji0(basic_regular_grid: Grid):
# Arrange
extent = basic_regular_grid.extent_kji
array_thickness = np.random.random(extent)
basic_regular_grid.array_thickness = array_thickness # type: ignore
cell_kji0 = (1, 1, 1)
# Act
thickness = cp.thickness(basic_regular_grid, cell_kji0 = cell_kji0)
# Assert
assert thickness == array_thickness[cell_kji0]
def test_thickness_faulted_grid(faulted_grid: Grid):
# Arrange
expected_thickness = np.array([[[20., 20., 20., 20., 20., 20., 20., 20.], [20., 20., 20., 20., 20., 20., 20., 20.],
[20., 20., 20., 20., 20., 20., 20., 20.], [20., 20., 20., 20., 20., 20., 20., 20.],
[20., 20., 20., 20., 20., 20., 20., 20.]],
[[20., 20., 20., 20., 20., 20., 20., 20.], [20., 20., 20., 20., 20., 20., 20., 20.],
[20., 20., 20., 20., 20., 20., 20., 20.], [20., 20., 20., 20., 20., 20., 20., 20.],
[20., 20., 20., 20., 20., 20., 20., 20.]],
[[10., 10., 5., 0., 0., 5., 10., 10.], [10., 10., 5., 0., 0., 5., 10., 10.],
[10., 10., 5., 0., 0., 5., 10., 10.], [10., 10., 5., 0., 0., 5., 10., 10.],
[10., 10., 5., 0., 0., 5., 10., 10.]]])
# Act
thickness = cp.thickness(faulted_grid)
# Assert
np.testing.assert_array_almost_equal(thickness, expected_thickness)
def test_thickness_blank_property_collection(basic_regular_grid: Grid):
# Arrange
property_collection = gpc.GridPropertyCollection()
# Act
thickness = cp.thickness(basic_regular_grid, property_collection = property_collection)
# Assert
assert thickness is None
def test_thickness_property_collection(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
extent = grid.extent_kji
property_collection = grid.property_collection
thickness_array = np.random.random(extent)
property_collection.add_cached_array_to_imported_list(thickness_array,
'test data',
'DZ',
False,
uom = grid.z_units(),
property_kind = 'cell length',
facet_type = 'direction',
indexable_element = 'cells',
facet = 'K')
property_collection.write_hdf5_for_imported_list()
property_collection.create_xml_for_imported_list_and_add_parts_to_model()
if hasattr(grid, 'array_thickness'):
delattr(grid, 'array_thickness')
# Act
thickness = cp.thickness(grid, property_collection = property_collection)
# Assert
np.testing.assert_array_almost_equal(thickness, thickness_array)
def test_thickness_multiple_property_collection(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
extent = grid.extent_kji
property_collection = grid.property_collection
thickness_array_gross = np.random.random(extent)
property_collection.add_cached_array_to_imported_list(thickness_array_gross,
'test data',
'DZ',
False,
uom = grid.z_units(),
property_kind = 'thickness',
facet_type = 'netgross',
indexable_element = 'cells',
facet = 'gross')
thickness_array_net = np.random.random(extent) / 2
property_collection.add_cached_array_to_imported_list(thickness_array_net,
'test data',
'DZ',
False,
uom = grid.z_units(),
property_kind = 'thickness',
facet_type = 'netgross',
indexable_element = 'cells',
facet = 'net')
property_collection.write_hdf5_for_imported_list()
property_collection.create_xml_for_imported_list_and_add_parts_to_model()
if hasattr(grid, 'array_thickness'):
delattr(grid, 'array_thickness')
# Act
thickness = cp.thickness(grid, property_collection = property_collection)
# Assert
np.testing.assert_array_almost_equal(thickness, thickness_array_gross)
def test_thickness_from_points(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
if hasattr(grid, 'array_thickness'):
delattr(grid, 'array_thickness')
if hasattr(grid, 'property_collection'):
delattr(grid, 'property_collection')
# Act
thickness = cp.thickness(grid)
# Assert
np.testing.assert_array_almost_equal(thickness, 20.0)
def test_volume_array_volume_already_set(basic_regular_grid: Grid):
# Arrange
extent = basic_regular_grid.extent_kji
array_volume = np.random.random(extent)
basic_regular_grid.array_volume = array_volume # type: ignore
# Act
volume = cp.volume(basic_regular_grid)
# Assert
np.testing.assert_array_almost_equal(volume, array_volume)
def test_volume_array_volume_already_set_cell_kji0(basic_regular_grid: Grid):
# Arrange
extent = basic_regular_grid.extent_kji
array_volume = np.random.random(extent)
basic_regular_grid.array_volume = array_volume # type: ignore
cell_kji0 = (1, 1, 1)
# Act
volume = cp.volume(basic_regular_grid, cell_kji0 = cell_kji0)
# Assert
assert volume == array_volume[cell_kji0]
def test_volume_faulted_grid(faulted_grid: Grid):
# Arrange
expected_volume = np.array([[[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.]],
[[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.],
[200000., 200000., 200000., 200000., 200000., 200000., 200000., 200000.]],
[[100000., 100000., 50000., 0., 0., 50000., 100000., 100000.],
[100000., 100000., 50000., 0., 0., 50000., 100000., 100000.],
[100000., 100000., 50000., 0., 0., 50000., 100000., 100000.],
[100000., 100000., 50000., 0., 0., 50000., 100000., 100000.],
[100000., 100000., 50000., 0., 0., 50000., 100000., 100000.]]])
# Act
volume = cp.volume(faulted_grid)
# Assert
np.testing.assert_array_almost_equal(volume, expected_volume)
def test_volume_blank_property_collection(basic_regular_grid: Grid):
# Arrange
property_collection = gpc.GridPropertyCollection()
# Act
volume = cp.volume(basic_regular_grid, property_collection = property_collection)
# Assert
assert volume is None
def test_volume_property_collection(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
extent = grid.extent_kji
property_collection = grid.property_collection
volume_array = np.random.random(extent)
property_collection.add_cached_array_to_imported_list(volume_array,
'test data',
'DZ',
property_kind = 'rock volume')
property_collection.write_hdf5_for_imported_list()
property_collection.create_xml_for_imported_list_and_add_parts_to_model()
if hasattr(grid, 'array_volume'):
delattr(grid, 'array_volume')
# Act
volume = cp.volume(grid, property_collection = property_collection)
# Assert
np.testing.assert_array_almost_equal(volume, volume_array)
def test_volume_multiple_property_collection(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
extent = grid.extent_kji
property_collection = grid.property_collection
volume_array_gross = np.random.random(extent)
property_collection.add_cached_array_to_imported_list(volume_array_gross,
'test data',
'DZ',
property_kind = 'rock volume',
facet_type = 'netgross',
facet = 'gross')
volume_array_net = np.random.random(extent) / 2
property_collection.add_cached_array_to_imported_list(volume_array_net,
'test data',
'DZ',
property_kind = 'rock volume',
facet_type = 'netgross',
facet = 'net')
property_collection.write_hdf5_for_imported_list()
property_collection.create_xml_for_imported_list_and_add_parts_to_model()
if hasattr(grid, 'array_volume'):
delattr(grid, 'array_volume')
# Act
volume = cp.volume(grid, property_collection = property_collection)
# Assert
np.testing.assert_array_almost_equal(volume, volume_array_gross)
def test_volume_from_points(example_model_with_properties: Model):
# Arrange
grid = example_model_with_properties.grid()
if hasattr(grid, 'array_volume'):
delattr(grid, 'array_thickness')
if hasattr(grid, 'property_volume'):
delattr(grid, 'property_collection')
# Act
volume = cp.volume(grid)
# Assert
np.testing.assert_array_almost_equal(volume, 100000.0)
def test_cell_inactive_already_set(basic_regular_grid: Grid):
# Arrange
extent = basic_regular_grid.extent_kji
inactive = np.random.choice([True, False], extent)
basic_regular_grid.inactive = inactive # type: ignore
cell_kji0 = (1, 1, 1)
# Act
cell_inactive = cp.cell_inactive(basic_regular_grid, cell_kji0 = cell_kji0)
# Assert
assert cell_inactive == inactive[cell_kji0]
def test_cell_inactive_extract_inactive_mask(basic_regular_grid: Grid):
# Arrange
extent = tuple(basic_regular_grid.extent_kji)
# Act & Assert
for x, y, z in np.ndindex(extent):
cell = (x, y, z)
cell_inactive = cp.cell_inactive(basic_regular_grid, cell_kji0 = cell)
assert cell_inactive is not True
@pytest.mark.parametrize("dxyz", [(100.0, 50.0, 20.0), (40.0, 60.0, 30.0), (72.1, 28.7, 84.6)])
def test_interface_length(model_test: Model, dxyz):
# Arrange
grid = grr.RegularGrid(model_test, extent_kji = (2, 2, 2), dxyz = dxyz, as_irregular_grid = True)
cell_kji0 = (1, 1, 1)
# Act & Assert
for axis in range(3):
interface_length = cp.interface_length(grid, cell_kji0 = cell_kji0, axis = axis)
assert interface_length == dxyz[2 - axis]
@pytest.mark.parametrize("dxyz", [(100.0, 50.0, 20.0), (40.0, 60.0, 30.0), (72.1, 28.7, 84.6)])
def test_interface_lengths_kji(model_test: Model, dxyz):
# Arrange
grid = grr.RegularGrid(model_test, extent_kji = (2, 2, 2), dxyz = dxyz, as_irregular_grid = True)
cell_kji0 = (1, 1, 1)
# Act
interface_length = cp.interface_lengths_kji(grid, cell_kji0 = cell_kji0)
# Assert
np.testing.assert_array_almost_equal(interface_length, dxyz[::-1])
# File: code/dataprocess/unzip.py
# Repo: chenyangjun45/Mutimode-language-generation (license: MIT)

import os
from my_math import add_numbers
# print(add_numbers(10,11)) | 17.2 | 31 | 0.77907 | 15 | 86 | 4.266667 | 0.866667 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.139535 | 86 | 5 | 32 | 17.2 | 0.810811 | 0.55814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d76018f94034c229916255e70750289c18eea0c | 86 | py | Python | rackio_AI/readers/__init__.py | JesusDBS/RackioAI | 01bcb0c06e73ae6f3ed0bdcf25ce3328456d6786 | [
"MIT"
] | null | null | null | rackio_AI/readers/__init__.py | JesusDBS/RackioAI | 01bcb0c06e73ae6f3ed0bdcf25ce3328456d6786 | [
"MIT"
] | null | null | null | rackio_AI/readers/__init__.py | JesusDBS/RackioAI | 01bcb0c06e73ae6f3ed0bdcf25ce3328456d6786 | [
"MIT"
] | 1 | 2021-05-19T22:32:44.000Z | 2021-05-19T22:32:44.000Z | from .readers_core import *
from .tpl import *
from ._csv_ import *
from .exl import * | 21.5 | 27 | 0.732558 | 13 | 86 | 4.615385 | 0.538462 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174419 | 86 | 4 | 28 | 21.5 | 0.84507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d54416d18ffbe3b86a82941ddbc47f68c157f8b2 | 2,450 | py | Python | code/dataprocess/unzip.py | chenyangjun45/Mutimode-language-generation | e8fa0379768e2a1cb7dca70eceeac334b605a4e8 | [
"MIT"
] | 5 | 2020-10-22T01:25:47.000Z | 2020-12-21T10:38:46.000Z | code/dataprocess/unzip.py | woyaonidsh/Mutimode | 42cbcddb472f0f162ff546ee1107ee26b5c5e47e | [
"MIT"
] | 1 | 2021-04-15T02:35:48.000Z | 2021-04-15T13:17:48.000Z | code/dataprocess/unzip.py | woyaonidsh/Mutimode | 42cbcddb472f0f162ff546ee1107ee26b5c5e47e | [
"MIT"
] | 1 | 2021-04-14T12:13:58.000Z | 2021-04-14T12:13:58.000Z | import os
import zipfile
text_path = '../data/text/'
image_path = '../data/image/'
annotation_path = '../data/image/annotations/'
def unzip_text(filepath=text_path):
datasets = ['val.zip', 'val_sentence.zip', 'val_parents.zip']
jishu = 0
for data in datasets:
try:
file = zipfile.ZipFile(filepath + data)
if jishu == 0:
dirname = data.replace('.zip', '.txt')
else:
dirname = data.replace('.zip', '.json')
# If the target with the same name as the archive already exists, print a message and skip
if os.path.exists(filepath + dirname):
print(f'{os.path.realpath(filepath + dirname)} dir has already existed', '\n')
jishu += 1
else:
file.extractall(filepath)
file.close()
print('The ' + os.path.realpath(filepath + dirname) + ' unzip successfully', '\n')
jishu += 1
except Exception:
print(f'{os.path.realpath(filepath + data)} unzip fail', '\n')
jishu += 1
def unzip_image(filepath=image_path):
datasets = ['new_val2017.zip']
for data in datasets:
try:
file = zipfile.ZipFile(filepath + data)
dirname = data.replace('.zip', '')
# If the target with the same name as the archive already exists, print a message and skip
if os.path.exists(filepath + dirname):
print(f'{os.path.realpath(filepath + dirname)} dir has already existed', '\n')
else:
file.extractall(filepath)
file.close()
print('The ' + os.path.realpath(filepath + dirname) + ' unzip successfully', '\n')
except Exception:
print(f'{os.path.realpath(filepath + data)} unzip fail', '\n')
def unzip_annotation(filepath=annotation_path):
datasets = ['processed_captions_val2017.zip', 'processed_captions_train2017.zip']
for data in datasets:
try:
file = zipfile.ZipFile(filepath + data)
dirname = data.replace('.zip', '.json')
# If the target with the same name as the archive already exists, print a message and skip
if os.path.exists(filepath + dirname):
print(f'{os.path.realpath(filepath + dirname)} dir has already existed', '\n')
else:
file.extractall(filepath)
file.close()
                print(f'{os.path.realpath(filepath + dirname)} unzipped successfully', '\n')
        except Exception:
            print(f'{os.path.realpath(filepath + data)} failed to unzip', '\n')
| 37.692308 | 98 | 0.546531 | 259 | 2,450 | 5.108108 | 0.200772 | 0.054422 | 0.095238 | 0.14966 | 0.7226 | 0.7226 | 0.7226 | 0.7226 | 0.7226 | 0.7226 | 0 | 0.010186 | 0.318776 | 2,450 | 64 | 99 | 38.28125 | 0.782504 | 0.026531 | 0 | 0.722222 | 0 | 0 | 0.255775 | 0.102478 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.037037 | 0 | 0.092593 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
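The three helpers above repeat one pattern: open the archive, skip if the extracted target already exists, otherwise extract and report. A minimal generic sketch of that pattern (the `safe_unzip` name and its return value are assumptions for illustration, not part of the original module):

```python
import os
import zipfile


def safe_unzip(archive_path, dest_dir, target_name):
    """Extract archive_path into dest_dir unless target_name already exists there."""
    target = os.path.join(dest_dir, target_name)
    if os.path.exists(target):
        print(f'{os.path.realpath(target)} directory already exists')
        return False
    try:
        with zipfile.ZipFile(archive_path) as zf:
            zf.extractall(dest_dir)
        print(f'{os.path.realpath(target)} unzipped successfully')
        return True
    except (zipfile.BadZipFile, OSError) as exc:
        print(f'{os.path.realpath(archive_path)} failed to unzip: {exc}')
        return False
```

Catching `(zipfile.BadZipFile, OSError)` instead of a bare `except:` keeps keyboard interrupts and genuine programming errors visible.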
d592cd8f7d4efae974512be75f2d4566d4127b20 | 4,662 | py | Python | rkn/acrkn/Decoder.py | rohits5496/action-conditional-rkn | 91ed35ccb0aeb410ed817e0c30b2a31cb264ac47 | [
"MIT"
] | 3 | 2021-10-15T17:44:10.000Z | 2022-03-04T17:00:26.000Z | rkn/acrkn/Decoder.py | rohits5496/action-conditional-rkn | 91ed35ccb0aeb410ed817e0c30b2a31cb264ac47 | [
"MIT"
] | null | null | null | rkn/acrkn/Decoder.py | rohits5496/action-conditional-rkn | 91ed35ccb0aeb410ed817e0c30b2a31cb264ac47 | [
"MIT"
] | 3 | 2021-07-04T05:47:46.000Z | 2022-03-04T17:00:15.000Z | import torch
from typing import Tuple, Iterable
nn = torch.nn
def elup1(x: torch.Tensor) -> torch.Tensor:
return torch.exp(x).where(x < 0.0, x + 1.0)
class SplitDiagGaussianDecoder(nn.Module):
def __init__(self, out_dim: int):
""" Decoder for low dimensional outputs as described in the paper. This one is "split", i.e., there are
completely separate networks mapping from latent mean to output mean and from latent cov to output var
        :param out_dim: dimensionality of target data (assumed to be a vector, images not supported by this decoder)
"""
super(SplitDiagGaussianDecoder, self).__init__()
self._out_dim = out_dim
self._hidden_layers_mean, num_last_hidden_mean = self._build_hidden_layers_mean()
assert isinstance(self._hidden_layers_mean, nn.ModuleList), "_build_hidden_layers_means needs to return a " \
"torch.nn.ModuleList or else the hidden weights " \
"are not found by the optimizer"
self._hidden_layers_var, num_last_hidden_var = self._build_hidden_layers_var()
assert isinstance(self._hidden_layers_var, nn.ModuleList), "_build_hidden_layers_var needs to return a " \
"torch.nn.ModuleList or else the hidden weights " \
"are not found by the optimizer"
self._out_layer_mean = nn.Linear(in_features=num_last_hidden_mean, out_features=out_dim)
self._out_layer_var = nn.Linear(in_features=num_last_hidden_var, out_features=out_dim)
def _build_hidden_layers_mean(self) -> Tuple[nn.ModuleList, int]:
"""
Builds hidden layers for mean decoder
:return: nn.ModuleList of hidden Layers, size of output of last layer
"""
raise NotImplementedError
def _build_hidden_layers_var(self) -> Tuple[nn.ModuleList, int]:
"""
Builds hidden layers for variance decoder
:return: nn.ModuleList of hidden Layers, size of output of last layer
"""
raise NotImplementedError
def forward(self, latent_mean: torch.Tensor, latent_cov: Iterable[torch.Tensor]) \
-> Tuple[torch.Tensor, torch.Tensor]:
""" forward pass of decoder
:param latent_mean:
:param latent_cov:
:return: output mean and variance
"""
h_mean = latent_mean
for layer in self._hidden_layers_mean:
h_mean = layer(h_mean)
mean = self._out_layer_mean(h_mean)
h_var = latent_cov
for layer in self._hidden_layers_var:
h_var = layer(h_var)
log_var = self._out_layer_var(h_var)
var = elup1(log_var)
return mean, var
class SimpleDecoder(nn.Module):
def __init__(self, out_dim: int):
""" Decoder for low dimensional outputs as described in the paper. This one is "split", i.e., there are
completely separate networks mapping from latent mean to output mean and from latent cov to output var
:param lod: latent observation dim (used to compute input sizes)
:param out_dim: dimensionality of target data (assumed to be a vector, images not supported by this decoder)
"""
super(SimpleDecoder, self).__init__()
self._out_dim = out_dim
self._hidden_layers_mean, num_last_hidden_mean = self._build_hidden_layers_mean()
assert isinstance(self._hidden_layers_mean, nn.ModuleList), "_build_hidden_layers_means needs to return a " \
"torch.nn.ModuleList or else the hidden weights " \
"are not found by the optimizer"
self._out_layer_mean = nn.Linear(in_features=num_last_hidden_mean, out_features=out_dim)
def _build_hidden_layers_mean(self) -> Tuple[nn.ModuleList, int]:
"""
Builds hidden layers for mean decoder
:return: nn.ModuleList of hidden Layers, size of output of last layer
"""
raise NotImplementedError
def forward(self, input: torch.Tensor) \
-> Tuple[torch.Tensor]:
""" forward pass of decoder
:param input:
:return: output mean
"""
h_mean = input
for layer in self._hidden_layers_mean:
h_mean = layer(h_mean)
mean = self._out_layer_mean(h_mean)
return mean | 43.570093 | 119 | 0.62248 | 587 | 4,662 | 4.681431 | 0.172061 | 0.104803 | 0.058224 | 0.043668 | 0.80786 | 0.770015 | 0.760553 | 0.723071 | 0.723071 | 0.706696 | 0 | 0.001854 | 0.305877 | 4,662 | 107 | 120 | 43.570093 | 0.847342 | 0.265337 | 0 | 0.509434 | 0 | 0 | 0.114322 | 0.023869 | 0 | 0 | 0 | 0 | 0.056604 | 1 | 0.150943 | false | 0 | 0.037736 | 0.018868 | 0.283019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
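`elup1` maps the decoder's raw log-variance output to a strictly positive, continuous value: `exp(x)` below zero and `x + 1` above (note the torch version uses `Tensor.where(cond, other)`, which keeps `self` where the condition holds). A NumPy sketch of the same function, for checking the semantics without torch:

```python
import numpy as np


def elup1_np(x):
    # exp(x) for x < 0, x + 1 for x >= 0; continuous at 0 and always > 0,
    # so it is safe to use as a variance
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, np.exp(x), x + 1.0)
```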
6379da1ada3a56665e1522ad571ef818ad2bee0b | 184 | py | Python | openpeerpower/components/websocket_api/error.py | pcaston/Open-Peer-Power | 81805d455c548e0f86b0f7fedc793b588b2afdfd | [
"Apache-2.0"
] | null | null | null | openpeerpower/components/websocket_api/error.py | pcaston/Open-Peer-Power | 81805d455c548e0f86b0f7fedc793b588b2afdfd | [
"Apache-2.0"
] | null | null | null | openpeerpower/components/websocket_api/error.py | pcaston/Open-Peer-Power | 81805d455c548e0f86b0f7fedc793b588b2afdfd | [
"Apache-2.0"
] | 1 | 2019-04-24T14:10:08.000Z | 2019-04-24T14:10:08.000Z | """WebSocket API related errors."""
from openpeerpower.exceptions import OpenPeerPowerError
class Disconnect(OpenPeerPowerError):
"""Disconnect the current session."""
pass
| 20.444444 | 55 | 0.76087 | 17 | 184 | 8.235294 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141304 | 184 | 8 | 56 | 23 | 0.886076 | 0.331522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
637e3e115488c2ae453cc1ece8e9704d42e872a8 | 213 | py | Python | pykotor/resource/formats/rim/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | 1 | 2022-02-21T15:17:28.000Z | 2022-02-21T15:17:28.000Z | pykotor/resource/formats/rim/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | 1 | 2022-03-12T16:06:23.000Z | 2022-03-12T16:06:23.000Z | pykotor/resource/formats/rim/__init__.py | NickHugi/PyKotor | cab1089f8a8a135861bef45340203718d39f5e1f | [
"MIT"
] | null | null | null | from pykotor.resource.formats.rim.data import RIM, RIMResource
from pykotor.resource.formats.rim.io_binary import RIMBinaryReader, RIMBinaryWriter
from pykotor.resource.formats.rim.auto import load_rim, write_rim
| 53.25 | 83 | 0.859155 | 30 | 213 | 6 | 0.5 | 0.183333 | 0.316667 | 0.433333 | 0.483333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070423 | 213 | 3 | 84 | 71 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63963559ced24d8a29a238c3ccbed5856379412a | 165 | py | Python | src/DataReader.py | PavelStupnitski/Student-Rating | 41e038e6ce4a1ece8dffbd9373a61b1009801aa2 | [
"Apache-2.0"
] | null | null | null | src/DataReader.py | PavelStupnitski/Student-Rating | 41e038e6ce4a1ece8dffbd9373a61b1009801aa2 | [
"Apache-2.0"
] | 1 | 2021-12-12T16:00:29.000Z | 2021-12-12T16:00:29.000Z | src/DataReader.py | PavelStupnitski/Student-Rating | 41e038e6ce4a1ece8dffbd9373a61b1009801aa2 | [
"Apache-2.0"
] | null | null | null | from Types import DataType
from abc import ABC, abstractmethod
class DataReader(ABC):
@abstractmethod
def read(self, path: str) -> DataType:
pass
| 16.5 | 42 | 0.69697 | 20 | 165 | 5.75 | 0.7 | 0.295652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230303 | 165 | 9 | 43 | 18.333333 | 0.905512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
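`DataReader` only declares the interface; concrete readers subclass it and implement `read`. A hypothetical example (the `TextFileReader` class is an assumption, and `str` stands in for the project's `DataType` alias so the snippet is self-contained):

```python
from abc import ABC, abstractmethod


class DataReader(ABC):
    @abstractmethod
    def read(self, path: str) -> str:
        ...


class TextFileReader(DataReader):
    def read(self, path: str) -> str:
        # Load the whole file as text; a real reader would parse it into DataType
        with open(path, encoding='utf-8') as f:
            return f.read()
```

Because `read` is declared with `@abstractmethod`, instantiating `DataReader` directly raises `TypeError`, which enforces the interface at construction time.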
6398c5430b4e4bdc693e75b25c075d51bce6a790 | 1,057 | py | Python | units/volume/teaspoons.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | units/volume/teaspoons.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | units/volume/teaspoons.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | # <auto-generated>
# This code was generated by the UnitCodeGenerator tool
#
# Changes to this file will be lost if the code is regenerated
# </auto-generated>
def to_millilitres(value):
return value * 5.9193904674479161344
def to_litres(value):
return value * 0.005919390467447916134
def to_kilolitres(value):
return value * 0.000005919390467447916
def to_tablespoons(value):
return value / 3.0
def to_quarts(value):
return value / 192.0
def to_pints(value):
return value / 96.0
def to_gallons(value):
return value / 768.0
def to_fluid_ounces(value):
return value / 4.8
def to_u_s_teaspoons(value):
return value / 0.83267384046639071232
def to_u_s_tablespoons(value):
return value / 2.4980215213991718912
def to_u_s_quarts(value):
return value / 159.87337736954701824
def to_u_s_pints(value):
return value / 79.936688684773507072
def to_u_s_gallons(value):
return value / 639.49350947818807296
def to_u_s_fluid_ounces(value):
return value / 4.9960430427983437824
def to_u_s_cups(value):
return value / 39.968344342386753536
| 27.815789 | 62 | 0.776727 | 160 | 1,057 | 4.9375 | 0.35625 | 0.094937 | 0.303797 | 0.062025 | 0.070886 | 0.070886 | 0 | 0 | 0 | 0 | 0 | 0.242291 | 0.140965 | 1,057 | 37 | 63 | 28.567568 | 0.627753 | 0.140965 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
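The generated constants should be mutually consistent: three teaspoons make a tablespoon, and the millilitre and litre factors must differ by exactly 10^3. A quick consistency check (the two functions are redefined here so the snippet stands alone):

```python
def to_tablespoons(value):
    return value / 3.0


def to_millilitres(value):
    return value * 5.9193904674479161344


def to_litres(value):
    return value * 0.005919390467447916134


# 3 teaspoons == 1 tablespoon, and ml/l factors differ by 1000
print(to_tablespoons(3.0))
print(to_millilitres(1.0) / to_litres(1.0))
```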
8936321a109f98e668a6494d8b1440bec2bb7347 | 10,522 | py | Python | metrics.py | Chen-Yifan/DEM_building_segmentation | 1e9a41e87ec0ab1777a65146c5b31d88938480b7 | [
"MIT"
] | null | null | null | metrics.py | Chen-Yifan/DEM_building_segmentation | 1e9a41e87ec0ab1777a65146c5b31d88938480b7 | [
"MIT"
] | null | null | null | metrics.py | Chen-Yifan/DEM_building_segmentation | 1e9a41e87ec0ab1777a65146c5b31d88938480b7 | [
"MIT"
] | null | null | null | import keras.backend as K
import numpy as np
import os
import glob
import skimage.io as io
import tensorflow as tf
import cv2
from itertools import product
from skimage.morphology import skeletonize
def centerline_acc(y_true, y_pred):
"""
acc = (#( y_true_center & y_pred ) / #y_true_center + #( y_pred_center & y_true ) / #y_pred_center) / 2
    Average of (fraction of the ground-truth centerline covered by the prediction)
    and (fraction of the predicted centerline that lies inside the ground-truth buffer).
    """
smooth = 0.01
y_pred = (y_pred >= 0.5).astype('uint8')
y_true = y_true.astype('uint8')
n = len(y_true)
acc = 0
for i in range(n):
y_pred_curr = np.squeeze(y_pred[i])
y_true_curr = np.squeeze(y_true[i])
y_true_center = skeletonize(y_true_curr).astype('uint8')
tmp = np.sum(y_true_center&y_pred_curr)/(np.sum(y_true_center) + smooth)
y_pred_center = skeletonize(y_pred_curr).astype('uint8')
tmp2 = np.sum(y_pred_center&y_true_curr)/(np.sum(y_pred_center) + smooth)
# if(np.sum(y_true_center)<10 or np.sum(y_pred_center)<10): # if there is too little features in an image, ignore it
# n-=1
# continue
acc += (tmp + tmp2)/2
return acc/n
def Mean_IoU_cl(cl=2, shape=128):
def Mean_IOU(y_true, y_pred):
s = K.shape(y_true)
        # reshape so that the w and h dims are multiplied together
y_true_reshaped = tf.reshape(tensor=y_true, shape=(-1, shape*shape, cl))
y_pred_reshaped = tf.reshape(tensor=y_pred, shape=(-1, shape*shape, cl))
# correctly classified
clf_pred = K.one_hot( K.argmax(y_pred_reshaped), num_classes = s[-1])
print(y_true_reshaped.dtype, y_pred_reshaped.dtype, clf_pred.dtype)
print(np.shape(clf_pred), np.shape(y_true_reshaped), np.shape(y_pred_reshaped))
equal_entries = K.cast(K.equal(clf_pred,y_true_reshaped), dtype='float32') * y_true_reshaped
# IoU for labeled class
# y_true_reshaped = tf.reshape(tensor=y_true, shape=(-1, 128*128, 2))
# y_pred_reshaped = tf.reshape(tensor=y_pred, shape=(-1, 128*128, 2))
# y_true_reshaped = K.cast(K.argmax(y_true_reshaped),dtype='float32')
# clf_pred = K.cast(K.argmax(y_pred_reshaped),dtype='float32')
# equal_entries = K.cast(K.equal(clf_pred,y_true_reshaped), dtype='float32') * y_true_reshaped
intersection = K.sum(equal_entries, axis=1)
union_per_class = K.sum(y_true_reshaped,axis=1) + K.sum(clf_pred,axis=1)
iou = intersection / (union_per_class - intersection)
        iou_mask = tf.math.is_finite(iou)
        iou_masked = tf.boolean_mask(iou, iou_mask)
return K.mean( iou_masked )
return Mean_IOU
def Mean_IOU_label(y_true, y_pred, shape=128):
s = K.shape(y_true)
# reshape such that w and h dim are multiplied together
#MeanIoU all classes
# y_true_reshaped = tf.reshape(tensor=y_true, shape=(-1, shape*shape, 2))
# y_pred_reshaped = tf.reshape(tensor=y_pred, shape=(-1, shape*shape, 2))
# # correctly classified
# clf_pred = K.one_hot( K.argmax(y_pred_reshaped), num_classes = s[-1])
# print(y_true_reshaped.dtype, y_pred_reshaped.dtype, clf_pred.dtype)
# print(np.shape(clf_pred), np.shape(y_true_reshaped), np.shape(y_pred_reshaped))
# equal_entries = K.cast(K.equal(clf_pred,y_true_reshaped), dtype='float32') * y_true_reshaped
# IoU for labeled class
    y_true_reshaped = tf.reshape(tensor=y_true, shape=(-1, shape*shape, 2))
    y_pred_reshaped = tf.reshape(tensor=y_pred, shape=(-1, shape*shape, 2))
y_true_reshaped = K.cast(K.argmax(y_true_reshaped),dtype='float32')
clf_pred = K.cast(K.argmax(y_pred_reshaped),dtype='float32')
equal_entries = K.cast(K.equal(clf_pred,y_true_reshaped), dtype='float32') * y_true_reshaped
intersection = K.sum(equal_entries, axis=1)
union_per_class = K.sum(y_true_reshaped,axis=1) + K.sum(clf_pred,axis=1)
iou = intersection / (union_per_class - intersection)
    iou_mask = tf.math.is_finite(iou)
    iou_masked = tf.boolean_mask(iou, iou_mask)
return K.mean( iou_masked )
def precision_1(y_true, y_pred):
"""Precision metric.
precision = TP/(TP + FP)
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
y_pred = K.argmax(y_pred)
y_true = K.argmax(y_true)
# TP = tf.compat.v2.math.count_nonzero(y_pred * y_true)
TP = tf.math.count_nonzero(y_pred * y_true)
FP = tf.math.count_nonzero(y_pred*(1-y_true))
return TP/(TP + FP)
def precision_0(y_true, y_pred):
"""Precision metric.
precision = TP/(TP + FP)
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
y_pred = 1-K.argmax(y_pred)
y_true = 1-K.argmax(y_true)
# TP = tf.compat.v2.math.count_nonzero(y_pred * y_true)
TP = tf.math.count_nonzero(y_pred * y_true)
FP = tf.math.count_nonzero(y_pred*(1-y_true))
return TP/(TP + FP)
def recall_1(y_true, y_pred):
"""Recall metric.
recall = TP/(TP+FN)
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
y_pred = K.argmax(y_pred)
y_true = K.argmax(y_true)
# TP = tf.compat.v2.math.count_nonzero(y_pred * y_true)
TP = tf.math.count_nonzero(y_pred * y_true)
FN = tf.math.count_nonzero((1-y_pred)*y_true)
return TP/(TP + FN)
def recall_0(y_true, y_pred):
"""Recall metric.
recall = TP/(TP+FN)
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
y_pred = 1-K.argmax(y_pred)
y_true = 1-K.argmax(y_true)
# TP = tf.compat.v2.math.count_nonzero(y_pred * y_true)
TP = tf.math.count_nonzero(y_pred * y_true)
FN = tf.math.count_nonzero((1-y_pred)*y_true)
return TP/(TP + FN)
def f1score_1(y_true, y_pred):
pre = precision_1(y_true, y_pred)
rec = recall_1(y_true, y_pred)
denominator = (pre + rec)
numerator = (pre * rec)
result = (numerator/denominator)*2
return result
def f1score_0(y_true, y_pred):
pre = precision_0(y_true, y_pred)
rec = recall_0(y_true, y_pred)
denominator = (pre + rec)
numerator = (pre * rec)
result = (numerator/denominator)*2
return result
def FP(y_true, y_pred):
y_pred = K.argmax(y_pred)
y_true = K.argmax(y_true)
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
if(FP+FN == 0):
return 0
return FP/(FP+FN)
def FN(y_true, y_pred):
y_pred = K.argmax(y_pred)
y_true = K.argmax(y_true)
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
if(FP+FN == 0):
return 0
return FN/(FP + FN)
def dice_coefficient(threshold=0.5): # class1 and class0 actually the same
def dice(y_true, y_pred):
        # dice = 2*TP/(2*TP+FP+FN)
if(y_pred.shape[-1]==2): # one-hot
y_pred = K.cast(K.argmax(y_pred,axis=-1),'uint8')
elif(y_pred.shape[-1]==1):
y_pred = K.cast(K.greater(K.squeeze(y_pred,axis=-1),threshold),'uint8')
y_true = K.cast(K.squeeze(y_true,axis=-1),'uint8')
TP = tf.math.count_nonzero(y_pred * y_true)
TN = tf.math.count_nonzero((1-y_pred)*(1-y_true))
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
acc1 = (2*TP)/(2*TP+FN+FP)
return acc1
return dice
def iou_label(threshold=0.5):
def iou(y_true, y_pred):
'''
calculate iou for label class
IOU = true_positive / (true_positive + false_positive + false_negative)
'''
print(y_true.shape,y_pred.shape)
if(y_pred.shape[-1]==2): # one-hot
y_pred = K.cast(K.argmax(y_pred,axis=-1),'uint8')
elif(y_pred.shape[-1]==1):
y_pred = K.cast(K.greater(K.squeeze(y_pred,axis=-1),threshold),'uint8')
y_true = K.cast(K.squeeze(y_true,axis=-1),'uint8')
TP = tf.math.count_nonzero(y_pred * y_true)
TN = tf.math.count_nonzero((1-y_pred)*(1-y_true))
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
return TP/(TP+FP+FN)
return iou
def iou_back(y_true, y_pred):
'''
calculate iou for background class
IOU = true_positive / (true_positive + false_positive + false_negative)
'''
y_pred = 1-K.argmax(y_pred)
y_true = 1-K.argmax(y_true)
# TP = tf.compat.v2.math.count_nonzero(y_pred * y_true)
TP = tf.math.count_nonzero(y_pred * y_true)
TN = tf.math.count_nonzero((1-y_pred)*(1-y_true))
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
return TP/(TP+FP+FN)
def accuracy(threshold=0.5):
def acc(y_true, y_pred):
'''calculate classification accuracy'''
if(y_pred.shape[-1]==2): # one-hot
y_pred = K.cast(K.argmax(y_pred,axis=-1),'uint8')
elif(y_pred.shape[-1]==1):
y_pred = K.cast(K.greater(K.squeeze(y_pred,axis=-1), threshold),'uint8')
y_true = K.cast(K.squeeze(y_true,axis=-1),'uint8')
TP = tf.math.count_nonzero(y_pred * y_true)
TN = tf.math.count_nonzero((1-y_pred)*(1-y_true))
FP = tf.math.count_nonzero(y_pred*(1-y_true))
FN = tf.math.count_nonzero((1-y_pred)*y_true)
result = (TP+TN)/(TP+TN+FP+FN)
return result
return acc
def recall_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
| 37.049296 | 124 | 0.649401 | 1,742 | 10,522 | 3.684271 | 0.097015 | 0.095824 | 0.046276 | 0.045185 | 0.809754 | 0.77345 | 0.742911 | 0.73512 | 0.731225 | 0.726706 | 0 | 0.023579 | 0.214028 | 10,522 | 283 | 125 | 37.180212 | 0.752479 | 0.26915 | 0 | 0.549708 | 0 | 0 | 0.012441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128655 | false | 0 | 0.052632 | 0 | 0.321637 | 0.017544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
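Every metric above reduces to the four confusion-matrix counts TP, TN, FP, FN: precision = TP/(TP+FP), recall = TP/(TP+FN), F1 is their harmonic mean, and IoU = TP/(TP+FP+FN). A framework-free sketch of those formulas (the `*_from_counts` names are assumptions, not part of the module):

```python
def precision_from_counts(tp, fp):
    # fraction of positive predictions that are correct
    return tp / (tp + fp) if tp + fp else 0.0


def recall_from_counts(tp, fn):
    # fraction of actual positives that are recovered
    return tp / (tp + fn) if tp + fn else 0.0


def f1_from_counts(tp, fp, fn):
    # harmonic mean of precision and recall
    p = precision_from_counts(tp, fp)
    r = recall_from_counts(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0


def iou_from_counts(tp, fp, fn):
    # intersection over union for the positive class
    return tp / (tp + fp + fn) if tp + fp + fn else 0.0
```

The `if ... else 0.0` guards mirror the zero checks in `FP`/`FN` above and avoid the NaN filtering that the Keras versions handle with `tf.boolean_mask`.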
896865224d961a4828bb69bec7ae530d89805e1b | 245 | py | Python | materials/class_and_instance.py | vyahello/python-classes-cheetsheet | c5c5f0e87a0988380345601b1209865f0b4d8f24 | [
"Apache-2.0"
] | null | null | null | materials/class_and_instance.py | vyahello/python-classes-cheetsheet | c5c5f0e87a0988380345601b1209865f0b4d8f24 | [
"Apache-2.0"
] | null | null | null | materials/class_and_instance.py | vyahello/python-classes-cheetsheet | c5c5f0e87a0988380345601b1209865f0b4d8f24 | [
"Apache-2.0"
] | null | null | null | class ClassName:
def method(self):
pass
print(dir(ClassName))
print(ClassName)
print(type(ClassName))
print(ClassName())
print(type(ClassName()))
print(isinstance(ClassName(), ClassName))
print(isinstance(ClassName, ClassName))
| 15.3125 | 41 | 0.726531 | 27 | 245 | 6.592593 | 0.37037 | 0.47191 | 0.258427 | 0.314607 | 0.780899 | 0.438202 | 0.438202 | 0 | 0 | 0 | 0 | 0 | 0.126531 | 245 | 15 | 42 | 16.333333 | 0.831776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.1 | 0 | 0 | 0.2 | 0.7 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 6 |
8985f34694f27674e79472608651422a4a29bac0 | 15,020 | py | Python | sgtpy/vrmie_mixtures/a1sB_monomer.py | MatKie/SGTPy | 8e98d92fedd2b07d834e547e5154ec8f70d80728 | [
"MIT"
] | 12 | 2020-12-27T17:04:33.000Z | 2021-07-19T06:28:28.000Z | sgtpy/vrmie_mixtures/a1sB_monomer.py | MatKie/SGTPy | 8e98d92fedd2b07d834e547e5154ec8f70d80728 | [
"MIT"
] | 2 | 2021-05-15T14:27:57.000Z | 2021-08-19T15:42:24.000Z | sgtpy/vrmie_mixtures/a1sB_monomer.py | MatKie/SGTPy | 8e98d92fedd2b07d834e547e5154ec8f70d80728 | [
"MIT"
] | 5 | 2021-02-21T01:33:29.000Z | 2021-07-26T15:11:08.000Z | from __future__ import division, print_function, absolute_import
import numpy as np
from .a1s_monomer import a1s, da1s_dxhi00, d2a1s_dxhi00, d3a1s_dxhi00
from .B_monomer import B, dB_dxhi00, d2B_dxhi00, d3B_dxhi00
from .a1s_monomer import da1s_dx_dxhi00_dxxhi, da1s_dx_d2xhi00_dxxhi
from .B_monomer import dB_dx_dxhi00_dxxhi, dB_dx_d2xhi00_dxxhi
def a1sB(xhi00, xhix, xhix_vec, xm, Ilam, Jlam, cictes, a1vdw, a1vdw_cte):
a1 = a1s(xhi00, xhix_vec, xm, cictes, a1vdw)
b = B(xhi00, xhix, xm, Ilam, Jlam, a1vdw_cte)
return a1 + b
def da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, Ilam, Jlam, cictes, a1vdw,
a1vdw_cte, dxhix_dxhi00):
a1, da1 = da1s_dxhi00(xhi00, xhix_vec, xm, cictes, a1vdw, dxhix_dxhi00)
b, db = dB_dxhi00(xhi00, xhix, xm, Ilam, Jlam, a1vdw_cte, dxhix_dxhi00)
return a1 + b, da1 + db
def d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, Ilam, Jlam, cictes,
a1vdw, a1vdw_cte, dxhix_dxhi00):
a1, da1, d2a1 = d2a1s_dxhi00(xhi00, xhix_vec, xm, cictes, a1vdw,
dxhix_dxhi00)
b, db, d2b = d2B_dxhi00(xhi00, xhix, xm, Ilam, Jlam, a1vdw_cte,
dxhix_dxhi00)
return a1 + b, da1 + db, d2a1 + d2b
def d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, Ilam, Jlam, cictes, a1vdw,
a1vdw_cte, dxhix_dxhi00):
a1, da1, d2a1, d3a1 = d3a1s_dxhi00(xhi00, xhix_vec, xm, cictes, a1vdw,
dxhix_dxhi00)
b, db, d2b, d3b = d3B_dxhi00(xhi00, xhix, xm, Ilam, Jlam, a1vdw_cte,
dxhix_dxhi00)
return a1 + b, da1 + db, d2a1 + d2b, d3a1 + d3b
def da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_ij,
J_ij, cctesij, a1vdwij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00):
out = da1s_dx_dxhi00_dxxhi(xhi00, xhix_vec, xm, ms, cctesij, a1vdwij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
a1, da1, da1x, da1xxhi = out
out = dB_dx_dxhi00_dxxhi(xhi00, xhix, xm, ms, I_ij, J_ij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
b, db, dbx, dbxxhi = out
return a1+b, da1+db, da1x+dbx, da1xxhi+dbxxhi
def da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_ij,
J_ij, cctesij, a1vdwij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00):
out = da1s_dx_d2xhi00_dxxhi(xhi00, xhix_vec, xm, ms, cctesij, a1vdwij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
a1, da1, d2a1, da1x, da1xxhi = out
out = dB_dx_d2xhi00_dxxhi(xhi00, xhix, xm, ms, I_ij, J_ij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
b, db, d2b, dbx, dbxxhi = out
return a1+b, da1+db, d2a1+d2b, da1x+dbx, da1xxhi+dbxxhi
def a1sB_eval(xhi00, xhix, xhix_vec, xm, I_lambdasij, J_lambdasij, cctesij,
a1vdwij, a1vdw_cteij):
# laij, lrij, larij = lambdas
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_la, I_lr, I_2la, I_2lr, I_lar = I_lambdasij
J_la, J_lr, J_2la, J_2lr, J_lar = J_lambdasij
a1sb_a = a1sB(xhi00, xhix, xhix_vec, xm, I_la, J_la, cctes_laij,
a1vdw_laij, a1vdw_cteij)
a1sb_r = a1sB(xhi00, xhix, xhix_vec, xm, I_lr, J_lr, cctes_lrij,
a1vdw_lrij, a1vdw_cteij)
a1sb_2a = a1sB(xhi00, xhix, xhix_vec, xm, I_2la, J_2la, cctes_2laij,
a1vdw_2laij, a1vdw_cteij)
a1sb_2r = a1sB(xhi00, xhix, xhix_vec, xm, I_2lr, J_2lr, cctes_2lrij,
a1vdw_2lrij, a1vdw_cteij)
a1sb_ar = a1sB(xhi00, xhix, xhix_vec, xm, I_lar, J_lar, cctes_larij,
a1vdw_larij, a1vdw_cteij)
a1sb_a1 = np.array([a1sb_a, a1sb_r])
a1sb_a2 = np.array([a1sb_2a, a1sb_ar, a1sb_2r])
return a1sb_a1, a1sb_a2
def da1sB_dxhi00_eval(xhi00, xhix, xhix_vec, xm, I_lambdasij, J_lambdasij,
cctesij, a1vdwij, a1vdw_cteij, dxhix_dxhi00):
# laij, lrij, larij = lambdas
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_la, I_lr, I_2la, I_2lr, I_lar = I_lambdasij
J_la, J_lr, J_2la, J_2lr, J_lar = J_lambdasij
a1sb_a, da1sb_a = da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_la, J_la,
cctes_laij, a1vdw_laij, a1vdw_cteij,
dxhix_dxhi00)
a1sb_r, da1sb_r = da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lr, J_lr,
cctes_lrij, a1vdw_lrij, a1vdw_cteij,
dxhix_dxhi00)
a1sb_2a, da1sb_2a = da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2la,
J_2la, cctes_2laij, a1vdw_2laij,
a1vdw_cteij, dxhix_dxhi00)
a1sb_2r, da1sb_2r = da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2lr,
J_2lr, cctes_2lrij, a1vdw_2lrij,
a1vdw_cteij, dxhix_dxhi00)
a1sb_ar, da1sb_ar = da1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lar,
J_lar, cctes_larij, a1vdw_larij,
a1vdw_cteij, dxhix_dxhi00)
a1sb_a1 = np.array([[a1sb_a, a1sb_r],
[da1sb_a, da1sb_r]])
a1sb_a2 = np.array([[a1sb_2a, a1sb_ar, a1sb_2r],
[da1sb_2a, da1sb_ar, da1sb_2r]])
return a1sb_a1, a1sb_a2
def d2a1sB_dxhi00_eval(xhi00, xhix, xhix_vec, xm, I_lambdasij, J_lambdasij,
cctesij, a1vdwij, a1vdw_cteij, dxhix_dxhi00):
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_la, I_lr, I_2la, I_2lr, I_lar = I_lambdasij
J_la, J_lr, J_2la, J_2lr, J_lar = J_lambdasij
out = d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_la, J_la,
cctes_laij, a1vdw_laij, a1vdw_cteij, dxhix_dxhi00)
a1sb_a, da1sb_a, d2a1sb_a = out
out = d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lr, J_lr,
cctes_lrij, a1vdw_lrij, a1vdw_cteij, dxhix_dxhi00)
a1sb_r, da1sb_r, d2a1sb_r = out
out = d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2la, J_2la,
cctes_2laij, a1vdw_2laij, a1vdw_cteij, dxhix_dxhi00)
a1sb_2a, da1sb_2a, d2a1sb_2a = out
out = d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2lr, J_2lr,
cctes_2lrij, a1vdw_2lrij, a1vdw_cteij, dxhix_dxhi00)
a1sb_2r, da1sb_2r, d2a1sb_2r = out
out = d2a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lar, J_lar,
cctes_larij, a1vdw_larij, a1vdw_cteij, dxhix_dxhi00)
a1sb_ar, da1sb_ar, d2a1sb_ar = out
a1sb_a1 = np.array([[a1sb_a, a1sb_r],
[da1sb_a, da1sb_r],
[d2a1sb_a, d2a1sb_r]])
a1sb_a2 = np.array([[a1sb_2a, a1sb_ar, a1sb_2r],
[da1sb_2a, da1sb_ar, da1sb_2r],
[d2a1sb_2a, d2a1sb_ar, d2a1sb_2r]])
return a1sb_a1, a1sb_a2
def d3a1sB_dxhi00_eval(xhi00, xhix, xhix_vec, xm, I_lambdasij, J_lambdasij,
cctesij, a1vdwij, a1vdw_cteij, dxhix_dxhi00):
# laij, lrij, larij = lambdas
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_la, I_lr, I_2la, I_2lr, I_lar = I_lambdasij
J_la, J_lr, J_2la, J_2lr, J_lar = J_lambdasij
out = d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_la, J_la, cctes_laij,
a1vdw_laij, a1vdw_cteij, dxhix_dxhi00)
a1sb_a, da1sb_a, d2a1sb_a, d3a1sb_a = out
out = d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lr, J_lr, cctes_lrij,
a1vdw_lrij, a1vdw_cteij, dxhix_dxhi00)
a1sb_r, da1sb_r, d2a1sb_r, d3a1sb_r = out
out = d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2la, J_2la,
cctes_2laij, a1vdw_2laij, a1vdw_cteij, dxhix_dxhi00)
a1sb_2a, da1sb_2a, d2a1sb_2a, d3a1sb_2a = out
out = d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_2lr, J_2lr,
cctes_2lrij, a1vdw_2lrij, a1vdw_cteij, dxhix_dxhi00)
a1sb_2r, da1sb_2r, d2a1sb_2r, d3a1sb_2r = out
out = d3a1sB_dxhi00(xhi00, xhix, xhix_vec, xm, I_lar, J_lar,
cctes_larij, a1vdw_larij, a1vdw_cteij, dxhix_dxhi00)
a1sb_ar, da1sb_ar, d2a1sb_ar, d3a1sb_ar = out
a1sb_a1 = np.array([[a1sb_a, a1sb_r],
[da1sb_a, da1sb_r],
[d2a1sb_a, d2a1sb_r],
[d3a1sb_a, d3a1sb_r]])
a1sb_a2 = np.array([[a1sb_2a, a1sb_ar, a1sb_2r],
[da1sb_2a, da1sb_ar, da1sb_2r],
[d2a1sb_2a, d2a1sb_ar, d2a1sb_2r],
[d3a1sb_2a, d3a1sb_ar, d3a1sb_2r]])
return a1sb_a1, a1sb_a2
def da1sB_dx_dxhi00_dxxhi_eval(xhi00, xhix, xhix_vec, xm, ms, I_lambdasij,
J_lambdasij, cctesij, a1vdwij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00):
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_laij, I_lrij, I_2laij, I_2lrij, I_larij = I_lambdasij
J_laij, J_lrij, J_2laij, J_2lrij, J_larij = J_lambdasij
out_la = da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_laij,
J_laij, cctes_laij, a1vdw_laij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
a1sb_a, da1sb_a, da1sb_ax, da1sb_axxhi = out_la
out_lr = da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_lrij,
J_lrij, cctes_lrij, a1vdw_lrij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00)
a1sb_r, da1sb_r, da1sb_rx, da1sb_rxxhi = out_lr
out_2la = da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_2laij,
J_2laij, cctes_2laij, a1vdw_2laij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_2a, da1sb_2a, da1sb_2ax, da1sb_2axxhi = out_2la
out_2lr = da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_2lrij,
J_2lrij, cctes_2lrij, a1vdw_2lrij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_2r, da1sb_2r, da1sb_2rx, da1sb_2rxxhi = out_2lr
out_lar = da1sB_dx_dxhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_larij,
J_larij, cctes_larij, a1vdw_larij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_ar, da1sb_ar, da1sb_arx, da1sb_arxxhi = out_lar
a1sb_a1 = np.array([[a1sb_a, a1sb_r],
[da1sb_a, da1sb_r]])
a1sb_a2 = np.array([[a1sb_2a, a1sb_ar, a1sb_2r],
[da1sb_2a, da1sb_ar, da1sb_2r]])
a1sb_a1x = np.array([da1sb_ax, da1sb_rx])
a1sb_a2x = np.array([da1sb_2ax, da1sb_arx, da1sb_2rx])
a1sb_a1xxhi = np.array([da1sb_axxhi, da1sb_rxxhi])
a1sb_a2xxhi = np.array([da1sb_2axxhi, da1sb_arxxhi, da1sb_2rxxhi])
return a1sb_a1, a1sb_a2, a1sb_a1x, a1sb_a2x, a1sb_a1xxhi, a1sb_a2xxhi
def da1sB_dx_d2xhi00_dxxhi_eval(xhi00, xhix, xhix_vec, xm, ms, I_lambdasij,
J_lambdasij, cctesij, a1vdwij, a1vdw_cteij,
dxhix_dxhi00, dxhix_dx, dxhix_dx_dxhi00):
cctes_laij, cctes_lrij, cctes_2laij, cctes_2lrij, cctes_larij = cctesij
a1vdw_laij, a1vdw_lrij, a1vdw_2laij, a1vdw_2lrij, a1vdw_larij = a1vdwij
I_laij, I_lrij, I_2laij, I_2lrij, I_larij = I_lambdasij
J_laij, J_lrij, J_2laij, J_2lrij, J_larij = J_lambdasij
out_la = da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_laij,
J_laij, cctes_laij, a1vdw_laij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_a, da1sb_a, d2a1sb_a, da1sb_ax, da1sb_axxhi = out_la
out_lr = da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_lrij,
J_lrij, cctes_lrij, a1vdw_lrij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_r, da1sb_r, d2a1sb_r, da1sb_rx, da1sb_rxxhi = out_lr
out_2la = da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_2laij,
J_2laij, cctes_2laij, a1vdw_2laij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_2a, da1sb_2a, d2a1sb_2a, da1sb_2ax, da1sb_2axxhi = out_2la
out_2lr = da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_2lrij,
J_2lrij, cctes_2lrij, a1vdw_2lrij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_2r, da1sb_2r, d2a1sb_2r, da1sb_2rx, da1sb_2rxxhi = out_2lr
out_lar = da1sB_dx_d2xhi00_dxxhi(xhi00, xhix, xhix_vec, xm, ms, I_larij,
J_larij, cctes_larij, a1vdw_larij,
a1vdw_cteij, dxhix_dxhi00, dxhix_dx,
dxhix_dx_dxhi00)
a1sb_ar, da1sb_ar, d2a1sb_ar, da1sb_arx, da1sb_arxxhi = out_lar
a1sb_a1 = np.array([[a1sb_a, a1sb_r],
[da1sb_a, da1sb_r],
[d2a1sb_a, d2a1sb_r]])
a1sb_a2 = np.array([[a1sb_2a, a1sb_ar, a1sb_2r],
[da1sb_2a, da1sb_ar, da1sb_2r],
[d2a1sb_2a, d2a1sb_ar, d2a1sb_2r]])
a1sb_a1x = np.array([da1sb_ax, da1sb_rx])
a1sb_a2x = np.array([da1sb_2ax, da1sb_arx, da1sb_2rx])
a1sb_a1xxhi = np.array([da1sb_axxhi, da1sb_rxxhi])
a1sb_a2xxhi = np.array([da1sb_2axxhi, da1sb_arxxhi, da1sb_2rxxhi])
return a1sb_a1, a1sb_a2, a1sb_a1x, a1sb_a2x, a1sb_a1xxhi, a1sb_a2xxhi
def x0lambda_eval(x0, la, lr, lar, laij, lrij, larij, diag_index):
    x0la = x0**laij
    x0lr = x0**lrij
    x02la = x0**(2*laij)
    x02lr = x0**(2*lrij)
    x0lar = x0**larij

    # To be used for a1 and a2 of monomer
    x0_a1 = np.array([x0la, -x0lr])
    x0_a2 = np.array([x02la, -2*x0lar, x02lr])

    # To be used in g1 and g2 of chain
    x0_g1 = np.array([la * x0la[diag_index], -lr * x0lr[diag_index]])
    x0_g2 = np.array([la * x02la[diag_index], -lar * x0lar[diag_index],
                      lr * x02lr[diag_index]])
    return x0_a1, x0_a2, x0_g1, x0_g2
| 46.215385 | 79 | 0.601065 | 2,131 | 15,020 | 3.839043 | 0.051619 | 0.059406 | 0.052805 | 0.082142 | 0.905879 | 0.893289 | 0.884611 | 0.867131 | 0.844273 | 0.842807 | 0 | 0.11237 | 0.312716 | 15,020 | 324 | 80 | 46.358025 | 0.680132 | 0.01012 | 0 | 0.479675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052846 | false | 0 | 0.02439 | 0 | 0.130081 | 0.004065 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
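The power laws that `x0lambda_eval` above tabulates reduce, for a single pair of species, to plain exponentials of `x0`. A pure-Python sketch with hypothetical exponents (`la = 6`, `lr = 12`, and `lar` taken as their sum, as in SAFT-VR Mie; the values are illustrative, not from the source):

```python
# Hypothetical single-pair Mie exponents; the cross term uses lar = la + lr.
la, lr = 6.0, 12.0
lar = la + lr
x0 = 1.1  # sigma/d ratio, typically slightly above 1

# The same quantities x0lambda_eval builds as arrays, here as scalars:
x0la, x0lr = x0 ** la, x0 ** lr
x02la, x02lr = x0 ** (2 * la), x0 ** (2 * lr)
x0lar = x0 ** lar

# a1 combines (x0^la, -x0^lr); a2 combines (x0^2la, -2*x0^lar, x0^2lr).
x0_a1 = (x0la, -x0lr)
x0_a2 = (x02la, -2 * x0lar, x02lr)
```

Note that `x0**lar` equals `x0**la * x0**lr`, which is why the cross term needs no separate exponent table beyond `lar`.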
89a46bf97794ad51f213719a9e8e8977eff47815 | 42 | py | Python | app/schemas/__init__.py | serchip/test_py | 5ebb7498034364bbaa764cd3fb59f7868154cccb | [
"MIT"
] | null | null | null | app/schemas/__init__.py | serchip/test_py | 5ebb7498034364bbaa764cd3fb59f7868154cccb | [
"MIT"
] | null | null | null | app/schemas/__init__.py | serchip/test_py | 5ebb7498034364bbaa764cd3fb59f7868154cccb | [
"MIT"
] | null | null | null | from .balance import *
from .auth import *
| 21 | 22 | 0.738095 | 6 | 42 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 2 | 23 | 21 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6
983771fe543a37c06717e5bbdb8108d4f7d6f40b | 12,527 | py | Python | pencilsketch_webapp.py | BandiSamuel/glowing-adventure | 3f7e66bf87561ad6ca02f0e71ba04c526baa86df | [
"Apache-2.0"
] | null | null | null | pencilsketch_webapp.py | BandiSamuel/glowing-adventure | 3f7e66bf87561ad6ca02f0e71ba04c526baa86df | [
"Apache-2.0"
] | null | null | null | pencilsketch_webapp.py | BandiSamuel/glowing-adventure | 3f7e66bf87561ad6ca02f0e71ba04c526baa86df | [
"Apache-2.0"
] | null | null | null | import streamlit as st
import numpy as np
from PIL import Image
import cv2
def dodgeV2(x, y):
    # Color-dodge blend: brighten x by dividing by the inverted blur y
    return cv2.divide(x, 255 - y, scale=256)

def pencilsketch(inp_img):
    img_gray = cv2.cvtColor(inp_img, cv2.COLOR_BGR2GRAY)
    img_invert = cv2.bitwise_not(img_gray)
    img_smoothing = cv2.GaussianBlur(img_invert, (21, 21), sigmaX=0, sigmaY=0)
    final_img = dodgeV2(img_gray, img_smoothing)
    return final_img
st.title("PencilSketcher App - updated with Github Dev")
st.write("This Web App is to help convert your photos to realistic Pencil Sketches")
file_image = st.sidebar.file_uploader("Upload your Photos", type=['jpeg','jpg','png'])
if file_image is None:
    st.write("You haven't uploaded any image file")
else:
    input_img = Image.open(file_image)
    final_sketch = pencilsketch(np.array(input_img))
    st.write("**Input Photo**")
    st.image(input_img, use_column_width=True)
    st.write("**Output Pencil Sketch**")
    st.image(final_sketch, use_column_width=True)
    if st.button("Download Sketch Images"):
        im_pil = Image.fromarray(final_sketch)
        im_pil.save('final_image.jpeg')
        st.write('Download completed')
st.write("Courtesy: 1littlecoder Youtube Channel - [Sketch Code]()")
st.markdown("")
| 313.175 | 11,291 | 0.947314 | 482 | 12,527 | 24.558091 | 0.80083 | 0.003548 | 0.00169 | 0.00321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136065 | 0.016125 | 12,527 | 39 | 11,292 | 321.205128 | 0.824341 | 0 | 0 | 0 | 0 | 0.033333 | 0.926479 | 0.900136 | 0 | 1 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0.033333 | 0.233333 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
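The `dodgeV2` blend above is the whole trick behind the sketch effect. A pure-Python, per-pixel sketch of the same idea (integer floor division, so close to but not bit-exact with `cv2.divide`'s rounding):

```python
def dodge(pixel, blur, scale=256):
    # Color-dodge for one 8-bit grayscale value: divide by the inverted blur.
    # Bright regions of the blurred inverse push the result toward white.
    denom = 255 - blur
    if denom <= 0:
        return 255  # saturate where the blurred inverse is fully white
    return min(255, (pixel * scale) // denom)

# A mid-gray pixel over an equally bright blur is lifted toward white:
# dodge(100, 100) -> 165 with floor division.
```

Edges survive because there the grayscale and its blurred inverse differ sharply, so the quotient stays dark; flat regions wash out to near-white, which is what gives the pencil look.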
7f2cb4d15e898bd800da1437d78ce1b3cbfd9228 | 3,945 | py | Python | great_international/migrations/0042_auto_20190617_1133.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 6 | 2018-03-20T11:19:07.000Z | 2021-10-05T07:53:11.000Z | great_international/migrations/0042_auto_20190617_1133.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 802 | 2018-02-05T14:16:13.000Z | 2022-02-10T10:59:21.000Z | great_international/migrations/0042_auto_20190617_1133.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 6 | 2019-01-22T13:19:37.000Z | 2019-07-01T10:35:26.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.21 on 2019-06-17 11:33
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('great_international', '0041_auto_20190613_1346'),
]
operations = [
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title',
new_name='breadcrumbs_label',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_ar',
new_name='breadcrumbs_label_ar',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_de',
new_name='breadcrumbs_label_de',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_en_gb',
new_name='breadcrumbs_label_en_gb',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_es',
new_name='breadcrumbs_label_es',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_fr',
new_name='breadcrumbs_label_fr',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_ja',
new_name='breadcrumbs_label_ja',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_pt',
new_name='breadcrumbs_label_pt',
),
migrations.RenameField(
model_name='capitalinvestopportunitylistingpage',
old_name='hero_title_zh_hans',
new_name='breadcrumbs_label_zh_hans',
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title',
field=models.CharField(default=' project ', max_length=255),
preserve_default=False,
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_ar',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_de',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_en_gb',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_es',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_fr',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_ja',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_pt',
field=models.CharField(max_length=255, null=True),
),
migrations.AddField(
model_name='capitalinvestopportunitylistingpage',
name='search_results_title_zh_hans',
field=models.CharField(max_length=255, null=True),
),
]
| 36.869159 | 72 | 0.625856 | 341 | 3,945 | 6.885631 | 0.193548 | 0.068995 | 0.337308 | 0.114991 | 0.768739 | 0.768739 | 0.768739 | 0.768739 | 0.751704 | 0.369676 | 0 | 0.02154 | 0.282129 | 3,945 | 106 | 73 | 37.216981 | 0.807557 | 0.01749 | 0 | 0.626263 | 1 | 0 | 0.309837 | 0.230571 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020202 | 0 | 0.050505 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f395b06b61df7aff02568e119bdac56b3e17f61 | 82 | py | Python | i18n/__init__.py | LeiQiao/Parasite-Plugins | 96a20819f2cf625f22e06be9dc03a997291e1fc6 | [
"MIT"
] | null | null | null | i18n/__init__.py | LeiQiao/Parasite-Plugins | 96a20819f2cf625f22e06be9dc03a997291e1fc6 | [
"MIT"
] | null | null | null | i18n/__init__.py | LeiQiao/Parasite-Plugins | 96a20819f2cf625f22e06be9dc03a997291e1fc6 | [
"MIT"
] | null | null | null | from .i18n_plugin import I18nPlugin
from .i18n import I18n, i18n, i18n_set_locale
| 27.333333 | 45 | 0.829268 | 13 | 82 | 5 | 0.538462 | 0.246154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0.121951 | 82 | 2 | 46 | 41 | 0.736111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f7240cdb8df36630e93cab281e9dab1f1c414a0 | 202 | py | Python | templates/email_templates.py | Gumbew/mr-client | 3ccd08c1c4191a6f281505a1b86c11422870b3ae | [
"MIT"
] | null | null | null | templates/email_templates.py | Gumbew/mr-client | 3ccd08c1c4191a6f281505a1b86c11422870b3ae | [
"MIT"
] | 1 | 2021-05-08T12:30:56.000Z | 2021-05-08T12:30:56.000Z | templates/email_templates.py | Gumbew/mr-client | 3ccd08c1c4191a6f281505a1b86c11422870b3ae | [
"MIT"
] | null | null | null | RESET_PASSWORD_REQUEST = {
"from": "The MapReduce Service Team",
"subject": "[MapReduce] Reset Password Request",
"template_path": "templates/email_templates/reset-password-template.html"
}
| 33.666667 | 77 | 0.727723 | 22 | 202 | 6.5 | 0.636364 | 0.272727 | 0.27972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138614 | 202 | 5 | 78 | 40.4 | 0.821839 | 0 | 0 | 0 | 0 | 0 | 0.683168 | 0.267327 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.6 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
7f74ded55335c9c41b30149f7ab0c423a7fd69bf | 8,228 | py | Python | jira/tests/test_sprints.py | danrneal/jackbot | 318ca1d10476c0a3ca38e9ab625c79adf6e5d37a | [
"MIT"
] | 1 | 2020-02-08T22:26:35.000Z | 2020-02-08T22:26:35.000Z | jira/tests/test_sprints.py | danrneal/JackBot | 318ca1d10476c0a3ca38e9ab625c79adf6e5d37a | [
"MIT"
] | null | null | null | jira/tests/test_sprints.py | danrneal/JackBot | 318ca1d10476c0a3ca38e9ab625c79adf6e5d37a | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import patch
from jira import jira
from jira.sprints import (
sprint_event, get_sprint_issues_by_type, get_message_info,
get_active_sprint_info
)
@patch('jira.sprints.get_sprint_issues_by_type')
class SprintsTest(unittest.TestCase):
sprint = {
"id": 1,
"name": 'TEST Sprint',
}
def test_incorrect_board_is_ignored(self, mock_get_sprint_issues_by_type):
self.sprint['originBoardId'] = jira.BOARD_ID + 1
sprint_event(self.sprint)
mock_get_sprint_issues_by_type.assert_not_called()
def test_correct_board_is_acted_on(self, mock_get_sprint_issues_by_type):
self.sprint['originBoardId'] = jira.BOARD_ID
sprint_event(self.sprint)
mock_get_sprint_issues_by_type.assert_called_once_with(1, 'TEST Sprint')
class GetActiveSprintInfo(unittest.TestCase):
@patch('jira.sprints.get_sprint_issues_by_type')
@patch('jira.jira.get_active_sprint')
def test_get_active_sprint_id_and_name(
self, mock_get_active_sprint, mock_get_sprint_issues_by_type
):
mock_get_active_sprint.return_value = {
'id': 1,
'name': 'TEST Sprint'
}
get_active_sprint_info()
mock_get_sprint_issues_by_type.assert_called_once_with(1, 'TEST Sprint')
@patch('jira.sprints.get_sprint_issues_by_type')
@patch('jira.jira.get_active_sprint')
def test_get_active_sprint_info_returns_when_there_is_no_active_sprint(
self, mock_get_active_sprint, mock_get_sprint_issues_by_type
):
mock_get_active_sprint.return_value = None
get_active_sprint_info()
mock_get_sprint_issues_by_type.assert_not_called()
@patch('jira.sprints.get_message_info')
@patch('jira.jira.get_issues_for_sprint')
class GetSprintIssueByType(unittest.TestCase):
issue_1 = {
'key': 'TEST-1',
'fields': {
'issuetype': {},
'status': {
"statusCategory": {}
},
'assignee': None,
'subtasks': []
}
}
issue_2 = {
'key': 'TEST-2',
'fields': {
'issuetype': {},
'status': {
"statusCategory": {}
},
'assignee': None,
'subtasks': []
}
}
def test_ignores_done_issues(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['status']['statusCategory']['name'] = 'Done'
mock_get_issues_for_sprint.return_value = [self.issue_1]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [], [], [])
def test_ignores_stories_with_subtasks(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['issuetype'] = {'name': "Story"}
self.issue_1['fields']['status']['statusCategory']['name'] = 'Not Done'
self.issue_1['fields']['subtasks'].append({'key': 'TEST-3'})
mock_get_issues_for_sprint.return_value = [self.issue_1]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [], [], [])
def test_separates_stories_wo_subtasks(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['issuetype'] = {'name': "Story"}
self.issue_1['fields']['status']['statusCategory']['name'] = 'Not Done'
self.issue_1['fields']['subtasks'].clear()
self.issue_1['fields']['assignee'] = None
mock_get_issues_for_sprint.return_value = [self.issue_1]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [{
'key': 'TEST-1',
'type': 'story',
'assignee': None
}], [], [])
def test_get_sprint_issuses_by_type_passes_assignee_when_exists(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['issuetype'] = {'name': "Story"}
self.issue_1['fields']['status']['statusCategory']['name'] = 'Not Done'
self.issue_1['fields']['subtasks'].clear()
self.issue_1['fields']['assignee'] = {'displayName': 'someone'}
mock_get_issues_for_sprint.return_value = [self.issue_1]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [{
'key': 'TEST-1',
'type': 'story',
'assignee': 'someone'
}], [], [])
def test_get_sprint_issues_by_type_separates_out_bugs(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['issuetype'] = {'name': "Bug"}
self.issue_1['fields']['status']['statusCategory']['name'] = "Not Done"
self.issue_1['fields']['assignee'] = None
self.issue_2['fields']['issuetype'] = {'name': "Critical"}
self.issue_2['fields']['status']['statusCategory']['name'] = "Not Done"
self.issue_2['fields']['assignee'] = None
mock_get_issues_for_sprint.return_value = [self.issue_1, self.issue_2]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [], [
{
'key': 'TEST-1',
'type': 'bug',
'assignee': None
},
{
'key': 'TEST-2',
'type': 'bug',
'assignee': None
},
], [])
def test_get_sprint_issues_by_type_separates_out_tasks(
self, mock_get_issues_for_sprint, mock_get_message_info
):
self.issue_1['fields']['issuetype'] = {'name': "Task"}
self.issue_1['fields']['status']['statusCategory']['name'] = "Not Done"
self.issue_1['fields']['assignee'] = None
self.issue_2['fields']['issuetype'] = {'name': "Story Task"}
self.issue_2['fields']['status']['statusCategory']['name'] = "Not Done"
self.issue_2['fields']['assignee'] = None
mock_get_issues_for_sprint.return_value = [self.issue_1, self.issue_2]
get_sprint_issues_by_type(1, 'TEST Sprint')
mock_get_message_info.assert_called_once_with('TEST Sprint', [], [], [
{
'key': 'TEST-1',
'type': 'task',
'assignee': None
},
{
'key': 'TEST-2',
'type': 'task',
'assignee': None
},
])
@patch('slack.webhooks.build_message')
@patch('jira.jira.get_estimate')
class GetMessageInfoTest(unittest.TestCase):
issue = {}
sprint_info = {
'name': 'TEST Sprint',
'burndown': 0
}
def test_issue_estimates_are_added_up(
self, mock_get_estimate, mock_build_message
):
self.issue['key'] = 'TEST-1'
self.issue['type'] = 'task'
self.issue['assignee'] = 'someone'
mock_get_estimate.side_effect = [2, 8]
get_message_info('TEST Sprint', [], [], [self.issue, self.issue])
self.sprint_info['burndown'] = 10
mock_build_message.assert_called_once_with(self.sprint_info, [], [], [])
def test_missing_estimate_issues_are_passed_along(
self, mock_get_estimate, mock_build_message
):
self.issue['key'] = 'TEST-1'
self.issue['type'] = 'bug'
self.issue['assignee'] = 'someone'
mock_get_estimate.return_value = None
get_message_info('TEST Sprint', [], [self.issue], [])
self.sprint_info['burndown'] = 0
mock_build_message.assert_called_once_with(
self.sprint_info, [], [self.issue], []
)
def test_large_estimate_issues_are_passed_along(
self, mock_get_estimate, mock_build_message
):
self.issue['key'] = 'TEST-1'
self.issue['type'] = 'task'
self.issue['assignee'] = 'someone'
mock_get_estimate.return_value = 17
get_message_info('TEST Sprint', [], [], [self.issue])
self.sprint_info['burndown'] = 17
mock_build_message.assert_called_once_with(
self.sprint_info, [], [], [self.issue]
)
| 36.568889 | 80 | 0.611084 | 980 | 8,228 | 4.721429 | 0.1 | 0.09142 | 0.051869 | 0.073482 | 0.815647 | 0.803761 | 0.780635 | 0.751675 | 0.735682 | 0.702399 | 0 | 0.010834 | 0.24842 | 8,228 | 224 | 81 | 36.732143 | 0.737387 | 0 | 0 | 0.560606 | 0 | 0 | 0.188989 | 0.033787 | 0 | 0 | 0 | 0 | 0.065657 | 1 | 0.065657 | false | 0.015152 | 0.020202 | 0 | 0.131313 | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f98efcee5d5596688774e4b9f44fa826ff399fd | 44 | py | Python | examples/math.isnan/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/math.isnan/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/math.isnan/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | import math
print(math.isnan(float('nan')))
| 14.666667 | 31 | 0.727273 | 7 | 44 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 2 | 32 | 22 | 0.780488 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
7f9faaa656fb0aa233baeaaa23ce6aedc3484601 | 84 | py | Python | shipyard2/rules/py/g1/devtools/buildtools/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | 3 | 2016-01-04T06:28:52.000Z | 2020-09-20T13:18:40.000Z | shipyard2/rules/py/g1/devtools/buildtools/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | shipyard2/rules/py/g1/devtools/buildtools/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | import shipyard2.rules.pythons
shipyard2.rules.pythons.define_build_time_package()
| 21 | 51 | 0.869048 | 11 | 84 | 6.363636 | 0.727273 | 0.4 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.047619 | 84 | 3 | 52 | 28 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7fb10947720cf2335f3129624fb64550d66e7530 | 1,365 | py | Python | Cw2/Cw2 - perceptron usage.py | deadsmond/SieciNeuronowe | 48d2c337b58b72dc2a7218c63dbec6d2a1e0eebb | [
"MIT"
] | null | null | null | Cw2/Cw2 - perceptron usage.py | deadsmond/SieciNeuronowe | 48d2c337b58b72dc2a7218c63dbec6d2a1e0eebb | [
"MIT"
] | null | null | null | Cw2/Cw2 - perceptron usage.py | deadsmond/SieciNeuronowe | 48d2c337b58b72dc2a7218c63dbec6d2a1e0eebb | [
"MIT"
] | null | null | null | # Cw. 2
# Python
u1 = [0,0,0,0,0,
0,1,1,0,0,
0,0,1,0,0,
0,0,1,0,0,
0,0,1,0,0,1]
u2 = [0,0,1,1,0,
0,0,0,1,0,
0,0,0,1,0,
0,0,0,0,0,
0,0,0,0,0,1]
u3 = [0,0,0,0,0,
1,1,0,0,0,
0,1,0,0,0,
0,1,0,0,0,
0,1,0,0,0,1]
u4 = [0,0,0,0,0,
0,1,1,1,0,
0,1,0,1,0,
0,1,1,1,0,
0,0,0,0,0,1]
u5 = [0,0,0,0,0,
0,0,0,0,0,
1,1,1,0,0,
1,0,1,0,0,
1,1,1,0,0,1]
u = [u1, u2, u3, u4, u5]
# Weight vector shared by both functions: 25 pixels + 1 bias input
w = [1] * 26

def perceptron_learning(c):
    t = 0
    counter = 0
    # Stop once all 5 training patterns are classified correctly in a row
    while counter != 5:
        z = 1 * (t % 5 + 1 <= 3)  # target: patterns u1-u3 belong to class 1
        y = 1 * (sum(u[t % 5][i] * w[i] for i in range(len(u[t % 5]))) >= 0)
        for i in range(len(u[t % 5])):
            w[i] = w[i] + c * (z - y) * u[t % 5][i]
        t = t + 1
        if z == y:
            counter = counter + 1
        else:
            counter = 0

def perceptron_usage(ub):
    e = sum(ub[i] * w[i] for i in range(len(ub)))
    return 1 * (e > 0)
#perceptron_learning(1)
#perceptron_learning(0.1)
perceptron_learning(0.01)
print(perceptron_usage([0,0,1,0,0,
0,0,1,0,0,
1,1,1,0,0,
1,0,1,0,0,
1,1,1,0,0,1]))
| 18.69863 | 76 | 0.350916 | 275 | 1,365 | 1.72 | 0.141818 | 0.334038 | 0.323467 | 0.295983 | 0.44186 | 0.44186 | 0.44186 | 0.43129 | 0.315011 | 0.315011 | 0 | 0.253595 | 0.43956 | 1,365 | 72 | 77 | 18.958333 | 0.364706 | 0.042491 | 0 | 0.38 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0 | 0 | 0.06 | 0.02 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
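The update rule above, `w[i] += c * (z - y) * u[i]`, is the standard perceptron rule. A self-contained sketch of the same rule on a hypothetical AND-gate task (the third input is a constant bias; integer weights keep the arithmetic exact):

```python
def train_and_gate(c=1, epochs=20):
    # Same perceptron rule as perceptron_learning above: w[i] += c*(z - y)*x[i]
    w = [0, 0, 0]                      # two inputs + one bias weight
    data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
    for _ in range(epochs):
        for x, z in data:
            y = 1 if sum(xi * wi for xi, wi in zip(x, w)) >= 0 else 0
            for i in range(3):
                w[i] += c * (z - y) * x[i]
    return w

def predict(w, x):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) >= 0 else 0
```

AND is linearly separable, so the perceptron convergence theorem guarantees the loop settles on weights that classify all four rows correctly.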
f6a9560796cbe0298ac1d6278774fd0deb12cc7b | 176 | py | Python | torchpie/metrics/__init__.py | kiototeko/Torchpie | a2f7d8c7fcab2224dd56925f8db0d329166ec744 | [
"BSD-3-Clause"
] | 1 | 2022-02-18T15:50:11.000Z | 2022-02-18T15:50:11.000Z | torchpie/metrics/__init__.py | kiototeko/Torchpie | a2f7d8c7fcab2224dd56925f8db0d329166ec744 | [
"BSD-3-Clause"
] | null | null | null | torchpie/metrics/__init__.py | kiototeko/Torchpie | a2f7d8c7fcab2224dd56925f8db0d329166ec744 | [
"BSD-3-Clause"
] | null | null | null | class Metric:
def __init__(self, compute_fn):
self.compute_fn = compute_fn
def update(self, output, target):
pass
def compute(self):
pass
| 17.6 | 37 | 0.607955 | 22 | 176 | 4.545455 | 0.5 | 0.27 | 0.26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.306818 | 176 | 9 | 38 | 19.555556 | 0.819672 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.285714 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
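The base class leaves `update` and `compute` as no-ops. One hypothetical concrete subclass (the base class is restated so the sketch is self-contained; the accumulation semantics are an assumption, not from the source) could track a running accuracy:

```python
class Metric:
    # Base class restated from above for a self-contained sketch.
    def __init__(self, compute_fn):
        self.compute_fn = compute_fn

    def update(self, output, target):
        pass

    def compute(self):
        pass


class Accuracy(Metric):
    # Hypothetical subclass: accumulate correct/total across update() calls.
    def __init__(self):
        super().__init__(compute_fn=None)
        self.correct = 0
        self.total = 0

    def update(self, output, target):
        self.correct += sum(1 for o, t in zip(output, target) if o == t)
        self.total += len(target)

    def compute(self):
        return self.correct / self.total
```

Accumulating in `update` and reducing in `compute` lets the metric be fed one mini-batch at a time, which is the usual reason for this split.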
f6b7d83a0db162a9da15237e3daad24ad481c61b | 208 | py | Python | rasa_nlu/__init__.py | dharampal/rasa_nlu | 202b9041393a3f0e5667e3a33e18c661bd695232 | [
"Apache-2.0"
] | 1 | 2019-06-12T08:21:32.000Z | 2019-06-12T08:21:32.000Z | rasa_nlu/__init__.py | dharampal/rasa_nlu | 202b9041393a3f0e5667e3a33e18c661bd695232 | [
"Apache-2.0"
] | null | null | null | rasa_nlu/__init__.py | dharampal/rasa_nlu | 202b9041393a3f0e5667e3a33e18c661bd695232 | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import rasa_nlu.version
__version__ = rasa_nlu.version.__version__
| 26 | 39 | 0.879808 | 26 | 208 | 5.961538 | 0.461538 | 0.258065 | 0.412903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 208 | 7 | 40 | 29.714286 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0.166667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6d584392958cb43c77c94468c1d5feb053fe60a | 27 | py | Python | linum/loader/__init__.py | chabErch/Linum | e32ec01f0b43cfb03fd33ad90cf25df9a0c6565f | [
"MIT"
] | null | null | null | linum/loader/__init__.py | chabErch/Linum | e32ec01f0b43cfb03fd33ad90cf25df9a0c6565f | [
"MIT"
] | null | null | null | linum/loader/__init__.py | chabErch/Linum | e32ec01f0b43cfb03fd33ad90cf25df9a0c6565f | [
"MIT"
] | null | null | null | from .loader import Loader
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6de517a7e75524bed342d79016a130c40443edd | 40 | py | Python | holobot/sdk/network/resilience/models/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | 1 | 2021-05-24T00:17:46.000Z | 2021-05-24T00:17:46.000Z | holobot/sdk/network/resilience/models/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | 41 | 2021-03-24T22:50:09.000Z | 2021-12-17T12:15:13.000Z | holobot/sdk/network/resilience/models/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | null | null | null | from .circuit_state import CircuitState
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1011251ffedc098e7916a85664b26cf90b7368df | 32 | py | Python | coffeeRequests/__init__.py | chenjiahui0131/coffeeRequests | 9786aa248b19b2d839d375f3a18365bc5628d964 | [
"MIT"
] | 1 | 2020-04-25T16:33:31.000Z | 2020-04-25T16:33:31.000Z | coffeeRequests/__init__.py | chenjiahui0131/coffeeRequests | 9786aa248b19b2d839d375f3a18365bc5628d964 | [
"MIT"
] | 6 | 2020-04-25T10:23:09.000Z | 2020-05-15T14:27:53.000Z | coffeeRequests/__init__.py | chenjiahui0131/coffeeRequests | 9786aa248b19b2d839d375f3a18365bc5628d964 | [
"MIT"
] | null | null | null | from .coffeeRequests import get
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63e2ec195f9b5327f2475d85083bf389cf468120 | 5,921 | py | Python | riskanalysis/src/api/tilt_resource.py | dittmanndennis/tilt-riskanalysis | 26f1d561cf3a3cb451a375f0d63b2e07aeaa537c | [
"MIT"
] | null | null | null | riskanalysis/src/api/tilt_resource.py | dittmanndennis/tilt-riskanalysis | 26f1d561cf3a3cb451a375f0d63b2e07aeaa537c | [
"MIT"
] | null | null | null | riskanalysis/src/api/tilt_resource.py | dittmanndennis/tilt-riskanalysis | 26f1d561cf3a3cb451a375f0d63b2e07aeaa537c | [
"MIT"
] | null | null | null | import falcon
import json
import validators as val
from ..common.constants import *
from ..controller.controller import *
# Falcon follows the REST architectural style, meaning (among
# other things) that you think in terms of resources and state
# transitions, which map to HTTP verbs.
class TILTResource:
async def on_get_update(self, req, resp):
try:
Controller.update()
doc = { "SUCCESS": "Database was updated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": str(e) }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_updateDomain(self, req, resp, domain):
try:
if val.domain(domain):
if Controller.updateDomain(domain):
doc = { "ERROR": "TILT not found" }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
else:
doc = { "SUCCESS": "Database was updated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
else:
doc = { "ERROR": "TILT not found" }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
except Exception as e:
doc = { "ERROR": str(e) }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_calculate(self, req, resp):
try:
Controller.calculateMeasures()
doc = { "SUCCESS": "Measures were calculated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": str(e) }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_calculateRiskDomain(self, req, resp, domain):
try:
Controller.calculateRiskDomain(domain)
doc = { "SUCCESS": "Risks were calculated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": str(e) }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_calculateRisks(self, req, resp):
try:
Controller.calculateRisks()
doc = { "SUCCESS": "Risks were calculated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": str(e) }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_domain(self, req, resp, domain):
try:
if val.domain(domain):
doc = Controller.getRiskScore(domain)
if doc["riskScore"] is None:
doc = { "ERROR": "Risk not found" }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
else:
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
else:
doc = { "ERROR": "Risk not found" }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_deleteGraph(self, req, resp):
try:
Controller.deleteGraph()
doc = { "SUCCESS": "Graph database was deleted!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_deleteProperties(self, req, resp):
try:
Controller.removeProperties()
doc = { "SUCCESS": "Graph database was deleted!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_deleteCollection(self, req, resp, collection):
try:
Controller.deleteCollection(collection)
doc = { "SUCCESS": "Collection was deleted!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_generate(self, req, resp, i):
try:
Controller.generate(int(i))
doc = { "SUCCESS": "TILTs were generated!"}
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_404
async def on_get_path(self, req, resp):
try:
doc = Controller.path()
resp.text = json.dumps(doc, ensure_ascii=False)
resp.status = falcon.HTTP_200
except Exception as e:
doc = { "ERROR": e }
resp.text = json.dumps(doc, ensure_ascii=False)
            resp.status = falcon.HTTP_500 | 39.473333 | 67 | 0.557169 | 676 | 5,921 | 4.77071 | 0.139053 | 0.064496 | 0.096744 | 0.137054 | 0.767442 | 0.724031 | 0.724031 | 0.724031 | 0.724031 | 0.701085 | 0 | 0.020031 | 0.342341 | 5,921 | 150 | 68 | 39.473333 | 0.808166 | 0.026685 | 0 | 0.759399 | 0 | 0 | 0.071528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037594 | 0 | 0.045113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
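Every handler above repeats the same error-response boilerplate, and `json.dumps({"ERROR": e})` raises on a raw exception object, which is why the payload must be stringified. A minimal sketch of factoring that into a helper — `FakeResp` and `set_error` are illustrative names, not part of Falcon's API:

```python
import json

class FakeResp:
    """Stand-in for falcon's Response object (illustration only)."""
    text = ""
    status = ""

def set_error(resp, exc, status="500 Internal Server Error"):
    # Exceptions are not JSON-serializable; stringify before dumping
    resp.text = json.dumps({"ERROR": str(exc)}, ensure_ascii=False)
    resp.status = status

resp = FakeResp()
set_error(resp, ValueError("bad domain"))
print(resp.text)  # → {"ERROR": "bad domain"}
```

Each `except` block could then collapse to a single `set_error(resp, e)` call.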
121610cb0980e5f94f4bee4da76dc6af10136baf | 195 | py | Python | weather/test/conftest.py | tobiasli/my_weather | 6e2b82bd94ab61e9e091f4f7565fd7d2f78cfd61 | [
"MIT"
] | null | null | null | weather/test/conftest.py | tobiasli/my_weather | 6e2b82bd94ab61e9e091f4f7565fd7d2f78cfd61 | [
"MIT"
] | 14 | 2019-02-23T13:02:21.000Z | 2019-08-28T21:14:50.000Z | weather/test/conftest.py | tobiasli/my_weather | 6e2b82bd94ab61e9e091f4f7565fd7d2f78cfd61 | [
"MIT"
] | null | null | null | """Configuration of pytest for weather."""
def pytest_addoption(parser):
    parser.addoption("--password", action="store", default="")
    parser.addoption("--salt", action="store", default="") | 39 | 62 | 0.687179 | 21 | 195 | 6.333333 | 0.619048 | 0.225564 | 0.270677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 195 | 5 | 63 | 39 | 0.764368 | 0.184615 | 0 | 0 | 0 | 0 | 0.168831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
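pytest collects this hook at startup and hands it an option parser; a rough sketch of what `pytest_addoption` does with it — the `StubParser` below is a hypothetical stand-in, not pytest's real `Parser`:

```python
class StubParser:
    """Hypothetical stand-in that records registered options."""
    def __init__(self):
        self.defaults = {}

    def addoption(self, name, action="store", default=""):
        self.defaults[name.lstrip("-")] = default

def pytest_addoption(parser):
    parser.addoption("--password", action="store", default="")
    parser.addoption("--salt", action="store", default="")

parser = StubParser()
pytest_addoption(parser)
print(sorted(parser.defaults))  # → ['password', 'salt']
```

Tests would then read the values with `request.config.getoption("--password")`.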
1239b0933a02960b50668de30a54bcd0308fc658 | 29 | py | Python | astropy/wcs/wcsapi/__init__.py | PriyankaH21/astropy | 159fb9637ce4acdc60329d20517ed3dc7ba79581 | [
"BSD-3-Clause"
] | null | null | null | astropy/wcs/wcsapi/__init__.py | PriyankaH21/astropy | 159fb9637ce4acdc60329d20517ed3dc7ba79581 | [
"BSD-3-Clause"
] | null | null | null | astropy/wcs/wcsapi/__init__.py | PriyankaH21/astropy | 159fb9637ce4acdc60329d20517ed3dc7ba79581 | [
"BSD-3-Clause"
] | null | null | null | from .low_level_api import *
| 14.5 | 28 | 0.793103 | 5 | 29 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
123e684eb47fe7763d698ca68e8f3568e7ef09c5 | 78 | py | Python | src/helper_functions/feval.py | Sascha0912/MAP_Elites | 13e411a3cec8eb5ec8d467f7275a372ed231e701 | [
"MIT"
] | 2 | 2019-06-25T06:51:36.000Z | 2020-09-30T12:40:20.000Z | src/helper_functions/feval.py | Sascha0912/MAP_Elites | 13e411a3cec8eb5ec8d467f7275a372ed231e701 | [
"MIT"
] | null | null | null | src/helper_functions/feval.py | Sascha0912/MAP_Elites | 13e411a3cec8eb5ec8d467f7275a372ed231e701 | [
"MIT"
] | null | null | null | from math import *
def feval(funcName,*args):
    return eval(funcName)(*args) | 26 | 32 | 0.717949 | 11 | 78 | 5.090909 | 0.818182 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141026 | 78 | 3 | 32 | 26 | 0.835821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
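A quick usage sketch of `feval`: the name is resolved with `eval` against the builtins and the `math` star-import, then called with the remaining arguments:

```python
from math import *

def feval(funcName, *args):
    # Resolve the name with eval, then call it with the given args
    return eval(funcName)(*args)

print(feval("max", 3, 5))   # → 5
print(feval("floor", 2.7))  # → 2
```

Note that `eval` on an arbitrary string is unsafe if `funcName` ever comes from untrusted input.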
1251764f0a6a3987cab7d3cd8dd3a3c3cc90f2cd | 22 | py | Python | src/__init__.py | Otumian-empire/extended-set | 45adbebe5ba643f09663bc9e1e826d9a18576ce3 | [
"MIT"
] | 1 | 2019-09-09T15:21:28.000Z | 2019-09-09T15:21:28.000Z | src/__init__.py | Otumian-empire/extended-set | 45adbebe5ba643f09663bc9e1e826d9a18576ce3 | [
"MIT"
] | null | null | null | src/__init__.py | Otumian-empire/extended-set | 45adbebe5ba643f09663bc9e1e826d9a18576ce3 | [
"MIT"
] | null | null | null | from .setX import Setx | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c389a9d46b5ecf9f3fb2a661fd0484fcaa9d9123 | 91 | py | Python | cflearn/api/cv/__init__.py | carefree0910/carefree-learn | 2043812afbe9c56f01ec1639961736313ee062ba | [
"MIT"
] | 400 | 2020-07-05T18:55:49.000Z | 2022-02-21T02:33:08.000Z | cflearn/api/cv/__init__.py | carefree0910/carefree-learn | 2043812afbe9c56f01ec1639961736313ee062ba | [
"MIT"
] | 82 | 2020-08-01T13:29:38.000Z | 2021-10-09T07:13:44.000Z | cflearn/api/cv/__init__.py | carefree0910/carefree-learn | 2043812afbe9c56f01ec1639961736313ee062ba | [
"MIT"
] | 34 | 2020-07-05T21:15:34.000Z | 2021-12-20T08:45:17.000Z | from .data import *
from .models import *
from .pipeline import *
from .interface import *
| 18.2 | 24 | 0.736264 | 12 | 91 | 5.583333 | 0.5 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175824 | 91 | 4 | 25 | 22.75 | 0.893333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
613d75fc15d7cc447100ba07133472bcf7ad2d87 | 200 | py | Python | Ogrenciler/Ersan/carpimtablosu.py | ProEgitim/Python-Dersleri-BEM | b25e9fdb1fa3026925a46b2fcbcba348726b775c | [
"MIT"
] | 1 | 2021-04-18T17:35:22.000Z | 2021-04-18T17:35:22.000Z | Ogrenciler/Ersan/carpimtablosu.py | waroi/Python-Dersleri-BEM | b25e9fdb1fa3026925a46b2fcbcba348726b775c | [
"MIT"
] | null | null | null | Ogrenciler/Ersan/carpimtablosu.py | waroi/Python-Dersleri-BEM | b25e9fdb1fa3026925a46b2fcbcba348726b775c | [
"MIT"
] | 2 | 2021-04-18T18:22:26.000Z | 2021-04-24T17:16:19.000Z | print("\n---------- Çarpım Tablosu ----------")
for e in range(1,10):
    print("""\n---------------------------
\n""")
    for p in range(1,10):
        print("{} x {} = {}".format(e,p,e*p)) | 28.571429 | 47 | 0.35 | 26 | 200 | 2.692308 | 0.5 | 0.171429 | 0.228571 | 0.285714 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038217 | 0.215 | 200 | 7 | 48 | 28.571429 | 0.407643 | 0 | 0 | 0 | 0 | 0 | 0.427861 | 0.144279 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
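The same multiplication table can be generated with f-strings; a small equivalent sketch of the nested loop above:

```python
# Build all 81 rows of the 1..9 multiplication table
rows = [f"{e} x {p} = {e * p}"
        for e in range(1, 10)
        for p in range(1, 10)]
print(rows[0])    # → 1 x 1 = 1
print(rows[-1])   # → 9 x 9 = 81
print(len(rows))  # → 81
```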
f606da03484e46d67e4eb29b2c46e25975fec36f | 8,416 | py | Python | rl_sandbox/model_architectures/actor_critics/fully_connected_q_actor_critic.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | 14 | 2020-11-09T22:05:37.000Z | 2022-02-11T12:41:33.000Z | rl_sandbox/model_architectures/actor_critics/fully_connected_q_actor_critic.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | null | null | null | rl_sandbox/model_architectures/actor_critics/fully_connected_q_actor_critic.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical, Normal
from rl_sandbox.constants import OBS_RMS, CPU
from rl_sandbox.model_architectures.actor_critics.actor_critic import QActorCritic
from rl_sandbox.model_architectures.shared import Flatten
from rl_sandbox.model_architectures.utils import construct_linear_layers
class FullyConnectedGaussianQACSeparate(QActorCritic):
    def __init__(self,
                 obs_dim,
                 action_dim,
                 shared_layers,
                 eps=1e-7,
                 device=torch.device(CPU),
                 normalize_obs=False,
                 normalize_value=False):
        super().__init__(obs_dim=obs_dim,
                         norm_dim=(0,),
                         device=device,
                         normalize_obs=normalize_obs,
                         normalize_value=normalize_value)
        self._eps = eps
        self._action_dim = action_dim
        self._flatten = Flatten()

        # NOTE: Separate architecture grants stable learning for GRAC
        self._policy = nn.Sequential(nn.Linear(obs_dim, 256),
                                     nn.ReLU(),
                                     nn.Linear(256, 256),
                                     nn.ReLU(),
                                     nn.Linear(256, action_dim * 2))
        self._q1 = nn.Sequential(nn.Linear(obs_dim + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self._q2 = nn.Sequential(nn.Linear(obs_dim + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self.to(self.device)

    def _extract_features(self, x):
        x = self._flatten(x)
        obs, extra_features = x[:, :self._obs_dim], x[:, self._obs_dim:]
        if hasattr(self, OBS_RMS):
            obs = self.obs_rms.normalize(obs)
        x = torch.cat((obs, extra_features), dim=1)
        x = x.to(self.device)
        return x

    def forward(self, x, h, **kwargs):
        x = self._extract_features(x)
        a_mean, a_raw_std = torch.chunk(self._policy(x), chunks=2, dim=1)
        # NOTE: This hyperbolic tangent is important to get reasonable action log prob
        a_mean = torch.tanh(a_mean)
        # NOTE: If self._eps is too small, we risk running into bad log prob with CEM's choice of action...
        a_std = F.softplus(a_raw_std) + self._eps
        min_q, _, _, _ = self._q_vals(x, h, a_mean)
        return Normal(loc=a_mean, scale=a_std), min_q, h

    @property
    def policy_parameters(self):
        return list(self._policy.parameters())

    @property
    def qs_parameters(self):
        return list(self._q1.parameters()) + list(self._q2.parameters())


class FullyConnectedGaussianQAC(QActorCritic):
    def __init__(self,
                 obs_dim,
                 action_dim,
                 shared_layers,
                 eps=1e-7,
                 device=torch.device(CPU),
                 normalize_obs=False,
                 normalize_value=False):
        super().__init__(obs_dim=obs_dim,
                         norm_dim=(0,),
                         device=device,
                         normalize_obs=normalize_obs,
                         normalize_value=normalize_value)
        self._eps = eps
        self._action_dim = action_dim
        self._flatten = Flatten()
        self._shared_network = construct_linear_layers(shared_layers)
        self._policy = nn.Sequential(nn.Linear(shared_layers[-1][1], 256),
                                     nn.ReLU(),
                                     nn.Linear(256, action_dim * 2))
        self._q1 = nn.Sequential(nn.Linear(shared_layers[-1][1] + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self._q2 = nn.Sequential(nn.Linear(shared_layers[-1][1] + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self.to(self.device)

    def _extract_features(self, x):
        x = super()._extract_features(x)
        for layer in self._shared_network:
            x = layer(x)
        return x

    def forward(self, x, h, **kwargs):
        x = self._extract_features(x)
        a_mean, a_raw_std = torch.chunk(self._policy(x), chunks=2, dim=1)
        # NOTE: This hyperbolic tangent is important to get reasonable action log prob
        a_mean = torch.tanh(a_mean)
        # NOTE: If self._eps is too small, we risk running into bad log prob with CEM's choice of action...
        a_std = F.softplus(a_raw_std) + self._eps
        min_q, _, _, _ = self._q_vals(x, h, a_mean)
        return Normal(loc=a_mean, scale=a_std), min_q, h

    @property
    def policy_parameters(self):
        return list(self._policy.parameters())

    @property
    def qs_parameters(self):
        return list(self._q1.parameters()) + list(self._q2.parameters()) + list(self._shared_network.parameters())


class FullyConnectedGaussianCEMQAC(QActorCritic):
    def __init__(self,
                 obs_dim,
                 action_dim,
                 shared_layers,
                 cem,
                 eps=1e-7,
                 device=torch.device(CPU),
                 normalize_obs=False,
                 normalize_value=False):
        super().__init__(obs_dim=obs_dim,
                         norm_dim=(0,),
                         device=device,
                         normalize_obs=normalize_obs,
                         normalize_value=normalize_value)
        self._eps = eps
        self._action_dim = action_dim
        self._flatten = Flatten()
        self._shared_network = construct_linear_layers(shared_layers)
        self._policy = nn.Sequential(nn.Linear(shared_layers[-1][1], 256),
                                     nn.ReLU(),
                                     nn.Linear(256, action_dim * 2))
        self._q1 = nn.Sequential(nn.Linear(shared_layers[-1][1] + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self._q2 = nn.Sequential(nn.Linear(shared_layers[-1][1] + action_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, 1))
        self.to(self.device)
        self._cem = cem

    def _extract_features(self, x):
        x = super()._extract_features(x)
        for layer in self._shared_network:
            x = layer(x)
        return x

    def forward(self, x, h, **kwargs):
        x = self._extract_features(x)
        a_mean, a_raw_std = torch.chunk(self._policy(x), chunks=2, dim=1)
        # NOTE: This hyperbolic tangent is important to get reasonable action log prob
        a_mean = torch.tanh(a_mean)
        a_std = F.softplus(a_raw_std) + self._eps
        min_q, _, _, _ = self._q_vals(x, h, a_mean)
        return Normal(loc=a_mean, scale=a_std), min_q, h

    @property
    def policy_parameters(self):
        return list(self._policy.parameters())

    @property
    def qs_parameters(self):
        return list(self._q1.parameters()) + list(self._q2.parameters()) + list(self._shared_network.parameters())

    def compute_cem_score(self, x, h, a, lengths):
        return self.q_vals(x, h, a, length=lengths)[1]

    def compute_action(self, x, h, **kwargs):
        self.eval()
        with torch.no_grad():
            dist, value, h = self.forward(x, h=h)
            action = dist.rsample().clamp(min=self._cem.min_action, max=self._cem.max_action)
            cem_action = self._cem.compute_action(self.compute_cem_score, x, h, dist.mean, dist.variance, None)
            pi_min_q, _, _, _ = self.q_vals(x, h, action)
            cem_min_q, _, _, _ = self.q_vals(x, h, cem_action)
            if cem_min_q > pi_min_q:
                action = cem_action
            log_prob = dist.log_prob(action).sum(dim=-1, keepdim=True)
        self.train()
        return action[0].cpu().numpy(), value[0].cpu().numpy(), h[0].cpu().numpy(), log_prob[0].cpu().numpy(), dist.entropy()[0].cpu().numpy(), dist.mean[0].cpu().numpy(), dist.variance[0].cpu().numpy()
| 38.962963 | 202 | 0.543132 | 1,016 | 8,416 | 4.233268 | 0.133858 | 0.039061 | 0.02511 | 0.030691 | 0.764938 | 0.743316 | 0.732853 | 0.725878 | 0.725878 | 0.725878 | 0 | 0.023692 | 0.348028 | 8,416 | 215 | 203 | 39.144186 | 0.76016 | 0.057747 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.099415 | false | 0 | 0.046784 | 0.040936 | 0.245614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
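The `F.softplus(a_raw_std) + self._eps` parameterization in the forward passes above keeps the Gaussian's scale strictly positive for any real-valued network output. A plain-Python sketch of the same transform (no torch dependency):

```python
import math

EPS = 1e-7  # mirrors self._eps in the classes above

def softplus(x):
    # log(1 + e^x): smooth, strictly positive approximation of ReLU
    return math.log1p(math.exp(x))

# Any raw head output maps to a valid (strictly positive) std
for raw in (-10.0, 0.0, 10.0):
    assert softplus(raw) + EPS > 0.0
print(round(softplus(0.0), 4))  # → 0.6931
```

The added epsilon keeps the std bounded away from zero so that `Normal.log_prob` stays finite even for near-deterministic policies.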
f614291f9e268ea12df8b3097d906210c4bdf8f8 | 23 | py | Python | pypesto/sample/__init__.py | LukasSp/pyPESTO | f4260ff6cacce982bb25fe104e04fb761efdf0ec | [
"BSD-3-Clause"
] | null | null | null | pypesto/sample/__init__.py | LukasSp/pyPESTO | f4260ff6cacce982bb25fe104e04fb761efdf0ec | [
"BSD-3-Clause"
] | null | null | null | pypesto/sample/__init__.py | LukasSp/pyPESTO | f4260ff6cacce982bb25fe104e04fb761efdf0ec | [
"BSD-3-Clause"
] | null | null | null | """
Sample
======
"""
| 3.833333 | 6 | 0.26087 | 1 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 23 | 5 | 7 | 4.6 | 0.333333 | 0.565217 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6236b4e31621d0c649b6fdf67488c793be6f36d | 448 | py | Python | addin_assistant/projects_codes/KeyBoardEffects/Install/KeyBoardEffects_addin.py | chenjl0710/arcpyTools | 4f31e79f402cc2a0827450ab3aaba8f8d9a5f502 | [
"MIT"
] | 1 | 2019-07-07T17:46:55.000Z | 2019-07-07T17:46:55.000Z | addin_assistant/projects_codes/KeyBoardEffects/Install/KeyBoardEffects_addin.py | chenjl0710/arcpyTools | 4f31e79f402cc2a0827450ab3aaba8f8d9a5f502 | [
"MIT"
] | 7 | 2021-03-31T18:45:40.000Z | 2022-03-11T23:25:36.000Z | addin_assistant/projects_codes/KeyBoardEffects/Install/KeyBoardEffects_addin.py | chenjl0710/arcpyTools | 4f31e79f402cc2a0827450ab3aaba8f8d9a5f502 | [
"MIT"
] | 1 | 2020-07-21T00:13:07.000Z | 2020-07-21T00:13:07.000Z | import arcpy
import pythonaddins
class End_Effect(object):
"""Implementation for End_Effect_addin.button (Button)"""
def __init__(self):
self.enabled = True
self.checked = False
def onClick(self):
pass
class Start_Effect(object):
"""Implementation for Start_Effect_addin.button (Button)"""
def __init__(self):
self.enabled = True
self.checked = False
def onClick(self):
pass | 24.888889 | 63 | 0.654018 | 52 | 448 | 5.365385 | 0.403846 | 0.064516 | 0.18638 | 0.207885 | 0.594982 | 0.594982 | 0.594982 | 0.594982 | 0.594982 | 0.594982 | 0 | 0 | 0.247768 | 448 | 18 | 64 | 24.888889 | 0.827893 | 0.234375 | 0 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
f64e8e797953d471ba69081078965a615f16fe08 | 13,034 | py | Python | src/commands/setup.py | Tauseef-Hilal/iCODE-BOT | dd4efa9084c35d238f1170ff3af69eeeb055abec | [
"MIT"
] | 1 | 2022-03-31T15:31:10.000Z | 2022-03-31T15:31:10.000Z | src/commands/setup.py | Tauseef-Hilal/iCODE-BOT | dd4efa9084c35d238f1170ff3af69eeeb055abec | [
"MIT"
] | null | null | null | src/commands/setup.py | Tauseef-Hilal/iCODE-BOT | dd4efa9084c35d238f1170ff3af69eeeb055abec | [
"MIT"
] | null | null | null | from discord import (
    Cog,
    Embed,
    Guild,
    Interaction,
    Option,
    Role,
    SlashCommandGroup,
    ApplicationContext,
    TextChannel
)

from ..utils.color import Colors
from ..bot import ICodeBot
from ..utils.checks import (
    maintenance_check,
    permission_check
)


class SetupCommands(Cog):
    """
    Commands for setup
    """

    SETUP = SlashCommandGroup(
        "setup",
        "Commands for setting bot features."
    )

    def __init__(self, bot: ICodeBot) -> None:
        """
        Initialize
        """

        super().__init__()
        self._bot = bot

    @SETUP.command(name="modlogs")
    @maintenance_check()
    @permission_check(administrator=True)
    async def _modlogs(
            self,
            ctx: ApplicationContext,
            channel: Option(
                TextChannel,
                "The channel where you want to log. "
                "Defaults to the current channel"
            ) = None
    ) -> None:
        """
        Setup a channel for moderation logs

        Args:
            ctx (ApplicationContext)
            channel (TextChannel): The log channel
        """

        # Select current channel if no channel provided
        if not channel:
            channel: TextChannel = ctx.channel

        # ---
        emoji = self._bot.emoji_group.get_emoji("loading_dots")
        res: Interaction = await ctx.respond(
            embed=Embed(
                description=f"Setting {channel.mention} for "
                            f"moderation logs {emoji}",
                color=Colors.GOLD
            )
        )

        guild: Guild = ctx.guild
        if str(guild.id) not in self._bot.db.list_collection_names():
            collection = self._bot.db.create_collection(str(guild.id))
            collection.insert_one(
                {
                    "channel_ids": {
                        "modlogs_channel": channel.id
                    }
                }
            )
        else:
            collection = self._bot.db.get_collection(str(guild.id))
            if "channel_ids" in collection.find_one():
                channels_dict = collection.find_one()["channel_ids"]
                channels_dict["modlogs_channel"] = channel.id
                collection.update_one(
                    collection.find_one(),
                    {"$set": {"channel_ids": channels_dict}}
                )
            else:
                collection.update_one(
                    collection.find_one(),
                    {"$set":
                        {
                            "channel_ids": {
                                "modlogs_channel": channel.id
                            }
                        }
                     }
                )

        emoji = self._bot.emoji_group.get_emoji("green_tick")
        await res.edit_original_message(
            embed=Embed(
                description=f"Set {channel.mention} for "
                            f"moderation logs {emoji}",
                color=Colors.GREEN
            ),
            delete_after=2
        )

    @SETUP.command(name="bump-reminder")
    @maintenance_check()
    @permission_check(administrator=True)
    async def _bump_timer(
            self,
            ctx: ApplicationContext,
            channel: Option(
                TextChannel,
                "The channel where you want to send reminder. "
                "Defaults to the current channel"
            ) = None
    ) -> None:
        """
        Setup a channel for bump reminder

        Args:
            ctx (ApplicationContext)
            channel (TextChannel): The reminder channel
        """

        # Select current channel if no channel provided
        if not channel:
            channel: TextChannel = ctx.channel

        # ---
        emoji = self._bot.emoji_group.get_emoji("loading_dots")
        res: Interaction = await ctx.respond(
            embed=Embed(
                description=f"Setting {channel.mention} for bump "
                            f"reminders {emoji}",
                color=Colors.GOLD
            )
        )

        guild: Guild = ctx.guild
        if str(guild.id) not in self._bot.db.list_collection_names():
            collection = self._bot.db.create_collection(str(guild.id))
            collection.insert_one(
                {
                    "channel_ids": {
                        "bump_reminder_channel": channel.id
                    }
                }
            )
        else:
            collection = self._bot.db.get_collection(str(guild.id))
            if "channel_ids" in collection.find_one():
                channels_dict = collection.find_one()["channel_ids"]
                channels_dict["bump_reminder_channel"] = channel.id
                collection.update_one(
                    collection.find_one(),
                    {"$set": {"channel_ids": channels_dict}}
                )
            else:
                collection.update_one(
                    collection.find_one(),
                    {"$set":
                        {
                            "channel_ids": {
                                "bump_reminder_channel": channel.id
                            }
                        }
                     }
                )

        # ---
        emoji = self._bot.emoji_group.get_emoji("green_tick")
        await res.edit_original_message(
            embed=Embed(
                description=f"Set {channel.mention} for bump "
                            f"reminders {emoji}",
                color=Colors.GREEN
            ),
            delete_after=2
        )

    @SETUP.command(name="bumper-role")
    @maintenance_check()
    @permission_check(administrator=True)
    async def _bumper_role(
            self,
            ctx: ApplicationContext,
            role: Option(
                Role,
                "The role to ping in bump reminder message."
            )
    ) -> None:
        """
        Setup a role for bump reminder pings

        Args:
            ctx (ApplicationContext)
            role (Role): The bumper role
        """

        # ---
        emoji = self._bot.emoji_group.get_emoji("loading_dots")
        res: Interaction = await ctx.respond(
            embed=Embed(
                description=f"Setting {role.mention} for bump "
                            f"reminder pings {emoji}",
                color=Colors.GOLD
            )
        )

        guild: Guild = ctx.guild
        if str(guild.id) not in self._bot.db.list_collection_names():
            collection = self._bot.db.create_collection(str(guild.id))
            collection.insert_one(
                {
                    "role_ids": {
                        "server_bumper_role": role.id
                    }
                }
            )
        else:
            collection = self._bot.db.get_collection(str(guild.id))
            if "role_ids" in collection.find_one():
                roles_dict = collection.find_one()["role_ids"]
                roles_dict["server_bumper_role"] = role.id
                collection.update_one(
                    collection.find_one(),
                    {"$set": {"role_ids": roles_dict}}
                )
            else:
                collection.update_one(
                    collection.find_one(),
                    {"$set":
                        {
                            "role_ids": {
                                "server_bumper_role": role.id
                            }
                        }
                     }
                )

        # ---
        emoji = self._bot.emoji_group.get_emoji("green_tick")
        await res.edit_original_message(
            embed=Embed(
                description=f"Set {role.mention} for bump "
                            f"reminder pings {emoji}",
                color=Colors.GREEN
            ),
            delete_after=2
        )

    @SETUP.command(name="console")
    @maintenance_check()
    @permission_check(administrator=True)
    async def _console(
            self,
            ctx: ApplicationContext,
            channel: Option(
                TextChannel,
                "The channel where you want to welcome new users. "
                "Defaults to the current channel"
            ) = None
    ) -> None:
        """
        Setup a channel for member join/leave events

        Args:
            ctx (ApplicationContext)
            channel (TextChannel): The welcome channel
        """

        # Select current channel if no channel provided
        if not channel:
            channel: TextChannel = ctx.channel

        # ---
        emoji = self._bot.emoji_group.get_emoji("loading_dots")
        res: Interaction = await ctx.respond(
            embed=Embed(
                description=f"Setting {channel.mention} for member "
                            f"join/leave events {emoji}",
                color=Colors.GOLD
            )
        )

        guild: Guild = ctx.guild
        if str(guild.id) not in self._bot.db.list_collection_names():
            collection = self._bot.db.create_collection(str(guild.id))
            collection.insert_one(
                {
                    "channel_ids": {
                        "console_channel": channel.id
                    }
                }
            )
        else:
            collection = self._bot.db.get_collection(str(guild.id))
            if "channel_ids" in collection.find_one():
                channels_dict = collection.find_one()["channel_ids"]
                channels_dict["console_channel"] = channel.id
                collection.update_one(
                    collection.find_one(),
                    {"$set": {"channel_ids": channels_dict}}
                )
            else:
                collection.update_one(
                    collection.find_one(),
                    {"$set":
                        {
                            "channel_ids": {
                                "console_channel": channel.id
                            }
                        }
                     }
                )

        # ---
        emoji = self._bot.emoji_group.get_emoji("green_tick")
        await res.edit_original_message(
            embed=Embed(
                description=f"Set {channel.mention} for member "
                            f"join/leave events {emoji}",
                color=Colors.GREEN
            ),
            delete_after=2
        )

    @SETUP.command(name="suggestions")
    @maintenance_check()
    @permission_check(administrator=True)
    async def _suggestions(
            self,
            ctx: ApplicationContext,
            channel: Option(
                TextChannel,
                "The channel where you want to put suggestions. "
                "Defaults to the current channel"
            ) = None
    ) -> None:
        """
        Setup a channel for suggestions

        Args:
            ctx (ApplicationContext)
            channel (TextChannel): The suggestions channel
        """

        # Select current channel if no channel provided
        if not channel:
            channel: TextChannel = ctx.channel

        # ---
        emoji = self._bot.emoji_group.get_emoji("loading_dots")
        res: Interaction = await ctx.respond(
            embed=Embed(
                description=f"Setting {channel.mention} for suggestions "
                            f"{emoji}",
                color=Colors.GOLD
            )
        )

        guild: Guild = ctx.guild
        if str(guild.id) not in self._bot.db.list_collection_names():
            collection = self._bot.db.create_collection(str(guild.id))
            collection.insert_one(
                {
                    "channel_ids": {
                        "suggestions_channel": channel.id
                    }
                }
            )
        else:
            collection = self._bot.db.get_collection(str(guild.id))
            if "channel_ids" in collection.find_one():
                channels_dict = collection.find_one()["channel_ids"]
                channels_dict["suggestions_channel"] = channel.id
                collection.update_one(
                    collection.find_one(),
                    {"$set": {"channel_ids": channels_dict}}
                )
            else:
                collection.update_one(
                    collection.find_one(),
                    {"$set":
                        {
                            "channel_ids": {
                                "suggestions_channel": channel.id
                            }
                        }
                     }
                )

        # ---
        emoji = self._bot.emoji_group.get_emoji("green_tick")
        await res.edit_original_message(
            embed=Embed(
                description=f"Set {channel.mention} for suggestions "
                            f"{emoji}",
                color=Colors.GREEN
            ),
            delete_after=2
| 31.407229 | 73 | 0.467546 | 1,111 | 13,034 | 5.292529 | 0.10441 | 0.032143 | 0.057823 | 0.028912 | 0.865136 | 0.855952 | 0.831803 | 0.816837 | 0.762075 | 0.735204 | 0 | 0.000689 | 0.442919 | 13,034 | 414 | 74 | 31.483092 | 0.809117 | 0.019181 | 0 | 0.620795 | 0 | 0 | 0.136287 | 0.005287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003058 | false | 0 | 0.012232 | 0 | 0.021407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1434f07ce6c572a162727d01fbca23824a887ba1 | 130 | py | Python | pyavreceiver/template/http_api.py | JPHutchins/pyavreceiver | 2c86d0ab1f3bca886d2a876096ac760ffb1dcd5f | [
"Apache-2.0"
] | 2 | 2020-12-28T06:09:18.000Z | 2021-01-09T22:36:57.000Z | pyavreceiver/template/http_api.py | JPHutchins/pyavreceiver | 2c86d0ab1f3bca886d2a876096ac760ffb1dcd5f | [
"Apache-2.0"
] | 1 | 2021-02-03T22:59:49.000Z | 2021-02-03T22:59:49.000Z | pyavreceiver/template/http_api.py | JPHutchins/pyavreceiver | 2c86d0ab1f3bca886d2a876096ac760ffb1dcd5f | [
"Apache-2.0"
] | null | null | null | """Define HTTP API."""
from pyavreceiver.http_api import HTTPApi
class TemplateHTTPApi(HTTPApi):
    """Define the HTTP API."""
| 18.571429 | 41 | 0.715385 | 16 | 130 | 5.75 | 0.625 | 0.228261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146154 | 130 | 6 | 42 | 21.666667 | 0.828829 | 0.284615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
14744221d646f011d814c71304325a8222de8e19 | 9,436 | py | Python | src/putils/findDiagonal/find5dRotation.py | dmft-wien2k/dmft-wien2k-v2 | 83481be27e8a9ff14b9635d6cc1cd9d96f053487 | [
"Apache-2.0"
] | 5 | 2021-05-13T13:04:26.000Z | 2022-01-18T10:08:09.000Z | src/putils/findDiagonal/find5dRotation.py | dmft-wien2k/dmft-wien2k-v2 | 83481be27e8a9ff14b9635d6cc1cd9d96f053487 | [
"Apache-2.0"
] | 2 | 2016-07-12T21:37:53.000Z | 2016-07-12T21:42:01.000Z | src/putils/findDiagonal/find5dRotation.py | dmft-wien2k/dmft-wien2k | 83481be27e8a9ff14b9635d6cc1cd9d96f053487 | [
"Apache-2.0"
] | 2 | 2016-07-22T15:46:56.000Z | 2016-08-02T15:05:12.000Z | #!/usr/bin/env python
# @Copyright 2007 Kristjan Haule
from scipy import *
from scipy import linalg
import copy
import sys
def mprint(Us):
for i in range(shape(Us)[0]):
for j in range(shape(Us)[1]):
print "%11.8f %11.8f " % (real(Us[i,j]), imag(Us[i,j])),
print
def MakeOrthogonal(a, b, ii):
a -= (a[ii]/b[ii])*b
a *= 1/sqrt(dot(a,a.conj()))
b -= dot(b,a.conj())*a
b *= 1/sqrt(dot(b,b.conj()))
return (a,b)
def StringToMatrix(cfstr):
mm=[]
for line in cfstr.split('\n'):
line = line.strip()
if line:
data = array(map(float,line.split()))
mm.append( data[0::2]+data[1::2]*1j )
mm=matrix(mm)
return mm
def RealPhase(vec):
for j in range(len(vec)):
v = vec[j]
imax = 0
vmax = abs(v[imax])
for i in range(len(v)):
if abs(v[i])>vmax:
vmax=abs(v[i])
imax = i
vec[j,:] = array(v)*abs(v[imax])/v[imax]
return vec
def to_normalize(a):
return 1./sqrt(abs(dot(conj(a), a)))
def swap(a,b):
an = copy.deepcopy(a)
bn = copy.deepcopy(b)
return (bn,an)
def findMax(v):
av=abs(v)
ind=range(len(v))
ind.sort(lambda a,b: cmp(av[b],av[a]))
return ind
def GiveNewT2C(Hc, T2C):
ee = linalg.eigh(Hc)
Es = ee[0]
Us = matrix(ee[1])
Es = Es[::-1]
Us = Us[:,::-1]
#print 'In Eigensystem:'
#mprint(Us.H * Hc * Us)
# Us.H * Hc * Us === diagonal
dim = len(Hc)
#print 'Eigenvalues=', Es.tolist()
for i0 in range(0,dim,2):
i2=i0+2
vects = Us[:,i0:i2]
vtc = transpose(conj(vects))
#print 'vects^H='
#mprint(vtc)
ind0=findMax(vects[:,0]) # Finds which components of the eigenvector are large for the first atomic state
ind1=findMax(vects[:,1]) # Finds which components of the eigenvector are large for the second atomic state
# The two largest components will be analized
if ind0[0]!=ind1[0]:
j0,j1 = min(ind0[0],ind1[0]), max(ind0[0],ind1[0])
else:
j0,j1 = min(ind0[1],ind1[0]), max(ind0[1],ind1[0])
# We will make sure that the largest components of the two eigenvectors are maximally orthogonal
O = hstack((vtc[:,j0],vtc[:,j1]))
print 'O='
mprint(O)
(u_,s_,v_) = linalg.svd(O)
print 'S=', s_.tolist()
m = min(shape(u_)[1],shape(v_)[0])
R = dot(u_[:,:m],v_[:m,:])
#print 'R=', R
vectn = dot(vects,R)
#print 'vectn^H='
#mprint(transpose(conj(vectn)))
Us[:,i0:i2] = vectn[:,:]
#Us = u_ * s_ * v_
#print 'Eigenvalues'
#print "%10.5f "*len(Es) % tuple(Es)
print 'Transformation in crystal harmonics='
mprint(Us)
print
#print 'shape(T2C)=', shape(T2C)
#mprint(T2C)
#print
final = Us.T*T2C
final = array(final)
final2 = RealPhase(final)
final=copy.deepcopy(final2)
return final
def Check(final, T2C, Hc):
# the modified final transofrmation is rotated back to t2g-eg base to see how weell diagonal remains
Us_new = transpose(matrix(final)*T2C.H)
print 'Check-diagonal:'
mprint(Us_new.H * Hc * Us_new)
print 'Check unitary:'
mprint( matrix(final) * matrix(final).H )
print
def CheckDet(final, T2Crest):
totalfinal = vstack((final,T2Crest))
Det = linalg.det(totalfinal)
print 'Determinant=', Det
if abs(Det+1)<1e-3:
print 'Determinant is -1; change the sign of one eigenvector to make the rotation proper!'
return Det
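`CheckDet` warns when the stacked transformation is an improper rotation. The point is elementary: a proper rotation has determinant +1, and flipping the sign of a single row (one eigenvector) flips the sign of the determinant. A 2x2 sketch:

```python
def det2(m):
    # Determinant of a 2x2 matrix, enough to illustrate the warning.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

R = [[0.0, 1.0], [1.0, 0.0]]   # a reflection: determinant -1
print(det2(R))                  # -> -1.0
R[0] = [0.0, -1.0]              # sign-flip one row
print(det2(R))                  # -> 1.0
```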
if __name__ == '__main__':
#strHc1="""
#-1.52945958 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.06742707 0.01545415 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 -1.52945958 0.00000000 0.06742705 -0.01545415 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.06742705 0.01545415 -2.30650667 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.06742707 -0.01545415 0.00000000 0.00000000 0.00000000 0.00000000 -2.30650668 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -2.28241642 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -2.28241642 0.00000000
#"""
#
#strHc2="""
#0.13290679 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.05659197 -0.02929068 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.13290691 0.00000000 0.05659196 0.02929068 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.05659196 -0.02929068 -1.08239101 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.05659197 0.02929068 0.00000000 0.00000000 0.00000000 0.00000000 -1.08239098 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -0.70798000 0.00000000 0.00000000 0.00000000
#0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -0.70797997 0.00000000
#"""
#
#
#strT2C="""
#-0.00000000 -0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.87675550 0.00000000 0.00000000 0.00000000 -0.33968059 -0.01634050 -0.00000000 -0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.33968059 0.01634050
# 0.33968059 -0.01634050 -0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000 -0.00000000 -0.33968059 0.01634050 -0.00000000 0.00000000 0.87675550 -0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000 -0.00000000
# 0.61995975 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -0.00000000 -0.61995975 0.00000000 0.00000000 -0.00000000 -0.48038090 -0.02310896 0.00000000 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000
# 0.00000000 0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.48038090 0.02310896 -0.00000000 -0.00000000 -0.61995976 -0.00000000 0.00000000 0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000 0.61995976 -0.00000000
# 0.00000000 0.00000000 1.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
# 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 1.00000000 0.00000000 0.00000000 0.00000000
#"""
#strT2Crest="""
# 0.00000000 0.00000000 0.00000000 0.00000000 1.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
# 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 1.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
# 0.70710679 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.70710679 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
# 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.70710679 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.70710679 0.00000000
#"""
if len(sys.argv)<2:
print
print "Give input file which conatins impurity levels (strHc) and transformation (strT2C)"
print
sys.exit(0)
fpar=sys.argv[1]
execfile(fpar)
Hc = StringToMatrix(strHc)
print 'shape(Hc)=', shape(Hc)
T2C0=StringToMatrix(strT2C)
print 'shape(T2C0)=', shape(T2C0)
T2C = T2C0[:len(Hc),:]
print 'shape(T2C)=', shape(T2C)
T2Crest = T2C0[len(Hc):,:]
print 'shape(T2Crest)=', shape(T2Crest)
final = GiveNewT2C(Hc, T2C)
print 'Rotation to input : '
mprint( final )
mprint( T2Crest )
| 46.945274 | 264 | 0.58298 | 1,317 | 9,436 | 4.159453 | 0.159453 | 0.476451 | 0.505659 | 0.824754 | 0.582329 | 0.566448 | 0.553669 | 0.553669 | 0.553669 | 0.553304 | 0 | 0.48373 | 0.296524 | 9,436 | 200 | 265 | 47.18 | 0.341519 | 0.600042 | 0 | 0.04386 | 0 | 0 | 0.090616 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035088 | null | null | 0.22807 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1493e193bda34613f5a54ac01f0761b787035c24 | 6,478 | py | Python | spinoffs/oryx/oryx/experimental/nn/convolution_test.py | bourov/probability | 1e4053a0938b4773c3425bcbb07b3f1e5d50c7e2 | [
"Apache-2.0"
] | 2 | 2020-12-17T20:43:24.000Z | 2021-06-11T22:09:16.000Z | spinoffs/oryx/oryx/experimental/nn/convolution_test.py | bourov/probability | 1e4053a0938b4773c3425bcbb07b3f1e5d50c7e2 | [
"Apache-2.0"
] | 2 | 2021-08-25T16:14:51.000Z | 2022-02-10T04:47:11.000Z | spinoffs/oryx/oryx/experimental/nn/convolution_test.py | bourov/probability | 1e4053a0938b4773c3425bcbb07b3f1e5d50c7e2 | [
"Apache-2.0"
] | 1 | 2021-01-03T20:23:52.000Z | 2021-01-03T20:23:52.000Z | # Copyright 2020 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tests for tensorflow_probability.spinoffs.oryx.experimental.nn.convolution."""
from absl.testing import absltest
import jax
from jax import random
from oryx.core import state
from oryx.experimental.nn import convolution
class ConvolutionTest(absltest.TestCase):
def setUp(self):
super().setUp()
self._seed = random.PRNGKey(0)
def test_conv_filter_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Conv(
64, (3, 3),
strides=(1, 1),
padding='SAME'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (28, 28, 64))
self.assertEqual(net(x).shape, out_shape)
def test_conv_kernel_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Conv(
64, (5, 5),
strides=(1, 1),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (24, 24, 64))
self.assertEqual(net(x).shape, out_shape)
def test_conv_padding_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Conv(
64, (3, 3),
strides=(1, 1),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (26, 26, 64))
self.assertEqual(net(x).shape, out_shape)
def test_conv_strides_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Conv(
64, (2, 2),
strides=(2, 2),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (14, 14, 64))
net_init = convolution.Conv(
64, (3, 3),
strides=(2, 2),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (13, 13, 64))
self.assertEqual(net(x).shape, out_shape)
def test_deconv_filter_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Deconv(
64, (3, 3),
strides=(1, 1),
padding='SAME'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (28, 28, 64))
self.assertEqual(net(x).shape, out_shape)
def test_deconv_kernel_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Deconv(
64, (5, 5),
strides=(1, 1),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (32, 32, 64))
self.assertEqual(net(x).shape, out_shape)
def test_deconv_padding_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Deconv(
64, (3, 3),
strides=(1, 1),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (30, 30, 64))
self.assertEqual(net(x).shape, out_shape)
def test_deconv_strides_shape(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (28, 28, 1))
net_init = convolution.Deconv(
64, (2, 2),
strides=(2, 2),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (56, 56, 64))
self.assertEqual(net(x).shape, out_shape)
net_init = convolution.Deconv(
64, (3, 3),
strides=(2, 2),
padding='VALID'
)
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
self.assertEqual(out_shape, (57, 57, 64))
self.assertEqual(net(x).shape, out_shape)
def test_conv_vmap(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (10, 28, 28, 1))
net_init = convolution.Conv(
64, (2, 2),
strides=(2, 2),
padding='VALID'
)
with self.assertRaises(ValueError):
out_shape = net_init.spec(state.Shape((10, 28, 28, 1))).shape
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
with self.assertRaises(ValueError):
net(x)
y = jax.vmap(net)(x)
self.assertEqual(y.shape, (10,) + out_shape)
def test_deconv_vmap(self):
data_rng, net_rng = random.split(self._seed)
x = random.normal(data_rng, (10, 28, 28, 1))
net_init = convolution.Deconv(
64, (2, 2),
strides=(2, 2),
padding='VALID'
)
with self.assertRaises(ValueError):
out_shape = net_init.spec(state.Shape((10, 28, 28, 1))).shape
out_shape = net_init.spec(state.Shape((28, 28, 1))).shape
net = net_init.init(net_rng, state.Shape((28, 28, 1)))
with self.assertRaises(ValueError):
net(x)
self.assertEqual(jax.vmap(net)(x).shape, (10,) + out_shape)
if __name__ == '__main__':
absltest.main()
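The expected shapes asserted in these tests follow the usual strided-convolution and transposed-convolution formulas (inferred from the asserted values, not from the oryx source): per spatial dimension, `'SAME'` conv gives `ceil(n/s)`, `'VALID'` conv gives `(n-k)//s + 1`, `'SAME'` deconv gives `n*s`, and `'VALID'` deconv gives `(n-1)*s + k`. A sketch reproducing the tested shapes:

```python
def conv_out(n, k, s, padding):
    # Output length per spatial dim for a strided convolution.
    if padding == 'SAME':
        return -(-n // s)        # ceil(n / s)
    return (n - k) // s + 1      # 'VALID'

def deconv_out(n, k, s, padding):
    # Output length per spatial dim for a transposed convolution.
    if padding == 'SAME':
        return n * s
    return (n - 1) * s + k       # 'VALID'

print(conv_out(28, 5, 1, 'VALID'))    # -> 24
print(conv_out(28, 3, 2, 'VALID'))    # -> 13
print(deconv_out(28, 2, 2, 'VALID'))  # -> 56
print(deconv_out(28, 3, 2, 'VALID'))  # -> 57
```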
| 32.228856 | 81 | 0.629515 | 969 | 6,478 | 4.040248 | 0.133127 | 0.038825 | 0.045977 | 0.085824 | 0.778289 | 0.772925 | 0.772925 | 0.772925 | 0.75249 | 0.75249 | 0 | 0.064963 | 0.208706 | 6,478 | 200 | 82 | 32.39 | 0.69879 | 0.112072 | 0 | 0.716129 | 0 | 0 | 0.011512 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 1 | 0.070968 | false | 0 | 0.032258 | 0 | 0.109677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
14a1733092529e7add1ebf0bd8be51a8ab9e8053 | 30 | py | Python | box/models/common/__init__.py | mamalmaleki/maktab-community | 8ce25053ea0f6f0a6c082617c9ff306d1ada9707 | [
"MIT"
] | null | null | null | box/models/common/__init__.py | mamalmaleki/maktab-community | 8ce25053ea0f6f0a6c082617c9ff306d1ada9707 | [
"MIT"
] | null | null | null | box/models/common/__init__.py | mamalmaleki/maktab-community | 8ce25053ea0f6f0a6c082617c9ff306d1ada9707 | [
"MIT"
] | null | null | null | from .image import ImageModel
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ae9f7e5fb316fa032cff2788291314107d5a11b | 153 | py | Python | A-Byte-of-Python/9_3_for.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | A-Byte-of-Python/9_3_for.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | A-Byte-of-Python/9_3_for.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | for i in range(1, 5):
print(i)
else:
print('Loop \'for\' finished')
for i in range(1, 10, 2):
print(i)
else:
print('Loop \'for\' finished')
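Both loops above run to completion, so their `else` clauses always execute. The `else` of a `for` loop is skipped only when the loop exits via `break`, which a small variation makes visible:

```python
hits = []
for i in range(1, 10, 2):     # 1, 3, 5, ...
    if i == 5:
        hits.append('break at 5')
        break
else:
    hits.append('completed')  # not reached: the loop was broken
print(hits)  # -> ['break at 5']
```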
| 15.3 | 32 | 0.54902 | 27 | 153 | 3.111111 | 0.444444 | 0.095238 | 0.142857 | 0.261905 | 0.952381 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0.052174 | 0.248366 | 153 | 9 | 33 | 17 | 0.678261 | 0 | 0 | 0.75 | 0 | 0 | 0.169935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2126770adf35c5a929e22362fdbb1924a60794fa | 6,517 | py | Python | test/programytest/parser/template/node_tests/test_attrib.py | cdoebler1/AIML2 | ee692ec5ea3794cd1bc4cc8ec2a6b5e5c20a0d6a | [
"MIT"
] | 345 | 2016-11-23T22:37:04.000Z | 2022-03-30T20:44:44.000Z | test/programytest/parser/template/node_tests/test_attrib.py | MikeyBeez/program-y | 00d7a0c7d50062f18f0ab6f4a041068e119ef7f0 | [
"MIT"
] | 275 | 2016-12-07T10:30:28.000Z | 2022-02-08T21:28:33.000Z | test/programytest/parser/template/node_tests/test_attrib.py | VProgramMist/modified-program-y | f32efcafafd773683b3fe30054d5485fe9002b7d | [
"MIT"
] | 159 | 2016-11-28T18:59:30.000Z | 2022-03-20T18:02:44.000Z | import xml.etree.ElementTree as ET
from programy.parser.template.nodes.attrib import TemplateAttribNode
from programytest.parser.base import ParserTestsBaseClass
class TestTemplateAttribNode(TemplateAttribNode):
def __init__(self):
TemplateAttribNode.__init__(self)
self.pairs = {}
def set_attrib(self, attrib_name: str, attrib_value):
self.pairs[attrib_name] = attrib_value
class TemplateAttribNodeTests(ParserTestsBaseClass):
def test_node(self):
attrib = TemplateAttribNode()
self.assertIsNotNone(attrib)
with self.assertRaises(Exception):
attrib.set_attrib("Something", "Other")
def test_parse_node_with_attrib_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node name="test">Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name)
self.assertTrue(attrib_name in attrib.pairs)
self.assertEquals("test", attrib.pairs[attrib_name].word)
def test_parse_node_with_child_attrib_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node><name>test</name> Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name)
self.assertTrue(attrib_name in attrib.pairs)
self.assertEquals("test", attrib.pairs[attrib_name].children[0].word)
def test_parse_node_with_no_attrib_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node>Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name)
self.assertFalse(attrib_name in attrib.pairs)
def test_parse_node_with_no_attrib_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node>Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name, default_value="test")
self.assertTrue(attrib_name in attrib.pairs)
def test_parse_node_with_diff_attrib_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node nameX="test">Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name)
self.assertFalse(attrib_name in attrib.pairs)
def test_parse_node_with_diff_child_attrib_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node><nameX>test</nameX>Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name)
self.assertFalse(attrib_name in attrib.pairs)
def test_parse_node_with_diff_child_attrib_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node><nameX>test</nameX>Test</node>')
attrib_name = "name"
attrib._parse_node_with_attrib(graph, expression, attrib_name, default_value="test2")
self.assertTrue(attrib_name in attrib.pairs)
self.assertEquals("test2", attrib.pairs[attrib_name].word)
def test_parse_node_with_attribs_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node name1="test1" name2="test2">Test</node>')
attrib._parse_node_with_attribs(graph, expression, [["name1", None], ["name2", None]])
self.assertTrue("name1" in attrib.pairs)
self.assertEquals("test1", attrib.pairs["name1"].word)
self.assertTrue("name2" in attrib.pairs)
self.assertEquals("test2", attrib.pairs["name2"].word)
def test_parse_node_with_child_attribs_no_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node> <name1>test1</name1> <name2>test2</name2> Test</node>')
attrib._parse_node_with_attribs(graph, expression, [["name1", None], ["name2", None]])
self.assertTrue("name1" in attrib.pairs)
self.assertEquals("test1", attrib.pairs["name1"].children[0].word)
self.assertTrue("name2" in attrib.pairs)
self.assertEquals("test2", attrib.pairs["name2"].children[0].word)
def test_parse_node_with_no_attribs_no_default_values(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node>Test</node>')
attrib._parse_node_with_attribs(graph, expression, [])
self.assertFalse("name1" in attrib.pairs)
self.assertFalse("name2" in attrib.pairs)
def test_parse_node_with_attribs_default_values(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node>Test</node>')
attrib._parse_node_with_attribs(graph, expression, [["name1", "test1"], ["name2", "test2"]])
self.assertTrue("name1" in attrib.pairs)
self.assertEquals("test1", attrib.pairs["name1"].word)
self.assertTrue("name2" in attrib.pairs)
self.assertEquals("test2", attrib.pairs["name2"].word)
def test_parse_node_with_child_attribs_with_default_value(self):
attrib = TestTemplateAttribNode()
graph = self._client_context.brain.aiml_parser.template_parser
expression = ET.fromstring('<node> <name1X>test1</name1X> <name2Y>test2</name2Y> Test</node>')
attrib._parse_node_with_attribs(graph, expression, [["name1", "test1"], ["name2", "test2"]])
self.assertTrue("name1" in attrib.pairs)
self.assertEquals("test1", attrib.pairs["name1"].word)
self.assertTrue("name2" in attrib.pairs)
self.assertEquals("test2", attrib.pairs["name2"].word)
| 39.023952 | 102 | 0.707381 | 764 | 6,517 | 5.734293 | 0.08377 | 0.070304 | 0.071217 | 0.043826 | 0.86236 | 0.852773 | 0.851176 | 0.847523 | 0.83885 | 0.810774 | 0 | 0.011028 | 0.17907 | 6,517 | 166 | 103 | 39.259036 | 0.80785 | 0 | 0 | 0.609091 | 0 | 0.018182 | 0.100813 | 0.032377 | 0 | 0 | 0 | 0 | 0.272727 | 1 | 0.136364 | false | 0 | 0.027273 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2140ba2a2ce8589f0238a4916a298db9d4c667b1 | 14,302 | py | Python | src/util/create_graph.py | imagexdsearch/imagesearch | 7f4d18906d6ebd9f5d7b4e0db4bc6c7e675fbb1d | [
"BSD-2-Clause"
] | null | null | null | src/util/create_graph.py | imagexdsearch/imagesearch | 7f4d18906d6ebd9f5d7b4e0db4bc6c7e675fbb1d | [
"BSD-2-Clause"
] | null | null | null | src/util/create_graph.py | imagexdsearch/imagesearch | 7f4d18906d6ebd9f5d7b4e0db4bc6c7e675fbb1d | [
"BSD-2-Clause"
] | null | null | null | '''
Created on Nov 18, 2016
@author: flavio
'''
from util import convert_database_to_files
from evaluation.evaluation import evaluation
import run
import numpy as np
import glob
import os
import math
def remove_files_pickle(path_output):
#name_files = glob.glob(path_output + '*.ckpt')
name_files = glob.glob(path_output + '*.pickle')
for name in name_files:
if(os.path.isfile(name)):
os.remove(name)
def remove_files_cnn(path_output):
name_files = glob.glob(path_output + '*.ckpt')
#name_files = glob.glob(path_output + '*.pickle')
for name in name_files:
if(os.path.isfile(name)):
os.remove(name)
def get_number_examples_per_class(labels_database):
classes = np.unique(labels_database)
number_per_class = np.zeros(len(classes))
cont_index=0
for class_ in classes:
number_per_class[cont_index] = np.sum(labels_database == class_)
cont_index+=1
return number_per_class
def tan_sigmoid(x):
return 2/(1+math.pow(math.e,(-2*x))) -1
def new_learning_new(c_learning,factor_dec,epoch,e_0,c_e,t):
return (c_learning - (1-tan_sigmoid( (e_0-c_e)/t ))*factor_dec )
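`tan_sigmoid` is algebraically identical to `math.tanh` (`2/(1+e^(-2x)) - 1 = tanh(x)`), so `new_learning_new` shrinks the learning rate by up to `factor_dec`, scaled by how little the error improved between epochs. A sketch with hypothetical error values (and the unused `epoch` argument dropped):

```python
import math

def tan_sigmoid(x):
    # Same formula as above; algebraically identical to math.tanh(x).
    return 2.0 / (1.0 + math.e ** (-2.0 * x)) - 1.0

def new_lr(lr, factor_dec, e_prev, e_cur, t):
    # Large error drop -> tanh near 1 -> almost no decay;
    # stagnant error -> tanh near 0 -> decay by ~factor_dec.
    return lr - (1.0 - tan_sigmoid((e_prev - e_cur) / t)) * factor_dec

print(abs(tan_sigmoid(1.3) - math.tanh(1.3)) < 1e-12)  # -> True
lr = new_lr(0.1, 0.01, 0.50, 0.48, 10.0)  # hypothetical errors
print(0.09 < lr < 0.1)                    # -> True
```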
def run_create_graph_map():
#cnn machine
path_database_train = '/home/users/flavio/databases/fmd/fmd_train_resize_augmentation/'
path_database_test = '/home/users/flavio/databases/cells/cells_test/'
path_retrieval = '/home/users/flavio/databases/fiberFlaRom/fiberFlaRom_train/query/'
path_cnn_trained = '/home/users/flavio/databases/fmd/fmd_train_resize_augmentation/features/model_test.ckpt'
path_output_train = path_database_train + 'features/'
path_output_test = path_database_test + 'features/'
preprocessing_method = 'None'
distance = 'ed'
searching_method = 'kd'
percent_database = 0.1
percent_query = 0.001
number_of_images = 10
feature_extraction_method = 'cnn_training'
#jump_num_epoch = [1,4,5,10,20,30,30]#cells
#learning_rate =[0.1,0.1,0.08,0.04,0.02,0.01,0.009]#cells
#jump_num_epoch = [1,9,10,20,30,30,50,100,100]#fmd
#learning_rate =[0.1,0.1,0.03,0.02,0.01,0.008,0.004,0.002,0.001]#fmd
#jump_num_epoch = [1,4,5,5,5]#fibers
#learning_rate =[0.1,0.1,0.08,0.06,0.004]#fibers
#learning_rate =[0.1,0.1,0.1,0.08,0.06,0.02,0.008,0.006,0.004,0.001]#cells
#learning_rate =[0.001,0.001,0.1,0.08,0.06,0.04,0.02,0.02,0.01,0.01]#cells
NUM_LEVEL = [0]
learning_rate_0 = 0.1
factor_dec = 0.01
learning_rate_f = 0.05
for num_level in NUM_LEVEL:
#remove_files_cnn(path_output_train)
list_train_time = []
list_map = []
list_accuracy = []
list_number_epoch = []
list_error_total = []
#removing files
remove_files_pickle(path_output_train)
cont_index=0
#for num_epoch in jump_num_epoch:
for num_epoch in range(1,61,1):
if(num_epoch == 1 or num_epoch ==2):
new_learning_rate = learning_rate_0
list_of_parameters = [str(new_learning_rate),str(1),str(num_level)]
else:
new_learning_rate = new_learning_new(new_learning_rate,factor_dec,num_epoch,list_error_total[-2][2],list_error_total[-1][2],num_epoch)
if(new_learning_rate < learning_rate_f):
new_learning_rate = learning_rate_f
list_of_parameters = [str(new_learning_rate),str(1),str(num_level)]
#train
#get the list of names and labels
name_images_database, labels_database, name_images_query, labels_query = convert_database_to_files.get_name_labels(path_database_train,path_retrieval)
_, train_time, _, error = run.run_command_line(name_images_database,labels_database,name_images_query,labels_query,path_cnn_trained,path_output_train,feature_extraction_method,distance,number_of_images,list_of_parameters,preprocessing_method,searching_method, isEvaluation=True,do_searching_processing=False,save_csv=False)
if(not list_train_time):
list_train_time.append(train_time[0])
else:
list_train_time.append(train_time[0] + list_train_time[-1])
print('train time epoch', num_epoch, '=', list_train_time[-1])
list_error_total.append([num_epoch, new_learning_rate, (error[0][1] + error[1][1])/2 ])
print('Num_epoch =', list_error_total[-1][0],'Learning rate =', list_error_total[-1][1], 'Error =',list_error_total[-1][2])
'''
if(not list_train_time):
list_train_time.append(train_time[0])
else:
list_train_time.append(train_time[0] + list_train_time[-1])
#evaluation
list_of_parameters = ['0.1','0',str(num_level)]
name_images_database, labels_database = convert_database_to_files.get_name_labels(path_database_test)
MAP, ACCURACY, fig = evaluation.evaluation(name_images_database, labels_database, name_images_database, labels_database,path_output_test,feature_extraction_method,distance,list_of_parameters,preprocessing_method,searching_method,path_cnn_trained=path_cnn_trained,percent_query=percent_query,percent_database=percent_database)
list_number_epoch.append(np.sum(jump_num_epoch[0:cont_index+1]))
list_map.append(MAP)
list_accuracy.append(ACCURACY)
print('Num_epoch =', list_number_epoch[-1],'train_time =', list_train_time[-1], 'MAP =',np.mean(MAP), 'Accuracy =',np.mean(ACCURACY))
for i in range(len(list_map[-1])):
print('Map for class ', i, list_map[-1][i])
for i in range(len(list_map[-1])):
print('Accuracy for class ', i, list_accuracy[-1][i])
#removing files
remove_files_pickle(path_output_test)
cont_index+=1
np.savetxt(path_output_test + feature_extraction_method + '_train_time_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_train_time),delimiter = ',')
np.savetxt(path_output_test + feature_extraction_method + '_Map_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_map),delimiter = ',')
np.savetxt(path_output_test + feature_extraction_method + '_number_epoch_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_number_epoch),delimiter = ',')
np.savetxt(path_output_test + feature_extraction_method + '_accuracy_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_accuracy),delimiter = ',')
'''
np.savetxt(path_output_train + feature_extraction_method + '_learning_rate_error_' + '.csv', np.asarray(list_error_total),delimiter = ',')
def run_create_graph_loss_decay():
#cnn machine
path_database_train = '/home/users/flavio/databases/fmd/fmd_train_resize_augmentation/'
path_database_test = '/home/users/flavio/databases/cells/cells_test/'
path_retrieval = '/home/users/flavio/databases/fiberFlaRom/fiberFlaRom_train/query/'
path_cnn_trained = '/home/users/flavio/databases/fmd/fmd_train_resize_augmentation/features/model_test.ckpt'
path_output_train = path_database_train + 'features/'
path_output_test = path_database_test + 'features/'
preprocessing_method = 'None'
distance = 'ed'
searching_method = 'kd'
percent_database = 0.1
percent_query = 0.001
number_of_images = 10
feature_extraction_method = 'cnn_training'
#jump_num_epoch = [1,4,5,10,20,30,30]#cells
#learning_rate =[0.1,0.1,0.08,0.04,0.02,0.01,0.009]#cells
#jump_num_epoch = [1,9,10,20,30,30,50,100,100]#fmd
#learning_rate =[0.1,0.1,0.03,0.02,0.01,0.008,0.004,0.002,0.001]#fmd
#jump_num_epoch = [1,4,5,5,5]#fibers
#learning_rate =[0.1,0.1,0.08,0.06,0.004]#fibers
learning_rate =[0.1,0.1,0.09,0.09,0.08,0.08,0.07,0.07,0.06,0.06,0.05,0.05,0.04,0.04,0.03,0.03,0.02,0.02,0.01,0.01]
#learning_rate =[0.001,0.001,0.1,0.08,0.06,0.04,0.02,0.02,0.01,0.01]#cells
NUM_LEVEL = [0]
#learning_rate_0 = 0.1
#factor_dec = 0.01
learning_rate_f = 0.01
for num_level in NUM_LEVEL:
#remove_files_cnn(path_output_train)
list_train_time = []
list_map = []
list_accuracy = []
list_number_epoch = []
list_error_total = []
#removing files
remove_files_pickle(path_output_train)
cont_index=0
#for num_epoch in jump_num_epoch:
for num_epoch in range(1,21,1):
if(num_epoch == 1 or num_epoch ==2):
try:
new_learning_rate = learning_rate[num_epoch-1]
except:
print('Learning rate', new_learning_rate)
new_learning_rate = learning_rate_f
list_of_parameters = [str(new_learning_rate),str(1),str(num_level)]
#train
#get the list of names and labels
name_images_database, labels_database, name_images_query, labels_query = convert_database_to_files.get_name_labels(path_database_train,path_retrieval)
_, train_time, _, error = run.run_command_line(name_images_database,labels_database,name_images_query,labels_query,path_cnn_trained,path_output_train,feature_extraction_method,distance,number_of_images,list_of_parameters,preprocessing_method,searching_method, isEvaluation=True,do_searching_processing=False,save_csv=False)
list_error_total.append([num_epoch, new_learning_rate, (error[0][1] + error[1][1])/2 ])
print('Num_epoch =', list_error_total[-1][0],'Learning rate =', list_error_total[-1][1], 'Error =',list_error_total[-1][2])
'''
if(not list_train_time):
list_train_time.append(train_time[0])
else:
list_train_time.append(train_time[0] + list_train_time[-1])
#evaluation
list_of_parameters = ['0.1','0',str(num_level)]
name_images_database, labels_database = convert_database_to_files.get_name_labels(path_database_test)
        MAP, ACCURACY, fig = evaluation.evaluation(name_images_database, labels_database, name_images_database, labels_database, path_output_test, feature_extraction_method, distance, list_of_parameters, preprocessing_method, searching_method, path_cnn_trained=path_cnn_trained, percent_query=percent_query, percent_database=percent_database)
        list_number_epoch.append(np.sum(jump_num_epoch[0:cont_index + 1]))
        list_map.append(MAP)
        list_accuracy.append(ACCURACY)
        print('Num_epoch =', list_number_epoch[-1], 'train_time =', list_train_time[-1], 'MAP =', np.mean(MAP), 'Accuracy =', np.mean(ACCURACY))
        for i in range(len(list_map[-1])):
            print('Map for class ', i, list_map[-1][i])
        for i in range(len(list_accuracy[-1])):
            print('Accuracy for class ', i, list_accuracy[-1][i])
        # removing files
        remove_files_pickle(path_output_test)
        cont_index += 1
    np.savetxt(path_output_test + feature_extraction_method + '_train_time_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_train_time), delimiter=',')
    np.savetxt(path_output_test + feature_extraction_method + '_Map_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_map), delimiter=',')
    np.savetxt(path_output_test + feature_extraction_method + '_number_epoch_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_number_epoch), delimiter=',')
    np.savetxt(path_output_test + feature_extraction_method + '_accuracy_' + preprocessing_method + '_' + str(num_level) + '_level' + '.csv', np.asarray(list_accuracy), delimiter=',')
    '''
    np.savetxt(path_output_train + feature_extraction_method + '_learning_rate_error_decay' + '.csv', np.asarray(list_error_total), delimiter=',')


def run_create_graph_accuracy():
    # cnn
    path_database = '/home/users/flavio/databases/cells/cells_test/'  # '/home/users/flavio/databases/new_database_split/new_database_split_test/'
    # path_cnn_trained = '/home/users/flavio/databases/fiberFlaRom/fiberFlaRom_train/features/model.ckpt'  # '/home/users/flavio/databases/new_databa$
    path_cnn_trained = '/home/users/flavio/databases/inception_resnet_v2_2016_08_30.ckpt'
    path_output = path_database + 'features/'
    # flavio machine
    # path_database = '/Users/flavio/Desktop/cells/'
    # path_cnn_trained = '/Users/flavio/Desktop/cells/features/model.ckpt'
    # path_output = path_database + 'features/'
    preprocessing_method = 'None'
    distance = 'ed'
    searching_method = 'kd'
    percent_database = 1
    percent_query = 1
    feature_extraction_method = 'cnn'
    jump = 10
    # evaluation
    list_of_parameters = ['0.1', '0', '0']
    name_images_database, labels_database = convert_database_to_files.get_name_labels(path_database)
    list_k_accuracy = range(1, np.int(np.min(get_number_examples_per_class(labels_database))), jump)
    list_accuracy = evaluation.get_accuracy_using_list_k_accuracy(name_images_database, labels_database, name_images_database, labels_database, path_output, feature_extraction_method, distance, list_of_parameters, preprocessing_method, searching_method, list_k_accuracy, path_cnn_trained=path_cnn_trained, percent_query=percent_query, percent_database=percent_database)
    np.savetxt(path_output + feature_extraction_method + '_accuracy_per_class_' + preprocessing_method + '.csv', np.asarray(list_accuracy), delimiter=',')
    np.savetxt(path_output + feature_extraction_method + '_list_k_accuracy_' + preprocessing_method + '.csv', np.asarray(list_k_accuracy), delimiter=',')


run_create_graph_loss_decay()
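The script above accumulates per-epoch, per-class metric lists and persists them with `np.savetxt`. The round-trip that pattern relies on can be sketched as follows; the values and the in-memory buffer are illustrative, not from the original run:

```python
import numpy as np
from io import StringIO

# Hypothetical per-epoch, per-class MAP values standing in for list_map.
list_map = [[0.80, 0.70], [0.85, 0.72]]

# Persist the 2-D list the same way the script does, but into a buffer
# instead of a file on disk.
buf = StringIO()
np.savetxt(buf, np.asarray(list_map), delimiter=',')

# Reading it back recovers one row per epoch, one column per class.
buf.seek(0)
loaded = np.loadtxt(buf, delimiter=',')
```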
| 49.832753 | 359 | 0.668648 | 1,972 | 14,302 | 4.48073 | 0.087221 | 0.050249 | 0.007469 | 0.03531 | 0.883658 | 0.852535 | 0.836464 | 0.801041 | 0.799683 | 0.793572 | 0 | 0.043675 | 0.21074 | 14,302 | 287 | 360 | 49.832753 | 0.739103 | 0.11411 | 0 | 0.546154 | 0 | 0 | 0.115193 | 0.082593 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061538 | false | 0 | 0.053846 | 0.015385 | 0.138462 | 0.030769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
214b446017db4dd73f3fcdf587e49f196feba2bf | 292 | py | Python | src/jk_commentjson/__init__.py | jkpubsrc/python-module-jk-commentjson | 7727e325b949f447c902e1a1f32e4c22e07264e1 | [
"Apache-1.1"
] | null | null | null | src/jk_commentjson/__init__.py | jkpubsrc/python-module-jk-commentjson | 7727e325b949f447c902e1a1f32e4c22e07264e1 | [
"Apache-1.1"
] | null | null | null | src/jk_commentjson/__init__.py | jkpubsrc/python-module-jk-commentjson | 7727e325b949f447c902e1a1f32e4c22e07264e1 | [
"Apache-1.1"
] | null | null | null | from jk_commentjson.commentjson import dump
from jk_commentjson.commentjson import dumps
from jk_commentjson.commentjson import JSONLibraryException
from jk_commentjson.commentjson import load
from jk_commentjson.commentjson import loads
from jk_commentjson.commentjson import loadFromFile
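The module above re-exports a JSON parser that tolerates comments. As a rough, hypothetical sketch of what such a loader does (the real `jk_commentjson` parsing rules are richer), line comments can be stripped before delegating to the stdlib parser:

```python
import json
import re


def loads_with_comments(text):
    # Naive comment stripper: drops //... and #... to end of line.
    # Assumes comment markers never occur inside string values, a case
    # a real comment-aware JSON parser handles properly.
    cleaned = [re.sub(r'//.*$|#.*$', '', line) for line in text.splitlines()]
    return json.loads('\n'.join(cleaned))


config = loads_with_comments('{\n  "retries": 3  // retry budget\n}')
```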
| 32.444444 | 59 | 0.890411 | 36 | 292 | 7.055556 | 0.277778 | 0.141732 | 0.401575 | 0.661417 | 0.80315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089041 | 292 | 8 | 60 | 36.5 | 0.954887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
215b4b2d0b9293116f6e72abe2b09465db9cd22c | 67 | py | Python | terrapin/plot.py | dharhas/terrapin | a448e89e111055795db2d9ec4c04864b04b9f177 | [
"BSD-2-Clause"
] | 1 | 2020-02-12T01:03:55.000Z | 2020-02-12T01:03:55.000Z | terrapin/plot.py | dharhas/terrapin | a448e89e111055795db2d9ec4c04864b04b9f177 | [
"BSD-2-Clause"
] | null | null | null | terrapin/plot.py | dharhas/terrapin | a448e89e111055795db2d9ec4c04864b04b9f177 | [
"BSD-2-Clause"
] | 2 | 2015-02-15T18:14:01.000Z | 2019-07-28T12:26:38.000Z | import matplotlib.pyplot as plt

def flow_grid(dem, angles):
    pass
21647268efcb6b6bcee747e62603eb1783262fc3 | 152 | py | Python | pub_site/src/pub_site/api/account/__init__.py | webee/pay | b48c6892686bf3f9014bb67ed119506e41050d45 | [
"W3C"
] | 1 | 2019-10-14T11:51:49.000Z | 2019-10-14T11:51:49.000Z | pub_site/src/pub_site/api/account/__init__.py | webee/pay | b48c6892686bf3f9014bb67ed119506e41050d45 | [
"W3C"
] | null | null | null | pub_site/src/pub_site/api/account/__init__.py | webee/pay | b48c6892686bf3f9014bb67ed119506e41050d45 | [
"W3C"
] | null | null | null | # coding=utf-8
from ..utils import SubBlueprint
from .. import api_mod
account_mod = SubBlueprint('account', api_mod, '/account')
from . import views | 19 | 58 | 0.743421 | 21 | 152 | 5.238095 | 0.52381 | 0.181818 | 0.236364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.138158 | 152 | 8 | 59 | 19 | 0.832061 | 0.078947 | 0 | 0 | 0 | 0 | 0.107914 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
216c93af0f5cbf3f010be3b7c855e7e5ef10faf4 | 2,095 | py | Python | tests/test_softmax.py | wakamezake/deep-learning-from-scratch-3 | 92614028be0bcd0f0b2b6ada419a20110bae7ea7 | [
"MIT"
] | null | null | null | tests/test_softmax.py | wakamezake/deep-learning-from-scratch-3 | 92614028be0bcd0f0b2b6ada419a20110bae7ea7 | [
"MIT"
] | null | null | null | tests/test_softmax.py | wakamezake/deep-learning-from-scratch-3 | 92614028be0bcd0f0b2b6ada419a20110bae7ea7 | [
"MIT"
] | null | null | null | import unittest
import numpy as np
from dezero import Variable
import dezero.functions as F
from dezero.utils import check_backward
import chainer.functions as CF

class TestSoftmax(unittest.TestCase):
    def test_forward1(self):
        x = np.array([[0, 1, 2], [0, 2, 4]], np.float32)
        y2 = CF.softmax(x, axis=1)
        y = F.softmax(Variable(x))
        res = np.allclose(y.data, y2.data)
        self.assertTrue(res)

    def test_forward2(self):
        np.random.seed(0)
        x = np.random.rand(10, 10).astype('f')
        y2 = CF.softmax(x, axis=1)
        y = F.softmax(Variable(x))
        res = np.allclose(y.data, y2.data)
        self.assertTrue(res)

    def test_forward3(self):
        np.random.seed(0)
        x = np.random.rand(10, 10, 10).astype('f')
        y2 = CF.softmax(x, axis=1)
        y = F.softmax(Variable(x))
        res = np.allclose(y.data, y2.data)
        self.assertTrue(res)

    def test_backward1(self):
        x_data = np.array([[0, 1, 2], [0, 2, 4]])
        f = lambda x: F.softmax(x, axis=1)
        self.assertTrue(check_backward(f, x_data))

    def test_backward2(self):
        np.random.seed(0)
        x_data = np.random.rand(10, 10)
        f = lambda x: F.softmax(x, axis=1)
        self.assertTrue(check_backward(f, x_data))

    def test_backward3(self):
        np.random.seed(0)
        x_data = np.random.rand(10, 10, 10)
        f = lambda x: F.softmax(x, axis=1)
        self.assertTrue(check_backward(f, x_data))


class TestSoftmaxCrossEntropy(unittest.TestCase):
    def test_forward1(self):
        x = np.array([[-1, 0, 1, 2], [2, 0, 1, -1]], np.float32)
        t = np.array([3, 0]).astype(np.int32)
        y = F.softmax_cross_entropy(x, t)
        y2 = CF.softmax_cross_entropy(x, t)
        res = np.allclose(y.data, y2.data)
        self.assertTrue(res)

    def test_backward1(self):
        x_data = np.array([[-1, 0, 1, 2], [2, 0, 1, -1]], np.float32)
        t = np.array([3, 0]).astype(np.int32)
        f = lambda x: F.softmax_cross_entropy(x, Variable(t))
        self.assertTrue(check_backward(f, x_data))
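The tests above check a custom softmax against Chainer's reference implementation. A minimal numerically stable softmax in plain NumPy (a sketch of what `F.softmax` is assumed to compute along `axis=1`, not the dezero implementation itself) looks like:

```python
import numpy as np


def softmax(x, axis=1):
    # Subtract the row-wise max before exponentiating so large logits
    # cannot overflow exp(); the subtraction cancels in the ratio.
    shifted = x - x.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)


x = np.array([[0, 1, 2], [0, 2, 4]], np.float32)
y = softmax(x)
```

Each row of `y` is a probability distribution, so the rows sum to 1 and the largest logit gets the largest probability.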
0d07294f5680e269b86dc38791d465897b102073 | 11,904 | py | Python | mstrio/api/monitors.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | mstrio/api/monitors.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | mstrio/api/monitors.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | from mstrio.utils.helper import response_handler

def get_projects(connection, offset=0, limit=-1, error_msg=None):
    """Get list of all projects from metadata.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        offset(int): Starting point within the collection of returned search
            results. Used to control paging behavior.
        limit(int): Maximum number of items returned for a single search
            request. Used to control paging behavior. Use -1 (default) for no
            limit (subject to governing settings).
        error_msg (string, optional): Custom Error Message for Error Handling

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    response = connection.session.get(url=connection.base_url + '/api/monitors/projects',
                                      headers={'X-MSTR-ProjectID': None},
                                      params={'offset': offset,
                                              'limit': limit})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error getting list of all projects from metadata."
        response_handler(response, error_msg)
    return response


def get_projects_async(future_session, connection, offset=0, limit=-1, error_msg=None):
    """Get list of all projects from metadata asynchronously.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        offset(int): Starting point within the collection of returned search
            results. Used to control paging behavior.
        limit(int): Maximum number of items returned for a single search
            request. Used to control paging behavior. Use -1 (default) for no
            limit (subject to governing settings).
        error_msg (string, optional): Custom Error Message for Error Handling

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    url = connection.base_url + '/api/monitors/projects'
    headers = {'X-MSTR-ProjectID': None}
    params = {'offset': offset,
              'limit': limit}
    future = future_session.get(url=url, headers=headers, params=params)
    return future

def get_node_info(connection, id=None, node_name=None, error_msg=None):
    """Get information about nodes in the connected Intelligence Server
    cluster.

    This includes basic information, runtime state and information of projects
    on each node. This operation requires the "Monitor cluster" privilege.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        id (str, optional): Project ID
        node_name (str, optional): Node Name
        error_msg (string, optional): Custom Error Message for Error Handling
    """
    response = connection.session.get(url=connection.base_url + '/api/monitors/iServer/nodes',
                                      headers={'X-MSTR-ProjectID': None},
                                      params={'projects.id': id,
                                              'name': node_name})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error getting information about nodes in the connected Intelligence Server cluster."
        response_handler(response, error_msg)
    return response


def update_node_properties(connection, node_name, project_id, body, error_msg=None, whitelist=[]):
    """Update properties such as project status for a specific project for
    respective cluster node. You obtain cluster node name and project id from
    GET /monitors/iServer/nodes.

    {
        "operationList": [
            {
                "op": "replace",
                "path": "/status",
                "value": "loaded"
            }
        ]
    }

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        node_name (string): Node Name.
        project_id (string): Project ID.
        body (JSON): Body 'op' can have "value" set to "add", "replace",
            "remove"; 'path' can have pattern: /([/A-Za-z0-9~])*-* example:
            /status; 'values' for '/status' we can choose [loaded, unloaded,
            request_idle, exec_idle, wh_exec_idle, partial_idle, full_idle]
        error_msg (string, optional): Custom Error Message for Error Handling

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    response = connection.session.patch(url=connection.base_url + '/api/monitors/iServer/nodes/' +
                                        node_name + '/projects/' + project_id,
                                        headers={'X-MSTR-ProjectID': None},
                                        json=body)
    if not response.ok:
        if error_msg is None:
            error_msg = "Error updating properties for a specific project for respective cluster node."
        response_handler(response, error_msg, whitelist=whitelist)
    return response

def add_node(connection, node_name, error_msg=None, whitelist=[]):
    """Add a node to the connected Intelligence Server cluster. The node must
    meet I-Server clustering requirements. If the node is part of a multi-node
    cluster, all the nodes in that cluster will be added. If the node is
    already in the cluster, the operation succeeds without making any change.
    This operation requires the "Monitor cluster" and "Administer cluster"
    privilege.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        node_name (string): Node Name.
        error_msg (string, optional): Custom Error Message for Error Handling
        whitelist(list): list of tuples of I-Server Error and HTTP errors codes
            respectively, which will not be handled
            i.e. whitelist = [('ERR001', 500), ('ERR004', 404)]

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    response = connection.session.put(url=connection.base_url + '/api/monitors/iServer/nodes/' + node_name,
                                      headers={'X-MSTR-ProjectID': None})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error adding node '{}' to the connected Intelligence Server cluster".format(node_name)
        response_handler(response, error_msg, whitelist=whitelist)
    return response


def remove_node(connection, node_name, error_msg=None, whitelist=[]):
    """Remove a node from the connected Intelligence Server cluster. After a
    successful removal, some existing authorization tokens may become
    invalidated and in this case re-login is needed. You cannot remove a node
    if it's the configured default node of Library Server, or there is only one
    node in the cluster. This operation requires the "Monitor cluster" and
    "Administer cluster" privilege.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        node_name (string): Node Name.
        error_msg (string, optional): Custom Error Message for Error Handling
        whitelist(list): list of tuples of I-Server Error and HTTP errors codes
            respectively, which will not be handled
            i.e. whitelist = [('ERR001', 500), ('ERR004', 404)]

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    response = connection.session.delete(url=connection.base_url + '/api/monitors/iServer/nodes/' + node_name,
                                         headers={'X-MSTR-ProjectID': None})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error removing node '{}' from the connected Intelligence Server cluster.".format(node_name)
        response_handler(response, error_msg, whitelist=whitelist)
    return response

def get_user_connections(connection, node_name, offset=0, limit=100, error_msg=None):
    """Get user connections information on specific intelligence server node.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        offset(int): Starting point within the collection of returned search
            results. Used to control paging behavior.
        limit(int): Maximum number of items returned for a single search
            request. Used to control paging behavior. Use -1 for no limit
            (subject to governing settings).
        node_name (string): Node Name.
        error_msg (string, optional): Custom Error Message for Error Handling

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    response = connection.session.get(url=connection.base_url + '/api/monitors/userConnections',
                                      headers={'X-MSTR-ProjectID': None},
                                      params={'clusterNode': node_name,
                                              'offset': offset,
                                              'limit': limit})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error getting user connections for '{}' cluster node.".format(node_name)
        response_handler(response, error_msg)
    return response


def get_user_connections_async(future_session, connection, node_name, offset=0, limit=100):
    """Get user connections information on specific intelligence server node.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        node_name (string): Node Name.
        offset(int): Starting point within the collection of returned search
            results. Used to control paging behavior.
        limit(int): Maximum number of items returned for a single search
            request. Used to control paging behavior. Use -1 for no limit
            (subject to governing settings).

    Returns:
        HTTP response object returned by the MicroStrategy REST server.
    """
    params = {'clusterNode': node_name,
              'offset': offset,
              'limit': limit}
    url = connection.base_url + '/api/monitors/userConnections'
    headers = {'X-MSTR-ProjectID': None}
    future = future_session.get(url=url, headers=headers, params=params)
    return future

def delete_user_connection(connection, id, error_msg=None):
    """Disconnect a user connection on specific intelligence server node.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        id (str, optional): Project ID
        error_msg (string, optional): Custom Error Message for Error Handling
    """
    response = connection.session.delete(url=connection.base_url + '/api/monitors/userConnections/' + id,
                                         headers={'X-MSTR-ProjectID': None})
    if not response.ok:
        if error_msg is None:
            error_msg = "Error deleting user connections '{}'.".format(id)
        # whitelist error related to disconnecting yourself or other unallowed
        response_handler(response, error_msg, whitelist=[('ERR001', 500)])
    return response


def delete_user_connection_async(future_session, connection, id, error_msg=None):
    """Disconnect a user connection on specific intelligence server node.

    Args:
        connection(object): MicroStrategy connection object returned by
            `connection.Connection()`.
        id (str, optional): Project ID
    """
    url = connection.base_url + '/api/monitors/userConnections/' + id
    headers = {'X-MSTR-ProjectID': None}
    future = future_session.delete(url=url, headers=headers)
    return future
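`update_node_properties` expects an `operationList` body in the JSON-patch-like shape shown in its docstring. A small helper for building that body (an illustrative sketch, not part of the mstrio SDK) could look like:

```python
import json


def build_status_patch(status):
    """Build the operationList body accepted by update_node_properties.

    `status` is one of the values listed in the docstring, e.g. "loaded"
    or "unloaded". This helper is hypothetical and only illustrates the
    expected payload shape.
    """
    return {
        "operationList": [
            {"op": "replace", "path": "/status", "value": status}
        ]
    }


body = build_status_patch("unloaded")
payload = json.dumps(body)  # what connection.session.patch(json=body) sends
```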
| 44.75188 | 116 | 0.645749 | 1,378 | 11,904 | 5.497823 | 0.156749 | 0.040127 | 0.035903 | 0.043559 | 0.813358 | 0.813358 | 0.788147 | 0.765971 | 0.721489 | 0.692186 | 0 | 0.005533 | 0.271169 | 11,904 | 265 | 117 | 44.920755 | 0.867681 | 0.524278 | 0 | 0.604651 | 0 | 0 | 0.190075 | 0.05355 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116279 | false | 0 | 0.011628 | 0 | 0.244186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b4df2cdf988a4ecf3368cf78f3327edd7575cf1d | 30 | py | Python | __init__.py | drakkhen/python-adafruitdisplay | 9705cad68dd6e7834219a3b68f38ff67a99a0604 | [
"MIT"
] | null | null | null | __init__.py | drakkhen/python-adafruitdisplay | 9705cad68dd6e7834219a3b68f38ff67a99a0604 | [
"MIT"
] | null | null | null | __init__.py | drakkhen/python-adafruitdisplay | 9705cad68dd6e7834219a3b68f38ff67a99a0604 | [
"MIT"
] | null | null | null | from adafruitdisplay import *
| 15 | 29 | 0.833333 | 3 | 30 | 8.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
371360a378c6ca6de14d5076b2b4232f54d11c9f | 31 | py | Python | Hello World Programs/Python/helloWorld_python.py | TeacherManoj0131/HacktoberFest2020-Contributions | c7119202fdf211b8a6fc1eadd0760dbb706a679b | [
"MIT"
] | 256 | 2020-09-30T19:31:34.000Z | 2021-11-20T18:09:15.000Z | Hello World Programs/Python/helloWorld_python.py | TeacherManoj0131/HacktoberFest2020-Contributions | c7119202fdf211b8a6fc1eadd0760dbb706a679b | [
"MIT"
] | 293 | 2020-09-30T19:14:54.000Z | 2021-06-06T02:34:47.000Z | Hello World Programs/Python/helloWorld_python.py | TeacherManoj0131/HacktoberFest2020-Contributions | c7119202fdf211b8a6fc1eadd0760dbb706a679b | [
"MIT"
] | 1,620 | 2020-09-30T18:37:44.000Z | 2022-03-03T20:54:22.000Z | print("Hello World! Welcome!")
| 15.5 | 30 | 0.709677 | 4 | 31 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2ed2f8c395ef091938cdb64973bfc057dbdd2f8a | 98 | py | Python | users/urls.py | Nizhuuum/CSE_303_SEC_02_GROUP_07 | d53ca01cace500851840696d1d8943f4447f5297 | [
"MIT"
] | null | null | null | users/urls.py | Nizhuuum/CSE_303_SEC_02_GROUP_07 | d53ca01cace500851840696d1d8943f4447f5297 | [
"MIT"
] | null | null | null | users/urls.py | Nizhuuum/CSE_303_SEC_02_GROUP_07 | d53ca01cace500851840696d1d8943f4447f5297 | [
"MIT"
] | 3 | 2021-09-04T17:40:27.000Z | 2021-09-11T05:44:59.000Z | from django.urls import path, include
from . import views
#from users import views as user_views
| 19.6 | 38 | 0.795918 | 16 | 98 | 4.8125 | 0.625 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 98 | 4 | 39 | 24.5 | 0.939024 | 0.377551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25a59ed495e0114de5d6eedac28a261f3824e71f | 506 | py | Python | ysl/twisted/log.py | jianingy/sitebase | 7afe00b7e2c642461207786e9ab851e1d3b59015 | [
"BSD-3-Clause"
] | 1 | 2021-02-19T06:31:43.000Z | 2021-02-19T06:31:43.000Z | ysl/twisted/log.py | jianingy/sitebase | 7afe00b7e2c642461207786e9ab851e1d3b59015 | [
"BSD-3-Clause"
] | null | null | null | ysl/twisted/log.py | jianingy/sitebase | 7afe00b7e2c642461207786e9ab851e1d3b59015 | [
"BSD-3-Clause"
] | 2 | 2015-09-18T02:21:32.000Z | 2021-02-19T06:31:47.000Z | #!/usr/bin/env python2.6
from twisted.python.log import msg as _log
import logging
__all__ = ["debug", "info", "warn", "error", "crit"]

def debug(msg, *args):
    return _log(msg, *args, level=logging.DEBUG)


def info(msg, *args):
    return _log(msg, *args, level=logging.INFO)


def warn(msg, *args):
    return _log(msg, *args, level=logging.WARNING)


def error(msg, *args):
    return _log(msg, *args, level=logging.ERROR)


def crit(msg, *args):
    return _log(msg, *args, level=logging.CRITICAL)
25c3126a6437ada5baf57a627a918d26488a22a7 | 23,128 | py | Python | test/unit/findings_api_v1_tests/test_findings_api_v1.py | prince737/security-advisor-sdk-python | a06f6fe8180377a6ca8291ba74cff326cb56b539 | [
"Apache-2.0"
] | null | null | null | test/unit/findings_api_v1_tests/test_findings_api_v1.py | prince737/security-advisor-sdk-python | a06f6fe8180377a6ca8291ba74cff326cb56b539 | [
"Apache-2.0"
] | 17 | 2020-05-30T11:21:06.000Z | 2021-04-20T10:01:09.000Z | test/unit/findings_api_v1_tests/test_findings_api_v1.py | prince737/security-advisor-sdk-python | a06f6fe8180377a6ca8291ba74cff326cb56b539 | [
"Apache-2.0"
] | 4 | 2020-05-18T12:38:03.000Z | 2021-04-20T07:13:47.000Z | # coding: utf-8
# Copyright 2020 IBM All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test the ibm_security_advisor_findings_api_sdk service API operations
"""
import pytest
import unittest
import datetime
# import json
# import os
from ibm_cloud_security_advisor import FindingsApiV1
from ibm_cloud_sdk_core import BaseService
from unittest.mock import patch
from unittest import mock
m = mock.Mock()
class TestFindingsApi(unittest.TestCase):
    app = {}

    @classmethod
    def setup_class(cls):
        print("\nrunning setup preparation...")
        with mock.patch('ibm_cloud_security_advisor.findings_api_v1.BaseService') as mocked_os:
            TestFindingsApi.app = FindingsApiV1({},)
        # read env vars
        # envvars = read_credentials()

    @classmethod
    def teardown_class(cls):
        print("\nrunning teardown, cleaning up the env...")
        # print("teardown:delete note")

    def test_init(self):
        with mock.patch('ibm_cloud_security_advisor.findings_api_v1.BaseService') as mocked_os:
            app = FindingsApiV1({},)

    @patch.object(BaseService, '__init__')
    def test_new_instance(self, mock1):
        assert BaseService.__init__ is mock1
        with mock.patch('ibm_cloud_security_advisor.findings_api_v1.get_authenticator_from_environment') as mocked_os:
            FindingsApiV1.new_instance()

    """
    post_graph test cases
    """

    def test_post_graph_account_id_is_none(self):
        account_id = None
        query = "query {occurrence(providerId:\"provider_id\",id:\"id\") {name id}}"
        self.assertRaises(
            ValueError, TestFindingsApi.app.post_graph, account_id, body=query)

    def test_post_graph_body_is_none(self):
        account_id = "abc"
        query = None
        self.assertRaises(
            ValueError, TestFindingsApi.app.post_graph, account_id, body=query)

    @patch.object(BaseService, 'prepare_request')
    @patch.object(BaseService, 'send')
    def test_post_graph_success(self, mock1, mock2):
        query = "query {occurrence(providerId:\"provider_id\",id:\"id\") {name id}}"
        TestFindingsApi.app.post_graph("abc", body=query,
                                       content_type="application/graphql")

    @patch.object(BaseService, 'prepare_request')
    @patch.object(BaseService, 'send')
    def test_post_graph_pass_kwargs(self, mock1, mock2):
        query = "query {occurrence(providerId:\"provider_id\",id:\"id\") {name id}}"
        headers = {"headers": {}}
        TestFindingsApi.app.post_graph("abc", body=query,
                                       content_type="application/graphql", **headers)

    @patch.object(BaseService, 'prepare_request')
    @patch.object(BaseService, 'send')
    def test_post_graph_content_type_is_application_json(self, mock1, mock2):
        query = {}
        headers = {"headers": {}}
        TestFindingsApi.app.post_graph("abc", body=query,
                                       content_type="application/json", **headers)
"""
create_note test cases
"""
def test_create_note_account_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id=None, provider_id="provider_id",
short_description="short_description", long_description="long_description",
kind="kind", id="id", reported_by={}
)
def test_create_note_provider_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id=None,
short_description="short_description", long_description="long_description",
kind="kind", id="id", reported_by={}
)
def test_create_note_short_description_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id="provider_id",
short_description=None, long_description="long_description",
kind="kind", id="id", reported_by={}
)
def test_create_note_long_description_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id="provider_id",
short_description="short_description", long_description=None,
kind="kind", id="id", reported_by={}
)
def test_create_note_kind_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id="provider_id",
short_description="short_description", long_description="long_description",
kind=None, id="id", reported_by={}
)
def test_create_note_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id="provider_id",
short_description="short_description", long_description="long_description",
kind="kind", id=None, reported_by={}
)
def test_create_note_reported_by_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.create_note, account_id="account_id", provider_id="provider_id",
short_description="short_description", long_description="long_description",
kind="kind", id="id", reported_by=None
)
@patch.object(BaseService, '_convert_model')
@patch.object(BaseService, 'send')
@patch.object(BaseService, 'prepare_request')
def test_create_note_success(self, mock1, mock2, mock3):
headers = {"headers": {}}
TestFindingsApi.app.create_note(account_id="account_id", provider_id="provider_id",
short_description="short_description", long_description="long_description",
kind="kind", id="id", reported_by={},
related_url=[], finding={}, kpi={}, card={}, section={}, **headers)
"""
list_note test cases
"""
def test_list_notes_account_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.list_notes, account_id=None, provider_id="provider_id"
)
def test_list_notes_provider_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.list_notes, account_id="account_id", provider_id=None
)
@patch.object(BaseService, '_convert_model')
@patch.object(BaseService, 'send')
@patch.object(BaseService, 'prepare_request')
def test_list_notes_success(self, mock1, mock2, mock3):
headers = {"headers": {}}
TestFindingsApi.app.list_notes(
account_id="account_id", provider_id="provider_id", **headers)
"""
get_note test cases
"""
def test_get_note_account_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.get_note, account_id=None, provider_id="provider_id",
note_id="abc"
)
def test_get_note_provider_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.get_note, account_id="account_id", provider_id=None,
note_id="abc"
)
def test_get_note_note_id_is_none(self):
self.assertRaises(
ValueError, TestFindingsApi.app.get_note, account_id="account_id", provider_id="abc",
note_id=None
)
@patch.object(BaseService, '_convert_model')
@patch.object(BaseService, 'send')
@patch.object(BaseService, 'prepare_request')
def test_get_note_success(self, mock1, mock2, mock3):
headers = {"headers": {}}
TestFindingsApi.app.get_note(
account_id="account_id", provider_id="provider_id", note_id="abc", **headers)
    """
    update_note test cases
    """

    def test_update_note_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id=None, provider_id="provider_id", note_id="abc",
            short_description="short_description", long_description="long_description",
            kind="kind", id="id", reported_by={}
        )

    def test_update_note_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id=None, note_id="abc",
            short_description="short_description", long_description="long_description",
            kind="kind", id="id", reported_by={}
        )

    def test_update_note_note_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="abc", note_id=None,
            short_description="short_description", long_description="long_description",
            kind="kind", id="id", reported_by={}
        )

    def test_update_note_short_description_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="provider_id", note_id="abc",
            short_description=None, long_description="long_description",
            kind="kind", id="id", reported_by={}
        )

    def test_update_note_long_description_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="provider_id", note_id="abc",
            short_description="short_description", long_description=None,
            kind="kind", id="id", reported_by={}
        )

    def test_update_note_kind_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="provider_id", note_id="abc",
            short_description="short_description", long_description="long_description",
            kind=None, id="id", reported_by={}
        )

    def test_update_note_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="provider_id", note_id="abc",
            short_description="short_description", long_description="long_description",
            kind="kind", id=None, reported_by={}
        )

    def test_update_note_reported_by_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_note, account_id="account_id", provider_id="provider_id", note_id="abc",
            short_description="short_description", long_description="long_description",
            kind="kind", id="id", reported_by=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_update_note_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.update_note(account_id="account_id", provider_id="provider_id", note_id="abc",
                                        short_description="short_description", long_description="long_description",
                                        kind="kind", id="id", reported_by={},
                                        related_url=[], finding={}, kpi={}, card={}, section={}, **headers)
    """
    delete_note test cases
    """

    def test_delete_note_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_note, account_id=None, provider_id="provider_id",
            note_id="abc"
        )

    def test_delete_note_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_note, account_id="account_id", provider_id=None,
            note_id="abc"
        )

    def test_delete_note_note_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_note, account_id="account_id", provider_id="abc",
            note_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_delete_note_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.delete_note(
            account_id="account_id", provider_id="provider_id", note_id="abc", **headers)

    """
    get_occurrence_note test cases
    """

    def test_get_occurrence_note_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence_note, account_id=None, provider_id="provider_id",
            occurrence_id="abc"
        )

    def test_get_occurrence_note_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence_note, account_id="account_id", provider_id=None,
            occurrence_id="abc"
        )

    def test_get_occurrence_note_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence_note, account_id="account_id", provider_id="abc",
            occurrence_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_get_occurrence_note_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.get_occurrence_note(
            account_id="account_id", provider_id="provider_id", occurrence_id="abc", **headers)
    """
    create_occurrence test cases
    """

    def test_create_occurrence_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.create_occurrence, account_id=None, provider_id="provider_id",
            note_name="abc",
            kind="kind", id="id", reported_by={}
        )

    def test_create_occurrence_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.create_occurrence, account_id="account_id", provider_id=None,
            note_name="abc",
            kind="kind", id="id", reported_by={}
        )

    def test_create_occurrence_note_name_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.create_occurrence, account_id="account_id", provider_id="provider_id",
            note_name=None,
            kind="kind", id="id", reported_by={}
        )

    def test_create_occurrence_kind_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.create_occurrence, account_id="account_id", provider_id="provider_id",
            note_name="abc",
            kind=None, id="id", reported_by={}
        )

    def test_create_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.create_occurrence, account_id="account_id", provider_id="provider_id",
            note_name="abc",
            kind="kind", id=None, reported_by={}
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_create_occurrence_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.create_occurrence(account_id="account_id", provider_id="provider_id",
                                              note_name="abc",
                                              kind="kind", id="id", context={},
                                              finding={}, kpi={}, **headers)
    """
    list_occurrences test cases
    """

    def test_list_occurrences_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_occurrences, account_id=None, provider_id="provider_id"
        )

    def test_list_occurrences_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_occurrences, account_id="account_id", provider_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_list_occurrences_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.list_occurrences(
            account_id="account_id", provider_id="provider_id", **headers)

    """
    list_note_occurrences test cases
    """

    def test_list_note_occurrences_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_note_occurrences, account_id=None, provider_id="provider_id",
            note_id="abc"
        )

    def test_list_note_occurrences_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_note_occurrences, account_id="account_id", provider_id=None,
            note_id="abc"
        )

    def test_list_note_occurrences_note_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_note_occurrences, account_id="account_id", provider_id="abc",
            note_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_list_note_occurrences_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.list_note_occurrences(
            account_id="account_id", provider_id="provider_id", note_id="abc", **headers)
    """
    get_occurrence test cases
    """

    def test_get_occurrence_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence, account_id=None, provider_id="provider_id",
            occurrence_id="abc"
        )

    def test_get_occurrence_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence, account_id="account_id", provider_id=None,
            occurrence_id="abc"
        )

    def test_get_occurrence_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.get_occurrence, account_id="account_id", provider_id="abc",
            occurrence_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_get_occurrence_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.get_occurrence(
            account_id="account_id", provider_id="provider_id", occurrence_id="abc", **headers)

    """
    update_occurrence test cases
    """

    def test_update_occurrence_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id=None, provider_id="provider_id",
            note_name="abc", occurrence_id="abc",
            kind="kind", id="id", reported_by={}
        )

    def test_update_occurrence_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id="account_id", provider_id=None,
            note_name="abc", occurrence_id="abc",
            kind="kind", id="id", reported_by={}
        )

    def test_update_occurrence_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id="account_id", provider_id="abc",
            note_name="abc", occurrence_id=None,
            kind="kind", id="id", reported_by={}
        )

    def test_update_occurrence_note_name_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id="account_id", provider_id="provider_id",
            note_name=None, occurrence_id="abc",
            kind="kind", id="id", reported_by={}
        )

    def test_update_occurrence_kind_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id="account_id", provider_id="provider_id",
            note_name="abc", occurrence_id="abc",
            kind=None, id="id", reported_by={}
        )

    def test_update_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.update_occurrence, account_id="account_id", provider_id="provider_id",
            note_name="abc", occurrence_id="abc",
            kind="kind", id=None, reported_by={}
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_update_occurrence_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.update_occurrence(account_id="account_id", provider_id="provider_id",
                                              note_name="abc", occurrence_id="abc",
                                              kind="kind", id="id", context={},
                                              finding={}, kpi={}, **headers)
    """
    delete_occurrence test cases
    """

    def test_delete_occurrence_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_occurrence, account_id=None, provider_id="provider_id",
            occurrence_id="abc"
        )

    def test_delete_occurrence_provider_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_occurrence, account_id="account_id", provider_id=None,
            occurrence_id="abc"
        )

    def test_delete_occurrence_occurrence_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.delete_occurrence, account_id="account_id", provider_id="abc",
            occurrence_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_delete_occurrence_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.delete_occurrence(
            account_id="account_id", provider_id="provider_id", occurrence_id="abc", **headers)

    """
    list_providers test cases
    """

    def test_list_providers_account_id_is_none(self):
        self.assertRaises(
            ValueError, TestFindingsApi.app.list_providers, account_id=None
        )

    @patch.object(BaseService, '_convert_model')
    @patch.object(BaseService, 'send')
    @patch.object(BaseService, 'prepare_request')
    def test_list_providers_success(self, mock1, mock2, mock3):
        headers = {"headers": {}}
        TestFindingsApi.app.list_providers(
            account_id="account_id", **headers)
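The `*_is_none` cases above repeat one pattern per required argument. A hedged sketch (not part of the original suite, using a hypothetical `get_note` stand-in rather than the real Findings SDK client) of how `unittest.subTest` can collapse such cases into a single test method:

```python
import io
import unittest


def get_note(account_id, provider_id, note_id):
    """Hypothetical stand-in mimicking the SDK's required-argument checks."""
    for name, value in (("account_id", account_id),
                        ("provider_id", provider_id),
                        ("note_id", note_id)):
        if value is None:
            raise ValueError("{} must be provided".format(name))
    return {"note_id": note_id}


class TestGetNoteValidation(unittest.TestCase):
    def test_required_params(self):
        valid = {"account_id": "a", "provider_id": "p", "note_id": "n"}
        for missing in sorted(valid):
            # Each missing parameter becomes its own reported sub-case.
            with self.subTest(missing=missing):
                kwargs = dict(valid, **{missing: None})
                self.assertRaises(ValueError, get_note, **kwargs)


suite = unittest.TestLoader().loadTestsFromTestCase(TestGetNoteValidation)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

One method now covers all three parameters, and a failing sub-case reports which parameter broke.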
# File: tests/test_cockroach/test_init.py — chatties-io/cockroach (MIT)

from cockroach import hello


def test_hello():
    assert hello() is None
# File: python/tvm/tools/__init__.py — dayanandasiet/tvmdbg (Apache-2.0)

"""TVM: Tools."""
from . import debug
# File: tests/test_readme.py — grzegorzwojdyga/ESIM (Apache-2.0)

import pytest
import os


def test_if_readme_exists():
    """Check if the README file exists."""
    assert os.path.isfile('./README.md')
# File: terrascript/newrelic/r.py — mjuenema/python-terrascript (BSD-2-Clause)

# terrascript/newrelic/r.py
# Automatically generated by tools/makecode.py ()
import warnings

warnings.warn(
    "using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)

import terrascript


class newrelic_alert_channel(terrascript.Resource):
    pass


class newrelic_alert_condition(terrascript.Resource):
    pass


class newrelic_alert_muting_rule(terrascript.Resource):
    pass


class newrelic_alert_policy(terrascript.Resource):
    pass


class newrelic_alert_policy_channel(terrascript.Resource):
    pass


class newrelic_api_access_key(terrascript.Resource):
    pass


class newrelic_application_settings(terrascript.Resource):
    pass


class newrelic_dashboard(terrascript.Resource):
    pass


class newrelic_entity_tags(terrascript.Resource):
    pass


class newrelic_events_to_metrics_rule(terrascript.Resource):
    pass


class newrelic_infra_alert_condition(terrascript.Resource):
    pass


class newrelic_insights_event(terrascript.Resource):
    pass


class newrelic_nrql_alert_condition(terrascript.Resource):
    pass


class newrelic_nrql_drop_rule(terrascript.Resource):
    pass


class newrelic_one_dashboard(terrascript.Resource):
    pass


class newrelic_one_dashboard_raw(terrascript.Resource):
    pass


class newrelic_plugins_alert_condition(terrascript.Resource):
    pass


class newrelic_synthetics_alert_condition(terrascript.Resource):
    pass


class newrelic_synthetics_monitor(terrascript.Resource):
    pass


class newrelic_synthetics_monitor_script(terrascript.Resource):
    pass


class newrelic_synthetics_multilocation_alert_condition(terrascript.Resource):
    pass


class newrelic_synthetics_secure_credential(terrascript.Resource):
    pass


class newrelic_workload(terrascript.Resource):
    pass
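Since the file above is auto-generated by `tools/makecode.py`, the same stub pattern can be produced programmatically with `type()`. A minimal sketch, assuming a dummy `Resource` base class in place of `terrascript.Resource` so it runs standalone:

```python
class Resource(dict):
    """Stand-in for terrascript.Resource (so this sketch needs no terrascript)."""


RESOURCE_NAMES = [
    "newrelic_alert_channel",
    "newrelic_alert_policy",
    "newrelic_dashboard",
]

# Build one empty subclass per resource name, mirroring the hand-written
# `class ...(terrascript.Resource): pass` stubs above.
generated = {name: type(name, (Resource,), {}) for name in RESOURCE_NAMES}
```

Code generation into a source file (as makecode.py does) keeps the classes importable and IDE-visible, whereas this runtime variant trades that for brevity.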
# File: circa/__init__.py — fnamer/circa (MIT)

from .main import trace  # noqa: F401
# File: micra_scheduler/__init__.py — xyla-io/micra_scheduler (MIT)

from .base import Scheduler
# File: scripts/training_ner.py — jianlins/SDoH_SODA (MIT)

# create training and test bio for NER
import sys
sys.path.append("../ClinicalTransformerNER/")
sys.path.append("../NLPreprocessing/")
import os
from pathlib import Path
from collections import defaultdict, Counter
import numpy as np
from sklearn.model_selection import train_test_split
import shutil
import fileinput
from annotation2BIO import generate_BIO, pre_processing, read_annotation_brat, BIOdata_to_file
MIMICIII_PATTERN = r"\[\*\*|\*\*\]"
data_dir = sys.argv[1]
tag_types = None
if len(sys.argv) > 2:
    tag_types = sys.argv[2].split(',')
# output_name='test'
# data stat
file_ids = set()
enss = []
for fn in Path(data_dir).glob("*.ann"):
    file_ids.add(fn.stem)
    _, ens, _ = read_annotation_brat(fn)
    # print( _)
    enss.extend(ens)

print("test files: ", len(file_ids), list(file_ids)[:5])
print("total test entities: ", len(enss))
print("Entities distribution by types:\n", "\n".join([str(c) for c in Counter([each[1] for each in enss]).most_common()]))
# generate bio
file_ids = list(file_ids)
train_dev_ids, test_ids = train_test_split(file_ids, train_size=0.75, random_state=13, shuffle=True) # use 150 for training
print('length of training and test:', len(train_dev_ids), len(test_ids))
train_dev_root = Path('../data/training_set_150')
test_root = Path('../data/test_set_150')
# create notes file
Path(train_dev_root).mkdir(parents=True, exist_ok=True)
Path(test_root).mkdir(parents=True, exist_ok=True)
train_root = Path(data_dir)
# copy file to train and test
for fid in train_dev_ids:
    txt_fn = train_root / (fid + ".txt")
    ann_fn = train_root / (fid + ".ann")
    txt_fn1 = train_dev_root / (fid + ".txt")
    ann_fn1 = train_dev_root / (fid + ".ann")
    shutil.copyfile(txt_fn, txt_fn1)
    shutil.copyfile(ann_fn, ann_fn1)

for fid in test_ids:
    txt_fn = train_root / (fid + ".txt")
    ann_fn = train_root / (fid + ".ann")
    txt_fn1 = test_root / (fid + ".txt")
    ann_fn1 = test_root / (fid + ".ann")
    shutil.copyfile(txt_fn, txt_fn1)
    shutil.copyfile(ann_fn, ann_fn1)
train_dev_ids = sorted(list(train_dev_ids))
train_ids, dev_ids = train_test_split(train_dev_ids, train_size=0.9, random_state=13, shuffle=True)
test_bio = "../bio/" + 'bio_test_150'
training_bio = "../bio/" + 'bio_training_150'
output_root1 = Path(test_bio)
output_root2 = Path(training_bio)
output_root1.mkdir(parents=True, exist_ok=True)
output_root2.mkdir(parents=True, exist_ok=True)
for fid in train_dev_ids:
    txt_fn = train_dev_root / (fid + ".txt")
    ann_fn = train_dev_root / (fid + ".ann")
    bio_fn = output_root2 / (fid + ".bio.txt")
    txt, sents = pre_processing(txt_fn, deid_pattern=MIMICIII_PATTERN)
    e2idx, entities, rels = read_annotation_brat(ann_fn)
    nsents, sent_bound = generate_BIO(sents, entities, file_id=fid, no_overlap=False, tag_types=tag_types)
    # print(nsents)
    # print(bio_fn)
    # break
    BIOdata_to_file(bio_fn, nsents)
# train
with open(training_bio + "/train.txt", "w") as f:
    for fid in train_ids:
        f.writelines(fileinput.input(output_root2 / (fid + ".bio.txt")))
        fileinput.close()

# dev
with open(training_bio + "/dev.txt", "w") as f:
    for fid in dev_ids:
        f.writelines(fileinput.input(output_root2 / (fid + ".bio.txt")))
        fileinput.close()
# test
for fn in test_root.glob("*.txt"):
    txt_fn = fn
    bio_fn = output_root1 / (fn.stem + ".bio.txt")
    txt, sents = pre_processing(txt_fn, deid_pattern=MIMICIII_PATTERN)
    nsents, sent_bound = generate_BIO(sents, [], file_id=txt_fn, no_overlap=False)
    BIOdata_to_file(bio_fn, nsents)
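`generate_BIO` (from `annotation2BIO`) turns sentences plus entity annotations into token/tag pairs. A toy sketch of the BIO labeling idea, with made-up token-level spans and a hypothetical `LivingStatus` label (the real implementation works from character offsets and also handles overlap and sentence bounds):

```python
def to_bio(tokens, entity_spans):
    """Tag each token O, or B-/I-<label> inside an entity span."""
    tags = ["O"] * len(tokens)
    for start, end, label in entity_spans:  # token spans: inclusive start, exclusive end
        tags[start] = "B-" + label          # B- marks the first token of an entity
        for i in range(start + 1, end):
            tags[i] = "I-" + label          # I- marks continuation tokens
    return list(zip(tokens, tags))


bio = to_bio(["Patient", "lives", "alone", "."], [(2, 3, "LivingStatus")])
```

Each `(token, tag)` pair then becomes one line of the `.bio.txt` files written above.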
# same process but have train test split as 1:1
train_dev_ids, test_ids = train_test_split(file_ids, train_size=0.5, random_state=13, shuffle=True) # use 8:2 split
print('length of training and test:', len(train_dev_ids), len(test_ids))
train_dev_root = Path('../data/training_set_100')
test_root = Path('../data/test_set_100')
# create notes file
Path(train_dev_root).mkdir(parents=True, exist_ok=True)
Path(test_root).mkdir(parents=True, exist_ok=True)
train_root = Path(data_dir)
# copy file to train and test
for fid in train_dev_ids:
    txt_fn = train_root / (fid + ".txt")
    ann_fn = train_root / (fid + ".ann")
    txt_fn1 = train_dev_root / (fid + ".txt")
    ann_fn1 = train_dev_root / (fid + ".ann")
    shutil.copyfile(txt_fn, txt_fn1)
    shutil.copyfile(ann_fn, ann_fn1)

for fid in test_ids:
    txt_fn = train_root / (fid + ".txt")
    ann_fn = train_root / (fid + ".ann")
    txt_fn1 = test_root / (fid + ".txt")
    ann_fn1 = test_root / (fid + ".ann")
    shutil.copyfile(txt_fn, txt_fn1)
    shutil.copyfile(ann_fn, ann_fn1)
train_dev_ids = list(train_dev_ids)
train_ids, dev_ids = train_test_split(train_dev_ids, train_size=0.9, random_state=13, shuffle=True)
test_bio = "../bio/" + 'bio_test_100'
training_bio = "../bio/" + 'bio_training_100'
output_root1 = Path(test_bio)
output_root2 = Path(training_bio)
output_root1.mkdir(parents=True, exist_ok=True)
output_root2.mkdir(parents=True, exist_ok=True)
for fid in train_dev_ids:
    txt_fn = train_dev_root / (fid + ".txt")
    ann_fn = train_dev_root / (fid + ".ann")
    bio_fn = output_root2 / (fid + ".bio.txt")
    txt, sents = pre_processing(txt_fn, deid_pattern=MIMICIII_PATTERN)
    e2idx, entities, rels = read_annotation_brat(ann_fn)
    nsents, sent_bound = generate_BIO(sents, entities, file_id=fid, no_overlap=False)
    # print(nsents)
    # print(bio_fn)
    # break
    BIOdata_to_file(bio_fn, nsents)
# train
with open(training_bio + "/train.txt", "w") as f:
    for fid in train_ids:
        f.writelines(fileinput.input(output_root2 / (fid + ".bio.txt")))
        fileinput.close()

# dev
with open(training_bio + "/dev.txt", "w") as f:
    for fid in dev_ids:
        f.writelines(fileinput.input(output_root2 / (fid + ".bio.txt")))
        fileinput.close()

# test
for fn in test_root.glob("*.txt"):
    txt_fn = fn
    bio_fn = output_root1 / (fn.stem + ".bio.txt")
    txt, sents = pre_processing(txt_fn, deid_pattern=MIMICIII_PATTERN)
    nsents, sent_bound = generate_BIO(sents, [], file_id=txt_fn, no_overlap=False)
    BIOdata_to_file(bio_fn, nsents)
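The script's splits (0.75/0.25, then 0.9/0.1 for train/dev, and later 0.5/0.5) come from scikit-learn's `train_test_split` with `random_state=13`. A stdlib-only sketch of a deterministic shuffle-and-split with the same shape of output (not numerically identical to sklearn's shuffling; the `note_*` ids are made up):

```python
import random


def split_ids(ids, train_size, seed=13):
    """Deterministic shuffle-and-split: sort, seeded shuffle, then cut."""
    ids = sorted(ids)                    # fix an order before shuffling
    random.Random(seed).shuffle(ids)     # reproducible for a given seed
    cut = int(round(len(ids) * train_size))
    return ids[:cut], ids[cut:]


train_dev, test = split_ids(["note_%03d" % i for i in range(200)], train_size=0.75)
train, dev = split_ids(train_dev, train_size=0.9)
```

Sorting before shuffling matters: set iteration order is not stable across runs, which is also why the script calls `sorted(list(train_dev_ids))` before its second split.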
| 35.317919 | 125 | 0.700655 | 975 | 6,110 | 4.100513 | 0.137436 | 0.052026 | 0.038519 | 0.032516 | 0.805403 | 0.783892 | 0.758879 | 0.758879 | 0.758879 | 0.758879 | 0 | 0.017127 | 0.159083 | 6,110 | 172 | 126 | 35.523256 | 0.760997 | 0.059083 | 0 | 0.692308 | 0 | 0 | 0.096611 | 0.012928 | 0.030769 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.069231 | 0 | 0.069231 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: paz/core/sequencer.py — SushmaDG/MaskRCNN (MIT)

from tensorflow.keras.utils import Sequence
import numpy as np
class ProcessingSequencer(Sequence):
    """Base sequencer class for processing or generating batches.

    If data is ``None`` the sequencer assumes that ``processor``
    generates the data. If data is not ``None`` the sequencer
    assumes the ``processor`` works as data processing pipeline.

    # Arguments
        processor: Function. If data is not ``None``, ``processor``
            takes a sample (see data) as input and returns a dictionary
            with keys ``inputs`` and ``labels`` and values dictionaries
            with keys being the ``layer names`` in which the values
            (numpy arrays) will be inputted.
        batch_size: Int.
        data: List of dictionaries. The length of the list corresponds to the
            amount of samples in the data. Inside each sample there should
            be a dictionary with `keys` indicating the data types/topics
            e.g. ``image``, ``depth``, ``boxes`` and as `values` of these
            `keys` the corresponding data e.g. strings, numpy arrays, etc.
    """
    def __init__(self, processor, batch_size, data):
        self.processor = processor
        self.input_topics = self.processor.processors[-1].input_topics
        self.label_topics = self.processor.processors[-1].label_topics
        self.batch_size = batch_size
        self.data = data

    def __len__(self):
        return int(np.ceil(len(self.data) / float(self.batch_size)))

    def __getitem__(self, batch_index):
        batch_arg_A = self.batch_size * (batch_index)
        batch_arg_B = self.batch_size * (batch_index + 1)
        batch = self.data[batch_arg_A:batch_arg_B]
        inputs_batch = self.get_empty_batch(
            self.input_topics, self.processor.input_shapes)
        labels_batch = self.get_empty_batch(
            self.label_topics, self.processor.label_shapes)
        for sample_arg, unprocessed_sample in enumerate(batch):
            sample = self.processor(unprocessed_sample.copy())
            for topic, data in sample['inputs'].items():
                inputs_batch[topic][sample_arg] = data
            for topic, data in sample['labels'].items():
                labels_batch[topic][sample_arg] = data
        return inputs_batch, labels_batch

    def get_empty_batch(self, topics, shapes):
        batch = {}
        for topic, shape in zip(topics, shapes):
            batch[topic] = np.zeros((self.batch_size, *shape))
        return batch
class GeneratingSequencer(Sequence):
    """Base sequencer class for processing or generating batches.

    If data is ``None`` the sequencer assumes that ``processor``
    generates the data. If data is not ``None`` the sequencer
    assumes the ``processor`` works as data processing pipeline.

    # Arguments
        processor: Function. If data is not ``None``, ``processor``
            takes a sample (see data) as input and returns a dictionary
            with keys ``inputs`` and ``labels`` and values dictionaries
            with keys being the ``layer names`` in which the values
            (numpy arrays) will be inputted.
        batch_size: Int.
        data: List of dictionaries. The length of the list corresponds to the
            amount of samples in the data. Inside each sample there should
            be a dictionary with `keys` indicating the data types/topics
            e.g. ``image``, ``depth``, ``boxes`` and as `values` of these
            `keys` the corresponding data e.g. strings, numpy arrays, etc.
    """
    def __init__(self, processor, batch_size=32, as_list=False, num_steps=100):
        self.processor = processor
        self.input_topics = self.processor.processors[-1].input_topics
        self.label_topics = self.processor.processors[-1].label_topics
        self.batch_size = batch_size
        self.as_list = as_list
        self.num_steps = num_steps

    def __len__(self):
        return self.num_steps

    def __getitem__(self, batch_index):
        inputs_batch = self.get_empty_batch(
            self.input_topics, self.processor.input_shapes)
        labels_batch = self.get_empty_batch(
            self.label_topics, self.processor.label_shapes)
        for sample_arg in range(self.batch_size):
            sample = self.processor({'image': None})
            for topic, data in sample['inputs'].items():
                inputs_batch[topic][sample_arg] = data
            for topic, data in sample['labels'].items():
                labels_batch[topic][sample_arg] = data
        if self.as_list:
            inputs_batch = self.to_list(inputs_batch, self.input_topics)
            labels_batch = self.to_list(labels_batch, self.label_topics)
        return inputs_batch, labels_batch

    def get_empty_batch(self, topics, shapes):
        batch = {}
        for topic, shape in zip(topics, shapes):
            batch[topic] = np.zeros((self.batch_size, *shape))
        return batch

    def to_list(self, batch, topics):
        return [batch[topic] for topic in topics]
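Both sequencers share one batching pattern: preallocate a per-topic array of shape `(batch_size, *shape)`, then fill row `sample_arg` from each processed sample. A plain-list sketch of that pattern (lists stand in for `np.zeros`, and the sample dicts are made up):

```python
def get_empty_batch(topics, shapes, batch_size):
    """Plain-list version of Sequencer.get_empty_batch: one zeroed row per sample."""
    return {topic: [[0.0] * shape for _ in range(batch_size)]
            for topic, shape in zip(topics, shapes)}


def fill_batch(batch, samples):
    """Row-wise fill, as in __getitem__: batch[topic][sample_arg] = data."""
    for sample_arg, sample in enumerate(samples):
        for topic, data in sample.items():
            batch[topic][sample_arg] = data
    return batch


inputs = get_empty_batch(["image"], [4], batch_size=2)
filled = fill_batch(inputs, [{"image": [1, 2, 3, 4]}, {"image": [5, 6, 7, 8]}])
```

Preallocating the full batch up front (instead of appending and stacking) keeps memory layout fixed and makes the per-topic shapes explicit.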
# File: SMS-Back-End/apigateway/helloworld_api.py — mresti/StudentsManagementSystem (Apache-2.0)

# -*- coding: utf-8 -*-
"""Hello World API implemented using Google Cloud Endpoints.
Defined here are the ProtoRPC messages needed to define Schemas for methods
as well as those methods defined in an API.
"""
import endpoints
from protorpc import messages
from protorpc import message_types
from protorpc import remote
import os
# urlfetch docs: https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch
# Libraries used to call the REST APIs of the microservices
from google.appengine.api import urlfetch
import urllib

# For module discovery
import urllib2
from google.appengine.api import modules

# For decoding the JSON data received from the APIs
import jsonpickle

# Flag enabling verbose mode
v = True

nombreMicroservicio = '\n## API Gateway ##\n'
# TODO: Replace the following lines with client IDs obtained from the APIs
# Console or Cloud Console.
WEB_CLIENT_ID = 'replace this with your web client application ID'
ANDROID_CLIENT_ID = 'replace this with your Android client ID'
IOS_CLIENT_ID = 'replace this with your iOS client ID'
ANDROID_AUDIENCE = WEB_CLIENT_ID
package = 'Hello'
class MensajeRespuesta(messages.Message):
message = messages.StringField(1)
class MensajePeticion(messages.Message):
message = messages.StringField(1)
'''
Note that no arguments appear in the request body, since this is a GET request.
'''
#######################################
# MESSAGE TYPES HANDLED BY THE API    #
#######################################
class Alumno(messages.Message):
nombre = messages.StringField(1)
id = messages.StringField(2)
class AlumnoCompleto(messages.Message):
id = messages.StringField(1)
nombre = messages.StringField(2)
apellidos = messages.StringField(3)
dni = messages.StringField(4)
direccion = messages.StringField(5)
localidad = messages.StringField(6)
provincia = messages.StringField(7)
fecha_nacimiento = messages.StringField(8)
telefono = messages.StringField(9)
class ID(messages.Message):
id = messages.StringField(1)
class ListaAlumnos(messages.Message):
alumnos = messages.MessageField(Alumno, 1, repeated=True)
class Profesor(messages.Message):
nombre = messages.StringField(1)
apellidos = messages.StringField(2)
id = messages.StringField(3)
class ProfesorCompleto(messages.Message):
id = messages.StringField(1)
nombre = messages.StringField(2)
apellidos = messages.StringField(3)
dni = messages.StringField(4)
direccion = messages.StringField(5)
localidad = messages.StringField(6)
provincia = messages.StringField(7)
fecha_nacimiento = messages.StringField(8)
telefono = messages.StringField(9)
class ListaProfesores(messages.Message):
profesores = messages.MessageField(Profesor, 1, repeated=True)
class Asignatura(messages.Message):
id = messages.StringField(1)
nombre = messages.StringField(2)
class AsignaturaCompleta(messages.Message):
id = messages.StringField(1)
nombre = messages.StringField(2)
class ListaAsignaturas(messages.Message):
asignaturas = messages.MessageField(Asignatura, 1, repeated=True)
class Clase(messages.Message):
id = messages.StringField(1)
curso = messages.StringField(2)
grupo = messages.StringField(3)
nivel = messages.StringField(4)
# Kept separate so it can be extended in the future without reusing the same message type:
class ClaseCompleta(messages.Message):
id = messages.StringField(1)
curso = messages.StringField(2)
grupo = messages.StringField(3)
nivel = messages.StringField(4)
class ListaClases(messages.Message):
clases = messages.MessageField(Clase, 1, repeated=True)
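Every handler below builds its target URL by concatenating the microservice host and resource path segments by hand. A small sketch of that pattern as a pure helper (the name `build_service_url` is hypothetical and not part of the original file):

```python
# Hypothetical helper illustrating the URL construction repeated in the
# handlers below: "http://<host>/" plus slash-joined resource segments.
def build_service_url(hostname, *segments):
    url = "http://%s/" % hostname
    url += "/".join(str(s) for s in segments)
    return url

# e.g. with the host returned by modules.get_hostname(module="microservicio1"):
assert build_service_url("svc-host", "alumnos", 1, "profesores") == \
    "http://svc-host/alumnos/1/profesores"
```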
# Decorator that sets the API's name and version
@endpoints.api(name='helloworld', version='v1')
class HelloWorldApi(remote.Service):
"""Helloworld API v1."""
##############################################
#              student methods               #
##############################################
@endpoints.method(message_types.VoidMessage, ListaAlumnos,
#path = name of the resource to call
path='alumnos/getAlumnos', http_method='GET',
#This may be how it is called from the api:
#response = service.alumnos().listGreeting().execute()
name='alumnos.getAlumnos')
def getAlumnos(self, unused_request):
'''
getAlumnos() [GET without parameters]
Returns a list of every student registered in the system, in simplified form (name and ID only).
Call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/alumnos/getAlumnos
Call from JavaScript:
response = service.alumnos().getAlumnos().execute()
'''
# Translate the endpoints call into a call to the service's REST API.
# Trace info
if v:
print nombreMicroservicio
print "GET request to alumnos.getAlumnos"
print '\n'
# Connect to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the resource we want to reach.
url += "alumnos"
if v:
print "Calling: " + str(url)
# Call the microservice and collect the results with URLFetch;
# with no method specified, a GET request is issued to the URL.
result = urlfetch.fetch(url)
# Decode the JSON payload and convert it into a sendable message.
if v:
print nombreMicroservicio
print "Request results: "
print result.content
print "Status code: " + str(result.status_code) + '\n'
listaAlumnos = jsonpickle.decode(result.content)
# Build a list and fill it with every student from listaAlumnos.
alumnosItems = []
if v:
print "Building the output message: \n"
for alumno in listaAlumnos:
nombreAlumno = str(alumno.get('nombre'))
idAlumno = str(alumno.get('id'))
if v:
print "Name: " + nombreAlumno
print "ID: " + idAlumno
alumnosItems.append(Alumno(nombre=nombreAlumno, id=idAlumno))
# Wrap the items in the message type and send them.
return ListaAlumnos(alumnos=alumnosItems)
@endpoints.method(ID, AlumnoCompleto, path='alumnos/getAlumno', http_method='GET', name='alumnos.getAlumno')
def getAlumno(self,request):
'''
getAlumno() [GET with id]
Returns all of a student's information if the student is in the system.
Example call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/alumnos/getAlumno?id=1
'''
# Trace info
if v:
print nombreMicroservicio
print "GET request to alumnos.getAlumno"
print "request: " + str(request)
print '\n'
# This resource returns all the information of an Alumno entity, so we
# first retrieve that information from the appropriate microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
'''
According to the urlfetch docs (see above) we cannot pass parameters in the
payload, so, since we know the API of the microservice we are calling, we
follow its specification: we only have to call /alumnos/<id_alumno>, so we
concatenate to the URL the id received in this call.
'''
# Resource plus entity
url += 'alumnos/' + request.id
if v:
print "Calling: " + str(url)
# Request to the microservice
result = urlfetch.fetch(url=url, method=urlfetch.GET)
if v:
print "Status code: " + str(result.status_code)
if str(result.status_code) == '400':
raise endpoints.BadRequestException('Malformed request')
if str(result.status_code) == '404':
raise endpoints.NotFoundException('Student with ID %s not found.' % (request.id,))
alumno = jsonpickle.decode(result.content)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "\nStatus code: " + str(result.status_code) + '\n'
# Compose an AlumnoCompleto message; integer fields are converted to
# strings so they can be sent as string message fields.
alumno = AlumnoCompleto(id=str(alumno.get('id')),
nombre=alumno.get('nombre'),
apellidos=alumno.get('apellidos'),
dni=str(alumno.get('dni')),
direccion=alumno.get('direccion'),
localidad=alumno.get('localidad'),
provincia=alumno.get('provincia'),
fecha_nacimiento=str(alumno.get('fecha_nacimiento')),
telefono=str(alumno.get('telefono'))
)
return alumno
@endpoints.method(AlumnoCompleto,MensajeRespuesta,
path='insertarAlumno', http_method='POST',
name='alumnos.insertarAlumno')
def insertar_alumno(self, request):
'''
insertarAlumno() [POST with all of a student's attributes]
Inserts a new student into the system.
Example terminal call:
curl -i -d "nombre=Juan&dni=45301218Z&direccion=Calle&localidad=Jerezfrontera&provincia=Granada&fecha_nac=1988-2-6&telefono=699164459" -X POST -G localhost:8001/_ah/api/helloworld/v1/alumnos/insertarAlumno
(-i to show the headers)
'''
if v:
print nombreMicroservicio
print "POST request to alumnos.insertarAlumno"
print "Request content:"
print str(request)
print '\n'
# If any attribute is missing, reply with a bad request error.
if None in (request.nombre, request.apellidos, request.dni, request.direccion, request.localidad, request.provincia, request.fecha_nacimiento, request.telefono):
raise endpoints.BadRequestException('Malformed request: missing data.')
# Build the address:
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the service we want to reach.
url += "alumnos"
# Extract the data from the endpoints request
form_fields = {
"nombre": request.nombre,
"apellidos": request.apellidos,
"dni": request.dni,
"direccion": request.direccion,
"localidad": request.localidad,
"provincia": request.provincia,
"fecha_nacimiento": request.fecha_nacimiento,
"telefono": request.telefono
}
if v:
print "Calling: " + url
## urlfetch docs: https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch
form_data = urllib.urlencode(form_fields)
# Send the request to the service with the data passed to the endpoint
result = urlfetch.fetch(url=url, payload=form_data, method=urlfetch.POST)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "Status code: "
print result.status_code
if str(result.status_code) == '404':
raise endpoints.NotFoundException('A student with ID %s already exists in the system.' % (request.dni,))
# Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
@endpoints.method(ID,MensajeRespuesta,path='delAlumno', http_method='DELETE', name='alumnos.delAlumno')
def eliminar_alumno(self, request):
'''
delAlumno() [DELETE with the student's id]
# Example of deleting a resource by passing a student's id
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/delAlumno
{
"message": "OK"
}
# Example run when the resource is not found:
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/delAlumno
{
"message": "Elemento no encontrado"
}
'''
if v:
print nombreMicroservicio
print "DELETE request to alumnos.delAlumno"
print "Request content:"
print str(request)
print '\n'
# Build the address:
url = "http://%s/" % modules.get_hostname(module="microservicio1")
'''
urlfetch seems to have trouble passing parameters (payload) with the
DELETE method. Excerpt from the docs:
payload: POST, PUT, or PATCH payload (implies method is not GET, HEAD, or DELETE). this is ignored if the method is not POST, PUT, or PATCH.
We are not the first to hit this problem:
http://grokbase.com/t/gg/google-appengine/13bvr5qjyq/is-there-any-reason-that-urlfetch-delete-method-does-not-support-a-payload
So instead of passing the data in the payload we append it to the URL,
which is equivalent here.
'''
# Extract the id argument from the request and append it to the URL
url += 'alumnos/' + request.id
if v:
print "Calling: " + url
# Send the request to the service URL with the appropriate method.
result = urlfetch.fetch(url=url, method=urlfetch.DELETE)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "Status code: "
print result.status_code
# Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
@endpoints.method(AlumnoCompleto,MensajeRespuesta,path='alumnos/modAlumnoCompleto', http_method='POST', name='alumnos.modAlumnoCompleto')
def modificarAlumnoCompleto(self, request):
'''
modificarAlumnoCompleto() [POST]
Modifies every attribute of a student, even those that stay the same.
curl -d "id=1&nombre=Pedro&apellidos=Torrssr&dni=23&direccion=CREalCartuja&localidad=Granada&provincia=Granada&fecha_nacimiento=1988-12-4&telefono=23287282" -i -X POST -G localhost:8001/_ah/api/helloworld/v1/alumnos/modAlumnoCompleto
HTTP/1.1 200 OK
content-type: application/json
Cache-Control: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Server: Development/2.0
Content-Length: 20
Server: Development/2.0
Date: Mon, 14 Mar 2016 10:17:12 GMT
{
"message": "OK"
}
'''
if v:
print nombreMicroservicio
print "POST request to alumnos.modAlumnoCompleto"
print "Request content:"
print str(request)
print '\n'
if None in (request.nombre, request.apellidos, request.dni, request.direccion, request.localidad, request.provincia, request.fecha_nacimiento, request.telefono):
raise endpoints.BadRequestException('Malformed request: missing data.')
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the resource we want: the alumnos collection / the student with this id.
url += "alumnos/" + request.id
# Extract the data received here in the endpoints request
form_fields = {
"nombre": request.nombre,
"apellidos": request.apellidos,
"dni": request.dni,
"direccion": request.direccion,
"localidad": request.localidad,
"provincia": request.provincia,
"fecha_nacimiento": request.fecha_nacimiento,
"telefono": request.telefono
}
if v:
print "Calling: " + url
form_data = urllib.urlencode(form_fields)
result = urlfetch.fetch(url=url, payload=form_data, method=urlfetch.POST)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "Status code: "
print result.status_code
# Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
# Methods reporting relationships with other entities
@endpoints.method(ID, ListaProfesores, path='alumnos/getProfesoresAlumno', http_method='GET', name='alumnos.getProfesoresAlumno')
def getProfesoresAlumno(self, request):
'''
Returns a list with the full data of the teachers who teach the student with the given id.
curl -i -X GET localhost:8001/_ah/api/helloworld/v1/alumnos/getProfesoresAlumno?id=1
'''
# Translate the endpoints call into a call to the service's REST API.
if v:
print ("Running getProfesoresAlumno in apigateway")
# Connect to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the collection (alumnos), the resource (a student by id) and its nested resource (profesores)
url += 'alumnos/' + str(request.id) + "/profesores"
print url
# Send the request
result = urlfetch.fetch(url)
# Decode the JSON payload and convert it into a sendable message.
print "RECEIVED DATA:"
print result.content
listaProfesores = jsonpickle.decode(result.content)
# Build a list and fill it with every teacher from listaProfesores.
profesoresItems = []
for profesor in listaProfesores:
profesoresItems.append(Profesor(nombre=str(profesor.get('nombre')),
apellidos=str(profesor.get('apellidos')),
id=str(profesor.get('id'))
)
)
# Wrap the items in the message type and send them.
return ListaProfesores(profesores=profesoresItems)
@endpoints.method(ID, ListaAsignaturas, path='alumnos/getAsignaturasAlumno', http_method='GET', name='alumnos.getAsignaturasAlumno')
def getAsignaturasAlumno(self, request):
'''
Returns a list with the full data of the subjects the student with the given id is enrolled in.
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/alumnos/getAsignaturasAlumno?id=1
'''
if v:
print ("Running getAsignaturasAlumno in apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url += 'alumnos/' + request.id + "/asignaturas"
result = urlfetch.fetch(url)
if v:
print result.content
listaAsignaturas = jsonpickle.decode(result.content)
print listaAsignaturas
asignaturasItems= []
for asignatura in listaAsignaturas:
asignaturasItems.append( Asignatura( id=str(asignatura.get('id')), nombre=str(asignatura.get('nombre')) ) )
return ListaAsignaturas(asignaturas=asignaturasItems)
@endpoints.method(ID, ListaClases, path='alumnos/getClasesAlumno', http_method='GET', name='alumnos.getClasesAlumno')
def getClasesAlumno(self, request):
'''
Returns a list with the full data of the classes the student with the given id is enrolled in.
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/alumnos/getClasesAlumno?id=1
'''
if v:
print ("Running getClasesAlumno in apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url += 'alumnos/' + request.id + "/clases"
result = urlfetch.fetch(url)
if v:
print result.content
listaClases = jsonpickle.decode(result.content)
print listaClases
clasesItems = []
for clase in listaClases:
clasesItems.append(Clase(id=str(clase.get('id')), curso=str(clase.get('nombre')), grupo=str(clase.get('grupo')), nivel=str(clase.get('nivel'))))
return ListaClases(clases=clasesItems)
##############################################
#              teacher methods               #
##############################################
@endpoints.method(message_types.VoidMessage, ListaProfesores, path='profesores/getProfesores', http_method='GET', name='profesores.getProfesores')
def getProfesores(self, unused_request):
'''
Returns a list of every teacher registered in the system, in simplified form (name and ID only).
Call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/profesores/getProfesores
Call from JavaScript:
response = service.profesores.getProfesores().execute()
'''
# Identify the module we are running in.
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the resource we want to reach.
url += "profesores"
if v:
print str(url)
# With no method specified, a GET request is issued to the URL.
result = urlfetch.fetch(url)
if v:
print result.content
listaProfesores = jsonpickle.decode(result.content)
# Build a list and fill it with every teacher from listaProfesores.
profesoresItems = []
for profesor in listaProfesores:
profesoresItems.append(Profesor( nombre=str(profesor.get('nombre')), apellidos=str(profesor.get('apellidos')), id=str(profesor.get('id')) ))
# Wrap the items in the message type and send them.
return ListaProfesores(profesores=profesoresItems)
@endpoints.method(ID, ProfesorCompleto, path='profesores/getProfesor', http_method='GET', name='profesores.getProfesor')
def getProfesor(self,request):
'''
Returns all of a teacher's information if the teacher is in the system.
Example call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/profesores/getProfesor?id=1
'''
# Trace info
if v:
print nombreMicroservicio
print "GET request to profesores.getProfesor"
print "request: " + str(request)
print '\n'
# This resource returns all the information of a Profesor entity, so we
# first retrieve that information from the appropriate microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
'''
According to the urlfetch docs (see above) we cannot pass parameters in the
payload, so, since we know the API of the microservice we are calling, we
follow its specification: we only have to call /profesores/<id_profesor>, so
we concatenate to the URL the id received in this call.
'''
# Resource plus entity
url += 'profesores/' + request.id
if v:
print "Calling: " + str(url)
# Request to the microservice
result = urlfetch.fetch(url=url, method=urlfetch.GET)
if v:
print "Status code: " + str(result.status_code)
if str(result.status_code) == '400':
raise endpoints.BadRequestException('Malformed request')
if str(result.status_code) == '404':
raise endpoints.NotFoundException('Teacher with ID %s not found.' % (request.id,))
profesor = jsonpickle.decode(result.content)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "\nStatus code: " + str(result.status_code) + '\n'
# Compose a ProfesorCompleto message; integer fields are converted to
# strings so they can be sent as string message fields.
# Fields that are NULL in the database are not copied into the message;
# they stay empty and are not shown.
profesor = ProfesorCompleto(id=str(profesor.get('id')),
nombre=profesor.get('nombre'),
apellidos=profesor.get('apellidos'),
dni=str(profesor.get('dni')),
direccion=profesor.get('direccion'),
localidad=profesor.get('localidad'),
provincia=profesor.get('provincia'),
fecha_nacimiento=str(profesor.get('fecha_nacimiento')),
telefono=str(profesor.get('telefono'))
)
return profesor
# TODO: add insertarProfesor
@endpoints.method(ID,MensajeRespuesta,path='profesores/delProfesor', http_method='DELETE', name='profesores.delProfesor')
def delProfesor(self, request):
'''
delProfesor()
# Example of deleting a resource by passing a teacher's id
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/profesores/delProfesor
{
"message": "OK"
}
# Example run when the resource is not found:
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/profesores/delProfesor
{
"message": "Elemento no encontrado"
}
'''
if v:
print nombreMicroservicio
print "Request to the profesores.delProfesor method of the APIGateway"
print "Request content:"
print str(request)
print '\n'
# Build the address:
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Extract the id argument from the request and append it to the URL
url += 'profesores/' + request.id
if v:
print "Calling: " + url
# Send the request to the service URL with the appropriate method.
result = urlfetch.fetch(url=url, method=urlfetch.DELETE)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "Status code: "
print result.status_code
# Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
# TODO: add modificarProfesor
# Methods relating to other entities.
@endpoints.method(ID, ListaAlumnos, path='profesores/getAlumnosProfesor', http_method='GET', name='profesores.getAlumnosProfesor')
def getAlumnosProfesores(self, request):
'''
Returns a list with the summarized data of the students taught by the teacher with the given id.
curl -i -X GET localhost:8001/_ah/api/helloworld/v1/profesores/getAlumnosProfesor?id=1
'''
# Translate the endpoints call into a call to the service's REST API.
if v:
print ("Running getAlumnosProfesor in apigateway")
# Connect to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the collection (profesores), the resource (a teacher by id) and its nested resource (alumnos)
url += 'profesores/' + str(request.id) + "/alumnos"
print url
# Send the request
result = urlfetch.fetch(url)
# Decode the JSON payload and convert it into a sendable message.
print "RECEIVED DATA:"
print result.content
listaAlumnos = jsonpickle.decode(result.content)
# Build a list and fill it with every student from listaAlumnos.
vectorAlumnos = []
for alumno in listaAlumnos:
vectorAlumnos.append(Alumno(nombre=str(alumno.get('nombre')),
id=str(alumno.get('dni'))
)
)
# Wrap the items in the message type and send them.
return ListaAlumnos(alumnos=vectorAlumnos)
@endpoints.method(ID, ListaAsignaturas, path='profesores/getAsignaturasProfesor', http_method='GET', name='profesores.getAsignaturasProfesor')
def getAsignaturasProfesor(self, request):
'''
Returns a list with the full data of the subjects the teacher in question teaches.
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/profesores/getAsignaturasProfesor?id=1
'''
if v:
print ("Running getAsignaturasProfesor in apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url+='profesores/'+request.id+"/asignaturas"
result = urlfetch.fetch(url)
if v:
print result.content
listaAsignaturas = jsonpickle.decode(result.content)
print listaAsignaturas
asignaturasItems= []
for asignatura in listaAsignaturas:
asignaturasItems.append( Asignatura( id=str(asignatura.get('id')), nombre=str(asignatura.get('nombre')) ) )
return ListaAsignaturas(asignaturas=asignaturasItems)
@endpoints.method(ID, ListaClases, path='profesores/getClasesProfesor', http_method='GET', name='profesores.getClasesProfesor')
def getClasesProfesor(self, request):
'''
Returns a list with the minimal data of the classes that this teacher teaches.
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/profesores/getClasesProfesor?id=1
'''
if v:
print ("Running getClasesProfesor in apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url+='profesores/'+request.id+"/clases"
result = urlfetch.fetch(url)
if v:
print result.content
listaClases = jsonpickle.decode(result.content)
print listaClases
clasesItems= []
for clase in listaClases:
clasesItems.append(Clase(id=str(clase.get('id')),curso=str(clase.get('nombre')),grupo=str(clase.get('grupo')),nivel=str(clase.get('nivel'))))
return ListaClases(clases=clasesItems)
##############################################
#              subject methods               #
##############################################
@endpoints.method(message_types.VoidMessage, ListaAsignaturas, path='asignaturas/getAsignaturas', http_method='GET', name='asignaturas.getAsignaturas')
def getAsignaturas(self, unused_request):
'''
Returns a list of every subject registered in the system, in simplified form (name and ID only).
Call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/asignaturas/getAsignaturas
Call from JavaScript:
response = service.asignaturas.getAsignaturas().execute()
'''
# Identify the module we are running in.
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Append the resource we want to reach.
url += "asignaturas"
if v:
print str(url)
# With no method specified, a GET request is issued to the URL.
result = urlfetch.fetch(url)
if v:
print result.content
listaAsignaturas = jsonpickle.decode(result.content)
# Build a list and fill it with every subject from listaAsignaturas.
asignaturasItems = []
for asignatura in listaAsignaturas:
asignaturasItems.append(Asignatura( id=str(asignatura.get('id')), nombre=str(asignatura.get('nombre')) ))
# Wrap the items in the message type and send them.
return ListaAsignaturas(asignaturas=asignaturasItems)
@endpoints.method(ID, AsignaturaCompleta, path='asignaturas/getAsignatura', http_method='GET', name='asignaturas.getAsignatura')
def getAsignatura(self,request):
'''
Returns all of a subject's information if the subject is in the system.
Example call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/asignaturas/getAsignatura?id=1
'''
# Trace info
if v:
print nombreMicroservicio
print "GET request to asignaturas.getAsignatura"
print "request: " + str(request)
print '\n'
# Connect to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
# We only name the microservice we want; GAE discovers its URL by itself.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Resource plus entity
url += 'asignaturas/' + request.id
if v:
print "Calling: " + str(url)
# Request to the microservice
result = urlfetch.fetch(url=url, method=urlfetch.GET)
if v:
print "Status code: " + str(result.status_code)
if str(result.status_code) == '400':
raise endpoints.BadRequestException('Malformed request')
if str(result.status_code) == '404':
raise endpoints.NotFoundException('Subject with ID %s not found.' % (request.id,))
asignatura = jsonpickle.decode(result.content)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "\nStatus code: " + str(result.status_code) + '\n'
# Compose an AsignaturaCompleta message; integer fields are converted to
# strings so they can be sent as string message fields.
# Fields that are NULL in the database are not copied into the message;
# they stay empty and are not shown.
asignatura = AsignaturaCompleta(id=str(asignatura.get('id')),
nombre=asignatura.get('nombre')
)
return asignatura
@endpoints.method(ID,MensajeRespuesta,path='asignaturas/delAsignatura', http_method='DELETE', name='asignaturas.delAsignatura')
def delAsignatura(self, request):
'''
delAsignatura()
# Example of deleting a resource by passing a subject's id
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/asignaturas/delAsignatura
{
"message": "OK"
}
# Example run when the resource is not found:
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/asignaturas/delAsignatura
{
"message": "Elemento no encontrado"
}
'''
if v:
print nombreMicroservicio
print "Request to the asignaturas.delAsignatura method of the APIGateway"
print "Request content:"
print str(request)
print '\n'
# Build the address:
url = "http://%s/" % modules.get_hostname(module="microservicio1")
# Extract the id argument from the request and append it to the URL
url += 'asignaturas/' + request.id
if v:
print "Calling: " + url
# Send the request to the service URL with the DELETE method.
result = urlfetch.fetch(url=url, method=urlfetch.DELETE)
# Info after the request:
if v:
print nombreMicroservicio
print "Request result: "
print result.content
print "Status code: "
print result.status_code
# Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
#Methods for relations with other entities
@endpoints.method(ID, ListaAlumnos, path='asignaturas/getAlumnosAsignatura', http_method='GET', name='asignaturas.getAlumnosAsignatura')
def getAlumnosAsignatura(self, request):
'''
Returns a list with summary data of the students (alumnos) enrolled in this asignatura
curl -i -X GET localhost:8001/_ah/api/helloworld/v1/asignaturas/getAlumnosAsignatura?id=1
'''
#Translate the endpoints call into a call to the service's REST API.
if v:
print ("Ejecución de getAlumnosAsignatura en apigateway")
#Connection to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
#Tell GAE which microservice to connect to (just by its name); GAE resolves its URL automatically.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
#Append to the url the collection (asignaturas), the resource (asignatura given by its id) and its nested resource (alumnos)
url+='asignaturas/'+str(request.id)+"/alumnos"
print url
#Send the request
result = urlfetch.fetch(url)
#Consume the JSON data and convert it into a sendable message
print "IMPRESION DE LOS DATOS RECIBIDOS"
print result.content
listaAlumnos = jsonpickle.decode(result.content)
#Create a list
vectorAlumnos= []
#and fill it with all the students from listaAlumnos
for alumno in listaAlumnos:
vectorAlumnos.append(Alumno( nombre=str(alumno.get('nombre')),
#apellidos=str(alumno.get('apellidos')),
id=str(alumno.get('dni'))
)
)
#Adapt them to the message type and send
#return Greeting(message=str(result.content))
return ListaAlumnos(alumnos=vectorAlumnos)
@endpoints.method(ID, ListaProfesores, path='asignaturas/getProfesoresAsignatura', http_method='GET', name='asignaturas.getProfesoresAsignatura')
def getProfesoresAsignatura(self, request):
'''
Returns a list with simplified data of the teachers (profesores) who teach an asignatura.
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/asignaturas/getProfesoresAsignatura?id=1
'''
if v:
print ("Ejecución de getProfesoresAsignatura en apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url+='asignaturas/'+request.id+"/profesores"
result = urlfetch.fetch(url)
if v:
print result.content
listaProfesores = jsonpickle.decode(result.content)
print listaProfesores
profesoresItems= []
for profesor in listaProfesores:
profesoresItems.append( Profesor( id=str(profesor.get('id')), nombre=str(profesor.get('nombre')), apellidos=str(profesor.get('apellidos')) ) )
return ListaProfesores(profesores=profesoresItems)
@endpoints.method(ID, ListaClases, path='asignaturas/getClasesAsignatura', http_method='GET', name='asignaturas.getClasesAsignatura')
def getClasesAsignatura(self, request):
'''
Returns a list with the minimal data of the classes (clases) in which this asignatura is taught
Example call:
> curl -i -X GET localhost:8001/_ah/api/helloworld/v1/asignaturas/getClasesAsignatura?id=1
'''
if v:
print ("Ejecución de getClasesAsignatura en apigateway")
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
url = "http://%s/" % modules.get_hostname(module="microservicio1")
url+='asignaturas/'+request.id+"/clases"
result = urlfetch.fetch(url)
if v:
print url
print "Respuesta del microservicio: \n"
print result.content
print "\n"
listaClases = jsonpickle.decode(result.content)
print listaClases
clasesItems= []
for clase in listaClases:
clasesItems.append(Clase(id=str(clase.get('id')),curso=str(clase.get('curso')),grupo=str(clase.get('grupo')),nivel=str(clase.get('nivel'))))
return ListaClases(clases=clasesItems)
##############################################
# métodos de clases #
##############################################
@endpoints.method(message_types.VoidMessage, ListaClases, path='clases/getClases', http_method='GET', name='clases.getClases')
def getClases(self, unused_request):
'''
Returns a list with all the classes (clases) registered in the system, in simplified form: id_clase, curso, grupo and nivel
Call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/clases/getClases
Call from JavaScript:
response = service.clases.getClases().execute()
'''
#Identification of the module we are in.
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
#Tell GAE which microservice to connect to (just by its name); GAE resolves its URL automatically.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
#Append the resource we want to connect to.
url+="clases"
if v:
print str(url)
#Since nothing else is specified, the GET method of the URL is called.
result = urlfetch.fetch(url)
if v:
print result.content
listaClases = jsonpickle.decode(result.content)
#Create a list
clasesItems= []
#and fill it with all the classes from listaClases
for clase in listaClases:
clasesItems.append(Clase( id=str(clase.get('id')), curso=str(clase.get('curso')), grupo=str(clase.get('grupo')), nivel=str(clase.get('nivel')) ))
return ListaClases(clases=clasesItems)
@endpoints.method(ID, ClaseCompleta, path='clases/getClase', http_method='GET', name='clases.getClase')
def getClase(self,request):
'''
Returns all the information for a class (clase) if it exists in the system.
Example call from a terminal:
curl -X GET localhost:8001/_ah/api/helloworld/v1/clases/getClase?id=1
'''
#Tracing info
if v:
print nombreMicroservicio
print "Petición GET a clases.getClase"
print "request: "+str(request)
print '\n'
#Connection to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
#Tell GAE which microservice to connect to (just by its name); GAE resolves its URL automatically.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
#Resource collection plus the entity id
url+='clases/'+request.id
if v:
print "Llamando a: "+str(url)
#Request to the microservice
result = urlfetch.fetch(url=url, method=urlfetch.GET)
print "RESULTADO:"+str(result.status_code)
#print result.content
if v:
print result.status_code
if str(result.status_code) == '400':
raise endpoints.BadRequestException('Peticion erronea')
if str(result.status_code) == '404':
raise endpoints.NotFoundException('Clase con ID %s no encontrada.' % (request.id))
clase = jsonpickle.decode(result.content)
#Info after the request:
if v:
print nombreMicroservicio
print "Resultado de la petición: "
print result.content
print "\nCódigo de estado: "+str(result.status_code)+'\n'
#Compose a message of type ClaseCompleta.
#Integer fields are converted to strings so they can be sent as string-typed message fields.
#Fields that are NULL in the database are not copied into the message; they stay empty and are not shown.
clase = ClaseCompleta(id=str(clase.get('id')), curso=str(clase.get('curso')), grupo=str(clase.get('grupo')), nivel=str(clase.get('nivel')) )
return clase
@endpoints.method(ID,MensajeRespuesta,path='clases/delClase', http_method='DELETE', name='clases.delClase')
def delClase(self, request):
'''
Deletes the class (clase) with the given id if it exists in the system.
#Example of deleting a resource by passing the id of the class.
Ubuntu> curl -d "id=1" -X DELETE -G localhost:8001/_ah/api/helloworld/v1/clases/delClase
{
"message": "OK"
}
'''
if v:
print nombreMicroservicio
print "Petición al método clases.delClase de APIGateway"
print "Contenido de la petición:"
print str(request)
print '\n'
#Build the address:
url = "http://%s/" % modules.get_hostname(module="microservicio1")
#Extract the id argument from the request and append it to the URL
url+='clases/'+request.id
if v:
print "Llamando a: "+url
#Send the request to the service URL with the appropriate DELETE method
result = urlfetch.fetch(url=url, method=urlfetch.DELETE)
#Info after the request:
if v:
print nombreMicroservicio
print "Resultado de la petición: "
print result.content
print "Código de estado: "
print result.status_code
#Forward the response returned by the microservice call:
return MensajeRespuesta(message=result.content)
@endpoints.method(ID, ListaAlumnos, path='clases/getAlumnosClase', http_method='GET', name='clases.getAlumnosClase')
def getAlumnosClase(self, request):
'''
Returns a list with summary data of the students (alumnos) enrolled in this class
curl -i -X GET localhost:8001/_ah/api/helloworld/v1/clases/getAlumnosClase?id=1
'''
#Translate the endpoints call into a call to the service's REST API.
if v:
print ("Ejecución de getAlumnosClase en apigateway")
#Connection to a specific microservice:
module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
#Tell GAE which microservice to connect to (just by its name); GAE resolves its URL automatically.
url = "http://%s/" % modules.get_hostname(module="microservicio1")
#Append to the url the collection (clases), the resource (clase given by its id) and its nested resource (alumnos)
url+='clases/'+str(request.id)+"/alumnos"
print url
#Send the request
result = urlfetch.fetch(url)
#Consume the JSON data and convert it into a sendable message
print "IMPRESION DE LOS DATOS RECIBIDOS"
print result.content
listaAlumnos = jsonpickle.decode(result.content)
#Create a list
vectorAlumnos= []
#and fill it with all the students from listaAlumnos
for alumno in listaAlumnos:
vectorAlumnos.append(Alumno( nombre=str(alumno.get('nombre')),
#apellidos=str(alumno.get('apellidos')),
id=str(alumno.get('dni'))
)
)
#Adapt them to the message type and send
#return Greeting(message=str(result.content))
return ListaAlumnos(alumnos=vectorAlumnos)
#TODO: continue here
APPLICATION = endpoints.api_server([HelloWorldApi])
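# The endpoint methods above all build their request URLs by string concatenation
# against modules.get_hostname(). A hedged, version-agnostic sketch of how that
# repeated pattern could be factored into one helper -- build_service_url and the
# example hostname below are illustrative assumptions, not part of the original service:

```python
def build_service_url(hostname, *segments):
    """Join a service hostname and path segments into a request URL."""
    path = "/".join(str(s) for s in segments)
    return "http://%s/%s" % (hostname, path)

# e.g. the URL fetched by getAlumnosAsignatura for id=1 (hostname is a placeholder):
url = build_service_url("microservicio1-host", "asignaturas", 1, "alumnos")
```

# Each endpoint method would then pass modules.get_hostname(module="microservicio1")
# as the first argument instead of concatenating path pieces by hand.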
# -*- coding: utf-8 -*-
# warriors/iscsi_warrior.py (from repo alegrey91/legion, MIT license)
from warriors.warrior import Warrior
class Iscsi_warrior (Warrior):
def __init__(self, host, port, workdir, protocol, intensity, username, ulist, password, plist, notuse, extensions, path, reexec, ipv6, domain, interactive, verbose, executed, exec_):
Warrior.__init__(self, host, port, workdir, protocol, intensity, username, ulist, password, plist, notuse, extensions, path, reexec, ipv6, domain, interactive, verbose, executed, exec_)
self.cmds = [
{"name": self.proto+"_nmap_"+self.port, "cmd": 'nmap -n -sV --script=iscsi-info -p '+self.port+' '+ self.host, "shell": True, "chain": False},
]
if self.intensity == "3":
if username != "":
self.cmds = [{"name": self.proto+"_brute_nmap_"+self.port, "cmd": 'nmap -sV --script iscsi-brute --script-args userdb='+self.username+',passdb='+self.plist+' -p ' + self.port + ' ' + self.host, "shell": True, "chain": False}]
else:
self.cmds = [{"name": self.proto+"_brute_nmap_"+self.port, "cmd": 'nmap -sV --script iscsi-brute --script-args userdb='+self.ulist+',passdb='+self.plist+' -p ' + self.port + ' ' + self.host, "shell": True, "chain": False}]
# app/apps/page/tasks.py (from repo atseplyaev/django-flatpages-api, MIT license)
from celery import shared_task
@shared_task
def increment_show_counter_task(page_id: int) -> None:
from .services import increment_show_counter
increment_show_counter(page_id)
#!/usr/bin/env python3
# intronserter.py (from repo djaeg/ChlamyIntronserter, BSD-3-Clause license)
#BSD 3-Clause License
#
#Copyright (c) 2019, Daniel Jaeger
#All rights reserved.
#
#Redistribution and use in source and binary forms, with or without
#modification, are permitted provided that the following conditions are met:
#
#* Redistributions of source code must retain the above copyright notice, this
#list of conditions and the following disclaimer.
#
#* Redistributions in binary form must reproduce the above copyright notice,
#this list of conditions and the following disclaimer in the documentation
#and/or other materials provided with the distribution.
#
#* Neither the name of the copyright holder nor the names of its
#contributors may be used to endorse or promote products derived from
#this software without specific prior written permission.
#
#THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
#AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
#IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
#DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
#FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
#DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
#SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
#CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
#OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
#OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import sys
import collections
import string
import argparse
import base64
import traceback
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
import class_library
class MessageContainer():
def __init__(self):
self.messages = collections.OrderedDict()
self.messages[ 'global' ] = []
return
def parse_input( ArgsClass, MessageContainer ):
TMP = class_library.Tables( '' )
intron_name_seq_list = []
# update 13.03.2019: SeqIO.parse does not iterate when FASTA file contains no ">name" as first line, but only seq ...
default_name = 'unnamed_input_seq'
invalid_fasta = False
with open(ArgsClass.aa_fasta_file, 'r') as fin:
for i, line in enumerate(fin):
if i == 0:
if not line.startswith(">"):
invalid_fasta = True
if invalid_fasta:
with open(ArgsClass.aa_fasta_file, 'r') as fin:
s = fin.read()
with open(ArgsClass.aa_fasta_file, 'w') as fout:
print('>{0}'.format(default_name), file=fout)
print(s, file=fout)
# validate the parsed fasta seq by comparison against Biopython
with open(ArgsClass.aa_fasta_file, 'r') as fin:
aa_seq_dict = TMP.parse_fasta( fin.read(), default_name=default_name )
fin.seek(0)
for (name, seq), record in zip(aa_seq_dict.items(), SeqIO.parse(fin, "fasta")):
#record.name == name is FALSE when the header contains any white spaces - white spaces are removed by Biopython
if not record.seq.upper() == seq:
MessageContainer.messages['global'].append('[ ERROR ] Parsing the FASTA AA seq of {0} was unsuccessful. VALIDATE output!'.format(name))
# validate FASTA AA input for only valid characters
allowed_characters = set(IUPAC.protein.letters + '*')
assert sorted([ TMP.AA_lookup_dict[k]['1-letter'] for k in TMP.AA_lookup_dict ]) == sorted(allowed_characters)
allowed_characters_DNA = IUPAC.unambiguous_dna.letters
for name, seq in aa_seq_dict.items():
MessageContainer.messages[name] = []
if ArgsClass.only_insert_introns:
if not set(seq).issubset(allowed_characters_DNA):
MessageContainer.messages[name].append('[ ERROR ] You specified --only_insert_introns, but your FASTA sequence {0} contains invalid characters. Allowed characters for this option are {1}'.format(name, sorted(allowed_characters_DNA)))
aa_seq_dict[name] = ''
else:
if not set(seq).issubset(allowed_characters):
MessageContainer.messages[name].append('[ ERROR ] Your FASTA sequence {0} contains invalid characters. Allowed characters are {1}'.format(name, sorted(allowed_characters)))
aa_seq_dict[name] = ''
# check if aa- or cDNA-seq:
allowed_characters = set('ATCG')
for name, seq in aa_seq_dict.items():
if seq and set(seq).issubset(allowed_characters):
if len(seq) % 3 != 0:
MessageContainer.messages[name].append( '[ ERROR ] Your input sequence "{0}" seems to be a cDNA sequence, but its length is not a multiplier of 3! Please re-submit a valid sequence, preferably as amino acids.'.format(name) )
aa_seq_dict[ name ] = ''
else:
if ArgsClass.only_insert_introns:
MessageContainer.messages[name].append( '[ INFO ] Your input sequence "{0}" seems to be a cDNA sequence and the option --only_insert_introns was specified. Only introns will be inserted, nothing else.'.format(name) )
else:
MessageContainer.messages[name].append( '[ INFO ] Your input sequence "{0}" seems to be a cDNA sequence, because it contains only the characters A, T, C and G. The sequence will be translated to its amino acid counterpart first.'.format(name) )
aa_seq_dict[ name ] = class_library.CodonOptimizer( '' ).translate( seq, TMP.codon2aa_oneletter )
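# The loop above decides between amino-acid and cDNA input purely by alphabet:
# a sequence made only of A, T, C and G is treated as cDNA. A minimal,
# self-contained sketch of that heuristic (looks_like_dna is an illustrative
# name, not a function of this module):

```python
def looks_like_dna(seq):
    """True if the sequence consists only of A, T, C and G (the check used above)."""
    return bool(seq) and set(seq.upper()) <= set("ATCG")
```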
if ArgsClass.custom_codon_usage_table_file:
with open(ArgsClass.custom_codon_usage_table_file) as fin:
codon_usage_table = fin.read()
try:
TMP.import_codon_table( codon_table_string = codon_usage_table )
TMP.convert_codon_counts_2_freq()
except:
tb = traceback.format_exc()
MessageContainer.messages['global'].append('[ ERROR ] Invalid codon table. Using default=Kazusa for C. reinhardtii instead... error message was: {0}'.format(tb))
codon_usage_table = TMP._kazusa_codon_table()
else:
if ArgsClass.codon_usage_table_id == 'kazusa':
codon_usage_table = TMP._kazusa_codon_table() # get internal table
else:
codon_usage_table = TMP._hivecut_codon_table() # get internal table
cut_site_list = [ cut_site for cut_site in ArgsClass.cut_sites.split(',') if cut_site ]
if 'custom' in cut_site_list:
cut_site_list = [ cut_site for cut_site in cut_site_list if cut_site != 'custom' ] # remove entry 'custom'
if ArgsClass.custom_cut_sites:
cut_site_list += [ cut_site for cut_site in ArgsClass.custom_cut_sites.split(',') if cut_site ] # process cut site fasta input
allowed_len = set([6, 8])
for cut_site in cut_site_list:
if len(cut_site) not in allowed_len:
MessageContainer.messages['global'].append('[ ERROR ] The list of cut sites to avoid contains at least one sequence with a length unequal to either 6 or 8. This/these sequences are ignored.')
break
cut_site_list = [cut_site for cut_site in cut_site_list if len(cut_site) == 6 or len(cut_site) == 8]
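# The validation above discards any cut site whose length is not 6 or 8 bp.
# The same filter as a stand-alone sketch (filter_cut_sites is an illustrative
# helper name, not part of this module):

```python
def filter_cut_sites(sites):
    """Keep only 6- or 8-bp restriction sites, as the list comprehension above does."""
    return [s for s in sites if len(s) in (6, 8)]
```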
intron_name_seq_list.append( ('intron', ArgsClass.intron_seq.lower()) )
if ArgsClass.intron_lastdifferent:
if ArgsClass.intron_lastdifferent_seq:
intron_name_seq_list.append( ( 'last_intron', ArgsClass.intron_lastdifferent_seq.lower() ) )
else:
MessageContainer.messages['global'].append('[ ERROR ] You specified the parameter "--intron_lastdifferent", but did not specify the parameter "--intron_lastdifferent_seq". Option is ignored and the last intron will NOT be substituted. ')
if ArgsClass.supersede_intron_insert:
if ArgsClass.manual_intron_positions:
start_position_list = [ int(position.strip()) for position in ArgsClass.manual_intron_positions.split(',') if position ]
else:
MessageContainer.messages['global'].append('[ ERROR ] You specified the parameter "--supersede_intron_insert", but did not specify the parameter "--manual_intron_positions". Option is ignored and AUTOMATIC intron insertion performed. ')
start_position_list = None
else:
start_position_list = None
# cut site
cut_site_start = ArgsClass.cut_site_start
if cut_site_start == 'None':
cut_site_start = ''
elif cut_site_start == 'custom':
cut_site_start = ArgsClass.custom_cut_site_start
cut_site_end = ArgsClass.cut_site_end
if cut_site_end == 'None':
cut_site_end = ''
elif cut_site_end == 'custom':
cut_site_end = ArgsClass.custom_cut_site_end
kwargs = {}
kwargs[ 'aa_seq_dict' ] = aa_seq_dict
kwargs[ 'codon_table' ] = codon_usage_table
kwargs[ 'cut_site_list' ] = cut_site_list
kwargs[ 'intron_seq' ] = ArgsClass.intron_seq.lower()
kwargs[ 'intron_name_seq_list' ] = intron_name_seq_list
kwargs[ 'insertion_seq' ] = ArgsClass.nucleotide_pair
kwargs[ 'start' ] = ArgsClass.start
kwargs[ 'intermediate' ] = ArgsClass.target
kwargs[ 'end' ] = ArgsClass.end
kwargs[ 'max_exon_length' ] = ArgsClass.max
kwargs[ 'start_position_list' ] = start_position_list
kwargs[ 'cut_site_start' ] = cut_site_start.upper()
kwargs[ 'cut_site_end' ] = cut_site_end.upper()
kwargs[ 'linker_start' ] = ArgsClass.linker_start.upper()
kwargs[ 'linker_end' ] = ArgsClass.linker_end.upper()
kwargs[ 'insert_start_codon' ] = True if ArgsClass.insert_start_codon else False
kwargs[ 'insert_stop_codon' ] = True if ArgsClass.insert_stop_codon else False
kwargs[ 'remove_start_codon' ] = True if ArgsClass.remove_start_codon else False
kwargs[ 'remove_stop_codon' ] = True if ArgsClass.remove_stop_codon else False
kwargs[ 'only_insert_introns' ] = True if ArgsClass.only_insert_introns else False
kwargs[ 'output_dict' ] = None
return kwargs
def process_input( kwargs, MessageContainer ):
output_dict = collections.OrderedDict()
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)
# check if the intron sequences are free from cut sites
if not kwargs[ 'only_insert_introns' ]:
CSR_class = class_library.CutSiteRemover( dna_seq = '', cut_site_list = kwargs[ 'cut_site_list' ], codon2aa = {}, aa2codon_freq_list = {} )
for name, seq in kwargs[ 'intron_name_seq_list' ]:
seq = kwargs[ 'insertion_seq' ][0] + seq + kwargs[ 'insertion_seq' ][1]
cut_site2index_list = CSR_class.get_cut_site_indices( dna_seq = seq )
if cut_site2index_list:
l = []
for cut_site, index_found_at_list in cut_site2index_list.items():
l.append( 'cut site "{0}" at the position(s): {1}'.format( cut_site, index_found_at_list ) )
MessageContainer.messages['global'].append( '[ ERROR ] The following cut sites are part of the "{1}" intron sequence: {0}. (a) Intron insertion might NOT be possible and (b) these cut sites are NOT removed from the optimized sequence.'.format( l, name ) )
# fine tune sequence
if not kwargs[ 'only_insert_introns' ]:
start_codon, stop_codon = '', ''
if kwargs[ 'insert_start_codon' ]:
start_codon = 'M'
if kwargs[ 'insert_stop_codon' ]:
stop_codon = '*'
if kwargs[ 'linker_end' ]:
MessageContainer.messages['global'].append( '[ INFO ] Although you requested to insert a * stop codon, the * stop codon was NOT inserted, because you also requested a 3\'-linker. Inserting the * stop codon would have resulted in translation termination, counteracting your intended protein fusion as indicated by the 3\'-linker insertion request.' )
kwargs[ 'insert_stop_codon' ] = False
stop_codon = ''
for name, aa_seq in kwargs[ 'aa_seq_dict' ].items():
if not aa_seq:
continue
aa_seq = aa_seq.upper()
# remove start codon if requested
for z in range(1000):
if kwargs[ 'remove_start_codon' ] and aa_seq.startswith('M'):
aa_seq = aa_seq[1:]
else:
break
# remove stop codon if requested or if 3'-linker = linker_end is given
for z in range(1000):
if (kwargs[ 'remove_stop_codon' ] or kwargs[ 'linker_end' ]) and aa_seq.endswith('*'):
aa_seq = aa_seq[:-1]
if kwargs[ 'linker_end' ] and not kwargs[ 'remove_stop_codon' ]:
MessageContainer.messages['global'].append( '[ INFO ] Although you did not request to remove the native * stop codon, the * stop codon was automatically removed, because you requested to insert a 3\'-linker. Not removing the * stop codon would have resulted in translation termination, counteracting your intended protein fusion as indicated by the 3\'-linker insertion request.' )
else:
break
# insert start codon, linker_start, linker end, stop_codon
kwargs[ 'aa_seq_dict' ][ name ] = start_codon + kwargs[ 'linker_start' ] + aa_seq + kwargs[ 'linker_end' ] + stop_codon
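# The capped for/break loops above strip any leading 'M' start codons and
# trailing '*' stop codons before the linkers and new terminal codons are
# attached. The same trimming as a compact sketch (trim_terminal_codons is an
# illustrative name; the original caps each loop at 1000 iterations):

```python
def trim_terminal_codons(aa_seq, strip_start=True, strip_stop=True):
    """Drop leading 'M' and trailing '*' characters, mirroring the loops above."""
    while strip_start and aa_seq.startswith("M"):
        aa_seq = aa_seq[1:]
    while strip_stop and aa_seq.endswith("*"):
        aa_seq = aa_seq[:-1]
    return aa_seq
```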
for j, (name, aa_seq) in enumerate( kwargs[ 'aa_seq_dict' ].items() ):
if not aa_seq:
continue
# initialize:
if kwargs[ 'only_insert_introns' ]:
DB_class = class_library.Tables( str(Seq(aa_seq, IUPAC.unambiguous_dna).translate()).upper() )
else:
DB_class = class_library.Tables( aa_seq.upper() )
output_dict[ name ] = { 'cDNA_seq_plus_i' : None, 'genbank_string' : None, 'name' : '>{0}'.format(name) }
# codon optimize:
if not kwargs[ 'only_insert_introns' ]:
DB_class.import_codon_table( codon_table_string = kwargs[ 'codon_table' ] )
DB_class.convert_codon_counts_2_freq()
CO_class = class_library.CodonOptimizer( aa_seq = DB_class.aa_seq )
CO_class.reverse_translate( DB_class.aa_oneletter_2_mostfreq_codon_dict )
DB_class.cDNA_seq = CO_class.dna_seq
else:
DB_class.import_codon_table( codon_table_string = kwargs[ 'codon_table' ] )
DB_class.convert_codon_counts_2_freq()
CO_class = class_library.CodonOptimizer( aa_seq = '' )
# cut site removal:
if not kwargs[ 'only_insert_introns' ]:
CSR_class = class_library.CutSiteRemover( dna_seq = DB_class.cDNA_seq, cut_site_list = kwargs[ 'cut_site_list' ], codon2aa = DB_class.codon2aa, aa2codon_freq_list = DB_class.aa2codon_freq_list )
DB_class.cDNA_seq_cleaned = CSR_class.main( iter_max = 1000 )
MessageContainer.messages[name] += CSR_class.messages
else:
CSR_class = class_library.CutSiteRemover( dna_seq = '', cut_site_list = [], codon2aa = DB_class.codon2aa, aa2codon_freq_list = DB_class.aa2codon_freq_list )
DB_class.cDNA_seq_cleaned = aa_seq.upper() # aa_seq is in this case a cDNA seq
# annotate the sequence using a mapping nucleotide->annotation
seqlist = list( DB_class.cDNA_seq_cleaned )
seqlist_annotated_beforeCDS, seqlist_annotated_afterCDS = [], []
if kwargs['insert_start_codon']:
for i in range(3):
seqlist_annotated_beforeCDS.append( (seqlist.pop(0), 'Start') )
if kwargs['linker_start']:
for i in range(len(kwargs['linker_start'])*3):
seqlist_annotated_beforeCDS.append( (seqlist.pop(0), "5'-Linker") )
if kwargs['insert_stop_codon']:
for i in range(3):
seqlist_annotated_afterCDS.append( (seqlist.pop(), 'Stop') )
if kwargs['linker_end']:
for i in range(len(kwargs['linker_end'])*3):
seqlist_annotated_afterCDS.append( (seqlist.pop(), "3'-Linker") )
seqlist_annotated = seqlist_annotated_beforeCDS + [ (n, 'CDS') for n in seqlist ] + list(reversed(seqlist_annotated_afterCDS))
assert len(seqlist_annotated) == len(DB_class.cDNA_seq_cleaned)
assert ''.join([ n for n, a in seqlist_annotated ]) == DB_class.cDNA_seq_cleaned
# intron insertion:
if len(kwargs[ 'intron_name_seq_list' ]) == 2:
intron_seq_2 = kwargs[ 'intron_name_seq_list' ][1][1]
else:
intron_seq_2 = ''
II_class = class_library.IntronInserter(
dna_seq = DB_class.cDNA_seq_cleaned,
insertion_seq = kwargs[ 'insertion_seq' ],
intron_seq = kwargs[ 'intron_seq' ],
start = kwargs[ 'start' ],
intermediate = kwargs[ 'intermediate' ],
end = kwargs[ 'end' ],
max = kwargs[ 'max_exon_length' ],
start_position_list = kwargs[ 'start_position_list' ],
end_intron_different_seq = intron_seq_2,
CSR_class = CSR_class
)
II_class.determine_positions()
II_class.insert_introns()
MessageContainer.messages[name] += II_class.messages
DB_class.cDNA_seq_plus_i = II_class.dna_seq_new
# substitute the last intron to rbcS2 i2 if requested:
dna_seq_list = []
exon_list = DB_class.cDNA_seq_plus_i.split( II_class.intron_seq )
last_intron_different = True if len( kwargs['intron_name_seq_list'] ) == 2 else False
for i, exon in enumerate( exon_list ):
dna_seq_list.append( exon )
if i < len( exon_list ) - 1:
if i == len( exon_list ) - 2 and last_intron_different:
intron_2_name, intron_2_seq = kwargs[ 'intron_name_seq_list' ][1]
dna_seq_list.append( intron_2_seq )
else:
dna_seq_list.append( II_class.intron_seq )
DB_class.cDNA_seq_plus_i = ''.join( dna_seq_list )
# include introns into annotated seq
for i, n in enumerate(DB_class.cDNA_seq_plus_i):
if not n == seqlist_annotated[i][0]:
seqlist_annotated.insert(i, ( n, 'intron' ))
assert ''.join([ n for n, a in seqlist_annotated ]) == DB_class.cDNA_seq_plus_i
# final check: no cut sites appeared due to insertion of introns
if not kwargs[ 'only_insert_introns' ]:
cut_site2index_list = CSR_class.get_cut_site_indices( dna_seq = DB_class.cDNA_seq_plus_i )
if cut_site2index_list:
l = []
cut_site2enzyme = { v:k for k, v in DB_class.re_lookup_dict.items() }
cut_site2enzyme.update( { CO_class.reverse_complement( v ) : k for k, v in DB_class.re_lookup_dict.items() } )
for cut_site, index_found_at_list in cut_site2index_list.items():
l.append( 'cut site "{0}" ({1}) at the position(s): {2}'.format( cut_site, cut_site2enzyme[cut_site], index_found_at_list ) )
MessageContainer.messages[name].append( '[ WARNING ] The following cut sites appeared due to the insertion of introns: {0}'.format( l ) )
# final check 2: spliced, translated seq is identical to input seq
exon_list = DB_class.cDNA_seq_plus_i.split( II_class.intron_seq )
if last_intron_different:
intron_2_name, intron_2_seq = kwargs[ 'intron_name_seq_list' ][1]
last_two_exons = exon_list[-1].split(intron_2_seq)
exon_list = exon_list[:-1] + last_two_exons
coding_dna = ''.join(exon_list)
if not kwargs[ 'only_insert_introns' ]:
if not CO_class.translate(coding_dna, DB_class.codon2aa_oneletter) == aa_seq or not str(Seq(coding_dna, IUPAC.unambiguous_dna).translate()) == aa_seq:
MessageContainer.messages[name].append( '[ ERROR ] The translation of the spliced optimized DNA sequence does not match the input AA sequence. MANUALLY VALIDATE OUTPUT!' )
else:
translated_intron_enriched_seq = CO_class.translate(coding_dna, DB_class.codon2aa_oneletter)
translated_input_seq = CO_class.translate(aa_seq, DB_class.codon2aa_oneletter)
translated_intron_enriched_seq_2 = str(Seq(coding_dna, IUPAC.unambiguous_dna).translate())
translated_input_seq_2 = str(Seq(aa_seq, IUPAC.unambiguous_dna).translate())
if translated_intron_enriched_seq != translated_input_seq or translated_intron_enriched_seq_2 != translated_input_seq_2:
MessageContainer.messages[name].append( '[ ERROR ] The translation of the spliced optimized DNA sequence does not match the translation of the input DNA sequence. MANUALLY VALIDATE OUTPUT!' )
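The checks above compare an in-house codon-table translation against Biopython's. A minimal dict-based translator in the spirit of CO_class.translate, assuming a {codon: one-letter AA} mapping (the table below is a tiny illustrative subset):

```python
# Illustrative subset of a codon table; stop codons map to '*'.
CODON2AA = {'ATG': 'M', 'GGA': 'G', 'TAA': '*'}

def translate(dna_seq, codon2aa):
    """Translate dna_seq codon by codon, ignoring any trailing partial codon."""
    usable = len(dna_seq) - len(dna_seq) % 3
    return ''.join(codon2aa[dna_seq[i:i + 3]] for i in range(0, usable, 3))

assert translate('ATGGGATAA', CODON2AA) == 'MG*'
```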
# add cut sites:
if kwargs[ 'only_insert_introns' ]:
dna_seq_fine_tuned = DB_class.cDNA_seq_plus_i
dna_seq_list_fine_tuned = list(dna_seq_list)
else:
dna_seq_fine_tuned = kwargs[ 'cut_site_start' ] + DB_class.cDNA_seq_plus_i + kwargs[ 'cut_site_end' ]
dna_seq_list_fine_tuned = list(dna_seq_list)
dna_seq_list_fine_tuned[ 0 ] = kwargs[ 'cut_site_start' ] + dna_seq_list_fine_tuned[ 0 ]
dna_seq_list_fine_tuned[ -1 ] = dna_seq_list_fine_tuned[ -1 ] + kwargs[ 'cut_site_end' ] # appended, matching dna_seq_fine_tuned above
# include cut sites into annotated seq
if kwargs[ 'cut_site_start' ]:
seqlist_annotated = [ (n, 'cut_site') for n in kwargs[ 'cut_site_start' ] ] + seqlist_annotated
if kwargs[ 'cut_site_end' ]:
seqlist_annotated = seqlist_annotated + [ (n, 'cut_site') for n in kwargs[ 'cut_site_end' ] ]
assert ''.join([ n for n, a in seqlist_annotated ]) == dna_seq_fine_tuned
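Flanking the sequence with cut sites and extending the annotation, as done above, can be sketched in isolation (helper name and sequences are hypothetical):

```python
def add_flanks(seq, seqlist_annotated, cut_start, cut_end):
    """Flank seq with cut-site sequences and annotate the added bases."""
    flanked = cut_start + seq + cut_end
    annotated = ([(n, 'cut_site') for n in cut_start]
                 + list(seqlist_annotated)
                 + [(n, 'cut_site') for n in cut_end])
    # Annotation must stay in lockstep with the flanked sequence.
    assert ''.join(n for n, a in annotated) == flanked
    return flanked, annotated

flanked, annotated = add_flanks('ATG', [(n, 'CDS') for n in 'ATG'],
                                'GAATTC', 'GGATCC')
```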
# create genbank file:
GB_class = class_library.MakeGenbank()
output_dict[ name ][ 'genbank_string' ] = GB_class.generate_gb_string(
aa_seq = DB_class.aa_seq,
fasta_name = name,
intron_name_seq_list = kwargs[ 'intron_name_seq_list' ],
seqlist_annotated = seqlist_annotated,
codon2aa = DB_class.codon2aa_oneletter,
CO_class = CO_class,
)
if not GB_class.check_gb(aa_seq = DB_class.aa_seq, gb_string = output_dict[ name ][ 'genbank_string' ]):
MessageContainer.messages[name].append( '[ ERROR ] The annotation or sequence itself in the generated GenBank is not identical to the input amino acid sequence. MANUALLY VALIDATE OUTPUT!' )
output_dict[ name ][ 'filename' ] = 'Intronserter_optDNA-{0}_{1}.gb'.format( j + 1, name.translate(remove_punctuation_map).replace(' ','')[ : 125 - 29 ] ) # remove characters that are not allowed for filenames
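The filename fragment is derived by stripping punctuation and spaces, then truncating. A standalone sketch, assuming remove_punctuation_map deletes all ASCII punctuation (as its name suggests; the actual map is defined elsewhere in this file):

```python
import string

# Assumed stand-in for remove_punctuation_map: delete all ASCII punctuation.
remove_punctuation_map = dict.fromkeys(map(ord, string.punctuation))

def safe_filename_part(name, max_len=125 - 29):
    """Strip punctuation and spaces, then truncate for a safe filename."""
    return name.translate(remove_punctuation_map).replace(' ', '')[:max_len]

assert safe_filename_part('seq #1: test/variant A') == 'seq1testvariantA'
```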
DB_class.cDNA_seq_plus_i = dna_seq_fine_tuned
output_dict[ name ][ 'cDNA_seq_plus_i' ] = DB_class.cDNA_seq_plus_i
# Create two figures
PlotClass = class_library.Plotting()
output_dict[ name ][ 'fig_tmp' ] = PlotClass.plot_norm_codon_freq(dna_seq = DB_class.cDNA_seq_cleaned, aa2codon_freq_list = DB_class.aa2codon_freq_list)
output_dict[ name ][ 'fig_tmp_introns' ] = PlotClass.plot_gene_architecture(dna_seq_list = dna_seq_list_fine_tuned, cut_sites = (kwargs[ 'cut_site_start' ], kwargs[ 'cut_site_end' ]))
LogClass = class_library.PrepareLog()
output_dict[ name ][ 'session_logs' ] = (
'<ul><li>' + '</li><li>'.join([str(_) for _ in kwargs.items()]) + '</li></ul>',
LogClass.cut_site_removal_log( log_dict = CSR_class.log_dict ),
LogClass.intron_insertion_log( log_dict = II_class.log_dict )
)
kwargs[ 'output_dict' ] = output_dict
return kwargs, MessageContainer
def get_html_strings():
base = '''<html>
<head>
<script type="text/javascript">
{functions}
function CopyToClipboard() {{
const el = document.createElement('textarea');
const text = document.getElementById("TextToCopy").innerHTML;
el.value = text;
document.body.appendChild(el);
el.select();
document.execCommand('copy');
document.body.removeChild(el);
alert("Copied the text: " + text);
}}
</script>
<style> body {{ font-family: Arial; line-height: 1.5; }} </style>
<style> tr:nth-child(even).tr_alternate_color {{background: #E3EBF5}} </style>
<style> tr:nth-child(odd).tr_alternate_color {{background: #FFF}} </style>
<style> h1, h2 {{ color: #007a00; }} </style>
<style> h3 {{ color: #007a00; margin-bottom: 0em; }} </style>
</head>'''
html_header = '''
<body>
<div style="word-spacing: 20px;background-color: rgba(192, 255, 33, 0.22);">
<img src="data:image/jpeg;base64,{0}" alt="Intronserter-Logo" style="float:left;height:78px;"><br><br>
<span style="color:white;">.</span>
{{anchors}}
<br><br>
</div>
<hr>
<h1>Codon-optimized, cut sites removed, intron-enriched DNA sequence(s):</h1>
{{messages}}
<hr>'''
html_header = html_header.format('/9j/4AAQSkZJRgABAQEAlgCWAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCACMA2UDASIAAhEBAxEB/8QAHwABAAICAwEBAQEAAAAAAAAAAAgJAQoCBAcLBgMF/8QAdBAAAAUDAgICCA8HDAsLCAsAAQIDBAUABgcIEQkSEyEKGTFSYZHR1xQVFxgiMkFRWFlxl5ix1hYjOWdygaEaNjc4V3mys7a3uPAkJSdCc3d4iLS1wSYoKTNiZXaClsLxQ0ZJU2aHp+E0NURHY2iSoqbH1f/EAB4BAQABBAMBAQAAAAAAAAAAAAAFAQMEBgIHCAkK/8QAWhEAAQIEAgYECAgICggFBQAAAQIDAAQFERIhBhMxQVFhFCLR8AcVcYGRk6GxFjJSU1VikvEIIzNCcoKywSQ0NUVUVmR0s+EJGCU3Q3N1djZjlMLTOESktLX/2gAMAwEAAhEDEQA/AN6YRHYvX/fED83Rk6q5VxH2pPyifxZK5V5PO0/q/spiHPxlfpH3CFKUqoBNyASBYEgXsTsB4X3cYbLE5A5A8TwHGFKUqkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSqEhIurZcX85tCFK/moG5e4A7D1lEvNzhsIGIPN97ADlESGFUpyAUwjy8/IIac/GtvPUPenF1x3pxsTWRrD03Yqj+HJC5qXt7TBny6sNpy2Q1dTuSbBeTNwIQBnEdMqvLcRjWRnD+JLIkawkIRs8QbEcou9s0L0RntNdIZTRumLQibm1pbly4cKdYsCwKsgkE7SSLJuc9htvvsyrDky+vC0ylTjpsThQmxJy77OJtuOUrQD9QrUb8a5xevp1X94f+ZvD+gPep6hWo341zi9fTqv7w/8zeH9Ae9XoL/VG8Ivz0j/AOrR/wDJ3seV4L4WUD+lD7K+/Huct/ylaAfqFajfjXOL19Oq/vD/AMzeH9Ae9T1CtRvxrnF6+nVf3h/5m8P6A96n+qN4RfnpH/1aP/k72PK74WUD+lD7K+/Huct/ylaAfqFajfjXOL19Oq/vD/zN4f0B71ZDBmo0g848Vvi9GENhAvr676IJhE3tSqKRJSJmEoiBREFTGHZNNFRQ5Arg5+CX4QWG3HnXJRTbSFOLSiZSpZShOI4UhZJOWyx2RyRpVQ1qShuZBWohKRhVmo2FvaeyN/kC7CI777//ACrlVMHY+GYcq534QGkDKebMh3nlfJtxNM4M7iyDkK4pW7ryuMlq6l8zWfBHnLlm1nkvLOYu14CChiOpWReSB27BuLgQV6Rd3c8A7gA+/XmGoyTtMnp2nPnE/JTS5dxWdjq1Ybi+wG2w5i8bBYJUW07CgOec4bg5X3+fI5RmlKVhxSFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCOI+1J+UT+LJXKuI+1J+UT+LJWR36tvfDf5KqFYVFVsVrZceomKEgKJPzifemKDuyVsxZcwPwossZHwflTIuGshR2ScMsY6/MV3zcuO7zj2UrfccykmjC6LSlYedatpBqqds/bNHnI9aHUbOkjtjqiSNHYnufs76itAGZrz1B5ty7nm8YvV1fNrxl15lyTeOUriirZZYfwZKM7dj7hvWXm5FvEN5OXmJMkazeBHpu5Zy6TSKs7VAPT+yrg24N2Zf
DlTBQ+PI0VUUuw1iiPDWzoAD1jrWyCPWO24hg7Tt1AOw7CO3UA7c47J8xefmDsWgstL8HOlrwaCH01iQKHbJunCzTzfEQeqUkixO0qsLqzyKmCinUFSRdSp566flDXOnCfKBY+gc9uYADYQ7u3Vt7we5/4/wC2sbAGwB74Dt3erq33339z9NUT8anjU9p+aacXHravXDH1APsmsuT1ZvUjG0SY5+4U3N1YxyWM2Mv92hkRArWDPG+lwEFxIiuJkq6eIV2VxhjSzb2P7M0+Ydjc5aibtxdYV/ZCj398KNsQYKm7+suGupCxpi5IyFQnMn3jbicm3ZXPCQ8TZcaw2bt3M8wn2MhAM9TkaBValLy03Iymvl56ZmJRpy9kpmZZIVMOLUsh1tJCcKS8QgqyGS0k3FSEyFtNKbCFOU9FRacuLGQW6ltAFjfGlxQbOV9ovdKrbd4huGw0Hfq298PF7v8AUfr2r5ymKuzQtZ0ZeLdfOWk/THfFgGUMD+ExQ5yzie8uQTFMk4Z3Zed+5mhgVZp85wbHstNJ4ryAC7BMDCO8Xw/NfOAOJFp3t7Unp5lny1uyMivb122lcDVuzvLHd9xyDJ9NWTd8Y0Vft20lGt5NhIMFo6TexM7DSrCWj1fQkq3WHLquiNapTJnnmEJStAQ462pKwhIwkquDll1SbXvfDfIxiLs0tDbtwFqSlJIuCoqTYXvko7d+w7Y8l4vPEKkOGNoovPVNB41ZZWuSLui1LHtm1Ja4VbZgwnrycumjKYnnzWPkZF1Dw4NlXjuHi02b6ZAhI4kvCouF5Zn4nwEdaOd9fmgdpqW1FzcLM5Gu7NOWo86NtQEfbVtW9b8NKRyVv2zbsSxFRVCIh2C5WrdSXdyk68Ehn81LyUg6XeL6w/ZP/GXLkST1EcJf1uPpL6lmVMTXH6v/AKsIy/p4VnZsHfwNvUv9S9h6A6f7sfSwzwcjSgkCPB6BTg8MiWKPBq7I9Dh46c8a6ITaNj5gNIZdl5EMml1DFsEEQyXcMaQrcLIHCF5+iRgzeyHa8mhZbnFMU44/34uyUjRWYnNFapMNyaV1KqTlNmqS8XEnDS1Myb1/jlCStRmV4XAJpQWkFPVQE5tUl1SzNMUu2slVVCYqZGV2G2XS2ATbFZIaFhex4bT9MvcANy+/uby+WhR6h93Ywh1fL/8APxVTHxneLl2onCuIswjp+9cMfK2S3uOvudLlX1JvSDoLVlbmGbGW9TLJHpwVyWMO1UYjEsuiKc6oSJlClJVGGuLsu1ti7FmA0NKOBMf3Ln7KeIrHy/lZjkq8J2+8W4Ke35HtJ6KxUZayjYzuTJV5oQrlN7cL4jqwGVus3sG0WTm5R1MRcJqcjo3WKoguScukNKnJqn4lKSAmalHA3OKVc3SAoEAqAubBNyoRbbZW4mWcSbNTLC5qV25Sq0JeZueRuM+A8g3bR8PkrADv4+rw+GtbjgE8brM/FuSzNa2X9O1pY5nsHQVmSs5lXGc/NEx3ccle8nKsIm1krEvF3P3Ha8qdpb0g/aKt77uxjIN2b9RdGCcpsWsn6hxaeyDNKnCzkT4mCDlNQmqRWMjZUcLWjcLK3IyzWUkiV5EPcr3+vHzba0F5JgYj1pbrW3LiuuRjHUU/cQkXCyUbcjlOaN1OVqqKSJcPTTzQW2lNrZgLUpR2JSAgkqUQAL3tbK3KMOTqnkt9USmT+VgLYUgbwT102AzztfhfxSvm1ynZnHEAXulV3CaaNG8bZXowFW9vysDm6XuUYsFDqGZPLuZ5khIp25QIPKk8JZjVuiAHEsa4EwJl2J+E12THpm4it+wmn3LdiL6XtR9yKFZWXByd0N7vxblaRBAywxdp3m4ZWy6gr0fFTXOzsu6YXoJZQjGPti6Z+cWCLNJTGgdfallPGSacUlAcKELQs9UpKknCo55bRkSAAd44vBUtdxSihtIONYBOFJsCTkchffbYSeWzbWOooeD+v9fkCv5FOYAHm3OcpCiO6hQMc5
AEFR5ypkSEHBgAAUBBsIgICZugqddWtYDGPZM2JZvXdql0lZywLF6fsU6VltR57w1Jy2cXF2oSjDT9c7q2E/QuKmeH4KRGYvt6k3Qg7Xibxm5kZd+ztuLb3A/dtiGgZKnzVUfm0SDYmHWGxMzSCUJ6NgUlq4CiCo3JACMSrZgZXi4Jd9MuuZZAcYYAZLozsl3CcXDZe2ZyvYk7doTubiI++Pc22D3qAO49Xc9/w+9Xz5dUfZnua1r3fMtGelTEsHjqOfOmsfcGpV3d983ddMeiIFZSjm0cY3zjuLsl645wFeO+7LIQpkTRVSnOcztqSWfDh7L2tbNOT7Mwxr0wraGFVb3lm8A0z/iWZmjYvhp2UcJM4VK9sf3i7nLltC2XKyxW8neyN/3m3YOFGzqWgIa3ySk/ET40E0iMuqY1TY6uLV4wVgAJOaSSbEG1nLvAg5JFopMMKlQL3U2Gw8opz3AnYd3n5Ru0U39yuumILcpiKAcp0wVIcglOU4c5zFWQMjzkErgo+wHY64ABFSbKpCsfWq4vvZJeAuGjkF5p0xljlXUtqUi2LJzekCldido46xOpKR5X8bG3nc6cZcErJXes3cRcmtZULFkBvCvkXEreEC/UZNnmuStNnp2f8WSkoXJxBVjSMggJIutaldVCRsJUQkKISTc2gwwqaa17Rs1le/MAgX4m9yBmPdsu1gRABAPdH3PB79fNli+zOOIYlcqa03pv0ZyFqkdH6eFirbzbFXAsz5h5WzW53Ob7jjUXfRfeiPVLUeoqqnBVVudDmRDaU4QXZB2mzipzr7Dzix5jTxqfjYOQuT1KZu4GV62zeNtxipvTKUxjkBGKgiTryCZqNZSdtOfte25iNbOni8H90kXFy8ky2Kb0Kr8nLLmXJVpSUIDjmrUlZSkWKr2N8rm5sQNxMWngtjrLUW2wRicFrJBsMRzOwkZ7ri+0RsD1xEoD+Yfc+QOrxbV/MBEQD2A7lHn5QOUNigsYTFFUQFICCQiiaIGApSkFVFRRJJRQ5NT7ijdlT6d9FeR7q0/6W8co6rcwWW/eQF9XetdZbZwlZFytQWRe2+lOxbO4JfJU/bsi3K0nYyAJbdtRpnSsSnd4zkPJQ7OCp9Nm6tMdEkZFx95NtahGFGVx1lOKUENgE7VqSCbJviIBvNSzjyNdYusixLhSRa5FrbydhtYnlG2KJwAdh38XUFcu7XzdbK7M816M7sZuMjaX9It0WQDznkoKyWWZMf3WuzEwgVswvOayxlCIYLgmIFWfmsF+BgA4osWyhkzJbiXCp4y+lfit2RKvMUrv8c5ps2LZSOS9Pt5u2q942uycrJMxuS25ZoijG33YCkssRildUSmxesnjyPaXdbNqyElEoyk3PaHV2my65l1ht6XbQVO9GUFKlkZJKnrbQm9lEXBtcEi8WHz0dSBMXwKNkKAuATbDc36ovlnYX2XuAbdhHcDAHWPc7obj3Ov9P/hWQD2Ow+9t+j3/APx/PUfdV+dA0v6YtQmo8bYG9fUIw5kbLYWiE39zX3Tjj+05S6QgTXD6VzoQgSgxgMzSvpBO+ggVFyMPIdGLZTVWtXsvzEExpMzRni6tKpceZbta/bTxthbBDLPBL/dZRlLggJ6fmLpmrh9SmxF7GsXHjSIZoTL1vA3U5lpW4IODbkYFc/2NCUyhVCrtzCqc2p1FPW3KPIQpIcUXClRSLqSdmZUeolOaiBtyG5Z/DLPOZJnHNS2TsAIF89l7Dafadu5OcA5ibCICA7exEecAH3SBuG5u4HWYuxTCbc2wENq+dkT8bjOPDBSxhhDThYVqqZYztY8/c7XMl4nUm47GsXFzB4ARtywTN0oy47o9FnXfRcvcbpW2mC7Lo5mzbmaOVEUqzeHL2V9q71Parsd6ecvaRMKXdH5buhO3YF9gqWvrH1y2U1TTfyUrdM4jkC6spwt3MYCCZPJyfIwNZokjY13JN1l/QxYo+ujxpeMGXi8ZTwxkgunn1vg4eseesk
IQuVhyyWf9OJ5SdJJemIY2xqMSZn7Nsq0VZyPTn/s5NdFHdmG7UfQeqNV2lqq8jLuyGt6XNah0A9FwTLbCXUIWlxzFNMtKIQlSLItbVkg5Uk2j/aSXgbdHVqAL3M4ptKWFDLqgIDgzsrZuN4+sRp/uGXu7BWFbuuN+pKXFdGI8cXFOSipEU1ZOWm7QiZSSkFEm6KTdA7168XcnbNQI1aqrKIpJAAcxvXv+UHX1dzud3Yf/ABrUS4LvZHPr7s+YG0DF0bmxWDHEL2OHLA6iBvg6o4exoC4OQsj1EbLKh91BoAqRkgvFMsV6PMcV5QE+RbYq1x67dN3DvwRNagdTV5ha9nxrokNb8FFpEkbyyLd6yaikZZGPrZO4YGmJ96LZw4NzuWEdbzFpIXJPvIi3GczJt9c0goc9T6oWXG1pZqMxMPU6Wl8K3V4XwlCUCXJUVHGggOjiNthEVIJW8pUkq4mpVEu3MG4I6Q40laDfhhSrPZe5FriJh823tg2/T9VOYojvv19Qe7/X8/h8NfO+z92aJqbkbrcl0t6R8DWRY6Dt2kyUz3JZIyjd0uwKqYkXIOUMe3piCGtp06akI6cwSK11tGSq526E7KCmd8rKbQn2Y9A3necDYfEC0/29i+Fm3IsnedMCu7mkLZtdw6URQar3FiG4Vrju4kATpVlpacty97pn45BApWFmTAuVVmGcnQXSFTKX1S7Czhuhp1aVzIyTYdQlOIi4AubEHZlF59l5gErJUE2JwjFe1jYDac7Zb+VxG83y+yA2/cDbbx+X/wCdY33Py+4Ab7+Eer6hGvy9o3ha2QLSt6+rFuSDvGzbwg4247Wuy2ZNrK29ctvTjRJ9ETULLsVXDF/FvmS6TuOcNFXJBbqAcypjuOatfPjNcf3tRmbcUYZNpLDUCGVMaGyJ90gZ4LiT0jELsnrZLEGijYUyYL0TGiRkyyZZhkJAclL6DARE1a7J02bmJ9uissBNQcdcQJdSkoCi22txQxOFI6qW1G3EWF1EA3WGXJxLzkt1mWWhPzG/qoUlAA2fnKAN7bSbgbdjUAAeYN+6I7+DfcP0VnlDcB94NgrRn4lvZdNz4W1B3JhjQPiDD2SrVxvLurYvTMGaBvO5revG5Y1dVhcLLGtuY/vWwl0bein6KkeyvKTueZbXOomd9EW8wjE28ncGw1wVeJne/FV0nSWoi+cFJYQk7fyHKYxUTi7oeXNal9PYCCt+XlLqtMZWPYTEPEJu580IeJkFZtVrIRb1Mbll1CKEYyczojWpOlqqU1KoaabIU7dSStLZWltKi2CVAFd7AjLNZsmwFiYSZYtB0316kttkbMRBdAvv/O8wJ5Rb8Ht/zE/jC1ps8WX8PbZX70JbX9MzJFbkwe3/ADE/jC1ps8WX8PbZX70JbX9MzJNdnfg4/wC93Rb+8P8A+CmIqtfyPVf7m77kR+OrA9w24gUOU4CI9zYxTEAB8AmMAfnrNA7pREB2AxTG27oAUwGAQ8PMUPHX1uFri5sMSQetgyJAIxZ4b7L7o6SyFzhByOXHK3Z6MojJqmzrkTAVqWzcWO9Pt7ah5WdutKAkrZsc8wm8gGAxkg6NOuBhbQvRYWTVyxaM1Cuoxoy6eTbCpJtlSoJOJMJG6RNM+xgMYhDHA3SmMBTFAE+c50UAFbcionA4KqikKA8rQAMiavjiK6s8i6RMZ2DeOOYaypqWunIKFnyKV7sJeRZpMPuamJY7hBGHm4FZN6m4jWhUl3Lh4yIgdwVSPWVURWb2DIn6ZJNX2PMchTn5TKiUoKAHRpplOHKRFMSKlIU51VQV6f74IcwjB0moIm5/SNgT5mjTqmxLBgo1QkMcri1Acy6bjzOtN8NsgL5yk7KhiTpTvRkt65h4awKJUqxbyUkmycF+qQLqubkkAD+lcie2D8/1DXGuRPbB+f6hqTnf4nN/3Z//AAlxgMga5nL/AIzX+IjsEW2djJfgP9EX+cn/AEvs/wBX0+6HyD9ZaoW7GRAe0f
6Iur4Sf9L7P1X0+6HyD9YV8S9KkLOkekRCFEGrv2ISc/xh5R36oZrsPz2/cLRmlKVr+rc+bX9lXZFLHge/3j0wpSlNW582v7KuyFjw79yPTClKVxIINiCDa9iLG3Gx3QhSlKpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCOI+1J+UT+LJXKuI+1J+UT+LJXKh2n9X9hMD8c/8xPvTGuN2Vf+BuzJ/jTwT/ONFVFDsNf8GvnAPf1sZA/mO07/ANfzVK/sq/8AA3Zl/wAaeCf5xoqoo9hrfg184+DWvkD+Y7TuH+2uyKAVDwa6WlO0VmRUN3xWaaf3ERerH8maP/8AUHP8Z6IYdmyf/UXDt2HYfTbVAO/Xt1MsD9Q8vstthEREoCYNtw5es5bC+xmOFTp5wjokw5rOvvGdsXzqe1DQ7nIjG/LtiI245LGVgO5KUYWNAY3UfN3BbVXnLYIhcVwzcL0FwST2fUipWScxUVEx0VXl2bN1wHDqEeoRmNToe9tzM8De/wBzYfFW0jweurhY8PM3uG0g4FDq7gD6nMB/X/wq5JzUxJeDRxyWVhXN1uYlsV7FGOZcW4QRxClXy2XAIyIy9ICsq0YbuQ0uly2stbMJDxSDvICzcgbwDa4BHj/G60J4K1n8P/UutkWxLVfZRxNhXI+TcOZNVh41O87Lu6w7WkLtjmsXdCKKkqhbVxOYVvC3TAovBjZOLXcGcNxetmKiWs52FTeU6S6Ne2OFH7hS1hg8GXinFKLKGboT6UlkG3136CZ1CJoru2CjdB2YAOdymxZ8hDixIJdz3XZ16I9Yo97pb1AgPh/uT3Z1/p/r3K0euwuygXUvrZ2AOvC1jF7vX7HITswfJ1j3O5WJofMProGncsp9bss1Tw7LpWomzjyHUPDPZiLLStuZBOWIxSqlJ0fkphac5avsSjRO0guS7qTe5/PdWM8rE3Gdotj7Kx0zabYfh05H1Fw2nvCkPqBuHOGG2s/naOxPY8dmWdZOhexDlpM5ObW20vN8gtGRkfGKIykyqRWMjoxHo1ESjyfnOxidE+jTM3C/sTKWX9JGmPLOUW+bMrJIZJyXgTFd8361JATUYeCQbXldFqydwpJQhkklIoiUmghGnOIsBIY59pL9lj9XCGu0PezzhT/WUzX9uxPevhA2MIbfs25s8QTbD69urw/JvXOjzc58Aa8tt1xDrFZpyGlFxd20tsU0pQg3ulHXUMCerYnLMmLdXUroOjWK5S/MVVCwNq0LRMBV9yhbPPeBci2UIOzPw20Y6SOo+w6mZYSAYTEHc+LLwAQImo36T2QqcyyZlA6BQRRRTBMTKoeo9iZaEMQ410Ls9bEtZFvS+ddQN338zt7Ic1Et30/Z2KrFuJ1YTK1LZcvCHWt1tN3ZbtyzdyekB2rq5Ul4FpLvHbWDgypeX9mkDvoz0kl//M1J/pxRdYD+cNh/PVufY4QAPBf0T+8Nt5PD83q4ZM8tc6I++PBpWZpt86yYr04HTsIXNVFGvIJvbZbbnffvtVvE05ozJtizapNWKxABRLSzqmEkDIpJOYJztsie+om48PaH8BavtXduY4s225m3sX3jmfI723YGMt6SybcuNLKl1rcLdruKbNnE5NrFZR9spSkgo7kVEVmqCTjpGpQr54nAD4frLjD69c76n9aDmSyrjrFUs2yxleMl3aiZMwZjyhPzL217ZuE7cSOXFml9JLmn7miYw6QHaR1v2muiWCl10T7znHUSeOeERr6KwKsK5cA3IdToTcqnoJu9jHEkJh/9R6XpufRQCPW1Batf7sK6QiD6etbsYko0G4G2ZsZvZREA2fhDv7LnEIZRc2/W09Hx8+DUA22WF5vtvsFrQuYcbp+nFWbSUzdOk5eVps2qytTLoaRMY9xsX3XAs3
Oxv5IjlVX1y1AkUy6ihurVdCX1pslYAKGVC4H5yQQRketiSSopjclgMU4wtWwW+KbZx1YcDjFjEjb7THELaUBHWI1gUUCMvSZtabGMawAR7dumm1BmEUkzSRSQamBU4GcJfPC7KC4TuM9CF9Yd1zaPreb4dx9lK/zWnediWIP3OQOOM1RccretqXdi9rDJtkbUirqjYWeXViYb0virVuO0k1oNqgjLt20f9H4AAgCBd/ZbgOw9wNhHwAAdXj2rVx7LrkoJrwpYxpLAVSTltUGJG1uez5DJSiNt5KfuFADkP0u8CymkuTqD74JwN7ACm1+g1eoSekdIm2phRVPVJlqcQtZUmaS8oNm6djoQVh0Oi2EotleJCjJS68qmqQlLb7K0rZSAWEIQ0XC+k5/jSEm4y2kZ7Itp4QGrmX1x8OXS1qRupcjm+LxsP0iyM5TSBuR3kLHkzJY+vCVFuYRM3+6Kbtt5caKBfvSTeVT6P+xzNhH5p81pN9fF2QBmHSu4lnkFD5f4hGf4a6ZePOgnIR9kxWWr7ue9nMao6avmqcsnacFMek67pk8bN5UWThdm7TTFsrvN9ioNpNvwdsOLSJxO1kMp5weQyfIBASi0sgSTBQoACRebnmWUwsBuc2wK7CHuBq0cNc2/ZZtyB72sniCh/wDw/Uh5A+quxKEwzKeErStmXaDTLVK1gb2gvqImLW/5678RuiLlHHpfRGsKSojos0WWVEklJQ7MtoN8ibMhJ2EZAWj6JmnHSrp30jYwiMO6csRWViiwoZigwCItaCas3Uwo1ZopKS92yxUzzV13S+ImkeTuyeknk5KOypvZCQOoKKamgH2XdobwVptz5ps1F4Osa2sYu9S8VlNllO2bPi2kDbcrfGOpCy35cgNoWOatmLO4Lqj8ggjdS7dNuSZkIJrNuW7qYkZiXk/pAgGxBEe5uQdvr6v6/pGtEvs2cB9IOHYJQ3AJXU8Ih4BYYGAo9YhvsYQH83drR9FqpPq0ypU2/MKfceni28ok6t9pxp9x2WUi+EhK22yCQbFKSN0S9BbQpmdlC0VMIpM26mWJKlJcZU2WXyom9syVHMlN42u+FzfdzZP4b+hu/btkFJW6bl0l4IkZiWWVD0RKS6uMbeQcyLsev+znD9Izx/tt/Z6ynu7184zh55S0n4+46mYL+4r7GCdwZczaiEHktkuCXumwLS1HOMkyDaBuDI0KDSSZrW9GvkrjYNJSbh5e3rXuRzb11TaUXEQbm4oX6FPBVAe1N8PvfYBHSziwoiICbb/c433EAKG4j1iUoEOmJjCBQMbfolKsOL72MphjiH5LuPUxgjJqem7UfdaSbq/mMvbyly4jytOtWp27ObnI+OdRU7Yl4SYlZIXFd0K7uWOeoNUnzixl59+/mnczK1KUomnFecnNZKMVFmZljNspKlS9n+kqcAShawlQeKApCScaE3SU3KYakqamtHn6ZMzCmde+Jhp8HCXFIeUxqFEZBBbAKQLgpxA7CTsG2Reul/Vfil+xx3dOBdSWELgj3NvvmdmzNg5bxnKRThuePeQUgzhns9bj1mKZlGy0WqgZJPZw0XQaKJnWHWJxZ2MfduBeLQy1xac8/wCOcE6cbBzdbWVcZYfg7Tu+9L1G2n7GPHKmJ1fRTy0oa0LdnfRt62baUlF3FexIe1ZWM6aNVdMXMYpq/wCWOx++N/oKu5bJOJsUX1dru0/vsNmTRTk97PXQRZU3RrBa1vW6/s/UIVdE4JFVXQx41TWVOiVBZyAkMacHDD7JZ1y6XM/2rpx4kdwXRljDBrkicc3vLZYtk1uahsFPFXTeLC5Ji4VY2Fu+8koF06M7vqKyU0uK+F41Jd3BXCyfMCMJOXp1Cek3X53RHSJE4p2SmJaYlnltuuvh9JSQsJW8OlWcKWlPJYIClDEErsciaROS9PmZdwJm5LGnGRkrVpAx6km41ZQnAuxFkhJSMQAG3N2QRrOurRJwvM5X5YEwrb2UMnOIfAGOZ9
o6XYPoObySSRRnJ+EcIuwfNbjt7H0VeErAv2hjLMbhYx00Q7cW/ocKIuxZODxp6vPT414jWpDHVs5kvi/bxuSGwDbd8Q7G5rOsG28ez69vyt/Et6Uau4yRv+Zv2Lm4+KlZOPfBarO2GMpAFbSMw9fNJNdmVEkXPDv06umIHUiCavrcGUXRBMWx1VsPZlUYCoAFMCpDGOsRASFHYiKqSht1CiGsPoZ7Gj1xcQbS9jfVZhLMukGGx1kpW6kouFyJfuY4e94l1Zt3ztlSzO4Yq18A3dDMHRpKAcu2ZW9xyfoqJdxz7nRBwKJcfQyXaRonpBMPzgpU69VTTXahhUsSbEgW5VTakPLbQrpBDyUlpaLOPLUFHLFkzyWDTaFLIeKmH0vzq8rdO1ylHU3B6ol20Ni+10IzBBVf6kOS8NYozVY0pi7MGM7ByjjqaZiwlrMvm0YK6bVettgE4KQ9wMpBigdI4FXbAIdMk9TBcxjGOUyfzTNbGKm3Y8PHRxNfmnOSnYrBkkaxc0QdpOZSQk3qWDciT0vZ2W8PSj47oz25GDNS3L0jrXeSnTO26Q2rIv3MjOQzmQX9EN2GfxPNx2zvoM5Ng6jZP1BDtt8ul0QDw/IH5uaPYafE9RMVQc76ED9GIHKRPJ+oEDHMG4lKXfS8mXmMblAnOoQgn5dzk7oZOjknR6HU0TC9MW5uVfl1icphYQhLrCynVuIUZtzAtpOIBWrOIKINri9gqll02ck5g9IQ80pDQwgGXJAKdxuQdwtmQbXzjex4pz1q/wCF1r3kmSqbho90ZagnrddPflXbOcR3Co3WDn++cy7QEuvuG6PpRAorbB8//sVPQji7Vvrsv/J2bbMgcg4+0u43bXtHWjdDFvL229yjdM63hLDfzsI9RcR8uzgI9heVws2r5uu3Rn4mFkDJmOyJtvj8QO2ZyyeDnq6s25XrKSuC0+Hzlu256SjFnS0bITcDgeTiZd+wVdNma7lm9kWbpw2cPWUe7FIxeZoXnPy6qfYUhua+uIYce4e2tO235pjMJNv/ANu/v7e5Udoi6iWkvCTNSRDgZ1jsu6TsLDdpdYOexogc8t8UmtadFaeh1ZZcfqEow6lJ/wCGoS6HwDc7QSnjY557N6K4MN4juu67Kvu68Z2JcN7Y4SnmliXdM2rAyFzWayuSDfWxcrS2ZozFR7DoT1tyLmLm42NO0j5FsomRdAxUS8vzquy1tNmnbTnqQ0iQWnnAOFcBw1zYbu2UuCHwxiuw8Wxc9KtL/UaNpKeY2PDw8fJPWrQhUUXT9N4/QZG9BqueUREv0pDdwfkH6q+eH2aIP++n0Uj72DLz2H5cjOCgPiHfq+XrqB0Mm5z4VaPtKmXFpmZyZcmEYyQvDITjiEqGYUEuWUkHYcwARlmU4WTVUdWzdImi0bXscLKbmxGeFROVs7i2YtuT6H9E2jXE2KdPGXsU6R9M2M8ruMHWE5Wylj7AeLLNyOqtcePIctxrKX1atrRt0nNcYu3KkyoaYXdSjdwukq2UIqoolo/8Ze48hcXvj/464ecDdi8RjHFt9Qmnq2vQphfs7bMjGt741EZCasgRcovblalZT0eUFCJJyrSwbaj367VFNVyn9B3S8ADpm08cwCb+4diUOUPbHA9gW+TkL1h1nA/L/wBb89fKE1B6Qsr63+Ojqm0n2Tclj2blbL2tLVTHwM/lOUuCFs1vIRV4ZHvNNKZkrWte85xueXjYM7KJ9AW1JHVln0eisVugqq7Q2LRZw1DT6e6WgvGlylYmJXFdzVKfni0FBBWMYlGm1rw4gAHALjKMWlBA0ZqM4lwMuvSlKbdfIKi1+IBU4b3N0lJBN9hINwTH1O9IGh3S3oVxfD4m0x4etHGsFHxzRCTm42MaK3xfUk3TOmtcmQr1UTNct5TkkoioZR5Nv3YFbj6WxDRrBsSRzOn/ALII4ROnPWJo7z1qCt3GdsWZquwXju68t2xlC0LfaRdz3/GWFDL3Bclg36EeDM97MZ
W3oV+1t91Klk5i2ZZOOVgVUGAScHNavP6jQ4nJy7lzxoO6hDqHJmoQBABFQCiAl0vqIgUwEExQTWOJwEDnMc3MCeC9ho8Tsg7+rzoN27pjDlDUEA+4IAIjpgKXbmAvKJzAAm290QAeC6ZIKnm6qNO0ibbcu070dKU4kOpUq9521nAktLSoG6SWyDYiOdNUxTnW0oWVNlOtmWzmZ1ChZaQVA2UpKiUkAnEnLO97pew89Zl35d0v510jX3LvJkdMN129c2MHMg69FLscbZWCe9E2m3OoIrGjrYvK1piUZCHMkiS8hYpdC0ZskCVWdmdFFTWvpUTDYRPpiMQNw6gEcsX2UBNsUw8pd9zcoc+wDyiBthq7jsfTgb6weFJnTOeRdQuSdOt5WjlDErGyYyOw1dmS56aRuaOvCInmj2YaX5h3HjdGORjG8ygZdhLuVCunjVMI1dJUzljSV2Zl7LW3pPDbrHTOIdfgyvfo9fc90ob9Ye71VluP02d8Juj0xIupfl5hhan3UKuH30yUyiaKSCbZlCr3xXA2mKaPXaVpGWgW5cydRmJYHeysSagLWyQl0uhNxuUTe9ztj8CzQZiHRlw69PaNvWHbbLLGbsR2TlLON9OIZmtdd43HkuAQuwlvTk0DNN5IWvYzC4iW1b0AqU8azh2qzxZspJzExJyds+OMX43xBbprOxVYNpY2tIZu47i+5iybfi7XgSz11zj65LklSQ8I0Yxib6bnJR/KyK6LfpF3rtwdc/SD7L8LpfLvpn07+6IYNxIYPlCwLeHr8fv17sA7/m3AQ8PV9XXWi6SVCama3V+kEhZnZhgoubGVl3y1LJtcJuwUC9hvO3OIWQT/AAGWLhKnC23MArJUrWTLaXJggqzB64G38naAe3/MT+MLWmzxZfw9tlfvQltf0zMk1uTB7f8AMT+MLWmzxZfw9tlfvQltf0zMk12l+Dj/AL3dFv7w/wD4KYxa1/I9W/ubvuRH46sh3fzG+oaxTxbDuBjCOwFAQEAEffATcpRD/lV9bsvztljf0R0mNo/ST+0LxSnxwQ3wLhQPx3svD/5k3TV0bT/6M2/wBP4JKqU4wmNMjZMwviSKxzYF639KxeYm0lJR1j2pMXM8i45Gz7gbqPnDaFZPl2zEi7toktIOUk2SCzhsmqsVRdEqltzQvI0bkEpkzAigc5TlVAwgdIATTOc4JgdZASKich0CqIJOG6YiACYB0jRZp5utacKeFkLrNN1J43p52HfxNs7mNkrDqV0qhJSoEol5jEAbkfjmhcjaLgHbtschnH9qde4bb9ZilHlKJjiU4gQxEy9EsUVlSmFJDnTEemUT6BRs79DukVcie2D8/wBQ1ts9boU5i2dFmL7Pml228419kkPMkbdc1b1iY/R8DzhFabNXfC00qagsmXrm+DvS92GW4yUjbGuKwou1kmuP895TxxCKR7SdxvcUx068LabFzJqO5p2Q8o4enZpMWJmse0tg/U+ujHfb1S9Tfu/+eWLvc2/E74a/y+xlA34IOiMfe9cl/S9z/wBdXzdW/h2Hr8AbeWvjFpLpHW2dIK8w3UFMMt1eYS2hCUlZAdUBhNrlVufER6BxrSHLblt+gAAe4nlvOUUW/qfTRj+6Xqb/AO2WLvM7T9T6aMf3S9Tf/bLF3mdq9OlQnwmrn0pOfYTy+rz93HJr3OI9HfgPRFFn6n00Y/ul6m/+2WLvM7WDdj7aMwDqyVqbHrAwh92eLAAejEFQKbnw2YokOZMCH3KbYphNt7HcL1KyHdD5Q+ug0nrwzTVJq/10pwZ2viyGVjt3ZHfDWrVYEixIBy5jsHojWy0N6fbV0icXvLOnvGdx33K2HGac1JI6l4y8a+m5hSWQxhcfJMKwlv21FyCUW9kHyUO5JHFVbtFXSJTgVQ2+yUHWPPuUefmPuUvUYpx9gIn90SFKBRD3dwH3KotsH8PVmr/Jhif5O4kq9T3d9/c7n+2r+krxdmae64oOvO0mVLiwLAKKLk
2tvO/K/pEUe2JG66T5+r3MKUpWtRbhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQhSlKQjiPtSflE/iyVke4PyDWB9qT8on8WSuQ9YCHv03njdA85SkCKEElwDbdXujXG7KtHbg3ZiH8aeB/5xoqoo9hrgHa2M5CA7gOtnIBvHg7Tv8A7RrYT1/6FcQ8R3TXc2lfOFw5FtfH11T9o3FIzGK5e2oO80XtnzraejUo99d1oXzAAg4dNiovk3tuOhM1OcyC7NcibpLznho8MzA3CuwjduBNPd3ZevOzrzyhKZZkpLM8/Z8/cze5py17LtR8xjn1j2DjmPSiSRtlQxkSuoRzIlfuVgcSRUjEaMd3pNZkJTQnSKiTBPTZ+elpqSGEgalluVDhuBYnWsuAjaBhOwxkT6ulyVIZYzElNhx3ba2IlV/OobcufHVj7NkD+0PDt3HYPTfU9t4RBpgX3fkHud3uDW0nwew34WHD3/yRcDG/+HNuht+ivOuKHwddMXFqj8LMtR9854stHBbm/n1pGwjdGN7aUfmyQnZqcqjcRr6xrklNwmyJY8cWHNFNoYpSqSPohSUVMmBZ76asEWhpdwBhzTlYEhckvY+EMcWhi60pW73Mc9uiRgbMhm0HGv7gdxERAxTqYes2bdxJLR8RGtjO1FBJHMimKkbG8by3wGZojh/2h46mZ9QsQA06tK08jdJBy2G9+MXau83NvUFxg3EhTDLOXuLlWK2VwSbqGeYsL77x+C12bDol1ij1+x0s6gQ/P6lF2fVvWj32F2ADqY1tD1j/AHFrEEPzZBdAA/o6/D4q378tY5gsv4tyTiW5nUwxtvKdh3hjq4HtvLtG0+zhb2t2StuUdQjl/HyzJvKt2UmuswVdxci1K5Kn6KYu24qN1KrOGLwP9KHCjvvKd/6dshahLzmctWvF2hcTfMt24zuKIaR0NL+nrdzBo2PiPGj5B8q8XEiyrxaQZehBIRJukqYqg2tHanLSFO0rlZhVpiryTLEqmxIVh1lyDsuCsGxOw3OYJis28HKI1I4SqaTXGp58ZW1SES67nZcWaVbLaRs3xK7K7i3j/g+ZHetkyqN4HM2EpaS5yriBGStzrQiZgFIvIUTyMywR3cLN0jAqKaah3Z2zdeE/Yt3EI0W4z4dLfAeXdT2BcMZZtLOuQCoWVlzKlm40nbnjr7NAyVsydqs75lIQboLJPXD6HBvbJ5VRo7i00pBs3dOmYqbU2p/TbiLWBgXJmm7O1uFujF+VLeVgLmiyrmZvEujcISMVMw8iUBGKn7emmUfOwMpyK+l0xHMnfod10XoZXTrmuwrcepZAUnLD4ht8WzYzeZRfQlt3Bpzg7mvaOYILg7bt1shwuZ7Ug3Es3R5EwlEMds0zv0QdBEERctxqS0WqNFVSK1Qq1NKkhUqoxU0zIGSW2mJVlWAgEAgS5uVAApWLEEGE2tqapdOWmYwO0wzSW2he6nXysIFrXUDrjkLG4BJIGfrnZoIG9ZhpIMImAC6mpUoc4HNvyYuu8DCQ6i5j7lHYix1EjAuqAqoqlTAxVLcexxB/4GPRMH/s5k73e7/dyyb1be73N/F1VILiTcKLAXFRxBirDupW/MyW9D4luwl8xE3hmcsm0Z2YuBS2XdrOxmgvTHmTIw8e4aPHLoqEe1aLFdJoGI89DkXbuZL6KdImNNCmmbF+lPEM1fNxY7xEynmFtTOSJOCmb0eN7ju+4LxdjOyFt27acIo4TkZ52m1CMtyHR9LAYeiEXC5QWCxL1Omy+h9XoDMxeYe0gmpyVVY/jpDp2Nmx2EKasTYgg5ZxgVNLs47o6psYlU2TUidA2F1TRsNmX4xah7gY9K1BYatnUPgvMuBry5gtTNGLL+xZcKiZTGWbRV+WtK2w8eNwJ98ByzSkzOkDJ+
zKsiQxfdr5jvC+1l5J7Hp4mOasJatLMuJtjWfd+pJqCiYJqdaQYt4OVWksZ5vspk9FopdUGzayTibh0E1U1J/H17TD2OTcygx7Jf6n5jf3/UBU+YxjCI7gBCmMIlIUBOoPUBQITZTYwmJzGKBDVacSDg76JuKNAxSWoqyJeHyVa7H0qtDOuLZBja+WbbiOnF0nBGmJGKmoa7beI6cPV0oC8Im5oyPdPpF9b/pFIyjlxWDozXG6LMz8vNSzk5SqsylifYbSFu4QLBSG8ScYFyVoCklTKUquooCDKIckpyRmZCZHR2XJhMzKuEmzbyCg+QAKCVDJQuM0qzEf7EFxmOFHc9hMcjseITpMaQL+KLNpxlwZssu2r8TbLolXK3f4yuaVjckMpVIvKg4gZC3U5lusAkbMCqGQA2h7x9OLOjxi9Q2D9JWiSAu++cJ2Bex43HygQcgwubP2aLyI1thlPRNrSzRrORkFFsVVoCy0J1BhPrDOT8rNMY5JWNZMrVJHsKCyFLqUeRfERuZnZgvirkgHmluGkrmSiTKqmRjz3ahnmPj1pLoiAgpJFsc6ZVzlFeJcoqGSJsF8NbgT6EOGG9Pe2ILXuHJGbXCDqPWztmR/EXJfcWweNwavomy2kPGQFqWNHuEHLxoq6t2DC55Fi4XYzs9NINGxGexSbmhdLnmay1OztSmmVF2TkXJYttS7iwW9YQWWQChKi2lWNQBVcIUtIUm2mZclmpluTa6TOTDKmHJog2S1hyVt2EApuBiULizSSSJYcMvSOTQtoU0z6WVlGLmexVjxi3vV3HDzMX2Rrpfvr1yO7YK//aI5a+binxjXI7GVYehdw9gFaCvDWAB7LMuQfe1k8Qfx/chqPDr/ADV9Mwo9GUSm3KbbZQScxhEDJgAmAADpCpffVUkSqgB1QIqoHIUnSFo2wn2P7ozwJxDX/EttDJWpqQzrI5KzNlNzaNyXhil/iNO4M3xd9wd0xiVvRWFom8vSaMQvyYC3G/3dA6SdNIpVWUlkyKt18TRvSVmX0rrtfrThZZqMk7KhqxOOZccVqxcbBqcDd9hwZG5vGOEYdGp2mtnFOPpZXfLNZuScgNhudnHbF55u4XfuCCYeDuAI/orRK7NoEPSDh2eCW1PCHvdbHAm/d8Hc8PirevIIm3AdzGESEMJRAfZgoYoiKqRUSAo4MBgJ0SKXRuAUIsCPOJhqi4ofB10ycWpnhRnqLvbO1mkwS5v5/aA4UunHFsC/PkdGz0pVK4zXzjXJKTlNiSyY4kKaKaw6Zelfei1pFQya6WtUOYRTq7SpqZxFpmZXMultJKhKobeQkgC5vd1N7Dqi53RI0uaalTUHV9ZD8i9LNDO5fmC3e2+2VwL/ABstsdjg7M5ST4PuhWPgplS3Jl/pEx81iLiSasn60DJO7RKgxmEmcgg5ZulY5ydJymg6RUaqnTAj0ijEzlI+mroc472tTQ5xSMn4/wCLvm7MV/4+aOLtwRlOJkmihoDD12w10MnMHlq1cT2fEREAvFEGMMi+cWhaxZmasK4kZu3Up8raIiJP6BGmDTzZek3Tzh/TbjmTueXsXCFg2/jy1Ze9Hse8ut9CW20MzYrXG6goq3olzLFTKf0WuwiYlqk53WTapimQlVt8THgS6FeKJKNb8y5BXVjDOkfHoxaWb8PPYaBu6ZiGTZwziIa+YmVjpe273imCQIFi3UpCo3VHsY5pFxtzso0kgze7GxpDTUaSVifn5YVGk1iXMqpxbSHJySABCVtglC8sWJaW3E4kJaWQ4W0pMVIsy/ihdLmUll5DxeYdQbFsl5RulXWBtiJCrHr48ylRiRMTxe+FhL2uyu1pxEdGLeMWYJSSLOX1GYrgbnK2UTIqkg+suWuRheTR+RIRSWi3ECxfpKCZgu1W6XmP89bjV6hcV8ZTi64ptfQPBSF9EnIHF+nSFvhpbLyEPlq80rvuiVlb+Iydx7K40bStmLuhtDjP3OwjFG9u2Y8mBj2sG2Reur
kzdhO2uWeB2TiPT42uL4OaFDSewGeJHnU2BkW5vXCFjjPFgEETSCFoi3BU5TKQqyAqELsNcNLgZaGOF86c3jhq27oyNnCSaOYqRztl2RibivlhDPU00JCCsZtCRcDa1kQjkvohu9dW/CluiTbKiwn7gmm7Vsm1kKc9ofo5PorknPz1QnZOWcck5FTGraW7MoQEuOhTMviCNWAQXFJwFSQ2tZCkZa5x2Wl5hiTIf6WyJVaki7IxYU4kZ5Oi5URkVEFN7EmOfGi0F3Frj4XmWtOdjNSzeWrKt61Mh4ebkKiVWZv/ABSmk4Rt9gD1RIG0le9u/dHZ8esu4bAm7uNBw8WFmgkA6hfY5vHIxzw8Wd7aGNcTm4bDxFI3/K3BYORHsDMSA4Wvx+UsTe9l5Etxm0c3PE2pNyEWi79HRUC+k7VulaYLcMf6VSbmTt76P4iJt+T2xQIAgUonL1ELtzCIc6aRFFRIgVYxRX6FTm5kxMRShPiTdjqaBeI/eUrmGZZ3jp+1ATXSrT2VMNKxDNnfUgVuVu2f5Ksa4YiSgrscthAp/T2GVte6JQqRWstdL5s1bIoQ9C0llJY1aSq7a3aLXXVTLgaSHOhzbmErU20MK1sKWUkhxYcl1oBDakrNmFmYpkrTJh3VPUpwKkZxJIJaUoJwAgEDCQsZpPVWUEgAGJhXdxkuFLZNkPr+m+IXpHfQcfHHlHMdaGcbHv8AvRRuREqpiMsZ2DLXFkCZky8xPQ0VGWsEyuqiCaJDqpqJ1qXyvZK/EM1h8T218I8MW3bMksHZDuG3MWYtxzmfFzebUnPQ66x7vzhesjb0lAX1acYViMpOuItG9yQ9t2Fb7R3KxqlzEmXR/V4DsJm0GlwtnN0cRe4561QcGM7hYHSzEWxcK7PnH+x2dzP8/wB5RbR0KJjJA8XtVZIqolcegQ5ASrZR4cHBt0R8LqLlFNPFkTE3ku5I4Im7M7ZSkGV25XnIkr8skNvt5JjFwcDadvJu02C60BZ1t200m3DCKkLjC4JKIjn7OSaOhFFXMzqXpitPMsFmWkpmW1bCFqscCzqpfCDe+IrUgC4abXcKFl95DcsqWl2xMPODAqaOxu9gVk4srXyFlm5ALqRHpnFQK6LwvNfxHqiCzsNGOoUHizVBRo3VdDiS5unVbtFl3SrVusrzqN2qi6qrZISkXWcKD0lajfYUQga9+IV/0a07jt739ucw7fVv8vX7tbx2oPClqakMF5f093y/uKJszN2Nr1xZc8labiNZXVG2/flvv7bmXkE5mIS5IlrNos5FZdi4k4GXZJqkL6Ij3iZjNz1zcLvgs6WeEvL5kmNOV/Z/vRzm+NsmNuxPNd144uVuxb2E4uRzDLW8NhYsxuqgs6UuqSJIemakuiJG7UseVoTpln0Zo7WJCnUzTJiaUptyty60SUqwk6poqYSMClWw4EkgkE3JAO6xvTJQ5RJGmpdK3JSptvucScbSyb77hu27j5bfDB1b90QEgAHX1iocqQ9zwKD4K+ep2aPGO2+onQ5PqlEI59hrJcYzMUQ39Ewt8w7t6PsgHf7xcDAADcoiYQABMYQIb6FRdh5gNuGxg2MUQA5TbgBAJ74nMBUjB3ihh9wRqrnik8JvTfxYsQ25jvN7q5LQu3HMpLTuKMsWMaNJddkyMs1aITkctHyzCRj7jtK5ix8Z6eQDhNso4XjI92xkY9+yau04fRyot0qu0SrupUZeQmlqcwjEQHZZ2XC8JKSpLanUrWkEEpSQDciLtNcYaW+04Qnp8jNSxUo5JOsaUD6U8gbDmS4cHEG0XaldPmmCy8Sam8DXTlyRwJYBXmD4zKtlq5ihJC1seRCd5RLrFruXb30gFpKMJAjxZzaqTJNmxUdqHFiB3RNJ7sjPT1m/hu8W2xeJNhpg8ZWfl297EzRYd4lYOV7bgc9Y2bwxr0x/dKrRVBNye6vSBlezpk7dtS3XBXZczBqV96QzazW9jhodi5ocOrWfivV6111yuVj4rV
usG2PG2nIuPULgYXjZdzWa/jJe4jZ1vhT0CizuNZ0QBt0BVdMCrKJkAiaZdmDULp0wnqxxHdeDNReNrcyvii92yTW4rRuVA6jdY7bo3LaWjZRo4j5aDuSLWAXVuXFAPIqfhZBJJ9FumKyIOkdlmavTaJpHI12gzTlUTNpqa6pJMslhwNTUy2VJQ6pDOrKFjWAKU4FYMJUm4WMSnlDLU7THxrpKZal2C6rPGloHE+RvwhKUkgC+JQAsIqX0OdkPcNDWBimBu269RmK9LWUSxbQ1/wCItRWQ4LFy9t3AKRUpNvbl93i6gbOvyB9EpKuoORt+Scy60Odj6fW9bMu5cQLKJ3Fy7JP0f6XcD3vZejDO9h6htW92wLiDx7L4ocW9kvGGL3Msks3DJV3Xk1UnMY3C4t9uU76Gshm7uZ8/nTxbS67fYwB5gU4aZ77C7023jc68vp01mZSwlb7lw7eL2rkbF0FnUWIvFzroMoKXir1w1MMotkmf0K1RuM9zzBkW/K9l3joV3qnpelvsOPRXiq4Yu5dTWe8sapQiXwuS2fDwMbgTHk+gZMyKTK5Y2Cnsg5Ddt0VlCOP7QZUtJVRZFMzwXEcm8jnuS5L6AzcwmdXPz0q2pwTD9MwLWH3cnC045gICVqsCEuqvbCXkpUVG41qZBQwnxmEC7SCLlojMWzA6oFhcZ2ucyqJr9jpa7+InxCtP2SM560GGNj41YzsdZmEb2t3Hz+xryyfOQxpUMlXBJgymS2K+tmCcjA24yeWracMR1c6F0MXByLQK6SuuP2Zmb/fuaTRN1D62QBH3+rK19CP1dzw7DX0OrHsWzMZWbbePse2pb9kWNZsLH27aVmWnDsIC3LbgYdsRrHQ0HExzEsXHR8YzQSTYMWLVJFuIjzHAqyJ1KheJlwKNInFZyjj3LuoXIeoyz7jxrYauPrfbYXuzGlswTmCGembk9ETLO+MRZElXL0sjLOkk1mkqwa+gDFICBTCU1RcrXaajTWnV1qSRSqVJFxostALU6BLOoQ8+EjILWpF1NDVJvhKeoFK50p3orVSD7mNyfkpzUS6f+CX1tLwAk5GyTcEZk3tbM2baXf2tGnj/ABGYk/m/t6vda/HY/s+Ox5ZFnY/hnD93DWNalu2dEuZVRopKOIy2IZhCx68iq0asmzl6q0ZpHcuGTKPaCqJuVmnzEAn7GtZn30zdRnJtu5ZmJiZW2TttMvqfG3aePCIuVbW1JyrbnxhLtpO7OXShj92zzZ5w7oCH5P8ADLWmxxZfw9tle92oS2/0azMj/n939Nbkph2DcBEogBuU3KBgKflNyHNzbEKCZ+VQTKKJFAShsoBxIU1a+szhDcOziD5EtvKmr7TmjlvINn2Y3x7b1zBlHNFgvWVmMpqYuRrAuEsX5KstvLMmk9cs9IsHUqzduEHMtKppOCpqgUd08FumEroNppSNKZqVcnEU8TDypRpSULfwFKcCVLCkAm+1STbLqmE5KCdkJqVU5qkzDamy5g1mAKsMWAKRisbXTiScwb2jXRpVuP6mR4HnwIx+klq+8/tP1MjwPPgRj9JLV95/a9k/659A/qXUd3/3kpy/s/o8o8+k/AVn6Y//AATy/tnl9vKKjqVbj+pkeB58CMfpJavvP7T9TI8Dz4EY/SS1fef2qD8M7R/MjQuoXNiSJuTzOVif4Pny8o874DM5XrHpkTy/tnfPlFR1cie2D8/1DVt/6mR4HnwIx+klq+8/tZDsZPgfk9mXRGbcBDcPXI6uhAwbgIlEVM+bEDq3FQqiQpgXmFQSAdM9t/8ADLoD7DzPwNnk65lxrE5NSy0JLiCgKUlDAUQCq9kkHZnxuM6EMtutr8aF7AoK1Yk8BWU2NsXS1AbL3sd+WyHYyY7cD/RKOw9zUl4vXeZ/Hr7nufp2q+avDNNunDC+kfCtjad9PFkN8b4cxuzlGdm2U1mJ+fShkp24Ja6plRSZuqYuC4JN/LXJOzUzKSEjMvlnb6QVOKpuQNvc/d
8Hv14Lrc43UatUZxBxNTk+7OMMm12A4sqGLPaAoA5kXFhewjeSoEhYuLo1ZvfbdPtGy+y5PCFKUqNikKyXuh8ofXWKyXuh8ofXSEUVWD+HqzV/kwxP8ncSVepVFdg/h6s1f5MMT/J3ElXqVP6QflKX/wBIlP2BFxXxR+j+9MKUpUBFuFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCFKUpCOI+1J+UT+LJXKuI+1J+UT+LJXKh2n9X9hMD8Zf6R9whSlKQhSlKQhSlKQhTYPepSkIUpSkIUpSkIUpSkIUpSkIUpSn3ebhCFKUpCFKUpAZbMvJClKUhClKVUJUr4oJ8gJ90AOA9EKUpXLVufNr+wrshY8PZ5P8vZClKU1bnza/sq7IWPD2eT/AC9kKUpTVufIX9lXZCx4ezyf5eyFKUpq3Pm1/ZV2QtbdbzeT/L2QpSlNW58hf2VdkLcs+/77QpSlNWvZgX5MJ7PJCx4Hv949MKUpTVufNr+yrf5oWPCFKUpq3Pm1/ZV2QseB7/ePTClKU1bnza/sq7IWPA9/vHphSlKatz5tf2VdkLHge/3j0wpSlC2sZlCwOJSeyFjwPf7x6YUpSuEIVkvdD5Q+usVkvdD5Q+ukIoqsH8PVmr/Jhif5O4kq9SqK7B/D1Zq/yYYn+TuJKvUqf0g/KUv/AKRJ/sCLivij9H96YUpSoCLcKUpSEKUpVQlR2AnyAn3QhSlK5atz5tf2VdkIUpSmrc+Qv7KuyEKUpTVufNr+yrshClKU1bnyF/ZV2QseEKUpTVufNr+yrshY8O/cj0wpSlNWvPqLy29U5eXKFjwOeznClKU1bnyF/ZV2QseHfuR6YUpSmrc+Qv7KuyEKUpTVufIX9lXZCFKUpq3PkL+yrshClKU1bnyF/ZV2QhSlKatz5C9l/iq2cdmyFjwMKUpTVufIX9lXZCx4QpSlNWv5C/sns5iFjw79yPTClKU1bnyF/ZV2QsdtsuMKUpTVufNr+yrshClKU1bnyF/ZV2QhSlKatz5tfH4ith2HZCx2Wz4QpSlNW582v7KuyFjwPf7x6YUpSmrc+bX9lXZCx4d+5HphSlKatz5teYuOqrMcdmyEKUpQtrGZQsDiUnshClKVwhClKUhHEfak/KJ/FkrlXEfak/KJ/FkrlQ7T+r+wmB+Mv9I+4QpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpSlIQpv4dhHbYR9/ffl6xAPvm3R/9elNtxAAEAERDbm22HrAduvq3232qoJBulOI7k8d1oRTzxjNbGZNLGIsOYg0qEhktXeszKrXCODbhuRGPe23jFogwVuLI+XpyHlEXLKcbY8tFkqZpGLNJAhpiTjHikRNpMlIWQpgPwW9KGSEUJ7WFP501rZcdmdvrlyrm7O2Zmr6TnZJ0o9lHkHb1nXxbsXacWudZNJlbTdR8lDxiEfHrP5R42XdJzT4xn33iscB5MRMKai/E4UFP+96Qmm/F4AYPfEAE319ypRiIbAABsAiI7eEoFL/ALf6713zoNTZSVoUvMtsAPTKi64rK4OsWi1+AwXAysTx2Z7GYud4v+zFQnaF+E/8FUPn01LeH8cXhHxjTtC/Cf8Agqh8+mpbw/ji8I+Mat6pW62HAd/uHojIsOA7/cPRFQvaF+E/8FUPn01LeH8cXhHxjTtC/Cf+CqHz6alvD+OLwj4xq3qlLDgO/wBw9ELDgO/3D0RUL2hfhP8AwVQ+fTUt4fxxeEfGNO0L8J/4KofPpqW8P44vCPjGreqUsOA7/cPRCw4Dv9w9EVC9oX4T/wAFUPn01LeH8cXhHxjTtDHCf+CqHz6alvPF4R8Y1b1QR273rHYBP/xZRDrKKn/J3AAL/wDiCShFwbW2E+QAXJ8wFx5BDq
ixIyuBs4kAfu9nCKhe0M8J/wCCqHv/ALOepXu+/wDsw93w07Qvwn/gqB3d/wBnTUr3fnirhxa+IpknQzauEbZ0+2faOR9Qmd7+Nb1m2dfEVcU5Gr2vEtEyTbokXa9x2pLPJtedmreh4YEpxoRA8so7MJzN09vc+GFrYPr30l2dnGbZWzCZFQlp+y8qW1aBJFtAQF7QDshxRj2MxKS8qzZyltvbeuBi3eSkuozaS6bEX4ehuRSsuOlIeXLp/iwUXTbclaUKIyuQFkJuOrcqANwYrMDoimQ/tmFJQ1vAUoawA22EhO/PeBHh/aGOE/8ABV/+OmpbzxVjtC/Cf+CoHz6alvPFVve+wCIhzB1F5RDcphOYpC8/vFA5imEffKFVw8U3XM/0BaV5XL9pxVtXLlS4LstywsVW1d6cm/gZa55R04eyjiWYwslCyy8VHWzDz7oG8fMRbh4+QZsAeJpulAPZdeEuBjF1rKUpFh8ZZSkbb5ZgnlmbRzaaLziW02BVcjkEjEfN1Rfzco8i7Qvwn/gqB8+mpbzxU7Qvwn/gqB8+mpbzxV/p8JbiEX5rrxvl1hnK07Lx3qFwTk19YuRLDsllcERHRsUqmqS33qsPdFx3fMsniUvF3da8givcDgCPLYUcCmBn4cltFZj0s7LhrWWKHW0uNkW34cjY8tm249OO24l4KLYsplwtugjIWsCfNYWOX7oqF7Qvwn/gqh8+mpbw/ji8I+MadoX4T/wVQ+fTUt4fxxeEfGNW9UqxYcB3+4eiLlhwHf7h6IqF7Qvwn/gqh8+mpbw/ji8I+MadoX4T/wAFUPn01LeH8cXhHxjVvVKWHAd/uHohYcB3+4eiKhe0L8J/4KofPpqW8P44vCPjGnaF+E/8FUPn01LeH8cXhHxjVvVKWHAd/uHohYcB3+4eiKhQ4DHCg36tKnUO4G/u56keUC7CO5zKZh9gXfb2RDpnEwgXnApzgbKvCWx/gFmrffDZyrmfRBnW2zjOWZI2llzI16Ysua4GyagIQeV8cZJuq8oi6rSuAyhY2YReAug1anCQUYyiLM8Q/t5oBQMJSj7o7APvb7gPjKIl/wCtXBxpp1JbeQFtrshSSAQQqydhBG+2YtbI5QsOA7/cPREkeFNrYnNeujey8zX3bKFl5ntq4ruwvqEtJichoqBznieU+5m/CwiaazkGUHNrJMbniYw7lyMSym0oz0ZILNHUg7sfrXg7HWEfU14pxC9SaPGs1wpop7daCZoXCpjk39wDmKUfd6yiPdrYfrzZpHKMyFbqknLs2blppCAdoSl4YwP1ch5oi3BZRzvnbnkBCsl7ofKH11ish3Q+UPrqGjhFFVg/h6s1f5MMT/J3ElXqVRXYP4erNX+TDE/ydxJV6lT+kH5Smf8ASJP9gRcV8Ufo/vTClK4nKBgANhEeYBDYQKYNvbcp+lSMQDE5inOQ4iCRj8xDp85DQSAorSEi6ibAbbmOAFyBxjlWB9zYdh3Dbwjv1F6+/wDaf9avFcP6jMHZ7JOepDk6076c209cx9xRcO/FGfgnbN85jXJZq2pFJlPxjf0Y0VbsJF3Gto2WKkdaLVcIkMevaw7obCUBES7c3cEQMA7B4dgGua2nGFp1jSWisizir4RYgkm4zJAsDxIhYE2OYJsfIe+7zGKbeM3rny3pDwlinGWmEsOjqz1eZOQwzhG4bjbR8hbeMmjZiafyJlych5NFyymW1gWs1EGkc6aSDIJeVjHkpDzsSzfwcjrxSXCt085UOW4tWty5s1iZUeHcv7hyZmvNOW05F9NSLg7uQeQlu2xeMJGWnFuxVTKzttA75KHjkGDBKRlBRUdFsu45ZubiRcD8huYyao8StQyf97zkwJiIAMHviG4/p9+s9wBAA2ADG6vDuIVp3hB0krGjkvo/J0OoP0g1KQcqT70u4uXqDpRPzUtg6Q2tK0M4GQtOFab4yVXNsMdPTD7SkpQoJFhnuyIzyts2beA23ir7tL3DT+DUT53s++
dGnaXuGn8GonzvZ986NWg0rrP4f6cf1x0h3fz5O/V/tfI+zhlH9Nmf6SPb3+88rVfdpe4afwaifO9n3zo1XzxCuGhokwXCaT3mLcKBarjJutzBOHr3UJkTLM2adx1ejS9FbltoC3LfUwnGemqkRHf22h02U62BAUWUi3RcuiLbI9VQ8WLf7ndCOwAP/CVaX99/cD0FkbrDw12X4HNL9Kqr4UdB6dU9JavUJCbr8ozNSU9VZmalJlo3JbmJd6ZW060Sm6kLSUmwuMsrjc3MlRHSlDqOZoJSoWbUbpJyBy27r+S0eDcKbQGdQR9QQgB7MS7ZPzWUglMoYCgG+RTmMJEyp7mOchgE4/ex33LjtUegPYQ9QUuw93+6lmvzi1Yd3v8Agyf96lfegaD6HIQpPwYocw7dLjimqbJuuAWudW3hViIIyASeNjaOrjW6uSlCKlUQEtoOIzyrjrG6r3B5Z/fXiHCi0BB3MCF+dHNXnFp2qPQH+4KX50s1gHiDIu39R9+p9TisshCTC0E2ReTSUVIHh2blQEm7yWBoqEU0cKHesUyIupEWrdUVVuj5FDb9EOzlDwjTDc2pO7MdO5TVNj6z8bZHJcj1s0gLKeNnkUtbBY+LVjZNVRne1+oFeOpBSYbqlGZRUKLIS+gUNjBWCjRfQ4zkvJfBGlguyS5zEujyYlAlCkps7N9F/jVzkzssFZEiL6arWeipnfG88Qp5LODp5xKKgTdKcrpTg6x3EpH50R87VFoD/cFL86Wa/OLTtUmgMolH1BAH2RQEC5RzUI7COxhAByJsPKAifYTJh7HfnAQAprDqe6X8ov8ACCsw6F6HkEDRnR1BOWNUhIkDZtBl0A8PjDMk32X4ePa0bBM/UCrKw6YV32ZYb9bf6L74pox3w49GNwcSB3gKWw2V3iZPRAfL4Wj6oGV0ChkcmdGNlGuQk8hfKd0gH3OrLRhokZtSCOYRdnYGeoNnCVlnaW+GiO3+9qL1bAH92DP/ALn/AL0vD7teXYmKA8YKQEDGKftarwpeXfrA+qJkXYfAPNuPyeL2Hip60cn6HsE2blDFFvWDck/ceVIuw3zHIUdcUpDlinlq3bLOV2za2LktWSCU9FwDFBBUZX0MRq4eAsgcxk1Ufjt+EvN6VMfhB1bQ3Q+rTtDk5tyTZp1Kps+/SpFuYW26pThXLHVtNLQ3dQAzIAsTHZVOcnJqWpgbmJsvOSqVPFxeFJJRfE6u+Rve6ibg8zHR7S9w0+562snzv5++v1Ut64hwW+GiA7hpqJv3P2YM/wDnSqKMVqR498zHMJZloj0vrsJZkzk2bkbsgiA6ZPkwdN1DFPq0TUILhPlIduYibsCn6BRMrg4BXUa8WbVDpsve2rS4j+kT1JLdu52VqxynjNVzL2vHlVKZNYSMiS+Q4W5HMaYDKXAzgsgGuOIZpt1lbXmJBFq1P1I9SPCvjcakPCGavOsBRdpNI06U/UOqm6kdGM2h1xScJ6iBjJFgIy1IqBUSw/cMjE8BM6zIEZYE3xEm4w53ubG0S5DgucNIB3DTUT54M/j+gcpbVntLvDSEQH1tRNw7n91/P3nSqyCOuqGuW02162nLRlxwE1BEuW251i4LJw83FyMcEpFP2Tlg4SSWYPmhklUjg6E66SqK7I6SSyplKHeG5xlbk1R5wk8E6h7dxtY03cLc58TTVgs7jhIuYnY07x3MW/cR7kvC7FAmZKPIjJWs+YDHszPop5Gnb88klWvUd/wuVmU0gnKfpTXSNG0BVWlpmuz/AEpkEOEhtJms1JSw5iJIIw3FznFlC6guU6eZg6kKUlQGYAQE4tm8FQ2br8ImH2l7hp/BrJ87+fvOlTtL3DT339bWT538/edL9NWgj1gG5TB7LYxRTKisXpikFADpHUIUi/MpzgT2Xod4BygQem3LVrjbW/lK8eJ7mvRTJ29jpLFmOsZIXrBT7CNn0r3eyJoLGUksjKSS9yu7fWY9Pekscjdvbjd6IIxqicgkgm
4buY2i6ReEivJqi5LSyvhukUx2rTJXXp4FUow60w6lH8LJUvG4mwNuqCSQE5cG351cu9NCYOpYTjXYHIZbxbK+XlNo7I8F/hpj3dNZPnfz950qx2l3hph1etqJ87+fvOlVoKm484jzcwlV9mfkHlMIrHS9gcpiqF2UEBS5DiqAiTYebcNdzLfGiyFYOtGfxpD2LjZ9pKsPNdoYTv7Jr+NutW546Qfej2lzOmN0kugttNDw8lC3jIwqK9pvgcRVrquQTL6IMcmdotPeFHTGaqEpSdK9IQaZLqmX1qr88kEIwgNpV0rrOOrslCBmo3PxUqIusKn5hgzDb920jMWN8rG3G+RO7blnaJxBwXeGkAbBpqJt/jfz950qyXgwcNQm3LprTA24ATfL+fw9lvzB1hlEdgDYTGHYRIUDHIJDlKoS0AqpFikVTOChFAFUDgYBIYFgIcoIlKYyRSEEFAVMgY6apxIcStxAEzZ9w35Jv4I1rLmnenzWulndK9IUzCHnAsqr08lKA0cBT1ZlRSkkC4CTfcBtGOJuZVkJpQvvQooVs2pXmActtrey2rLpU4eWj3JM3rBaXpiP08b4t1v6gMP2IA37k+LGDxzZDq3CWpb4HhLyjhkwj05F6IS0wZ/NvQWAz6SeCQh05adqi0BfuCE+dHNW/j9UWv8Aa0NDvcXEBH3e2WarQ8T60AD9ABU9K+9ngp0X0cqfg30Inqho/R5yfmdH5FyZn1ybL0y84ppJU4+uYStcwtRuVEJJN73yjSa/WKqxV55tuoz6E61sAJnSEpBaZJwjO2ZJtlfPiIrx7VFoC/cFL86Wa/ONTtUegPbb1BS/Ojmr6/VF3qw8O6AdWwiADvuO3WAgIFAQ5hKYCm2MYoex35twABi/kW6tVMbn7Ftu43xnZNw6eZdkX1VL4k5KLQu+2X4OpfdCHYK5FglDmMiEY+A33EzBuV8YoKqAYSm3ad0a0Nkeg4tD6Y4qdfTLJelKHJzkuhSssU4von8GRkSpe1MYbFUrDyn2xWJ68qwqYN54gKCE4rA3zKrYUpIuVEJ2x4h2qLQH+4KX50s1+cWgcKPQGH/3Cl+dLNfnFqw8dvc7u5imHlBPm5DCBTCQhEUhMYuwqHK2RHmAA9lvvWKzkaE6HlJJ0aoXWSdlFkgbEXFiZdIGe8qHuiwK7WL/AMoVDZ/SyvcPzb9bZ7L74op4gXD40g4U0kZXyfjLEoWvfNsGsP0knQvzJ02LMJrJdnW9Jl9LbkvWViHAOoiWftOZzHuTt+n9EtgSdIouErai8GDhpmAo+tsAwGKZQgmy/nwvtlFAMHRpZMIjsUhUiAoBgVUEpulbtuUiYxg4qwAOgnPAD3N8Xfzz47/rt7vcq+Q3tzfKH8EtfMX8OuZmdDq/og1otMTWi7UxIuqdRo/MOyDEyRq7GaEqttFszhKlDCcgc43ah1CcfpDLzs7Nrd6Q6kqdJIUEpZsFnbgGI2sRmoi1oq97S9w0/g1E+d7PvnRp2l7hp/BqJ872ffOjVoNK8D/D/Tj+uOkO7+fJ36v9r5H2cMpPpsz/AEke3v8AeeVqvu0vcNP4NRPnez750a4n4L/DTAoiOmwhQ2HmN6sGfQ5Q26x5fVNVA23yFMQN1UxOomRBW0OlZUjp7psqdk0OaX6QLQqZYQtCq3PKSpKnW0kFImlFQIvcBJuLC3DkmbmVKSnpShcgXQooWMx8VRuEnLaRvPK2qjw/uHzpBzZpIxPk/JuI/ulvi5hv0k5N/d/k+E9GjC5OvOCjP7WW9esVGIC3hI6LZj6Fj2qZwbFcqmdvHbpYsyO1RaAv3BCfOjmrzi/1/MFc+FT+0IwN+XlTuf45b/qwuvv5oVolotN6H6OvTNCoj825R6fMTE09S2Q8+tSBiVMTL6FKcUq+akpJUQMuGiVisVVqp1Fpqoz6EoncAwzqkpQknIBJvZNvzd1rHOK8e1RaA/3BS/Olmvzi07VFoD/cFL86Wa/OLVhoiAAImABDY3
tgNygAlEDCIlENvYCYAEw8vMJdym3AhowWZdWqh/qWyNbF8YzsqK0zxsCLvG+RYyRZuLuuO5CmtVFZhLs0sgy7hJpuvdxFFT2HbxBWhWhBdkMIIvdgmNF9DpV+UljofT3OlLCA+ihyTkok4blTsz0T8WgWIKhvI2XjFZqlZdamHPHE8dQ3rChU+pOOykJwpwkLKuteyc7BRGwx4j2qLQH+4KX50s1+cWshwpNAgDuGBSgP+NLNfnFqw4d+rcNg9lyh/wAncN/c9/asVmfAzRH83RbR0qtYBchIqTsANwZZINhc/GGdzfZfFFdrOX+0KhuzE5Nr+TngWrCrjZWWfIRR/qn4emjzGk5pAbWXiEsMhlDW5gLEN9JhfmUZQZ7Hl8ObjRui3eaYvWQPG+mibFqASkR6CmmoogRk/bprOCqWrBwYOGoIiU2mwhjAG24Zgz+BQEhzkOUpBykcw8pg25jHTEO4CWxhEsdtcQANy8P3f4yvSoIfL6Ou7b9PX+aryw9qH5JPrUr5Sfhy1Cc0N8JNDp+iNSnqDTnaA265L0aZdp8sqYC2jdaZRbbdklagFKIIuRizMb9Sp2cepcq87PTa1h+auXDiBAcSAFnaABYWFgANuQEVejwYOGkUNx01kANwEQDMOeyiYAEBEhefKQAYxwDlKUB5hMICHcGv9CP4VenTGBz3DpPuHNOkDKkeYshA5OwvmrLBJZhKNBF1FqzcNcd8TcZOx7E6ZkpK3lyRxJpgquwGWYeiDKGsyrIbiIbdYgJTAHvmIYDlD/8AUUK8VteEDThDjazpVWpkIWlRYmatNTDDoSUlSHWXJlxDiFAKBQpCkkEAgjZICcmLj8eFZjLMXzG/zd8rTq4MOuTL+rLDOW8VaoRi3urHR1k8+F80XXANIyLgMnxshFp3BjPL0ZCRSLZtBGv+1zHPLRSDRkxTnIaTkIyNgo+Tb21CXKgO4APvhvWsBwMjAHEk43yZREpCjw1VgT95VxgTLR3BvdH25SB1+/Wz7v7Lb3OXfwd2vQU8ApyWnMKENzdJpM4ZVHVS1MVOUYmVuJSMgEFxScIyAFgchE8lSy20pdsShx5DfvvY+U5c45UpSsKKxxH2pPyifxZK5VxH2pPyifxZK5UO0/q/sJgfjL/SPuEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKUpSEKwPc/OX6wrNYHufnL9YVUbfMr3GKp2jyj3xrWcYz8K7wHPyuJ9/RuxhUow7ofkE+qoucYz8K7wHPyuJ9/RuxhUow7ofkE+qvQ2hn/hqkcejLvx/LL2+e8SLX5JP6n/ALYzSlK2mL0KUpSEKUpSEKyURAQEC84bgUS8vOBgOYCAUS+8YxgKKn/kN/RH/kqxUK+IfqcbaQdG+dM5lcpoXFb9nuoawklC9IV3kS7Dp23ZRDoFMRddsynZJrMSJGw9OSLjHy/UkiqYuPNPGXl3XUglaU2bSNq3FEJbQOa1qSkeWL8sz0h9pq9gpaSVbkpQcaln9EJKvNFK2InCnEO48uQ8p8wymDuHvbCllWwsJjqw7y/IxxL2y3VUA3sElX1+vMhXPGyKYbvmNjQJDiJRLt0dCEqbh6cYjVDoXl1PSnD2qI6uXMIt1k+hjWU0o3k72t6FYNihyt2x7eXvTHxjDsdzN2bbpzbgmU1Q14ZNvcbDTNglxdGlLRNgrJFj6i5GLy8fJOW70sxO87qjJWKQSgN2ZtUGNnTWAFmu4k4lpM2qk+I4nJSRI6AsiIl8u4l/beXFyYW13ardKWJ8EPtMNwW2zhsk4due2JJZVyvejOZthlebNhn3L80vGR9y9MxipJKPhIoHdwSMa6cvXMi0Ac5tDNOfpEukp6GzKmRrF3C0HH6mpLq0haswWZtSFWIF9w2CLT7ip/xq4qxTMutzVMsQSl2nAamxGSS+3rU3TsvY7TG9MYe6HcATbe06XYqZ+kIBUtt1FSgUxhDr5HfSn/va1g9abl
TX/wAafTFo8i95LE2jxmXMeY025xeMF54npPe8y1eoj7F4k4bIY2sJUwAJ27y4JcoCQoH3vKjdYWMJLRelraB4onjP1D1M1ukkhSO+bM20A4k5C2EiqrIJjcKEq1eWgdoZQCIXMgLQ4Ck7UCtR/huveMMjP5r10aUtJeHc6Las7puBedyHli5bXjRRXiLxm5S4omyYp7qExLNR8IFxvSt3/ouNfsFlbYiWkW4InDuCDxDSWq8pMykmXojRedGDXDpjyC1TiXd2Ehb+K1sCmzfMJPBCyqkKcRlMVIiRbsbHVhaHZ0ptnbAUNZfnpcGy9rBcgyJuG/x37evbnVh8A8RC328JcXLuWIjr/uCRbQrl6UhQH0Q5Z5JQt25pdwp1tobJU8gUeU5gLtGjvzG3AobmOYoED72JBEB6RP3yKKisT5UBrSX4oNrcaXU1hCOvnVnonwhjSx9N7qYyoGTMRXjaSt4W3EHikmlwkcti6m8nP3UA5TbRctIJRVrnkUnMMxepvEGrd0ittBcNzVG01haMcI5rO9TdXXIWs3tjJKZQTSUbZKs8AgbvOq3RH0O3JKu2iFwRzdEiBkoSYil10zKPQMF2UQpdIKFLxTNKmVy6wVAqdkpxWulXnxa6XmjiQL3yUDfCRbjOWbnmnkAdHnmUJTgScPTZZIamNUfmVoSCrYfxdiDa5nNSlKxo5wpSlIQpSlIQrIAJh5Q3HfqECjyGMX++AFBHoyBy7ibpSqJqEAyAkEypRDFNwDrEeUAAfZdYbDsOwcxRKYOcdk9gMBVOfolSOEVFGq4YiQEKCVblKw2HG+Pq7L/GyiotcX2b4hzwHtQWB8Q2dxQYHLebsSYunZnjKa37liIPI+RbOsWZkrdctMSQzaeYRNyTUS7eQi8tAzMS1k2jI7IH8PKxwrgsxMkW9z17GjP4W2mP5+MX/aqtcTgx6FNK2rGF4md86gsVFv667U4vmtewIGTC9skWkMbaTBbHd2t4cY+xrutmHXFK4r0uZ8L50g9kj+jgYi+UjI6MQbXOdpu4cHwcQ+eDPHnVrozSVNANeqomnqqJjXt63oiJRbWxsXKFKSVZfXGeYJ2RHL1eNV72vuvy4+22ftiT/r2NGfwttMfz8Yv+1VcTa19GRgEDattMmwgYBAM84q3EDFEo7FWuwCH2ARExdjCBAMflECiIRi7Tdw4Pg4h88GePOrTtN3Dg+DiHzwZ486tQWr0Yt+WrxNtnQpPM2GX8a4372jj+J+t7Yr8tDUhp8i+NTl3LMhnHEbfFsnp2jINhkhLIdor2S9mk4DGKC8KyuxGUCDfv2Jop+kLNFyo7Ki0dKJkMk3XMW44dd+ioBDfVnp06hOnyhmKwhAgIn5Sbk9PB5TKFNzCO/ubVHTtN3Dg7nrcQ+d/PP1+qrtTtN3Dg+DiHzwZ486tZ87O6MzxaWTXk6tiVliehSZt0YWJ/jW03zt54fifre2JF+vw0U/Cy06/O/YX/APt1kNeGincB9djp2HYxPYhl6wxE3McpeQChPFMYynN0aZScxjKGIAFHrEI59pu4cHwcQ+eDPHnVrIcHDhwlEB9bkABzF3MGX87jy7mKBT8o5PXKbkOJT8p0jkHl2MHKIgOKj4LlbYD1cUStFkrlJZtJOJFgpaJlaki5FylJ35WtaqdTiHxto798uOUaV8nkC5bKzLc2QMa3nNW1OMb5uaVt68rPnF4iSSItMv3QP46egHBVxRWYLpiZ01cqgqyMsiukZos4MS7nSdx6MnWT6XWlqwtUcs2ymCLY+SbQSh4LI7FAUx53MvBiMLal5AKZUgauWadmSrgTmdLubkdpjvQ7e8M3jb9uy3oViqDaPu+4oKGYNxVeOSIx8q4bsWbZRdRw8kFyN/QqaR3b5aSXPyoopKqrbVa9pR4LGqPPwxtx5PbJadMcOjFXF5e8U4dZElG3SK83pRjIz6IlmSKLlARKrfL63Ods+O7atJZsUzRx3BVpfRsyEuauGtSJNBY11u
kbE2wi+ar+Y5+fKc1OBN9ls9mzLhnfhbfHrPFI1OYM1S6+eCNemC8gQ18xUYbiOtrhatCO2Nw2s9kcA4wNHsrpgJNJrNQisiMXLDFDJMWycmWMlF41V81bncB717pvyjfwhqJvEF0bYd0O67ODXZGFS3YRPIi3EJeZNnbkueQkpW/39o4FskLVczzRIkfbLQlut7yuJtEtISFZIskJmRbqGWUN0yksvdHrKPXtzd3nKXcCCkb3U0wHlEPdMIDXjzw2CXTVNFxJ26INH3wwbknVmrTh6wIFjiKxa5FgDe5IGsVPDiRg2X3X4i9/37oUpSulYjYVVDxYv1uaEf3yrS9/oeRateqqHixfrc0I/vlWl7/Q8i12v4DP973g+/7jk/c5Fxr4+ezC5/hqiQ3e/wCDJ/3qU73/AAZP+9Sv0ZdbErCLnC2bcbEE+wGOpUJxEAbS0jhxJ9ufnj8vfEi+iLLu6VjFfQ8lG2xPP49x0KTjoHzSLdLtFOhXbukFNnCaYdGugogpzdGsXozGqBvC/wA85W1FadJS/Mw3UF33U3yVcUEhKBB2/b/JEtIS1njdt6XWvEQ8OmJXL92qJ0m4rrCqKjgwnDcZyZL/AGOb+/6F3R/qR9VXHBP/AGoMx/jiuzf5fudszetJXMzB8IMlJtTRRIO6ITE4Jck2U8mpoBBHEfmnLK8TKENL0ZXMkJx+OGWwbZpIlXBhucxi2kDIkC4ukEW+090v5Rf4QUp7pfyi/wAIK3Q7POPeIgzls4j3iIh4n/DBP/3td3/SjY15F2Q7+0/xZ/lF2/8AyFyRXruJ/wAME/8A3td3/SjY15F2Q9+0/wAWf5RdvfyFyRXxe8PX/wBYSP79If8A603HdejRypuf82H09HPt2+2LvcbfsdWF/wBDbX/1DHV4LrhwhbmoXSnm/Glwx7Z6d7YNxTVsrrEEy8PeluRbqZtSYYmL7MjhpMNGxF0y7Fexqz+NWEG71Ya96xt+x1YX/Q21/wDUMdXlWrfKEHhjTJnXJVwuW7djbGMLvWbEcn5E384+hnUXbUQUe4K0xcL2MjUA/wDWuiD7leQg5ONaZFynlwTqdJSZYt3x68VBOqtbPJVvMDFila4zknqfymsYGey2BGPF+ri2xWNwMcrzeQtB07aU8uq8Vw5kC87ChjrLis4+5p3BQ15xKAqCJjFTQfXRMMWO5hbs4pizRZACbZcoUf6SNEMvqV0UZtzJiAriN1K6es9hdmPn0Sq5jZy5YGOte25eTtRku2MCiU2wfNC3PZS5eUI2faOozp2f3TA+bXScBvHUraOhS97zlEeiSyrk287hgjCUedxBW9b8FZ3TqH2DbluGFuJuiQNwFBFM4D19X47sePcmn/UCIiACXPQey2EwFAbPt0vSBybmE6ZRFRMhfvi6pCN0vvyxK9EVCruaNz/hdrNGQHJiUq+jDy2iAWXrKmEzTCk5Xl3iXm3Rf4qnBcmxEoJhuVp8yppOKQ+Ebowi1yhSQl9AuNuJSgkWtcAAWifvDR1vxWtnAEdcEq4ZMsxWCLG1sv22iZFJZOcKmBY+7GjMABZGCu9q0VfswMBSM5lG4oMDughyPF4B4LHfsgLVUHe4EQ2/61rYI3+oP6jXl+sOx7r4WGte2ddmHId4tp4zbOL2xnix4UqQxcdNTano2dbEajsg1NPLIL33Z6hwKgyu+JkGArEinTZJX9HpRvu1cl8c7UNf9kTDS47OvDTXD3Fbk1GmMo0kIiTsvADhi5bif2bcATWSUBsYekZGIKS/9lncVB06iyCZbS7S7RpOOgaS6DVV9DCli1KrCZySM7SXVpAw6hWJTdwkqZCVJBSpJNh6W6FTK0GlgtuyUq/JzXFkzKVrYSLkY5VQu7c3ItfZFymtrUEz0u6WsyZqWWRJKWtazxraSCw7+jr4uFZG3rPaAnuHTN/T+TYuZRP3IVvJqB/xdUZYI4fr+/uDXkxzMsFpDMua3Ejqpt1y7bgtN+mNpoKurBYtF+
b7+a8bTaTq6Tk5+ZqTIzwx+sBAP1HHDvW/88ZU06aAcHwx7xv243rrKlxWu3fR0UnKyKcdLRFoMVpyXkYmKZEbw8fe03PLSMmxaN2ikK6O4TM3IYvqsDmHjwWxAwtrwehLSvGwdvwzCGh49vdVrFZsIuEaIsGrQiJtXHQooosWiCRW2zYvQIkKimRuZQquHofSJ2i6CUmep1XodMqlerjNcK6xWJOkk0qlvlMomVS8kuuy8zNB91wZBxpaDe1hGQ0joaKS228FHXP1KeQVJBJmU6thICjmkNKW6SRZJFrEg4ZgcI/UwbUtoxsB5Mviu77xSUMQXxzHH0Qq8s9q3Tt2WXKIiLg8zZrm3XTuQ3H0XOFndx3TGrN/cN+Sb+CNaoPDFunMujLiJ3pp41KWHHYZd6p4k9yMbCiJGJk7RhbqVkpW6sfmt55EXLdrRKCTYlu2zIVuSdknpZBxDsZl0pJRCnLtfdfKA7iJeQ+wc/MUAEoHAdu/OBwOqPuOBXD3a0Xws0JmkaVvz0h0XxTpCy1VZJ6ReLrMw66gCbSy+wSw/LMv4rn4qseLyQcwymUm5iVTYJbutjKwMo6QpGHbcJViF+XGKM9DX64uID++Warv9OtCp6VAvQ1+uLiA/vlmq7/TrQqelffDwOhR8FmgISLq+D1NsOP4tH7o680hw+OqhiTiTrW7p4/wdm3ttD83ugIjydJy8ogcB5TB0fdKAbqmTIG+/OBuUBq91E6isy2HxBdKuDbVvD0sxdkaAQe3na/pBa8l6bPBlboSV/tzJwUnNsN28cwHo4qVQIXYBAQECiFoYd38xvqGqTtXX4V3Q4H/ALLof6/vHy1IadTUyx8GOjzRaam9K6TLzDAuMaCp4qTuyUAPLvjnQGkPGtOPMgFujVNTBIuAQy3hVnw2jgrMWNouv6vYgAdQcwkHfl+9mKiCZeUplek5Sl26RVQioe1BLlMYS5rBe4PyE+o9ZreBfANttSPcj/P2xCo+MPP7jFenFW/aE54+XF388+O6vkN7c3yh/BLVDfFW/aE54+XF388+O6vkN7c3yh/BLXyZ/wBIp/4o0L/6c97mbx2BQf5El/70/wDsy8YpSlfN6JaFKUrMp38oSP8AfJb/ABkRyR8dP6SfeIod4VX7QjA35WU/55L/AKsKqvXhVftCMDflZT/nkv8Aqwqv0naAA/AvRiwuTQaZlx/EojrmtgmrVYJOEmoKsbXtkYbCPhDqAQ2KbmAwgQS7HVT7vNtuXpDl9sRMxgAS1bYT1H5lu7iWajtPk9d5ZDEliWAabtO0/SC22gxMmUuJ0yuhuFrbzC6JXkLcc4kCczLOkgB3zEQOKaSje0n3S/lF/hBVJOm/btyer8PdHFRR39zYEsFh9f1eGrekMzMMaQaGS8vNFDT9RqaZli5s4noAJTkcwCBtuN42ZX6ayy5SNIXFywW63TWCh8gdVXSUXIvexAscrHIgG173bb77D75CAPylAQpQPak/JD6xpW5RBH8z9FP/ALIgbri/XNw/P3yrSr/pt31eUHtQ/JJ9alUa64v1zcPz98q0q/6bd9XlB7UPySfWpXxy/wBIP/vToH/bSPTrGP8AOOyqP/JMr/zpv/FEKwPdL8v+wazWB7pfl/2DXgUbR5R74kN6f0k/tCMcDT8JPxwv8Hw0P5g8uVtAh3A+QK1fuBp+En44X+D4aH8weXK2gS9wPkD6q9bz35Chf9r6P/8A8qQt++3njZkfFSD82LX8qf8AP2xmlKVHxyjiPtSflE/iyVyriPtSflE/iyVyodp54bfYTA/GX+kfcIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIUpSkIU7gh+UX+EFKAG/gEO51b9ZvYB7H++6zdz8/uUAUTZO3d5sz7IeWNazjFj/wrnAdD3zcT39Gm7GFSiDuh+QT6q/M8cXTdmq87M0va1dNVivMrZo4fuWZ3JxsSQrV68u/KWD8kW03sjOtnW
MLLdwrditrs42aYRaRHK71pCSCTBjKSQsYt9B7F/FW4feU7baz6OqjDuOn50ihNWXm2+LYw5f9ryqZ1Gz23rhtm+pWBcJTEM6QXZSKcatLxyKiZBaSj8ihn7z0DoRNMzGj8q2yoY5clDoJz/KLcOWRF8eW0G+24UE57GQ83YPN5IsIpUQO2D6B/hv6QPpLYb+3FO2D6B/hv6QPpLYb+3FbfGTEv6VEDtg+gf4b+kD6S2G/txTtg+gf4b+kD6S2G/txSES/pUQO2D6B/hv6QPpLYb+3FO2D6B/hv6QPpLYb+3FLlOado2e6H7iD6DeJfiAiGwGMQR2ADAZIgFMIgBBMZbYgDz8vR/fUR6fogBUBHlNQ3xoNJmrvXLP6WcA4cx6q407MsiM73ztkkb3x1b6VtrP5NK0mJWsBcd4R1zTZ7MtiQuudet4m25hNWVlYwxDmMUpTWS9sH0D/AA39IH0lsN/binbB9A/w39IH0lsN/biuGrQVSbjgBMm+JnM/Gsk5Wvmc72HlOUc0PWTN2uDNtCX22sSUC9xaxtvy35xKS2LchLOtm3rQtuPRiretSGjbagoxqmkm1joeCYtouPZtiA3IsDZu1aotG4qKpAYGqgggcNlE/H9UeB7e1Pad8xYBuYECRmU7EnLXReuEumThpty2Fe2p8hATWOLqAuJvFzDEpCffHzJskqdJBRVUnnfbB9A/w39IH0lsN/binbB9A/w39IH0lsN/bik0hucS8h0AoeSVWVlZUxZSVDL4za0oWncFJ3xSULkiptbKw2tKgkKOYswMJTxstClIJ2EKPCNYaN0X8Y2P4aNycPMNLD9VpIZpjboY3OXOOnc7BviRfe6Z+zkE0svJPSLHydGx10IkUjXC6rSTlY1UyLhBFo52wNL2DYPTRp3w3ga3ejPG4usC3bXM7SKkQstKtY5uafnB6BNMiqs5Pmk5Vy4OmiC7t2uo1F0iJ1i+cdsH0Dj3Nb+kDub/ALZbDfc9/wDXx3PDTtg+gf4b+kD6S2G/txWX0t8ImU4sUxNqlTNP2GucMtLplkS6hmNQEIS4m2WMbL3JsqYZXMy5KSiWl0Ta5VgnrS6ptaVuYuK3FhSidobVbeIk/dtrwN72tcdlXVGNpm2Lwg5W17jiXhTKNZKDn2K8XKMHSRAMZdq9ZulmrxAE1QWaLLpiTYwnJQjwY9H+s3QblPVHgrK+OXPrXbluyQu3DuUyXvi2WRfzUBMktZrKjacJdcxdMOfJViBDzTr02hotCNPZ6LJcU3L1umtaf2wfQP8ADf0gfSWw39uKdsH0D/Df0gfSWw39uKx5e0rMOuNX/GyCpJ4bipRBBO4lNzY2335RecKnZUMOAWE+mcaO9OEWIB3A7+OfnmBvzBvtsAiYCFLtykKUduvk+9CZTcDdXX1Dt1b1iogdsH0D/Df0gfSWw39uKdsH0D/Df0gfSWw39uKRxiX9KiB2wfQP8N/SB9JbDf24p2wfQP8ADf0gfSWw39uKQiX9KiB2wfQP8N/SB9JbDf24p2wfQP8ADf0gfSWw39uKQiX9KiB2wfQP3PXv6QdvdENSeHDmANt9ylLe/OIgO3/FmTPtv7MC8wD4tmbiuaN8eQQNcXZbtLVHmKeOnD4vwXpouGMzVkHJN6yIKJ29bkUzx4tdIR5nj4hPRj9+46SPalVVbt5CQMyiX9CpKesu2FNlKvwGZ90Il72Ox+xxxVf37PXP/qPCNbDdVMcFzSHk/SDoraR2eUGEfqH1EZZybqyz9BR3RKNrWyhnGRj5h7aJnSL+QSdyVqW0wti3p9YjhYgT8bIpNHT+KSjH7q2evNuk76Jmu1WabthVNNs3GdwlKU3uRsIAItxteIx38o5zAOWzPCeJ8/OFKUqCi1ClKUhCuCnPsHR7cwjsAmMYgFMYolIcVCKEOmUigkOcyZFj8hTAVLcekT50qqVFKgoGxSQoHgQbg+aKjI5i/KIXafdAulnTXPyl646xs0dZBmZWQlneRb
yMFz3w2czL6TdLpQsm7Tbx9qE5ZJRiCdpx0Q9XYpNizakq5Mk5QmaQ2/WG4l+9iXuiHKBhICYibc5Do8ggcFQ51DHEwe1MBf6U32Eu3dEdg6ubYTAJdxL/AHwdfc9/r9ysp+ZmJt0Pzb5KQhLTQJJsbAJIsd9xs88VxCwFst/Pnu82fljV945f4Sjgefk8S3+YTEVcvdN+Ub+ENSX472lrM2Tca6b9YGm+x3uU8zaC8oT2SQxPCIPHN3ZOwxke3mlnZptCyTsSKuVbqVg4+GmWUUg1fuXzSGkEWEZKyQsox5UXjziO6JciwSE0nqOxlYbzlMlL2lmC7LexNfNuSaKhmzyAuC270kIxwSYiHqDxpIBGKPIxM5ExaPJAivpi50HwlUOtVqX0bnaVT3Z9unSDtNn+jNuPvIeVPzUwg6lkFZCkvgpIuFHGAbpUExVUZdUppSE4kixIIPWFgNh8mY8hMTdpUYPXu6LfhfaYPn+xR9qKevd0W/C+0wfP9ij7UV1T8FNJv6v1nd/NVQ5f+X9/6xiN1bv9HHoESfqqDix/rc0IdX/pK9L3+hZF66mB693Rb8L7TB8/2KPtRVZfE41R6Zb6t/RgnYuozBN5rWrxAtO163Mla2W7BuU9uWfAsb+9Obsn0YWbfrRdsw6jtkWVmHaSbBqLpugsuRV0gU/aHgV0frkj4VtBJudpFRk5WX0glHH5qakJ2Xl2WwFAuPPuoCGkC4utRAFwdpi4006V26OBdDudhf8AJr2Wzudg8sT773/Bk/71Kj0OrfSkBv2zmnvYwexAM0YxAdg9nzqbXOkJ3KgqGFwY6BTgYpQ39sA49d1pR+E5p7+erGX2pr9AI0hoZUtDtXpScGruoVFrIiwB65KTnkcWW+Ork0yfuP4E6PxbVrNr/NXcjZw4i4yvy9evePfS1l3fExiXTyUnbE9Hx7fpUkOnfO4t0g0T6Zddqgnu4OmbpHC6aCfL0ixgTKaoF8L7AuVtOenSUsHMNqhZ91Ocl3HPoxYTlv3AB4p5CWu0bOfTG15iZh1BO5ZPExTRcgsiKYkcFKcQKElvXdaUPhOae/nqxl9qqeu60ofCc09/PVjL7VVFKf0b8eprrtbpZn26aun4RUZOxaXMpmQOoSoZDamxIPozNTVTTnZFMi5hdn253NtY+IkgjYAAcQvkTkLH415DU90v5Rf4QVHn13WlD4Tmnv56sZfaqsDq50ojsBdTmnsB35t/VpxiYPY+zADFG6ylMmYSgRQpuYBTMb2IjttK/CKh/TFKRs6xqKQB8XMkmw/z5ZYgptUuLyKwN9kKv7o/F4m37cG/2Dr7Ws928AjqgZhv+mv0HGW0w5z1W6crAsLAdjfd5dcHmSGuqRixuO1bYBvAR9nXvGPX5394T1uxSqaT6ajGx2ickR4crsy6SZ0W7gycccX6mtNzDipvb/f6gsIMbBDh/OLLJe7nK1htbO+6/wBcXHToWincq8uhCHuAIbpJhvDovVHx4QCuioig1XOS2L17mi34X2mD5/sUfaivi9+E4qvSf4RVU0p0epblZ8XPykxLvtyU1P05xYafbUHXZUodUnC4QdS4FYrXNriO0KU5NykrSymWUVok0gpORB1dilQ3cr7stxiqeFzRx87fg4qAa6INMi7eDi4+JQWc3pa6p1U4xmk1RVXVT1bpMjLqA1SOsm1Qbt3C4FSUQMkodsr+RuvRjxQuIVJW5Da478sDT5gSLlmcxNYpxY8aSM3LOmxzKlUTj4d7d0W+fHIo4QZSN2X1MMbUWWTfQ1qPhSCPWuG9e7ot+F9pg+f7FH2op693RZ8L7TB8/wDij7U11B8KNMGXXJ2meDCk0mqFS3lVaQ0XqpnUrWLKellTLr6EzFicC1IUUqUSM8xkqfmQ3jlpItuOOddYNikXTcix4ZcSdmcev2fji2MX4wg8WY6hUIK1bOtBGz7VhEFTEK2Yx0Wmwj26y7v0Pu/Eqih5WSeJ+iXsgss7eO0JBQ/T1a
8GvStnnSjiDMdr58sQLDnbsywW5oBgFy2jcno2DJbERHFe9JZtyXJHsud0gsT0K7VYvA5NyNzNgSFOdnr3NFod3V7phD/3/Yo+1FPXuaLvhe6Yfn+xR9qK1GXVp2ikaQ03xHU3DpLMyD89NTNLqZmkdCeedGryCSpevUJjElV7gpIsb2cDxlFSIBVK9OFQSk7S/YJVtzFyb38txst6bmrDti5/xXe+HskRCc1Z1+QbmDlGokKLpuouJRjpSJWOg4KynIiSI0k4WREgDHSbVq9L0hkCoK0DcLjhqamtGut/Id3ZCtRo5w4jjy/bJtLKcXdFmroXWD+57PdQz4lqM7lf3bCGl4mMfOHKE3Gs0IxRJ6n6ZGO0QUeXV+vd0Wb7eu+0wb+96v8Aijf+VFPXuaLfhfaYPn+xR9qKztH53wgaOUevaOyWj9UckK/KKl5uUmaXUjKsBQGOYkFJwlEy4kFCxdSFjCVoUpDZTz1k0iRNOEuShSgok2thxIWRc3OYHpvziunRppI1FvuINqX1s6o8d/cGpMIuoPBsW4u2xboW+56XcekTZ0ROzbouc8S9tbG1sxMIZN4rHtXy12PlmLUyiD0G92vWPMJuoN1OYDjsmRMiiqxknXvi1ExxKHfGKPvVGH17mi7q/wB97ph6+5/d+xR1/J/uop69vRd8L3TD7/7P2KO57/66Kj9ImtM9JpiRmpnRx6Sbp9MkqPJU9uk1UyjEnLBCNayFBakvAJu9dRuq1sso4TSHpqamHgyfx5bUm9stWkJI3WuSdwzJiu7i6aIc2agpTT1nbSxbSU3n3DN4IoiRGbtO1pD7mUXRbzt+WGWvObgIZRWyLtjDnRaejwdLEuwx0kVEkV+W4Ow5S6Jux7SmL3tlWzbylbZhpK67TUkIiYNbVxvYpqvNwikrAvJSJeqRz86jQFWcq9IqVuLvn/srq8N9e3ou6x9d7ph2Duj6v2KOr5f91HVWPXuaLhAQDV7pfNzgZPYc+4r2HpCiQABRO5DmROImAE1+lbEROJVFlwQBVNXnOI0zntHabo5OaOTymKHMzhptTRSagZ5mVnDrFyKkrBbLIWS6m6RtsSSITCpiaVKOKl7dDk1yhJyUcWDnmOqNoyzOzKK0tDX64uID++Warv8ATrQqelVSaN9SOna2Z/XEtceesNQCN1cQTUpeVrLT2UbGhy3NaM8+ts8NdsAeRnmpJq2pwjZVWMm2Jnca8Mi4Fg4MQqwnmr67rSh8JzT389WMvtVX3p8EdZo8p4MtCZacqUpKTLOjcih5qbmmpV1KtSkFOrWq2IEgWVv8saDXZKouVieWxKLUlS0YVBCjcGXaSc8NjvGR2gxIb8/ugAhz9HzcwgQA5jD0fdMA7KlUIO23IJuUQq91E6dcy35xBdKucrVtD0zxfjmAQY3ndHp/a8b6VOwlboVV/tNJzsZNv9m8jHl6SKilyG3AAAREoDLz13WlD4Tmnv56sZfaqnrutKHwnNPfz1Yy+1VbbV3tGq2ZFD9cpesptSkKrLkVCR+NJIUjeb547kZnPneMSQarEgl4MyKyJinPU54FtYBS/bEbADZtvfbtBAIiQvV7EQHqHmAgbc33spURTNzFKl0fMU2/RqpnVH2wK8pTAbNR59d1pQ+E5p7+erGX2qp67rSh8JzT389WMvtVUwNIqGlKGxXKUSlP51Rk7HIbcBx2A4Zg2AyvGKKbVP6CsbcwhVxl5Ij1xVv2hOePlxd/PPjvf9FXyG9ub5Q/glrW44lWo3T3fWinNFq2TnbDd43PKjjf0ttq1so2LcNwSYMst2FIvQjoWGm5CSfizj2jqQdA2RJ6FYNXT5VYqLVQp7qPXu6L9g5tXmmAu4F5hJn3ExUjHKG3KUPunSOHRAPsBMiVQyRyFcD0qQAPy2/D6Yf0j0m0QXQm5mqIYpz2v8Wy659u5DWS3GQpSb2ukjNRBFza0b1QmJlqitNvSykr6S6rCpJBsUsgEAgEi4Odt3DMyepUYP
Xu6LfhfaYPn+xR9qKevd0W/C+0wfP9ij7UV89vgppN/V+s7v5qqHL/AMv7/wBYxJat3+jj0CJP0qMHr3dFvwvtMHz/AGKPtRQdbui7Y3Lq90wCYSnKAjn/ABXsTmKJekHo7mUP7ABE25ElTlHYyaYqlIIZchotpIiek1KoNaQlM1LqUsU2eQUgOtkqxrbwpsMyVZW/SMVS27iT/BxtGwAb4q74VP7QjA35eVf55b/2qwuqkeGrqM0+2Noowva18Z0w5Z1zxQ5INJW3dWUbFtyeiQkct37JMk5GDmp2PlGIvI50zk2YLtzJLMHrd43U5HQ806PXdaUBDcNTmnsQ9/1asZbfyqr9DOg9borOhmjDD9Tk5V1qj00PNzT7bawQhFwFLVgTbdi2DIxoVakZ92pVFbck442qexAhCjjSTtBCcwbg3vz2bZDbiHgDqER3KXlAogcTbnSU7nLvsXozm9qRQphADVb4T045ltDiWajtQc9Z4R+JL8x+eFtS7fugtt2MrJmLiZQrQbea3C/uiL6QluTiwKzUS1SH0GJSrkMqimvMH13mk/4Tunr568Y/aqs+u60ofCc09/PVjL7VVNT01o1Pz9NqD9cphcoc0+5LAVGTGJU1KhgA4DjsQSThzyyyvFpmXrEozPSSaYsMz8tqlkoUcKcSF3BtkciBcEAE5XIIkN7xfcAiYj8pgMPy/wCylR59d1pQ+E5p7+erGX2qp67rSh8JzT389WMvtVUn8IaCclVilIFs1GogAfFzOfL28oxDS6iEISJJ0FNvzFbiN1rbPNbeBt8T1xfrl4fuwf8ApKtKn5v7Ou7cfF1fnq8oPah+ST61K1zdY2pHTtc0/ofVtzPOFrhQtXiB6a7wulWHyjYkuhbdoQby6Bm7ruE8bNvwhrZhgctjzE7IEQjY9JZP0U5TFZIqlyfr3dFwjuOrzTCUCgJNzZ9xRzGOUxhOcDGupXZI5RTBNuioRJIUzrilzOw5Pkh+HhIT1d8JdAfo0lM1eXboCAqZpsu5PspJW1bE4yFFJNiflGxyFrnf6RLzDdJlm3JYpXr5rqqBBF3QdhANrWzGWeR23k/WB7pfl/2DUYfXu6LNw31faXxD3QHP2KTBttv1kC6eU4bgG5VPYf3w9ZQr8JkHiOaJMdwTiaW1GYpvt2UqYRtn4qu63srXlckksqVGNhIG3rHkJpdxIyb0yLNu2MkxaAqsT0xlGDH0QrXiFnRDSh11tpGjtVcU6tLYQ7TZ5ttWMpSQpbjeBIsb3Vl6YzujPkgBkIJIsrgbg8fL6D55scDQNuJPxwje+nw0NvzYDy55a2gQ9we4O3c/r71UK8CLS3mjGmP9S2sPUfY8jirMevTKNuX+lia4WThpeeNsNYut95ZuGbevoFyNzo3g5iJObnZaKVZtXcS3lo1GURZTasvDxV9fdr0lPjUmSlVWU9IUijUx1oKCksLkJGXl3SlQulQ1jarKSSCCCFKBBOyYVAIC9qW7+kp99722WyhSlKwoRxEwFAPf9j/FkrkA7gA++G9YRKVVQxDAAAUTiAl6h6gR7o+73R9z6q7gIlANgMfYOoOsPJV8y7vXcXqsCW0qSEFQUkBdsiU5H90csKitQv5uWRGdrx1KV2+iDvj+MPJTog74/jDyVZ6v1/WHly5e7hnXAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeX
Ll7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJTq/X9YeXLl7uGbAeI9vZHUpXb6IO+P4w8lOiDvj+MPJXJASVAXcFztDhy2cuXu4QwHiPb2R0x329iOxgEpij1e2IYDlDrKbumKHcKI+Dbeoq5M0K6JM2Tqt1Zo0caWcuXQuossvcWT9PeJ7/AJ1RVyJTL88vddnSL8/MJS7iZQRHr9l1iAy26IO+P4w8lOiDvj+MPJWQ2qYlVJMvMvtknqlLqhhNhc2A32PHbneKhKhsI232n0HlEAe1YcML4uLQb9D/AE++H8XfhHxjTtWHDC+Li0G/Q/0++H8XfhHxjU/uiDvj+MPJTog74/jDyVl+MKv9JTXr19nL38TFevxT383e55WgD2rDhhfFxaDfof6ffD+Lvwj4xp2rDhhfFxaDfof6ffD+Lvwj4xqf3RB3x/GHkp0Qd8fxh5KeMKv9JTXr19nL38TDr8U9/N3ueVoA9qw4YXxcWg36H+n3w/i78I+Max2rDhg/Fw6Dfof6ffN3U/8Aog74/jDyU6IO+P4w8lPGFX+kpr16+zl7+Jh1+Ke/m73PK0Ae1YcMIe7w4tBw/wCZ/p983dO1YcML4uLQb3Nv2n+n3ue9+x33Kn90Qd8fxh5KdEHfH8YeSq+MKx9JTXr18uXIdyYdfinv5u9zytADtV/DB+Lh0G/Q/wBPvm7p2q/hg/Fw6Dfof6ffN3U/+iDvj+MPJTog74/jDyU8YVj6SmvXr5cuQh1+Ke/m73PK0Ae1YcMIO5w4tBof5n+n3zd07Vhwwvi4tBv0P9Pvm7qf3RB3x/GHkp0Qd8fxh5KeMKx9JTXr17rcuXe5h1+Ke/m73PK0AO1X8MH4uHQb9D/T75u6z2rDhhfFxaDvof6ffN3U/uiDvj+MPJTog74/jDyU8YVj6SmvXr5cuQh1+Ke/m73PK0Ae1YcML4uLQb9D/T74fxd+EfGNO1YcML4uLQb9D/T74fxd+EfGNT+6IO+P4w8lOiDvj+MPJVPGFX+kpr16+zl7+Jh1+Ke/m73PK0Ae1YcML4uLQb9D/T74fxd+EfGNO1YcML4uLQb9D/T74fxd+EfGNT+6IO+P4w8lOiDvj+MPJTxhV/pKa9evs5e/iYdfinv5u9zytAHtWHDC+Li0G/Q/0++H8XfhHxjTtWHDC+Li0G/Q/wBPvh/F34R8Y1P7og74/jDyU6IO+P4w8lPGFX+kpr16+zl7+Jh1+Ke/m73PK0AB4WHDC+Lg0HG98oaQdPxBN19wDBj9MvdEBEDiJBAB3KI7CHs2IdIGk7T6/Vl8CaXdO+EZhUFgVlMO4Rxhi+QWFyiZBT+y7PtyGOqPRGEFRO4OmcClE6JzFTMSTPRB3x/GHkp0Qd8fxh5K4OTlTdQUOz8wtsjrJLyjcDzbeB288zFevxHfhl33Wjok8HWQPam7oAXlIUOU5fYK83RiY5gHcDAAbddc67fRB3x/GHkp0Qd8fxh5Kjy2blRcdVbM3XYnZtIHI7t9tkcChRN8vb2R1KV2+iDvj+MPJTog74/jDyVx6v1/WHly5e7hmwHiPb2R1KV2+iDvj+MPJTog74/jDyU6v1/WHly5e7hmwHiPb2R1KV2+iDvj+MPJTog74/jDyU6vBfrDy5cvdwzYDxHt7I6lK7fRB3x/GHkp0Qd8fxh5KqlKVKA64uduM5bOXL0WzyhgPEe3sjpjvt7EdjAJTFHq9sQwHKHWU3dMUO4UR8G29RVyZoW0S5rnVbqzPo50s5buhdRVZe4snafMUX9OqLOBAywnl7rs6Rfn3EpdxMoI9Y+y9kIDLbog74/jDyU6IO+P4w8lZDan5ZaVS8w82oq6qg4QUqsBcWG8XG82Od4qEqFtmRvviAXasuGJ8XHoP+iBp+83lO1ZcMT4uPQf9EDT95vKn70Qd8fxh5KdEHfH8YeSszxhV/pKa9evs5e/iYrZX1fR5O
XLvYWgF2rLhifFx6D/AKIGn7zeVgeFjwwx6h4ceg4Q94dIGn0f/wCu6n90Qd8fxh5KdEHfH8YeSq+MKx9JTXr18uXLvcwsr6ve3Ll3ytAHtWHDC+Li0HfQ/wBPvm7p2rDhhfFxaDvof6ffN3U/uiDvj+MPJTog74/jDyVTxhV/pKa9evs5e/iYdfinv5u9zytAHtWHDC+Li0HfQ/0++bunasOGF8XFoO+h/p983dT+6IO+P4w8lOiDvj+MPJTxhV/pKa9evs5e/iYdfinv5u9zytAHtWHDC+Li0G/Q/wBPvh/F34R8Y07Vhwwvi4tB30P9Pvm7qf3RB3x/GHkp0Qd8fxh5KeMKv9JTXr19nL38TDr8U9/N3ueVoA9qx4YYdzhxaDg/zQNPvm7rPasuGJ8XHoP+iBp+83lT96IO+P4w8lOiDvj+MPJTxhV/pKa9evs5e/iYWX9Xvbly75WgF2rLhifFx6D/AKIGn7zeU7VlwxPi49B/0QNP3m8qfvRB3x/GHkp0Qd8fxh5KeMKv9JTXr19nL38TCyvq+jycuXewtALtWXDE+Lj0H/RA0/ebynasuGJ8XJoP+iBp+83lT96IO+P4w8lOiDvj+MPJTxhV/pKa9evs5e/iYWV9X0eTly72FoBdqy4Ynxceg/6IGn7zeU7VlwxPi49B/wBEDT95vKn70Qd8fxh5KdEHfH8YeSnjCr/SU169fZy9/Ewsr6vo8nLl3sLQC7VlwxPi5NB/0QNP3m8p2rLhiD3eHJoPH/NA0/ebyp+9EHfH8YeSnRB3x/GHkp4wq/0lM+uXy5ch3JhZX1fR5OXLvYWgF2rLhifFyaD/AKIGn7zeU7VlwxPi49B/0QNP3m8qfvRB3x/GHkp0Qd8fxh5KeMKv9JTPrl8uXIdyYWV9X0eTly72FoA9qw4YW+/a4tB2490fWf6fd/5u6dqw4YXxcWg76H+n3zd1P7og74/jDyU6IO+P4w8lV8YVj6SmvXr3W5cu9zDr8U9/N3ueVoA9qw4YXxcWg76H+n3zd07Vhwwvi4tB30P9Pvm7qf3RB3x/GHkp0Qd8fxh5Kp4wq/0lNevX2cvfxMOvxT383e55WgD2rDhhfFxaDvof6ffN3TtWHDC+Li0HfQ/0++bup/dEHfH8YeSnRB3x/GHkp4wq/wBJTXr19nL38TDr8U9/N3ueVoA9qw4YXd7XFoO39/1n+n3zd1ntWXDE+Lj0H/RA0/ebyp+9EHfH8YeSnRB3x/GHkp4wq/0lNevX2cvfxMLL+r3ty5d8rQC7VlwxPi49B/0QNP3m8p2rLhifFx6D/ogafvN5U/eiDvj+MPJTog74/jDyU8YVf6SmvXr7OXv4mFlfV9Hk5cu9haAXasuGJ8XHoP8AogafvN5TtWXDE+Lj0H/RA0/ebyp+9EHfH8YeSnRB3x/GHkp4wq/0lNevX2cvfxMLK+r6PJy5d7C0Ae1YcML4uLQb9D/T75u6dqw4YQdzhxaDg/zP9Pvm7qf3RB3x/GHkp0Qd8fxh5Kr4wrH0lNevXuty5d7mHX4p7+bvc8rQB7Vhwwg7nDi0Gh/mf6ffN3WO1YcMH4uHQb9D/T75u6n/ANEHfH8YeSnRB3x/GHkp4wrH0lNevXuty5d7mHX4p7+bvc8rQB7Vhwwvi4tB30P9Pvm7p2rDhhfFxaDvof6ffN3U/uiDvj+MPJTog74/jDyVTxhV/pKa9evs5e/iYdfinv5u9zytADtWHDB337XDoN37u/rP9Pu+/v8A7Hdcu1ZcMT4uPQf9EDT95vKn70Qd8fxh5KdEHfH8YeSq+MKx9JTXr17rcuXe5hZf1e9uXLvlaAXaseGIPUHDk0Ibf3wF0gaegMYA69gMpjsQJ17CJi+y2AQDqEa9HxpoZ0TYVn0Lrw5o60sYjudsqCyFyYs09YpsK4UlSAHIs1lLTtCJk01k+oBWbORDbuoiPKdOW3RB3x/GHkp0Qd8fxh5KtuTdTeQW3p+YcbV8ZKnlEG1rbU
7Rbbt255mFlfV/du5cs/Za0dEonEwiYdxNuYevm5RHbf2QoJHOY/tjiqYTgYADlEBEQ512+iDvj+MPJTog74/jDyVhKZULqU44QBmMZudm025eiKFCibkjv5o6lK7fRB3x/GHkpVmw4r3f8Q8uXIRTAeI9vZH/2Q==')
    result = '''
<p id="{header}">
<h2>Optimized DNA sequence of {header}:</h2>
<table style="table-layout: fixed; width: 1024px; word-break:break-all;" >
<tr>
<td>>{header}</td>
</tr>
<tr>
<td style="font-family:'Courier New'">{cDNA_seq_plus_i}</td>
</tr>
</table>
<p>
<table style="width: 1024px;">
<tr>
<td> </td>
<td>
<a download="{gb_fname}" href="data:application/text;base64,{gb_base64}"><b>Download optimized DNA sequence in GenBank format</b></a> (should work for Firefox and Chrome, but does <u>not</u> for Internet Explorer)
</td>
</tr>
</table>
</p>
<p>
<table style="width: 1024px;">
<tr>
<td> </td>
<td>
<b>Please cite in your materials and methods section:</b>
<i id="TextToCopy">"The sequence was optimized based on the strategy described in Baier et al, 2018, with the tool described in Jaeger et al, 2019."</i>
<a href="/chlamyintronserter?id=references">(see References)</a>
<button onclick="CopyToClipboard()">Copy this</button>
</td>
</tr>
</table>
</p>
<h3>Visualization of the optimization process:</h3>
<p>
<img src="data:image/png;base64,{fig_normfreq}">
</p>
<p>
<img src="data:image/png;base64,{fig_exonintron}">
</p>
<button onclick="show_log_{i}()"><b>Show log of optimization process</b></button>
<div id="logDIV{i}" style="display:none;">
<h2>Call of Intronserter</h2>
<p><b>Call:</b><br>{call}</p>
<p><b>Respective content of FASTA file:</b><br>{fasta_content}</p>
<h2>Processed Input Parameters</h2>
<p>{params}</p>
<h2>Cut Site Removal in codon-optimized cDNA sequence for {header}</h2>
<p>{csr}</p>
<h2>Intron insertion into codon-optimized and cut-site-removed cDNA sequence for {header}</h2>
<p>{ii}</p>
<button onclick="show_log_{i}()"><b>Hide detailed log</b></button>
</div>
<p><a href="#">back to top</a></p>
<hr>
    '''
    footer = '</body></html>'
    return base, html_header, result, footer
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("aa_fasta_file", help='a file containing the FASTA AA sequence which is to be optimized', type=str)
parser.add_argument("--output_prefix", help='prefix for the two output files (.gb and .html) (default=Intronserter_optDNA)', type=str, default='Intronserter_optDNA')
parser.add_argument("--codon_usage_table_id", help='ID of the internally stored codon usage table (default=kazusa)', type=str, choices=['kazusa', 'hivecut'], default='kazusa')
parser.add_argument("--custom_codon_usage_table_file", help='a file containing a codon usage table; this supersedes the parameter --codon_usage_table_id', type=str)
parser.add_argument("--cut_sites", help='comma-separated cut sites (only 6 or 8 nt length!) to be removed from the back-translated sequence, for XbaI and XhoI e.g. TCTAGA,CTCGAG - special = custom (default=GAAGAC,GGTCTC)', type=str, default='GAAGAC,GGTCTC')
parser.add_argument("--custom_cut_sites", help='if --cut_sites contains the entry "custom", use these additional comma-separated cut sites given as DNA sequences, e.g. TCTAGA,CTCGAG', type=str)
parser.add_argument("--intron_seq", help='use this DNA sequence as the intron sequence (default=gtgagtcg... -> seq of rbcS2i1)', type=str, default='gtgagtcgacgagcaagcccggcggatcaggcagcgtgcttgcagatttgacttgcaacgcccgcattgtgtcgacgaaggcttttggctcctctgtcgctgtctcaagcagcatctaaccctgcgtcgccgtttccatttgcag')
parser.add_argument("--intron_lastdifferent", help='if specified, the last intron is substituted for the seq given by --intron_lastdifferent_seq; therefore, --intron_lastdifferent_seq has be specified.', action='store_true')
parser.add_argument("--intron_lastdifferent_seq", help='if --intron_lastdifferent is specified, use this DNA sequence as the last intron sequence', type=str)
parser.add_argument("--supersede_intron_insert", help='if specified, no automatic determination of intron positions is performed. Instead, the positions given by --manual_intron_positions are used. Therefore, --manual_intron_positions has to be specified.', action='store_true')
parser.add_argument("--manual_intron_positions", help='if --supersede_intron_insert is specified, use these positions instead, given as a comma-separated list, e.g. 100,450', type=str)
parser.add_argument("--nucleotide_pair", help='nucleotide pair between which the introns are inserted (default=GG)', type=str, default='GG', choices=['AA', 'AC', 'AG', 'AT', 'CA', 'CC', 'CG', 'CT', 'GA', 'GC', 'GG', 'GT', 'TA', 'TC', 'TG', 'TT'])
parser.add_argument("--start", help='start exon length for automatic intron position determination (default=100)', type=int, default=100)
parser.add_argument("--target", help='target intermediate length for automatic intron position determination (default=450)', type=int, default=450)
parser.add_argument("--max", help='max intermediate exon length for automatic intron position determination (default=500)', type=int, default=500)
parser.add_argument("--end", help='end exon length for automatic intron position determination (default=100)', type=int, default=100)
parser.add_argument("--only_insert_introns", help='if specified, ONLY introns are inserted - requires a DNA sequence as input!', action="store_true")
parser.add_argument("--cut_site_start", help='cut site to introduce at the start/5-end, e.g. None or custom or TCTAGA or CTCGAG or ... (default=None)', type=str, default='None')
parser.add_argument("--custom_cut_site_start", help='custom cut site to introduce at the start/5-end given by this DNA sequence, e.g. TCTAGA; only active if "--cut_site_start custom" is called.', type=str)
parser.add_argument("--cut_site_end", help='cut site to introduce at the end/3-end, e.g. None or custom or TCTAGA or CTCGAG or ... (default=None)', type=str, default='None')
parser.add_argument("--custom_cut_site_end", help='custom cut site to introduce at the end/3-end given by this DNA sequence, e.g. TCTAGA; only active if "--cut_site_end custom" is called.', type=str)
parser.add_argument("--linker_start", help='linker peptide to introduce at the start/5-end given by this AA sequence, e.g. GSGS (default=inactive/not set)', type=str, default='')
parser.add_argument("--linker_end", help='linker peptide to introduce at the end/3-end given by this AA sequence, e.g. GSGS (default=inactive/not set)', type=str, default='')
parser.add_argument("--insert_start_codon", help='if specified, a start codon is inserted at the start/5-end', action="store_true")
parser.add_argument("--insert_stop_codon", help='if specified, a * stop codon is inserted at the end/3-end', action="store_true")
parser.add_argument("--remove_start_codon", help='if specified, the native start codon (Met) is removed', action="store_true")
parser.add_argument("--remove_stop_codon", help='if specified, the native * stop codon is removed', action="store_true")
ArgsClass = parser.parse_args()
MC_Class = MessageContainer()
kwargs = parse_input( ArgsClass = ArgsClass, MessageContainer = MC_Class )
kwargs, _ = process_input( kwargs = kwargs, MessageContainer = MC_Class )
# display any messages
message_list = []
print('Info, Warning and Error messages:')
for name, messages in MC_Class.messages.items():
if messages:
for message in messages:
message_list.append( '{0}, {1}'.format(name, message) )
if message_list:
for message in message_list:
print(message)
else:
print('None.')
print()
# print optimized DNA seq
print('optimized DNA sequence(s):')
for name, aa_seq in kwargs[ 'aa_seq_dict' ].items():
if not aa_seq:
continue
print('>{0}'.format(name))
print(kwargs[ 'output_dict' ][ name ][ 'cDNA_seq_plus_i' ])
# save genbank file to disk
gb_file = '{0}.gb'.format(ArgsClass.output_prefix)
with open( gb_file, 'w' ) as fout:
for name, aa_seq in kwargs[ 'aa_seq_dict' ].items():
if not aa_seq:
continue
print( kwargs[ 'output_dict' ][ name ][ 'genbank_string' ], file = fout )
# generate HTML file with the optimized DNA sequence and the two plots
base, html_header, result, footer = get_html_strings()
if not message_list:
    messages = ''
else:
    messages = '<b><span style="color: red;">Info, Warning and Error messages:</span></b><ul>' + \
        '<li>' + \
        '</li><li>'.join(message_list) + \
        '</li></ul>'
function_template = '''function show_log_{i}() {{
    var x = document.getElementById("logDIV{i}");
    if (x.style.display === "none") {{
        x.style.display = "block";
    }} else {{
        x.style.display = "none";
    }}
}}'''
i2fasta = {}
# 'r' instead of the deprecated 'rU' mode: 'U' was removed in Python 3.11,
# and universal newlines are the default in text mode anyway.
with open(ArgsClass.aa_fasta_file, 'r') as fin:
    for i, record in enumerate(SeqIO.parse(fin, "fasta")):
        i2fasta[i] = record.seq.upper()
with open('{0}.html'.format(ArgsClass.output_prefix), 'w') as fout:
    print(base.format(
        functions=os.linesep.join([function_template.format(i=i)
                                   for i in range(len(kwargs['aa_seq_dict']))])
    ), file=fout)
    print(html_header.format(
        anchors=' '.join(['<a href="#{header}">{header}</a>'.format(header=name)
                          for name in kwargs['aa_seq_dict']]),
        messages=messages), file=fout)
    for i, (name, aa_seq) in enumerate(kwargs['aa_seq_dict'].items()):
        if not aa_seq:
            continue
        print(
            result.format(
                i=i,
                header=name,
                cDNA_seq_plus_i=kwargs['output_dict'][name]['cDNA_seq_plus_i'],
                gb_base64=base64.b64encode(
                    kwargs['output_dict'][name]['genbank_string'].encode()).decode(),
                fig_normfreq=kwargs['output_dict'][name]['fig_tmp'],
                fig_exonintron=kwargs['output_dict'][name]['fig_tmp_introns'],
                call=sys.argv,
                fasta_content=i2fasta[i],
                params=kwargs['output_dict'][name]['session_logs'][0],
                csr=kwargs['output_dict'][name]['session_logs'][1],
                ii=kwargs['output_dict'][name]['session_logs'][2],
                gb_fname=gb_file
            ),
            file=fout
        )
    print(footer, file=fout)
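The `gb_base64` field filled in above base64-encodes the GenBank text so the HTML report can embed it in an in-page `data:` download link. A minimal round-trip sketch — the MIME type, file name, and variable names here are illustrative, not taken from this tool:

```python
import base64

genbank_text = 'LOCUS demo 9 bp DNA'  # stand-in for a real GenBank string
b64 = base64.b64encode(genbank_text.encode()).decode()
link = '<a download="demo.gb" href="data:text/plain;base64,{0}">demo.gb</a>'.format(b64)

# Decoding the payload recovers the original text exactly.
assert base64.b64decode(b64).decode() == genbank_text
print(link)
```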
# ==== ovcfg/__init__.py | repo: jok4r/ovcfg | license: MIT ====
from .ovcfg import Config
# ==== tests/autodiff_test.py | repo: ByzanTine/AutoHOOT | license: Apache-2.0 ====
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import autodiff as ad
import backend as T
from tests.test_utils import tree_eq


def test_identity(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(x2)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val))


def test_add_by_const(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(5 + x2)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val + 5))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val))


def test_sub_by_const(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(x2 - 5)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val - 5))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val))


def test_sub_by_const_2(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(5 - x2)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(5 - x2_val))
        assert T.array_equal(grad_x2_val, -T.ones_like(x2_val))


def test_negative(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(-x2)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(-x2_val))
        assert T.array_equal(grad_x2_val, -T.ones_like(x2_val))


def test_mul_by_const(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(5 * x2)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val * 5))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val) * 5)


def test_mul_by_const_float(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3])
        y1 = ad.sum(5 * x)
        y2 = ad.sum(5.0 * x)
        assert y1.name == y2.name
        assert tree_eq(y1, y2, [x])


def test_power(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        y = ad.sum(x2**3)
        grad_x2, = ad.gradients(y, [x2])
        executor = ad.Executor([y, grad_x2])
        x2_val = 2 * T.ones(3)
        y_val, grad_x2_val = executor.run(feed_dict={x2: x2_val})
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val**3))
        assert T.array_equal(grad_x2_val, 3 * (x2_val**2))
def test_add_two_vars(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        y = ad.sum(x2 + x3)
        grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
        executor = ad.Executor([y, grad_x2, grad_x3])
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        y_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val
        })
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val + x3_val))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val))
        assert T.array_equal(grad_x3_val, T.ones_like(x3_val))


def test_sub_two_vars(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        y = ad.sum(x2 - x3)
        grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
        executor = ad.Executor([y, grad_x2, grad_x3])
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        y_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val
        })
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val - x3_val))
        assert T.array_equal(grad_x2_val, T.ones_like(x2_val))
        assert T.array_equal(grad_x3_val, -T.ones_like(x3_val))


def test_mul_two_vars(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        y = ad.sum(x2 * x3)
        grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
        executor = ad.Executor([y, grad_x2, grad_x3])
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        y_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val
        })
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x2_val * x3_val))
        assert T.array_equal(grad_x2_val, x3_val)
        assert T.array_equal(grad_x3_val, x2_val)


def test_add_mul_mix_1(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x1 = ad.Variable(name="x1", shape=[3])
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        y = ad.sum(x1 + x2 * x3 * x1)
        grad_x1, grad_x2, grad_x3 = ad.gradients(y, [x1, x2, x3])
        executor = ad.Executor([y, grad_x1, grad_x2, grad_x3])
        x1_val = 1 * T.ones(3)
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        y_val, grad_x1_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x1: x1_val,
            x2: x2_val,
            x3: x3_val
        })
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x1_val + x2_val * x3_val))
        assert T.array_equal(grad_x1_val,
                             T.ones_like(x1_val) + x2_val * x3_val)
        assert T.array_equal(grad_x2_val, x3_val * x1_val)
        assert T.array_equal(grad_x3_val, x2_val * x1_val)


def test_add_mul_mix_2(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x1 = ad.Variable(name="x1", shape=[3])
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        x4 = ad.Variable(name="x4", shape=[3])
        y = ad.sum(x1 + x2 * x3 * x4)
        grad_x1, grad_x2, grad_x3, grad_x4 = ad.gradients(y, [x1, x2, x3, x4])
        executor = ad.Executor([y, grad_x1, grad_x2, grad_x3, grad_x4])
        x1_val = 1 * T.ones(3)
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        x4_val = 4 * T.ones(3)
        y_val, grad_x1_val, grad_x2_val, grad_x3_val, grad_x4_val = executor.run(
            feed_dict={
                x1: x1_val,
                x2: x2_val,
                x3: x3_val,
                x4: x4_val
            })
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(x1_val + x2_val * x3_val * x4_val))
        assert T.array_equal(grad_x1_val, T.ones_like(x1_val))
        assert T.array_equal(grad_x2_val, x3_val * x4_val)
        assert T.array_equal(grad_x3_val, x2_val * x4_val)
        assert T.array_equal(grad_x4_val, x2_val * x3_val)


def test_add_mul_mix_3(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3])
        x3 = ad.Variable(name="x3", shape=[3])
        z = x2 * x2 + x2 + x3 + 3
        y = ad.sum(z * z + x3)
        grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
        executor = ad.Executor([y, grad_x2, grad_x3])
        x2_val = 2 * T.ones(3)
        x3_val = 3 * T.ones(3)
        y_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val
        })
        z_val = x2_val * x2_val + x2_val + x3_val + 3
        expected_yval = z_val * z_val + x3_val
        expected_grad_x2_val = 2 * \
            (x2_val * x2_val + x2_val + x3_val + 3) * (2 * x2_val + 1)
        expected_grad_x3_val = 2 * (x2_val * x2_val + x2_val + x3_val + 3) + 1
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, T.sum(expected_yval))
        assert T.array_equal(grad_x2_val, expected_grad_x2_val)
        assert T.array_equal(grad_x3_val, expected_grad_x3_val)
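The expected gradients asserted above follow from the chain rule: for `y = z*z` with `z = x*x + x + c`, the derivative with respect to `x` is `2*z*(2*x + 1)`. That can be double-checked numerically with a central finite difference, independent of the `ad` package — the helper function and the step size below are my own choices, not part of this test suite:

```python
def numeric_grad(f, x, eps=1e-6):
    """Central finite-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def y(x, c=3.0):
    # same shape as the expression in test_add_mul_mix_3, with the
    # second variable folded into the constant c
    z = x * x + x + c
    return z * z

x = 2.0
z = x * x + x + 3.0
analytic = 2 * z * (2 * x + 1)  # chain rule: dy/dx = 2*z * dz/dx
assert abs(numeric_grad(y, x) - analytic) < 1e-3
print(analytic)  # 90.0
```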
def test_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3, 2])
        x3 = ad.Variable(name="x3", shape=[2, 3])
        matmul = ad.einsum('ik,kj->ij', x2, x3)
        y = ad.sum(matmul)
        grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
        executor = ad.Executor([y, grad_x2, grad_x3])
        x2_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        x3_val = T.tensor([[7, 8, 9], [10, 11, 12]])  # 2x3
        y_val, grad_x2_val, grad_x3_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val
        })
        expected_grad_sum = T.ones_like(T.dot(x2_val, x3_val))
        expected_yval = T.sum(T.dot(x2_val, x3_val))
        expected_grad_x2_val = T.dot(expected_grad_sum, T.transpose(x3_val))
        expected_grad_x3_val = T.dot(T.transpose(x2_val), expected_grad_sum)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x2_val, expected_grad_x2_val)
        assert T.array_equal(grad_x3_val, expected_grad_x3_val)


def test_einsum_3op(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x2 = ad.Variable(name="x2", shape=[3, 2])
        x3 = ad.Variable(name="x3", shape=[2, 3])
        x4 = ad.Variable(name="x4", shape=[3, 2])
        matmul = ad.einsum('ik,kj,jl->il', x2, x3, x4)
        y = ad.sum(matmul)
        grad_x2, grad_x3, grad_x4 = ad.gradients(y, [x2, x3, x4])
        executor = ad.Executor([y, grad_x2, grad_x3, grad_x4])
        x2_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        x3_val = T.tensor([[7, 8, 9], [10, 11, 12]])  # 2x3
        x4_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        y_val, grad_x2_val, grad_x3_val, grad_x4_val = executor.run(feed_dict={
            x2: x2_val,
            x3: x3_val,
            x4: x4_val
        })
        expected_grad_sum = T.ones_like(T.dot(T.dot(x2_val, x3_val), x4_val))
        expected_yval = T.sum(T.dot(T.dot(x2_val, x3_val), x4_val))
        expected_grad_x2_val = T.einsum("il, kj, jl->ik", expected_grad_sum,
                                        x3_val, x4_val)
        expected_grad_x3_val = T.einsum("ik, il, jl->kj", x2_val,
                                        expected_grad_sum, x4_val)
        expected_grad_x4_val = T.einsum("ik, kj, il->jl", x2_val, x3_val,
                                        expected_grad_sum)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x2_val, expected_grad_x2_val)
        assert T.array_equal(grad_x3_val, expected_grad_x3_val)
        assert T.array_equal(grad_x4_val, expected_grad_x4_val)


def test_norm(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 2])
        y = ad.norm(x)
        z = y**2
        grad_x, = ad.gradients(z, [x])
        executor = ad.Executor([z, grad_x])
        x_val = T.tensor([[1., 2.], [3., 4.], [5., 6.]])  # 3x2
        z_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_zval = T.norm(x_val)**2
        expected_grad_x_val = 2 * x_val
        assert isinstance(z, ad.Node)
        assert T.array_equal(z_val, expected_zval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_sum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 2])
        y = ad.sum(x)
        grad_x, = ad.gradients(y, [x])
        executor = ad.Executor([y, grad_x])
        x_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_yval = T.sum(x_val)
        expected_grad_x_val = T.ones_like(x_val)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_transpose(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 2])
        y = ad.sum(ad.transpose(x))
        grad_x, = ad.gradients(y, [x])
        executor = ad.Executor([y, grad_x])
        x_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_yval = T.sum(T.transpose(x_val))
        expected_grad_x_val = T.ones_like(x_val)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_transpose_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 2])
        y = ad.sum(ad.einsum("ij->ji", x))
        grad_x, = ad.gradients(y, [x])
        executor = ad.Executor([y, grad_x])
        x_val = T.tensor([[1, 2], [3, 4], [5, 6]])  # 3x2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_yval = T.sum(T.transpose(x_val))
        expected_grad_x_val = T.ones_like(x_val)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_tensor_transpose_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2, 2, 2])
        y = ad.einsum("kij->jik", x)
        v = ad.Variable(name="v", shape=[2, 2, 2])
        v_val = T.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])  # 2 x 2 x 2
        grad_x, = ad.transposed_vjps(y, [x], v)
        executor = ad.Executor([y, grad_x])
        x_val = T.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])  # 2 x 2 x 2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val, v: v_val})
        expected_yval = T.einsum("kij->jik", x_val)
        expected_grad_x_val = T.einsum("kij->jik", v_val)
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_inner_product(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[1, 3])
        x_inner = ad.sum(ad.einsum("ab,bc->ac", x, ad.transpose(x)))
        grad_x, = ad.gradients(x_inner, [x])
        executor = ad.Executor([x_inner, grad_x])
        x_val = T.tensor([[3., 4.]])  # 1x2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_yval = T.norm(x_val)**2
        expected_grad_x_val = 2 * x_val
        assert isinstance(x_inner, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_inner_product_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3])
        x_inner = ad.einsum('i,i->', x, x)
        grad_x, = ad.gradients(x_inner, [x])
        executor = ad.Executor([x_inner, grad_x])
        x_val = T.tensor([3., 4.])  # 1x2
        y_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_yval = T.norm(x_val)**2
        expected_grad_x_val = 2 * x_val
        assert isinstance(x_inner, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_summation_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2, 2])
        x_sum = ad.einsum('ij->', x)
        grad_x, = ad.gradients(x_sum, [x])
        executor = ad.Executor([x_sum, grad_x])
        x_val = T.tensor([[1., 2.], [3., 4.]])
        x_sum_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_x_sum_val = T.sum(x_val)
        expected_grad_x_val = T.ones_like(x_val)
        assert T.array_equal(x_sum_val, expected_x_sum_val)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_summation_einsum_2(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2, 2])
        y = ad.Variable(name="y", shape=[2, 2])
        out = ad.sum(ad.einsum('ij,ab->ab', x, y))
        grad_x, = ad.gradients(out, [x])
        executor = ad.Executor([out, grad_x])
        x_val = T.tensor([[1., 2.], [3., 4.]])
        y_val = T.tensor([[5., 6.], [7., 8.]])
        out_val, grad_x_val = executor.run(feed_dict={x: x_val, y: y_val})
        expected_out_val = T.sum(T.einsum('ij,ab->ab', x_val, y_val))
        expected_grad_x_val = T.sum(y_val) * T.ones_like(x_val)
        assert T.array_equal(out_val, expected_out_val)
        assert T.array_equal(grad_x_val, expected_grad_x_val)


def test_trace_einsum(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2, 2])
        trace = ad.einsum('ii->', x)
        grad_x, = ad.gradients(trace, [x])
        executor = ad.Executor([trace, grad_x])
        x_val = T.tensor([[1., 2.], [3., 4.]])
        trace_val, grad_x_val = executor.run(feed_dict={x: x_val})
        expected_trace_val = T.einsum('ii->', x_val)
        expected_grad_x_val = T.identity(2)
        assert T.array_equal(trace_val, expected_trace_val)
        assert T.array_equal(grad_x_val, expected_grad_x_val)
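`test_trace_einsum` asserts that the gradient of `trace(X)` is the identity matrix, which follows element-wise from `trace(X) = sum_i X[i][i]`: the derivative with respect to entry `X[a][b]` is 1 when `a == b` and 0 otherwise. A dependency-free finite-difference check on a 2x2 matrix — the helper names and step size are my own:

```python
def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def numeric_dtrace(m, a, b, eps=1e-6):
    """Finite difference of trace(m) with respect to entry m[a][b]."""
    bumped = [row[:] for row in m]
    bumped[a][b] += eps
    return (trace(bumped) - trace(m)) / eps

m = [[1.0, 2.0], [3.0, 4.0]]
grad = [[round(numeric_dtrace(m, a, b), 6) for b in range(2)] for a in range(2)]
print(grad)  # [[1.0, 0.0], [0.0, 1.0]] -- the identity, as the test asserts
```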
def test_vjps(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2])
        A = ad.Variable(name="A", shape=[3, 2])
        v = ad.Variable(name="v", shape=[3])
        y = ad.einsum('ab, b->a', A, x)
        transposed_vjp_x, = ad.transposed_vjps(y, [x], v)
        executor = ad.Executor([y, transposed_vjp_x])
        x_val = T.tensor([1., 2.])  # 1x3
        A_val = T.tensor([[1., 2.], [3., 4.], [5, 6]])
        v_val = T.tensor([1., 2., 3.])
        y_val, transposed_vjp_x_val = executor.run(feed_dict={
            x: x_val,
            A: A_val,
            v: v_val
        })
        expected_yval = T.einsum('ab, b->a', A_val, x_val)
        expected_transposed_vjp_x_val = T.einsum('b, ba->a', v_val, A_val)
        assert isinstance(transposed_vjp_x, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(transposed_vjp_x_val,
                             expected_transposed_vjp_x_val)


def test_jvps(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x1 = ad.Variable(name="x1", shape=[2])
        A1 = ad.Variable(name="A1", shape=[3, 2])
        x2 = ad.Variable(name="x2", shape=[2])
        A2 = ad.Variable(name="A2", shape=[3, 2])
        v1 = ad.Variable(name="v1", shape=[2])
        v2 = ad.Variable(name="v2", shape=[2])
        y = ad.einsum('ab, b->a', A1, x1) + ad.einsum('ab, b->a', A2, x2)
        transposed_vjp_x = ad.jvps(y, [x1, x2], [v1, v2])
        executor = ad.Executor([y, transposed_vjp_x])
        x1_val = T.tensor([1., 2.])
        A1_val = T.tensor([[1., 2.], [3., 4.], [5, 6]])
        v1_val = T.tensor([3., 4.])
        x2_val = T.tensor([1., 2.])
        A2_val = T.tensor([[1., 2.], [3., 4.], [5, 6]])
        v2_val = T.tensor([3., 4.])
        y_val, transposed_vjp_x_val = executor.run(feed_dict={
            x1: x1_val,
            A1: A1_val,
            v1: v1_val,
            x2: x2_val,
            A2: A2_val,
            v2: v2_val
        })
        expected_yval = T.einsum('ab, b->a', A1_val, x1_val) + T.einsum(
            'ab, b->a', A2_val, x2_val)
        expected_transposed_vjp_x_val = T.einsum(
            'ab, b->a', A1_val, v1_val) + T.einsum('ab, b->a', A2_val, v2_val)
        assert isinstance(transposed_vjp_x, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(transposed_vjp_x_val,
                             expected_transposed_vjp_x_val)


def test_jtjvps(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[2])
        A = ad.Variable(name="A", shape=[3, 2])
        v = ad.Variable(name="v", shape=[2])
        y = ad.einsum('ab, b->a', A, x)
        jtjvp_x, = ad.jtjvps(y, [x], [v])
        executor = ad.Executor([y, jtjvp_x])
        x_val = T.tensor([1., 2.])
        A_val = T.tensor([[1., 2.], [3., 4.], [5, 6]])
        v_val = T.tensor([3., 4.])
        y_val, jtjvp_x_val = executor.run(feed_dict={
            x: x_val,
            A: A_val,
            v: v_val
        })
        expected_yval = T.einsum('ab, b->a', A_val, x_val)
        expected_jtjvp_x_val = T.einsum('ba, ac->bc', T.transpose(A_val),
                                        A_val)
        expected_jtjvp_x_val = T.einsum('ab, b->a', expected_jtjvp_x_val,
                                        v_val)
        assert isinstance(jtjvp_x, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(jtjvp_x_val, expected_jtjvp_x_val)


def test_inner_product_hvp(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 1])
        v = ad.Variable(name="v", shape=[3, 1])
        y = ad.sum(ad.einsum("ab,bc->ac", ad.transpose(x), x))
        grad_x, = ad.gradients(y, [x])
        Hv, = ad.hvp(output_node=y, node_list=[x], vector_list=[v])
        executor = ad.Executor([y, grad_x, Hv])
        x_val = T.tensor([[1.], [2.], [3]])  # 3x1
        v_val = T.tensor([[1.], [2.], [3]])  # 3x1
        y_val, grad_x_val, Hv_val = executor.run(feed_dict={
            x: x_val,
            v: v_val
        })
        expected_yval = T.sum(T.transpose(x_val) @ x_val)
        expected_grad_x_val = 2 * x_val
        expected_hv_val = 2 * v_val
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)
        assert T.array_equal(Hv_val, expected_hv_val)


def test_hvp1(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 1])
        H = ad.Variable(name="H", shape=[3, 3])
        v = ad.Variable(name="v", shape=[3, 1])
        y = ad.sum(x * ad.einsum("ab,bc->ac", H, x))
        grad_x, = ad.gradients(y, [x])
        Hv, = ad.hvp(output_node=y, node_list=[x], vector_list=[v])
        executor = ad.Executor([y, grad_x, Hv])
        x_val = T.tensor([[1.], [2.], [3]])  # 3x1
        v_val = T.tensor([[1.], [2.], [3]])  # 3x1
        H_val = T.tensor([[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])  # 3x3
        y_val, grad_x_val, Hv_val = executor.run(feed_dict={
            x: x_val,
            H: H_val,
            v: v_val
        })
        expected_yval = T.transpose(x_val) @ H_val @ x_val
        expected_grad_x_val = 2 * H_val @ x_val
        expected_hv_val = T.tensor([[4.], [8.], [12.]])
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval[0][0])
        assert T.array_equal(grad_x_val, expected_grad_x_val)
        assert T.array_equal(Hv_val, expected_hv_val)


def test_hvp2(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 1])
        H = ad.Variable(name="H", shape=[3, 3])
        v = ad.Variable(name="v", shape=[3, 1])
        y = ad.sum(
            ad.einsum("ab,bc->ac", ad.einsum("ab,bc->ac", ad.transpose(x), H),
                      x))
        grad_x, = ad.gradients(y, [x])
        Hv, = ad.hvp(output_node=y, node_list=[x], vector_list=[v])
        executor = ad.Executor([y, grad_x, Hv])
        x_val = T.tensor([[1.], [2.], [3]])  # 3x1
        v_val = T.tensor([[1.], [2.], [3]])  # 3x1
        H_val = T.tensor([[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])  # 3x3
        y_val, grad_x_val, Hv_val = executor.run(feed_dict={
            x: x_val,
            H: H_val,
            v: v_val
        })
        expected_yval = T.sum(T.transpose(x_val) @ H_val @ x_val)
        expected_grad_x_val = 2 * H_val @ x_val
        expected_hv_val = T.tensor([[4.], [8.], [12.]])
        assert isinstance(y, ad.Node)
        assert T.array_equal(y_val, expected_yval)
        assert T.array_equal(grad_x_val, expected_grad_x_val)
        assert T.array_equal(Hv_val, expected_hv_val)


def test_tensorinv_matrix(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 3])
        inv_x = ad.tensorinv(x)
        executor = ad.Executor([inv_x])
        x_val = T.random([3, 3])
        inv_x_val, = executor.run(feed_dict={x: x_val})
        assert T.array_equal(inv_x_val, T.inv(x_val))


def test_tensorinv_tensor(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[3, 2, 3, 2])
        inv_x = ad.tensorinv(x)
        executor = ad.Executor([inv_x])
        x_val = T.random([3, 2, 3, 2])
        inv_x_val, = executor.run(feed_dict={x: x_val})
        assert T.array_equal(inv_x_val, T.tensorinv(x_val))


def test_tensorinv_odd_dim(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        x = ad.Variable(name="x", shape=[24, 8, 3])
        inv_x = ad.tensorinv(x, ind=1)
        assert inv_x.shape == [8, 3, 24]
        assert inv_x.input_indices_length == 2
        executor = ad.Executor([inv_x])
        x_val = T.random([24, 8, 3])
        inv_x_val, = executor.run(feed_dict={x: x_val})
        assert T.array_equal(inv_x_val, T.tensorinv(x_val, ind=1))


def test_tensordot(backendopt):
    for datatype in backendopt:
        T.set_backend(datatype)
        a = ad.Variable(name="a", shape=[3, 3, 3, 3])
        b = ad.Variable(name="b", shape=[3, 3, 3, 3])
        result = ad.tensordot(a, b, axes=[[1, 3], [0, 1]])
        result2 = ad.einsum("abcd,bdef->acef", a, b)
        assert tree_eq(result, result2, [a, b])
# ==== data/__init__.py | repo: ShiQiu0419/DRNet | license: MIT ====
# from .ModelNet40Loader import ModelNet40Cls
from .ShapeNetPartLoader import ShapeNetPart
# from .Indoor3DSemSegLoader import Indoor3DSemSeg
# ==== quotas/admin.py | repo: msenoville/hbp_neuromorphic_platform | license: Apache-2.0 ====
from django.contrib import admin
from .models import Project, Quota, Review, ProjectMember
admin.site.register(Quota)
admin.site.register(Project)
admin.site.register(Review)
admin.site.register(ProjectMember)
# ==== test.py | repo: xzx482/captcha_identify.pytorch_fork | license: MIT ====
# -*- coding: UTF-8 -*-
import numpy as np
import torch
from torch.autograd import Variable
import settings
import datasets
from models import *
import one_hot_encoding
import argparse
import torch_util
import os
from tqdm import *

# os.environ["CUDA_VISIBLE_DEVICES"] = "1"
device = torch.device("cpu")


def main(model_path):
    cnn = CNN()
    cnn.eval()
    cnn.load_state_dict(torch.load(model_path, map_location=device))
    print("load cnn net.")
    test_dataloader = datasets.get_test_data_loader()
    correct = 0
    total = 0
    pBar = tqdm(total=test_dataloader.__len__())
    for i, (images, labels) in enumerate(test_dataloader):
        pBar.update(1)
        image = images
        vimage = Variable(image)
        predict_label = cnn(vimage)
        c0 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 0:settings.ALL_CHAR_SET_LEN].data.numpy())]
        c1 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, settings.ALL_CHAR_SET_LEN:2 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        c2 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 2 * settings.ALL_CHAR_SET_LEN:3 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        c3 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 3 * settings.ALL_CHAR_SET_LEN:4 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        predict_label = '%s%s%s%s' % (c0, c1, c2, c3)
        true_label = one_hot_encoding.decode(labels.numpy()[0])
        total += labels.size(0)
        if predict_label == true_label:
            correct += 1
        # if total % 200 == 0:
        #     print('Test Accuracy of the model on the %d test images: %f %%' % (total, 100 * correct / total))
    print('Test Accuracy of the model on the %d test images: %f %%' % (total, 100 * correct / total))


def test_data(model_path):
    cnn = CNN()
    cnn.eval()
    cnn.load_state_dict(torch.load(model_path, map_location=device))
    test_dataloader = datasets.get_test_data_loader()
    correct = 0
    total = 0
    for i, (images, labels) in enumerate(test_dataloader):
        image = images
        vimage = Variable(image)
        predict_label = cnn(vimage)
        c0 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 0:settings.ALL_CHAR_SET_LEN].data.numpy())]
        c1 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, settings.ALL_CHAR_SET_LEN:2 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        c2 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 2 * settings.ALL_CHAR_SET_LEN:3 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        c3 = settings.ALL_CHAR_SET[np.argmax(predict_label[0, 3 * settings.ALL_CHAR_SET_LEN:4 * settings.ALL_CHAR_SET_LEN].data.numpy())]
        predict_label = '%s%s%s%s' % (c0, c1, c2, c3)
        true_label = one_hot_encoding.decode(labels.numpy()[0])
        total += labels.size(0)
        if predict_label == true_label:
            correct += 1
        # if total % 200 == 0:
        #     print('Test Accuracy of the model on the %d test images: %f %%' % (total, 100 * correct / total))
    return 100 * correct / total


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="test path")
    parser.add_argument('--model-path', type=str, default="weights/cnn_1.pt")
    args = parser.parse_args()
    main(args.model_path)
| 37.842697 | 137 | 0.66924 | 499 | 3,368 | 4.270541 | 0.200401 | 0.113562 | 0.154857 | 0.185828 | 0.778508 | 0.778508 | 0.778508 | 0.778508 | 0.740028 | 0.740028 | 0 | 0.027067 | 0.199228 | 3,368 | 88 | 138 | 38.272727 | 0.76307 | 0.11639 | 0 | 0.634921 | 0 | 0 | 0.044504 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.190476 | 0 | 0.238095 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2c9b3dfa295c42f3fb88316d169f08e5b78c5459 | 16,110 | py | Python | tests/test_inputs_searches.py | olga-clarifai/clarifai-python-grpc | c1d45ea965f781de5ccf682b142049c7628d0480 | [
"Apache-2.0"
] | 44 | 2020-01-30T16:14:06.000Z | 2022-03-21T16:00:48.000Z | tests/test_inputs_searches.py | olga-clarifai/clarifai-python-grpc | c1d45ea965f781de5ccf682b142049c7628d0480 | [
"Apache-2.0"
] | 13 | 2020-04-21T05:42:26.000Z | 2022-03-23T14:50:51.000Z | tests/test_inputs_searches.py | olga-clarifai/clarifai-python-grpc | c1d45ea965f781de5ccf682b142049c7628d0480 | [
"Apache-2.0"
] | 11 | 2020-01-30T16:14:10.000Z | 2022-02-16T12:07:12.000Z | import urllib.request
import uuid
from google.protobuf import struct_pb2
from clarifai_grpc.grpc.api import service_pb2_grpc, service_pb2, resources_pb2
from clarifai_grpc.grpc.api.resources_pb2 import (
    Search,
    Query,
    Rank,
    Annotation,
    Data,
    Concept,
    Filter,
    Image,
)
from clarifai_grpc.grpc.api.service_pb2 import PostInputsSearchesRequest
from tests.common import (
    both_channels,
    metadata,
    raise_on_failure,
    DOG_IMAGE_URL,
    wait_for_inputs_upload,
)
@both_channels
def test_search_by_custom_concept_id(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        concept_id = input_.data.concepts[0].id

        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            filters=[
                                Filter(
                                    annotation=Annotation(
                                        data=Data(concepts=[Concept(id=concept_id, value=1)])
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert input_.id in [hit.input.id for hit in response.hits]


@both_channels
def test_search_by_custom_concept_name(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        concept_name = input_.data.concepts[0].name

        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            filters=[
                                Filter(
                                    annotation=Annotation(
                                        data=Data(concepts=[Concept(name=concept_name, value=1)])
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert input_.id in [hit.input.id for hit in response.hits]
@both_channels
def test_search_by_predicted_concept_id(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(
                                        # The ID of the "dog" concept in clarifai/main
                                        data=Data(concepts=[Concept(id="ai_8S2Vq3cR", value=1)])
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]


@both_channels
def test_search_by_predicted_concept_name(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(
                                        data=Data(concepts=[Concept(name="dog", value=1)])
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]
@both_channels
def test_search_by_predicted_concept_name_in_chinese(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(
                                        data=Data(concepts=[Concept(name="狗", value=1)])
                                    )
                                )
                            ],
                            language="zh",
                        ),
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]


@both_channels
def test_search_by_image_url(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(
                                        data=Data(image=Image(url=DOG_IMAGE_URL))
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]
@both_channels
def test_search_by_image_bytes(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    http_response = urllib.request.urlopen(DOG_IMAGE_URL)
    url_bytes = http_response.read()

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(data=Data(image=Image(base64=url_bytes)))
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]


@both_channels
def test_search_by_metadata(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    search_metadata = struct_pb2.Struct()
    search_metadata.update({"another-key": {"inner-key": "inner-value"}})

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(annotation=Annotation(data=Data(metadata=search_metadata)))
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]
@both_channels
def test_search_by_geo_point_and_limit(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            filters=[
                                Filter(
                                    annotation=Annotation(
                                        data=Data(
                                            geo=resources_pb2.Geo(
                                                geo_point=resources_pb2.GeoPoint(
                                                    longitude=43, latitude=56
                                                ),
                                                geo_limit=resources_pb2.GeoLimit(
                                                    value=1000, type="withinKilometers"
                                                ),
                                            )
                                        )
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]


@both_channels
def test_search_by_geo_box(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            filters=[
                                Filter(
                                    annotation=Annotation(
                                        data=Data(
                                            geo=resources_pb2.Geo(
                                                geo_box=[
                                                    resources_pb2.GeoBoxedPoint(
                                                        geo_point=resources_pb2.GeoPoint(
                                                            longitude=43, latitude=54
                                                        )
                                                    ),
                                                    resources_pb2.GeoBoxedPoint(
                                                        geo_point=resources_pb2.GeoPoint(
                                                            longitude=45, latitude=56
                                                        )
                                                    ),
                                                ]
                                            )
                                        )
                                    )
                                )
                            ]
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]
@both_channels
def test_search_by_image_url_and_geo_box(channel):
    stub = service_pb2_grpc.V2Stub(channel)

    with SetupImage(stub) as input_:
        response = stub.PostInputsSearches(
            PostInputsSearchesRequest(
                searches=[
                    Search(
                        query=Query(
                            ranks=[
                                Rank(
                                    annotation=Annotation(
                                        data=Data(image=Image(url=DOG_IMAGE_URL))
                                    )
                                ),
                            ],
                            filters=[
                                Filter(
                                    annotation=Annotation(
                                        data=Data(
                                            geo=resources_pb2.Geo(
                                                geo_box=[
                                                    resources_pb2.GeoBoxedPoint(
                                                        geo_point=resources_pb2.GeoPoint(
                                                            longitude=43, latitude=54
                                                        )
                                                    ),
                                                    resources_pb2.GeoBoxedPoint(
                                                        geo_point=resources_pb2.GeoPoint(
                                                            longitude=45, latitude=56
                                                        )
                                                    ),
                                                ]
                                            )
                                        )
                                    )
                                ),
                            ],
                        )
                    )
                ],
                pagination=service_pb2.Pagination(page=1, per_page=1000),
            ),
            metadata=metadata(),
        )
        raise_on_failure(response)

        assert len(response.hits) >= 1
        assert input_.id in [hit.input.id for hit in response.hits]
class SetupImage:
    def __init__(self, stub: service_pb2_grpc.V2Stub) -> None:
        self._stub = stub

    def __enter__(self) -> resources_pb2.Input:
        my_concept_id = "my-concept-id-" + uuid.uuid4().hex[:15]
        my_concept_name = "my concept name " + uuid.uuid4().hex[:15]

        image_metadata = struct_pb2.Struct()
        image_metadata.update(
            {"some-key": "some-value", "another-key": {"inner-key": "inner-value"}}
        )

        post_response = self._stub.PostInputs(
            service_pb2.PostInputsRequest(
                inputs=[
                    resources_pb2.Input(
                        data=resources_pb2.Data(
                            image=resources_pb2.Image(url=DOG_IMAGE_URL, allow_duplicate_url=True),
                            concepts=[
                                resources_pb2.Concept(
                                    id=my_concept_id, name=my_concept_name, value=1
                                )
                            ],
                            metadata=image_metadata,
                            geo=resources_pb2.Geo(
                                geo_point=resources_pb2.GeoPoint(longitude=44, latitude=55)
                            ),
                        ),
                    )
                ]
            ),
            metadata=metadata(),
        )
        raise_on_failure(post_response)

        self._input = post_response.inputs[0]
        wait_for_inputs_upload(self._stub, metadata(), [self._input.id])
        return self._input

    def __exit__(self, type_, value, traceback) -> None:
        delete_response = self._stub.DeleteInput(
            service_pb2.DeleteInputRequest(input_id=self._input.id), metadata=metadata()
        )
        raise_on_failure(delete_response)
| 36.613636 | 99 | 0.407697 | 1,155 | 16,110 | 5.447619 | 0.109957 | 0.044501 | 0.033376 | 0.048951 | 0.79323 | 0.762556 | 0.745391 | 0.732676 | 0.724889 | 0.724889 | 0 | 0.023101 | 0.524395 | 16,110 | 439 | 100 | 36.697039 | 0.798094 | 0.002731 | 0 | 0.603053 | 0 | 0 | 0.008902 | 0 | 0 | 0 | 0 | 0 | 0.050891 | 1 | 0.035623 | false | 0 | 0.017812 | 0 | 0.058524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2cc03faf8d4ec7fc6507944e4fe16c8e41f65a6a | 196 | py | Python | modelsProject/modelsApp/admin.py | cs-fullstack-2019-spring/django-models3-cw-PorcheWooten | c25fe7420f7f0586cbaccd2a25237651f7a69827 | [
"Apache-2.0"
] | null | null | null | modelsProject/modelsApp/admin.py | cs-fullstack-2019-spring/django-models3-cw-PorcheWooten | c25fe7420f7f0586cbaccd2a25237651f7a69827 | [
"Apache-2.0"
] | null | null | null | modelsProject/modelsApp/admin.py | cs-fullstack-2019-spring/django-models3-cw-PorcheWooten | c25fe7420f7f0586cbaccd2a25237651f7a69827 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import Book
from .models import Car

admin.site.register(Book)
admin.site.register(Car) | 21.777778 | 32 | 0.806122 | 30 | 196 | 5.266667 | 0.4 | 0.126582 | 0.21519 | 0.291139 | 0.35443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 196 | 9 | 33 | 21.777778 | 0.918605 | 0.132653 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2cc91ae3fe91c308b66433c84bb5b0aaed08337c | 9,587 | py | Python | src/backend/api/handlers/tests/update_event_rankings_test.py | bovlb/the-blue-alliance | 29389649d96fe060688f218d463e642dcebfd6cc | [
"MIT"
] | 266 | 2015-01-04T00:10:48.000Z | 2022-03-28T18:42:05.000Z | src/backend/api/handlers/tests/update_event_rankings_test.py | bovlb/the-blue-alliance | 29389649d96fe060688f218d463e642dcebfd6cc | [
"MIT"
] | 2,673 | 2015-01-01T20:14:33.000Z | 2022-03-31T18:17:16.000Z | src/backend/api/handlers/tests/update_event_rankings_test.py | bovlb/the-blue-alliance | 29389649d96fe060688f218d463e642dcebfd6cc | [
"MIT"
] | 230 | 2015-01-04T00:10:48.000Z | 2022-03-26T18:12:04.000Z | import json
from typing import Dict, List, Optional

from google.appengine.ext import ndb
from werkzeug.test import Client

from backend.api.trusted_api_auth_helper import TrustedApiAuthHelper
from backend.common.consts.auth_type import AuthType
from backend.common.consts.event_type import EventType
from backend.common.models.api_auth_access import ApiAuthAccess
from backend.common.models.event import Event

AUTH_ID = "tEsT_id_0"
AUTH_SECRET = "321tEsTsEcReT"
REQUEST_PATH = "/api/trusted/v1/event/2014casj/rankings/update"


def setup_event(remap_teams: Optional[Dict[str, str]] = None) -> None:
    Event(
        id="2014casj",
        year=2014,
        event_short="casj",
        event_type_enum=EventType.OFFSEASON,
        remap_teams=remap_teams,
    ).put()


def setup_auth(access_types: List[AuthType]) -> None:
    ApiAuthAccess(
        id=AUTH_ID,
        secret=AUTH_SECRET,
        event_list=[ndb.Key(Event, "2014casj")],
        auth_types_enum=access_types,
    ).put()


def get_auth_headers(request_path: str, request_body) -> Dict[str, str]:
    return {
        "X-TBA-Auth-Id": AUTH_ID,
        "X-TBA-Auth-Sig": TrustedApiAuthHelper.compute_auth_signature(
            AUTH_SECRET, request_path, request_body
        ),
    }
def test_bad_event_key(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    resp = api_client.post(
        "/api/trusted/v1/event/asdf/rankings/update", data=json.dumps({})
    )
    assert resp.status_code == 404


def test_bad_event(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    resp = api_client.post(
        "/api/trusted/v1/event/2015casj/rankings/update", data=json.dumps({})
    )
    assert resp.status_code == 404


def test_bad_auth_type(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_INFO])

    resp = api_client.post(
        "/api/trusted/v1/event/2014casj/rankings/update", data=json.dumps({})
    )
    assert resp.status_code == 401


def test_no_auth(api_client: Client) -> None:
    setup_event()

    request_body = json.dumps([])
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 401


def test_bad_json(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = "abcd"
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_payload_type(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps([])
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_breakdowns(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps({"breakdowns": "foo", "rankings": []})
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_rankings(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps({"breakdowns": [], "rankings": "foo"})
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_ranking_type(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps({"breakdowns": [], "rankings": ["foo"]})
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_team_key(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps({"breakdowns": [], "rankings": [{"team_key": "foo"}]})
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400


def test_bad_rank(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    request_body = json.dumps(
        {"breakdowns": [], "rankings": [{"team_key": "frc254", "rank": "foo"}]}
    )
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 400
def test_rankings_update(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    rankings = {
        "breakdowns": ["QS", "Auton", "Teleop", "T&C"],
        "rankings": [
            {
                "team_key": "frc254",
                "rank": 1,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
            {
                "team_key": "frc971",
                "rank": 2,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
        ],
    }

    request_body = json.dumps(rankings)
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 200

    event: Optional[Event] = Event.get_by_id("2014casj")
    assert event is not None

    event_rankings = event.rankings
    assert event_rankings is not None
    assert event_rankings[0] == {
        "rank": 1,
        "team_key": "frc254",
        "record": {"wins": 0, "losses": 0, "ties": 0},
        "qual_average": None,
        "matches_played": 10,
        "dq": 0,
        "sort_orders": [20.0, 500.0, 500.0, 200.0],
    }


def test_rankings_wlt_update(api_client: Client) -> None:
    setup_event()
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    rankings = {
        "breakdowns": ["QS", "Auton", "Teleop", "T&C", "wins", "losses", "ties"],
        "rankings": [
            {
                "team_key": "frc254",
                "rank": 1,
                "wins": 10,
                "losses": 0,
                "ties": 0,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
            {
                "team_key": "frc971",
                "rank": 2,
                "wins": 10,
                "losses": 0,
                "ties": 0,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
        ],
    }

    request_body = json.dumps(rankings)
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 200

    event: Optional[Event] = Event.get_by_id("2014casj")
    assert event is not None

    event_rankings = event.rankings
    assert event_rankings is not None
    assert event_rankings[0] == {
        "rank": 1,
        "team_key": "frc254",
        "record": {"wins": 10, "losses": 0, "ties": 0},
        "qual_average": None,
        "matches_played": 10,
        "dq": 0,
        "sort_orders": [20.0, 500.0, 500.0, 200.0],
    }


def test_rankings_update_remapteams(api_client: Client) -> None:
    setup_event(remap_teams={"frc9000": "frc254B"})
    setup_auth(access_types=[AuthType.EVENT_RANKINGS])

    rankings = {
        "breakdowns": ["QS", "Auton", "Teleop", "T&C"],
        "rankings": [
            {
                "team_key": "frc254",
                "rank": 1,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
            {
                "team_key": "frc9000",
                "rank": 2,
                "played": 10,
                "dqs": 0,
                "QS": 20,
                "Auton": 500,
                "Teleop": 500,
                "T&C": 200,
            },
        ],
    }

    request_body = json.dumps(rankings)
    response = api_client.post(
        REQUEST_PATH,
        headers=get_auth_headers(REQUEST_PATH, request_body),
        data=request_body,
    )
    assert response.status_code == 200

    event: Optional[Event] = Event.get_by_id("2014casj")
    assert event is not None

    event_rankings = event.rankings
    assert event_rankings is not None
    assert event_rankings[1] == {
        "rank": 2,
        "team_key": "frc254B",
        "record": {"wins": 0, "losses": 0, "ties": 0},
        "qual_average": None,
        "matches_played": 10,
        "dq": 0,
        "sort_orders": [20.0, 500.0, 500.0, 200.0],
    }
| 27.869186 | 84 | 0.573381 | 1,092 | 9,587 | 4.777473 | 0.108059 | 0.073797 | 0.040253 | 0.053671 | 0.810427 | 0.799885 | 0.7834 | 0.774391 | 0.767874 | 0.758865 | 0 | 0.04333 | 0.297069 | 9,587 | 343 | 85 | 27.950437 | 0.730821 | 0 | 0 | 0.657343 | 0 | 0 | 0.110253 | 0.018775 | 0 | 0 | 0 | 0 | 0.08042 | 1 | 0.059441 | false | 0 | 0.031469 | 0.003497 | 0.094406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ccf36f836bc4080554dfe73f7ab1f489e130bd1 | 6,094 | py | Python | tests/test_schema_validation.py | ThiefMaster/cern-search | fb8adef358dad5267ed36e771adb94f2ccac28c2 | [
"MIT"
] | null | null | null | tests/test_schema_validation.py | ThiefMaster/cern-search | fb8adef358dad5267ed36e771adb94f2ccac28c2 | [
"MIT"
] | null | null | null | tests/test_schema_validation.py | ThiefMaster/cern-search | fb8adef358dad5267ed36e771adb94f2ccac28c2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# This file is part of CERN Search.
# Copyright (C) 2018-2019 CERN.
#
# CERN Search is free software; you can redistribute it and/or modify it
# under the terms of the MIT License; see LICENSE file for more details.
import json
import pytest
import requests
HEADERS = {
    "Accept": "application/json",
    "Content-Type": "application/json; charset=utf-8",
    "Authorization": ''
}
@pytest.mark.unit
def test_control_number_update(endpoint, api_key):
    HEADERS['Authorization'] = 'Bearer {credentials}'.format(credentials=api_key)

    body = {
        "_access": {
            "owner": ["CernSearch-Administrators@cern.ch"],
            "update": ["CernSearch-Administrators@cern.ch"],
            "delete": ["CernSearch-Administrators@cern.ch"]
        },
        "_data": {
            "title": "test_control_number_update",
            "description": "Not updated document"
        }
    }

    # Create test record
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 201
    orig_record = resp.json()['metadata']

    # Update without control_number
    body["_data"]['description'] = 'Update with no control number'
    resp = requests.put('{endpoint}/api/record/{control_number}'.format(
                            endpoint=endpoint,
                            control_number=orig_record['control_number']),
                        headers=HEADERS, data=json.dumps(body))
    put_record = resp.json()['metadata']
    assert resp.status_code == 200
    assert put_record.get('control_number') is not None
    assert put_record.get('control_number') == orig_record['control_number']
    assert put_record["_data"]['description'] == body["_data"]['description']

    # Update with a wrong control_number
    body["_data"]['description'] = 'Update with wrong control number'
    resp = requests.put('{endpoint}/api/record/{control_number}'.format(
                            endpoint=endpoint,
                            control_number=orig_record['control_number']),
                        headers=HEADERS, data=json.dumps(body))
    put_record = resp.json()['metadata']
    assert resp.status_code == 200
    assert put_record.get('control_number') is not None
    assert put_record.get('control_number') == orig_record['control_number']
    assert put_record["_data"]['description'] == body["_data"]['description']

    # Delete test record
    resp = requests.delete('{endpoint}/api/record/{control_number}'.format(
                               endpoint=endpoint,
                               control_number=orig_record['control_number']),
                           headers=HEADERS)
    assert resp.status_code == 204
@pytest.mark.unit
def test_access_fields_existence(endpoint, api_key):
    HEADERS['Authorization'] = 'Bearer {credentials}'.format(credentials=api_key)

    # POST and PUT should follow the same workflow. Only checking POST.
    # Without _access field
    body = {
        "_data": {
            "title": "test_access_fields_existence",
            "description": "No _access field"
        }
    }
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 400
    assert {"field": "_schema", "message": "Missing field _access"} in resp.json()['errors']

    # Without _access.delete field
    body = {
        "_access": {
            "owner": ["CernSearch-Administrators@cern.ch"],
            "update": ["CernSearch-Administrators@cern.ch"]
        },
        "_data": {
            "title": "test_access_fields_existence",
            "description": "No _access.delete field"
        }
    }
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 400
    assert {"field": "_schema", "message": "Missing or wrong type (not an array) in field _access.delete"} in resp.json()['errors']

    # Without _access.update field
    body = {
        "_access": {
            "owner": ["CernSearch-Administrators@cern.ch"],
            "delete": ["CernSearch-Administrators@cern.ch"]
        },
        "_data": {
            "title": "test_access_fields_existence",
            "description": "No _access.update field"
        }
    }
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 400
    assert {"field": "_schema", "message": "Missing or wrong type (not an array) in field _access.update"} in resp.json()['errors']

    # Without _access.owner field
    body = {
        "_access": {
            "update": ["CernSearch-Administrators@cern.ch"],
            "delete": ["CernSearch-Administrators@cern.ch"]
        },
        "_data": {
            "title": "test_access_fields_existence",
            "description": "No _access.owner field"
        }
    }
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 400
    assert {"field": "_schema", "message": "Missing or wrong type (not an array) in field _access.owner"} in resp.json()['errors']


@pytest.mark.unit
def test_data_field_existence(endpoint, api_key):
    HEADERS['Authorization'] = 'Bearer {credentials}'.format(credentials=api_key)

    # Create test record without _data field
    body = {
        "_access": {
            "owner": ["CernSearch-Administrators@cern.ch"],
            "update": ["CernSearch-Administrators@cern.ch"],
            "delete": ["CernSearch-Administrators@cern.ch"]
        },
        "title": "test_access_fields_existence",
        "description": "No _access field"
    }
    resp = requests.post('{endpoint}/api/records/'.format(endpoint=endpoint),
                         headers=HEADERS, data=json.dumps(body))
    assert resp.status_code == 400
    assert {"field": "_schema", "message": "Missing field _data"} in resp.json()['errors']
| 35.637427 | 131 | 0.621595 | 661 | 6,094 | 5.565809 | 0.163389 | 0.074205 | 0.091329 | 0.097853 | 0.806741 | 0.786899 | 0.763251 | 0.740419 | 0.734982 | 0.733895 | 0 | 0.007959 | 0.237118 | 6,094 | 170 | 132 | 35.847059 | 0.783394 | 0.089104 | 0 | 0.631148 | 0 | 0 | 0.358937 | 0.147117 | 0 | 0 | 0 | 0 | 0.163934 | 1 | 0.02459 | false | 0 | 0.02459 | 0 | 0.04918 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |