'''
Created on 2015/7/20
:author: hubo
'''
from __future__ import print_function
import unittest
from vlcp.protocol.openflow import common, openflow10, openflow13, openflow14
from namedstruct import nstruct, dump
import json
import vlcp.utils.ethernet as ethernet
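The testDefs* methods below all repeat the same round-trip check: build a default instance, serialize it with `_tobytes()`, reparse the bytes, verify the parser consumed the whole buffer, and compare `dump()` output. A minimal self-contained sketch of that pattern, using a hypothetical `TLV` stub in place of an nstruct definition (the stub and the `roundtrip` helper are illustrative, not part of vlcp):

```python
import struct

class TLV:
    """Toy type/length/value record standing in for an nstruct definition."""
    def __init__(self, type_=0, value=b''):
        self.type = type_
        self.value = value

    def _tobytes(self):
        # 2-byte type, 2-byte total length (header included), then the value
        return struct.pack('>HH', self.type, 4 + len(self.value)) + self.value

    @classmethod
    def parse(cls, data):
        if len(data) < 4:
            return None          # not enough bytes for a header
        type_, length = struct.unpack_from('>HH', data)
        if len(data) < length:
            return None          # truncated value
        return cls(type_, data[4:length]), length

def roundtrip(obj, parser):
    """Serialize obj, reparse it, and check the parser consumed everything."""
    data = obj._tobytes()
    result = parser(data)
    assert result is not None, 'failed to parse'
    reparsed, size = result
    assert size == len(data), 'parser did not consume the whole buffer'
    return reparsed

rec = roundtrip(TLV(1, b'abc'), TLV.parse)
```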
class Test(unittest.TestCase):
exclude = [common.ofp_error_experimenter_msg,
openflow13.ofp_action_set_field, openflow14.ofp_action_set_field,
openflow10.nx_flow_mod_spec, openflow13.nx_flow_mod_spec, openflow14.nx_flow_mod_spec,
openflow10.nx_matches, openflow13.nx_matches, openflow14.nx_matches,
openflow10.nx_action_reg_load2,
openflow13.nx_action_reg_load2,
openflow14.nx_action_reg_load2,
openflow13._ofp_oxm_mask_value, openflow14._ofp_oxm_mask_value,
openflow14._bundle_add_msg_padding,
openflow14.ofp_bundle_add_msg,
openflow14.ofp_requestforward,
#openflow13.ofp_group_desc_stats, openflow13.ofp_oxm_mask, openflow13.ofp_oxm_nomask, openflow13._ofp_oxm_mask_value
]
def testDefs10(self):
for k in dir(openflow10):
attr = getattr(openflow10, k)
            if isinstance(attr, nstruct) and attr not in self.exclude and not k.startswith('nxm_') and not hasattr(ethernet, k):
if not attr.subclasses:
self.assertEqual(k, repr(attr), k + ' has different name: ' + repr(attr))
print(k, repr(attr))
obj = attr.new()
s = obj._tobytes()
r = attr.parse(s)
self.assertTrue(r is not None, repr(attr) + ' failed to parse')
obj2, size = r
self.assertEqual(size, len(s), repr(attr) + ' failed to parse')
self.assertEqual(dump(obj), dump(obj2), repr(attr) + ' changed after parsing')
def testDefs13(self):
for k in dir(openflow13):
attr = getattr(openflow13, k)
            if isinstance(attr, nstruct) and attr not in self.exclude and not k.startswith('ofp_oxm_') and not k.startswith('nxm_') and not hasattr(ethernet, k):
if not attr.subclasses:
if k != repr(attr) and hasattr(openflow13, repr(attr)) and getattr(openflow13, repr(attr)) is attr:
# An alias
print(k, 'is alias of', repr(attr))
continue
self.assertEqual(k, repr(attr), k + ' has different name: ' + repr(attr))
print(k, repr(attr))
obj = attr.new()
s = obj._tobytes()
r = attr.parse(s)
self.assertTrue(r is not None, repr(attr) + ' failed to parse')
obj2, size = r
self.assertEqual(size, len(s), repr(attr) + ' failed to parse')
self.assertEqual(dump(obj), dump(obj2), repr(attr) + ' changed after parsing')
def testDefs14(self):
for k in dir(openflow14):
attr = getattr(openflow14, k)
            if isinstance(attr, nstruct) and attr not in self.exclude and not k.startswith('ofp_oxm_') and not k.startswith('nxm_') and not hasattr(ethernet, k):
if not attr.subclasses:
if k != repr(attr) and hasattr(openflow14, repr(attr)) and getattr(openflow14, repr(attr)) is attr:
# An alias
print(k, 'is alias of', repr(attr))
continue
self.assertEqual(k, repr(attr), k + ' has different name: ' + repr(attr))
print(k, repr(attr))
obj = attr.new()
s = obj._tobytes()
                    try:
                        r = attr.parse(s)
                    except Exception:
                        self.fail(repr(attr) + ' failed to parse')
                    self.assertTrue(r is not None, repr(attr) + ' failed to parse')
obj2, size = r
self.assertEqual(size, len(s), repr(attr) + ' failed to parse')
self.assertEqual(dump(obj), dump(obj2), repr(attr) + ' changed after parsing')
def testEmbeddedMsg(self):
obj = openflow14.ofp_bundle_add_msg(message=openflow14.ofp_packet_out(data=b'abc'))
s = obj._tobytes()
# No padding
self.assertEqual(len(s), 43)
self.assertEqual(s[-3:], b'abc')
r = openflow14.ofp_msg.parse(s)
self.assertTrue(r is not None, 'failed to parse')
obj2, size = r
self.assertEqual(size, len(s), 'failed to parse')
self.assertEqual(dump(obj), dump(obj2), 'changed after parsing')
obj = openflow14.ofp_bundle_add_msg(message=openflow14.ofp_packet_out(data=b'abc'),
properties=[openflow14.ofp_bundle_prop_experimenter()])
s = obj._tobytes()
# auto padding
self.assertEqual(len(s), 64)
r = openflow14.ofp_msg.parse(s)
self.assertTrue(r is not None, 'failed to parse')
obj2, size = r
self.assertEqual(size, len(s), 'failed to parse')
self.assertEqual(dump(obj), dump(obj2), 'changed after parsing')
obj = openflow14.ofp_requestforward(request=openflow14.ofp_packet_out(data=b'abc'))
s = obj._tobytes()
# No padding
self.assertEqual(len(s), 35)
self.assertEqual(s[-3:], b'abc')
r = openflow14.ofp_msg.parse(s)
self.assertTrue(r is not None, 'failed to parse')
obj2, size = r
self.assertEqual(size, len(s), 'failed to parse')
self.assertEqual(dump(obj), dump(obj2), 'changed after parsing')
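The expected lengths asserted above decompose into fixed headers plus payload: ofp_bundle_add_msg contributes a 16-byte header and the embedded ofp_packet_out is 24 fixed bytes plus its 3-byte data; once a properties list follows, the embedded message and each property are padded to an 8-byte boundary. A quick arithmetic sketch (the breakdown is inferred from the OpenFlow 1.4 layouts and the assertions above; constant names are illustrative):

```python
BUNDLE_ADD_HEADER = 16      # ofp_header (8) + bundle_id (4) + pad (2) + flags (2)
PACKET_OUT_FIXED = 24       # embedded ofp_packet_out without its data
DATA = 3                    # b'abc'
EXPERIMENTER_PROP = 12      # ofp_bundle_prop_experimenter fixed size
REQUESTFORWARD_HEADER = 8   # ofp_requestforward is just ofp_header + request

def pad8(n):
    """Round n up to the next multiple of 8 (OpenFlow alignment rule)."""
    return (n + 7) // 8 * 8

# No properties: nothing forces alignment, so no padding is emitted
no_props = BUNDLE_ADD_HEADER + PACKET_OUT_FIXED + DATA                       # 43
# With properties: the embedded message and the property are each padded
with_props = BUNDLE_ADD_HEADER + pad8(PACKET_OUT_FIXED + DATA) + pad8(EXPERIMENTER_PROP)  # 64
forwarded = REQUESTFORWARD_HEADER + PACKET_OUT_FIXED + DATA                  # 35
```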
def testOxm(self):
for ofp in (openflow13, openflow14):
fm = ofp.ofp_flow_mod.new(priority = ofp.OFP_DEFAULT_PRIORITY, command = ofp.OFPFC_ADD, buffer_id = ofp.OFP_NO_BUFFER)
fm.cookie = 0x67843512
fm.match = ofp.ofp_match_oxm.new()
fm.match.oxm_fields.append(ofp.create_oxm(ofp.OXM_OF_ETH_DST, b'\x06\x00\x0c\x15\x45\x99'))
fm.match.oxm_fields.append(ofp.create_oxm(ofp.OXM_OF_ETH_TYPE, common.ETHERTYPE_IP))
fm.match.oxm_fields.append(ofp.create_oxm(ofp.OXM_OF_IP_PROTO, 6))
fm.match.oxm_fields.append(ofp.create_oxm(ofp.OXM_OF_IPV4_SRC_W, [192,168,1,0], [255,255,255,0]))
apply = ofp.ofp_instruction_actions.new(type = ofp.OFPIT_APPLY_ACTIONS)
apply.actions.append(ofp.ofp_action_set_field.new(field = ofp.create_oxm(ofp.OXM_OF_IPV4_SRC, [202, 102, 0, 37])))
apply.actions.append(ofp.ofp_action_set_queue.new(queue_id = 1))
fm.instructions.append(apply)
write = ofp.ofp_instruction_actions.new(type = ofp.OFPIT_WRITE_ACTIONS)
write.actions.append(ofp.ofp_action_output.new(port = 7))
fm.instructions.append(write)
goto = ofp.ofp_instruction_goto_table.new(table_id = 1)
fm.instructions.append(goto)
s = fm._tobytes()
r = common.ofp_msg.parse(s)
self.assertTrue(r is not None, 'Cannot parse message')
obj2, size = r
self.assertEqual(size, len(s), 'Cannot parse message')
print(json.dumps(dump(fm, tostr=True), indent=2))
print(json.dumps(dump(obj2, tostr=True), indent=2))
self.assertEqual(dump(fm), dump(obj2), 'message changed after parsing')
def testDefs14Size(self):
self.assertEqual(openflow14.ofp_header()._realsize(), 8)
self.assertEqual(openflow14.ofp_hello_elem()._realsize(), 4)
self.assertEqual(openflow14.ofp_hello_elem_versionbitmap()._realsize(), 4)
self.assertEqual(openflow14.ofp_hello()._realsize(), 8)
self.assertEqual(openflow14.ofp_switch_config()._realsize(), 12)
self.assertEqual(openflow14.ofp_table_mod_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_mod_prop_eviction()._realsize(), 8)
self.assertEqual(openflow14.ofp_table_mod_prop_vacancy()._realsize(), 8)
self.assertEqual(openflow14.ofp_table_mod_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_table_mod()._realsize(), 16)
self.assertEqual(openflow14.ofp_port_desc_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_port_desc_prop_ethernet()._realsize(), 32)
self.assertEqual(openflow14.ofp_port_desc_prop_optical()._realsize(), 40)
self.assertEqual(openflow14.ofp_port_desc_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_port()._realsize(), 40)
self.assertEqual(openflow14.ofp_switch_features()._realsize(), 32)
self.assertEqual(openflow14.ofp_port_status()._realsize(), 56)
self.assertEqual(openflow14.ofp_port_mod_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_port_mod_prop_ethernet()._realsize(), 8)
self.assertEqual(openflow14.ofp_port_mod_prop_optical()._realsize(), 24)
self.assertEqual(openflow14.ofp_port_mod_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_port_mod()._realsize(), 32)
self.assertEqual(openflow14.ofp_match()._realsize(), 4)
self.assertEqual(openflow14.ofp_oxm_experimenter()._realsize(), 8)
self.assertEqual(openflow14.ofp_action()._realsize(), 4)
self.assertEqual(openflow14.ofp_action_output()._realsize(), 16)
self.assertEqual(openflow14.ofp_action_generic()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_mpls_ttl()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_push()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_pop_mpls()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_set_queue()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_group()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_nw_ttl()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_set_field()._realsize(), 8)
self.assertEqual(openflow14.ofp_action_experimenter()._realsize(), 8)
self.assertEqual(openflow14.ofp_instruction()._realsize(), 4)
self.assertEqual(openflow14.ofp_instruction_goto_table()._realsize(), 8)
self.assertEqual(openflow14.ofp_instruction_write_metadata()._realsize(), 24)
self.assertEqual(openflow14.ofp_instruction_actions()._realsize(), 8)
self.assertEqual(openflow14.ofp_instruction_meter()._realsize(), 8)
self.assertEqual(openflow14.ofp_instruction_experimenter()._realsize(), 8)
self.assertEqual(openflow14.ofp_flow_mod()._realsize(), 56)
self.assertEqual(openflow14.ofp_bucket()._realsize(), 16)
self.assertEqual(openflow14.ofp_group_mod()._realsize(), 16)
self.assertEqual(openflow14.ofp_packet_out()._realsize(), 24)
self.assertEqual(openflow14.ofp_packet_in()._realsize(), 34)
self.assertEqual(openflow14.ofp_flow_removed()._realsize(), 56)
self.assertEqual(openflow14.ofp_meter_band()._realsize(), 12)
self.assertEqual(openflow14.ofp_meter_band_drop()._realsize(), 16)
self.assertEqual(openflow14.ofp_meter_band_dscp_remark()._realsize(), 16)
self.assertEqual(openflow14.ofp_meter_band_experimenter()._realsize(), 16)
self.assertEqual(openflow14.ofp_meter_mod()._realsize(), 16)
self.assertEqual(openflow14.ofp_error_msg()._realsize(), 12)
self.assertEqual(openflow14.ofp_error_experimenter_msg()._realsize(), 16)
self.assertEqual(openflow14.ofp_multipart_request()._realsize(), 16)
self.assertEqual(openflow14.ofp_multipart_reply()._realsize(), 16)
self.assertEqual(openflow14.ofp_desc()._realsize(), 1056)
self.assertEqual(openflow14.ofp_flow_stats_request()._realsize(), 40 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_flow_stats()._realsize(), 56)
self.assertEqual(openflow14.ofp_aggregate_stats_request()._realsize(), 40 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_aggregate_stats_reply()._realsize(), 24 + openflow14.ofp_multipart_reply()._realsize())
self.assertEqual(openflow14.ofp_table_feature_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_instruction_id()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_feature_prop_instructions()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_feature_prop_tables()._realsize(), 4)
self.assertEqual(openflow14.ofp_action_id()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_feature_prop_actions()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_feature_prop_oxm()._realsize(), 4)
self.assertEqual(openflow14.ofp_table_feature_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_table_features()._realsize(), 64)
self.assertEqual(openflow14.ofp_table_stats()._realsize(), 24)
self.assertEqual(openflow14.ofp_table_desc()._realsize(), 8)
self.assertEqual(openflow14.ofp_port_stats_request()._realsize(), 8 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_port_stats_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_port_stats_prop_ethernet()._realsize(), 40)
self.assertEqual(openflow14.ofp_port_stats_prop_optical()._realsize(), 44)
self.assertEqual(openflow14.ofp_port_stats_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_port_stats()._realsize(), 80)
self.assertEqual(openflow14.ofp_group_stats_request()._realsize(), 8 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_bucket_counter()._realsize(), 16)
self.assertEqual(openflow14.ofp_group_stats()._realsize(), 40)
self.assertEqual(openflow14.ofp_group_desc()._realsize(), 8)
self.assertEqual(openflow14.ofp_group_features()._realsize(), 40)
self.assertEqual(openflow14.ofp_meter_multipart_request()._realsize(), 8 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_meter_band_stats()._realsize(), 16)
self.assertEqual(openflow14.ofp_meter_stats()._realsize(), 40)
self.assertEqual(openflow14.ofp_meter_config()._realsize(), 8)
self.assertEqual(openflow14.ofp_meter_features()._realsize(), 16)
self.assertEqual(openflow14.ofp_queue_desc_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_queue_desc_prop_min_rate()._realsize(), 8)
self.assertEqual(openflow14.ofp_queue_desc_prop_max_rate()._realsize(), 8)
self.assertEqual(openflow14.ofp_queue_desc_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_queue_desc_request()._realsize(), 8 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_queue_desc()._realsize(), 16)
self.assertEqual(openflow14.ofp_queue_stats_request()._realsize(), 8 + openflow14.ofp_multipart_request()._realsize())
self.assertEqual(openflow14.ofp_queue_stats_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_queue_stats_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_queue_stats()._realsize(), 48)
self.assertEqual(openflow14.ofp_flow_monitor_request()._realsize(), 24)
self.assertEqual(openflow14.ofp_flow_update()._realsize(), 4)
self.assertEqual(openflow14.ofp_flow_update_full()._realsize(), 32)
self.assertEqual(openflow14.ofp_flow_update_abbrev()._realsize(), 8)
self.assertEqual(openflow14.ofp_flow_update_paused()._realsize(), 8)
self.assertEqual(openflow14.ofp_experimenter_multipart_header()._realsize(), 8)
# self.assertEqual(openflow14.ofp_experimenter_structure()._realsize(), 8)
self.assertEqual(openflow14.ofp_experimenter_msg()._realsize(), 16)
self.assertEqual(openflow14.ofp_role_request()._realsize(), 24)
self.assertEqual(openflow14.ofp_role_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_role_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_role_status()._realsize(), 24)
self.assertEqual(openflow14.ofp_async_config_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_async_config_prop_reasons()._realsize(), 8)
self.assertEqual(openflow14.ofp_async_config_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_async_config()._realsize(), 8)
self.assertEqual(openflow14.ofp_table_status()._realsize(), 24)
self.assertEqual(openflow14.ofp_requestforward()._realsize(), 16)
self.assertEqual(openflow14.ofp_bundle_prop()._realsize(), 4)
self.assertEqual(openflow14.ofp_bundle_prop_experimenter()._realsize(), 12)
self.assertEqual(openflow14.ofp_bundle_ctrl_msg()._realsize(), 16)
self.assertEqual(openflow14.ofp_bundle_add_msg()._realsize(), 24)
def testDefs13Size(self):
# From openflow.h
self.assertEqual(len(openflow13.ofp_header()),8)
self.assertEqual(openflow13.ofp_hello_elem()._realsize(),4) # Excluding padding
self.assertEqual(openflow13.ofp_hello_elem_versionbitmap()._realsize(),4)
self.assertEqual(len(openflow13.ofp_hello()),8)
self.assertEqual(len(openflow13.ofp_switch_config()),12)
self.assertEqual(len(openflow13.ofp_table_mod()),16)
self.assertEqual(len(openflow13.ofp_port()),64)
self.assertEqual(len(openflow13.ofp_switch_features()),32)
self.assertEqual(len(openflow13.ofp_port_status()),80)
self.assertEqual(len(openflow13.ofp_port_mod()),40)
self.assertEqual(len(openflow13.ofp_match()),8)
self.assertEqual(len(openflow13.ofp_oxm_experimenter()),8)
self.assertEqual(len(openflow13.ofp_action()),8)
self.assertEqual(len(openflow13.ofp_action_output()),16)
self.assertEqual(len(openflow13.ofp_action_mpls_ttl()),8)
self.assertEqual(len(openflow13.ofp_action_push()),8)
self.assertEqual(len(openflow13.ofp_action_pop_mpls()),8)
self.assertEqual(len(openflow13.ofp_action_group()),8)
self.assertEqual(len(openflow13.ofp_action_nw_ttl()),8)
self.assertEqual(len(openflow13.ofp_action_set_field()),8)
self.assertEqual(len(openflow13.ofp_action_experimenter()),8)
self.assertEqual(openflow13.ofp_instruction()._realsize(),4)
self.assertEqual(len(openflow13.ofp_instruction_goto_table()),8)
self.assertEqual(len(openflow13.ofp_instruction_write_metadata()),24)
self.assertEqual(len(openflow13.ofp_instruction_actions()),8)
self.assertEqual(len(openflow13.ofp_instruction_meter()),8)
self.assertEqual(len(openflow13.ofp_instruction_experimenter()),8)
self.assertEqual(len(openflow13.ofp_flow_mod()),56)
self.assertEqual(len(openflow13.ofp_bucket()),16)
self.assertEqual(len(openflow13.ofp_group_mod()),16)
self.assertEqual(len(openflow13.ofp_packet_out()),24)
self.assertEqual(len(openflow13.ofp_packet_in()),34) # Add the extra padding
self.assertEqual(len(openflow13.ofp_flow_removed()),56)
self.assertEqual(openflow13.ofp_meter_band()._realsize(),12)
self.assertEqual(len(openflow13.ofp_meter_band_drop()),16)
self.assertEqual(len(openflow13.ofp_meter_band_dscp_remark()),16)
self.assertEqual(len(openflow13.ofp_meter_band_experimenter()),16)
self.assertEqual(len(openflow13.ofp_meter_mod()),16)
self.assertEqual(len(openflow13.ofp_error_msg()),12)
self.assertEqual(len(openflow13.ofp_error_experimenter_msg()),16)
self.assertEqual(len(openflow13.ofp_multipart_request()),16)
self.assertEqual(len(openflow13.ofp_multipart_reply()),16)
self.assertEqual(len(openflow13.ofp_desc()),1056)
self.assertEqual(len(openflow13.ofp_flow_stats_request()),40 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_flow_stats()),56)
self.assertEqual(len(openflow13.ofp_aggregate_stats_request()),40 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_aggregate_stats_reply()),24 + len(openflow13.ofp_multipart_reply()))
self.assertEqual(openflow13.ofp_table_feature_prop()._realsize(),4)
self.assertEqual(openflow13.ofp_table_feature_prop_instructions()._realsize(),4)
self.assertEqual(openflow13.ofp_table_feature_prop_next_tables()._realsize(),4)
self.assertEqual(openflow13.ofp_table_feature_prop_actions()._realsize(),4)
self.assertEqual(openflow13.ofp_table_feature_prop_oxm()._realsize(),4)
self.assertEqual(openflow13.ofp_table_feature_prop_experimenter()._realsize(),12)
self.assertEqual(len(openflow13.ofp_table_features()),64)
self.assertEqual(len(openflow13.ofp_table_stats()),24)
self.assertEqual(len(openflow13.ofp_port_stats_request()),8 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_port_stats()),112)
self.assertEqual(len(openflow13.ofp_group_stats_request()),8 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_bucket_counter()),16)
self.assertEqual(len(openflow13.ofp_group_stats()),40)
self.assertEqual(len(openflow13.ofp_group_desc()),8)
self.assertEqual(len(openflow13.ofp_group_features()),40)
self.assertEqual(len(openflow13.ofp_meter_multipart_request()),8 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_meter_band_stats()),16)
self.assertEqual(len(openflow13.ofp_meter_stats()),40)
self.assertEqual(len(openflow13.ofp_meter_config()), 8)
self.assertEqual(len(openflow13.ofp_meter_features()),16)
self.assertEqual(len(openflow13.ofp_experimenter_multipart_header()),8)
self.assertEqual(len(openflow13.ofp_experimenter()),16)
self.assertEqual(len(openflow13.ofp_queue_prop_header()),8)
self.assertEqual(len(openflow13.ofp_queue_prop_min_rate()),16)
self.assertEqual(len(openflow13.ofp_queue_prop_max_rate()),16)
self.assertEqual(len(openflow13.ofp_queue_prop_experimenter()),16)
self.assertEqual(len(openflow13.ofp_packet_queue()),16)
self.assertEqual(len(openflow13.ofp_queue_get_config_request()),16)
self.assertEqual(len(openflow13.ofp_queue_get_config_reply()),16)
self.assertEqual(len(openflow13.ofp_action_set_queue()),8)
        self.assertEqual(len(openflow13.ofp_queue_stats_request()),8 + len(openflow13.ofp_multipart_request()))
self.assertEqual(len(openflow13.ofp_queue_stats()),40)
self.assertEqual(len(openflow13.ofp_role_request()),24)
self.assertEqual(len(openflow13.ofp_async_config()),32)
def testDefs10Size(self):
self.assertEqual(len(openflow10.ofp_header()),8)
self.assertEqual(len(openflow10.ofp_phy_port()),48)
self.assertEqual(len(openflow10.ofp_packet_queue()),8)
self.assertEqual(len(openflow10.ofp_queue_prop_header()),8)
self.assertEqual(len(openflow10.ofp_queue_prop_min_rate()),16)
self.assertEqual(len(openflow10.ofp_match()),40)
self.assertEqual(len(openflow10.ofp_action()),8)
self.assertEqual(len(openflow10.ofp_action_output()),8)
self.assertEqual(len(openflow10.ofp_action_enqueue()),16)
self.assertEqual(len(openflow10.ofp_action_vlan_vid()),8)
self.assertEqual(len(openflow10.ofp_action_vlan_pcp()),8)
self.assertEqual(len(openflow10.ofp_action_dl_addr()),16)
self.assertEqual(len(openflow10.ofp_action_nw_addr()),8)
self.assertEqual(len(openflow10.ofp_action_nw_tos()),8)
self.assertEqual(len(openflow10.ofp_action_tp_port()),8)
self.assertEqual(len(openflow10.ofp_action_vendor()),8)
self.assertEqual(len(openflow10.ofp_switch_features()),32)
self.assertEqual(len(openflow10.ofp_switch_config()),12)
self.assertEqual(len(openflow10.ofp_flow_mod()),72)
self.assertEqual(len(openflow10.ofp_port_mod()),32)
self.assertEqual(len(openflow10.ofp_queue_get_config_request()),12)
self.assertEqual(len(openflow10.ofp_queue_get_config_reply()),16)
self.assertEqual(len(openflow10.ofp_stats_request()),12)
self.assertEqual(len(openflow10.ofp_stats_reply()),12)
self.assertEqual(len(openflow10.ofp_desc_stats()),1056)
self.assertEqual(len(openflow10.ofp_flow_stats_request()),44 + len(openflow10.ofp_stats_request()))
self.assertEqual(len(openflow10.ofp_flow_stats()),88)
self.assertEqual(len(openflow10.ofp_aggregate_stats_request()),44 + len(openflow10.ofp_stats_request()))
self.assertEqual(len(openflow10.ofp_aggregate_stats_reply()),24 + len(openflow10.ofp_stats_reply()))
self.assertEqual(len(openflow10.ofp_table_stats()),64)
self.assertEqual(len(openflow10.ofp_port_stats_request()),8 + len(openflow10.ofp_stats_request()))
self.assertEqual(len(openflow10.ofp_port_stats()),104)
self.assertEqual(len(openflow10.ofp_queue_stats_request()),8 + len(openflow10.ofp_stats_request()))
self.assertEqual(len(openflow10.ofp_queue_stats()),32)
self.assertEqual(len(openflow10.ofp_packet_out()),16)
self.assertEqual(len(openflow10.ofp_packet_in()),18) # No extra padding
self.assertEqual(len(openflow10.ofp_flow_removed()),88)
self.assertEqual(len(openflow10.ofp_port_status()),64)
self.assertEqual(len(openflow10.ofp_error_msg()),12)
self.assertEqual(len(openflow10.ofp_vendor()),12)
def testDefs10ExtSize(self):
self.assertEqual(len(openflow10.nicira_header()),16)
self.assertEqual(len(openflow10.nx_stats_request()),24)
self.assertEqual(len(openflow10.nx_flow_mod_table_id()),8 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_set_packet_in_format()),4 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_packet_in()),24 + len(openflow10.nicira_header()) + 2) # Extra padding
self.assertEqual(len(openflow10.nx_role_request()),4 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_async_config()),24 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_set_flow_format()),4 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_flow_mod()),32 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_flow_removed()),40 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_flow_stats_request()),8 + len(openflow10.nx_stats_request()))
self.assertEqual(len(openflow10.nx_flow_stats()),48)
self.assertEqual(len(openflow10.nx_aggregate_stats_request()),8 + len(openflow10.nx_stats_request()))
self.assertEqual(len(openflow10.nx_controller_id()),8 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_flow_monitor_request()),16 + len(openflow10.nx_stats_request()))
self.assertEqual(openflow10.nx_flow_update()._realsize(),4)
self.assertEqual(len(openflow10.nx_flow_update_full()),24)
self.assertEqual(len(openflow10.nx_flow_update_abbrev()),8)
self.assertEqual(len(openflow10.nx_flow_monitor_cancel()),4 + len(openflow10.nicira_header()))
self.assertEqual(len(openflow10.nx_action_controller()), 16)
self.assertEqual(len(openflow10.nx_action_controller2()), 16)
self.assertEqual(len(openflow10.nx_action_reg_load2()), 24)
self.assertEqual(len(openflow10.nx_action_clone()), 16)
self.assertEqual(len(openflow10.nx_action_output_reg()), 24)
self.assertEqual(len(openflow10.nx_action_output_reg2()), 24)
self.assertEqual(len(openflow10.nx_action_bundle()), 32)
self.assertEqual(len(openflow10.nx_action_reg_move()), 24)
self.assertEqual(len(openflow10.nx_action_reg_load()), 24)
self.assertEqual(len(openflow10.nx_action_stack()), 24)
self.assertEqual(len(openflow10.nx_action_cnt_ids()), 16)
self.assertEqual(len(openflow10.nx_action_fin_timeout()), 16)
self.assertEqual(len(openflow10.nx_action_encap()), 16)
self.assertEqual(len(openflow10.nx_action_decap()), 16)
self.assertEqual(len(openflow10.nx_action_resubmit()), 16)
self.assertEqual(len(openflow10.nx_action_learn()), 32)
self.assertEqual(len(openflow10.nx_action_learn2()), 40)
self.assertEqual(len(openflow10.nx_action_conjunction()), 16)
self.assertEqual(len(openflow10.nx_action_multipath()), 32)
self.assertEqual(len(openflow10.nx_action_note()), 16)
self.assertEqual(len(openflow10.nx_action_sample()), 24)
self.assertEqual(len(openflow10.nx_action_sample2()), 32)
self.assertEqual(len(openflow10.nx_action_conntrack()), 24)
self.assertEqual(len(openflow10.nx_action_nat()), 16)
self.assertEqual(len(openflow10.nx_action_output_trunc()), 16)
self.assertEqual(len(openflow10.nx_action_write_metadata()), 32)
def testDefs13ExtSize(self):
self.assertEqual(len(openflow13.nicira_header()),16)
self.assertEqual(len(openflow13.nx_stats_request()),24)
self.assertEqual(len(openflow13.nx_flow_mod_table_id()),8 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_set_packet_in_format()),4 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_packet_in()),24 + len(openflow13.nicira_header()) + 2) # Extra padding
self.assertEqual(len(openflow13.nx_role_request()),4 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_async_config()),24 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_set_flow_format()),4 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_flow_mod()),32 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_flow_removed()),40 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_flow_stats_request()),8 + len(openflow13.nx_stats_request()))
self.assertEqual(len(openflow13.nx_flow_stats()),48)
self.assertEqual(len(openflow13.nx_aggregate_stats_request()),8 + len(openflow13.nx_stats_request()))
self.assertEqual(len(openflow13.nx_controller_id()),8 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_flow_monitor_request()),16 + len(openflow13.nx_stats_request()))
self.assertEqual(openflow13.nx_flow_update()._realsize(),4)
self.assertEqual(len(openflow13.nx_flow_update_full()),24)
self.assertEqual(len(openflow13.nx_flow_update_abbrev()),8)
self.assertEqual(len(openflow13.nx_flow_monitor_cancel()),4 + len(openflow13.nicira_header()))
self.assertEqual(len(openflow13.nx_action_controller()), 16)
self.assertEqual(len(openflow13.nx_action_controller2()), 16)
self.assertEqual(len(openflow13.nx_action_reg_load2()), 24)
self.assertEqual(len(openflow13.nx_action_clone()), 16)
self.assertEqual(len(openflow13.nx_action_output_reg()), 24)
self.assertEqual(len(openflow13.nx_action_output_reg2()), 24)
self.assertEqual(len(openflow13.nx_action_bundle()), 32)
self.assertEqual(len(openflow13.nx_action_reg_move()), 24)
self.assertEqual(len(openflow13.nx_action_reg_load()), 24)
self.assertEqual(len(openflow13.nx_action_stack()), 24)
self.assertEqual(len(openflow13.nx_action_cnt_ids()), 16)
self.assertEqual(len(openflow13.nx_action_fin_timeout()), 16)
self.assertEqual(len(openflow13.nx_action_encap()), 16)
self.assertEqual(len(openflow13.nx_action_decap()), 16)
self.assertEqual(len(openflow13.nx_action_resubmit()), 16)
self.assertEqual(len(openflow13.nx_action_learn()), 32)
self.assertEqual(len(openflow13.nx_action_learn2()), 40)
self.assertEqual(len(openflow13.nx_action_conjunction()), 16)
self.assertEqual(len(openflow13.nx_action_multipath()), 32)
self.assertEqual(len(openflow13.nx_action_note()), 16)
self.assertEqual(len(openflow13.nx_action_sample()), 24)
self.assertEqual(len(openflow13.nx_action_sample2()), 32)
self.assertEqual(len(openflow13.nx_action_conntrack()), 24)
self.assertEqual(len(openflow13.nx_action_nat()), 16)
self.assertEqual(len(openflow13.nx_action_output_trunc()), 16)
self.assertEqual(len(openflow13.nx_action_write_metadata()), 32)
def testDefs14ExtSize(self):
self.assertEqual(len(openflow14.nicira_header()),16)
self.assertEqual(len(openflow14.nx_stats_request()),24)
self.assertEqual(len(openflow14.nx_flow_mod_table_id()),8 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_set_packet_in_format()),4 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_packet_in()),24 + len(openflow14.nicira_header()) + 2) # Extra padding
self.assertEqual(len(openflow14.nx_role_request()),4 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_async_config()),24 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_set_flow_format()),4 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_flow_mod()),32 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_flow_removed()),40 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_flow_stats_request()),8 + len(openflow14.nx_stats_request()))
self.assertEqual(len(openflow14.nx_flow_stats()),48)
self.assertEqual(len(openflow14.nx_aggregate_stats_request()),8 + len(openflow14.nx_stats_request()))
self.assertEqual(len(openflow14.nx_controller_id()),8 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_flow_monitor_request()),16 + len(openflow14.nx_stats_request()))
self.assertEqual(openflow14.nx_flow_update()._realsize(),4)
self.assertEqual(len(openflow14.nx_flow_update_full()),24)
self.assertEqual(len(openflow14.nx_flow_update_abbrev()),8)
self.assertEqual(len(openflow14.nx_flow_monitor_cancel()),4 + len(openflow14.nicira_header()))
self.assertEqual(len(openflow14.nx_action_controller()), 16)
self.assertEqual(len(openflow14.nx_action_controller2()), 16)
self.assertEqual(len(openflow14.nx_action_reg_load2()), 24)
self.assertEqual(len(openflow14.nx_action_clone()), 16)
self.assertEqual(len(openflow14.nx_action_output_reg()), 24)
self.assertEqual(len(openflow14.nx_action_output_reg2()), 24)
self.assertEqual(len(openflow14.nx_action_bundle()), 32)
self.assertEqual(len(openflow14.nx_action_reg_move()), 24)
self.assertEqual(len(openflow14.nx_action_reg_load()), 24)
self.assertEqual(len(openflow14.nx_action_stack()), 24)
self.assertEqual(len(openflow14.nx_action_cnt_ids()), 16)
self.assertEqual(len(openflow14.nx_action_fin_timeout()), 16)
self.assertEqual(len(openflow14.nx_action_encap()), 16)
self.assertEqual(len(openflow14.nx_action_decap()), 16)
self.assertEqual(len(openflow14.nx_action_resubmit()), 16)
self.assertEqual(len(openflow14.nx_action_learn()), 32)
self.assertEqual(len(openflow14.nx_action_learn2()), 40)
self.assertEqual(len(openflow14.nx_action_conjunction()), 16)
self.assertEqual(len(openflow14.nx_action_multipath()), 32)
self.assertEqual(len(openflow14.nx_action_note()), 16)
self.assertEqual(len(openflow14.nx_action_sample()), 24)
self.assertEqual(len(openflow14.nx_action_sample2()), 32)
self.assertEqual(len(openflow14.nx_action_conntrack()), 24)
self.assertEqual(len(openflow14.nx_action_nat()), 16)
self.assertEqual(len(openflow14.nx_action_output_trunc()), 16)
self.assertEqual(len(openflow14.nx_action_write_metadata()), 32)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testDefs']
unittest.main()
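The fixed sizes asserted above follow from each message's packed wire format. As a rough illustration (not the actual `openflow10`/`openflow13` implementation, whose classes compute their own lengths), the 16-byte `nicira_header` figure can be reproduced with a hypothetical `struct` format string:

```python
import struct

# Hypothetical layout: the standard 8-byte OpenFlow header (version, type,
# length, xid) followed by the 4-byte Nicira vendor id and a 4-byte subtype
# field, all big-endian with no padding -- 16 bytes in total.
NICIRA_HEADER_FMT = '!BBHIII'
print(struct.calcsize(NICIRA_HEADER_FMT))  # 16
```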
# --- apps/images/schemas.py (FrozRt/marmon_back) ---
from typing import Optional, Union
from pydantic import BaseModel
class ImageSchema(BaseModel):
id: int
name: Optional[str]
url: str
width: Optional[Union[int, str]]
    height: Optional[Union[int, str]]
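A quick sketch of how this schema might be consumed (the payload values are made up, and the model is repeated so the snippet runs standalone); pydantic validates the input against the annotations on construction:

```python
from typing import Optional, Union

from pydantic import BaseModel


class ImageSchema(BaseModel):
    id: int
    name: Optional[str]
    url: str
    width: Optional[Union[int, str]]
    height: Optional[Union[int, str]]


# Hypothetical payload, e.g. as parsed from an API response.
img = ImageSchema(id=1, name='logo', url='https://example.com/logo.png',
                  width=640, height=480)
print(img.id, img.name)  # 1 logo
```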
# --- tests/_amt_launcher_test.py (xgouchet/AutoMergeTool) ---
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import unittest
from configparser import ConfigParser
from automergetool.amt_launcher import *
FAKE_TOOL = 'blu'
FAKE_TOOL_SECTION = 'mergetool "blu"'
class ToolsLauncherTest(unittest.TestCase):
def test_write_section_name(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
section = ToolsLauncher.tool_section_name(FAKE_TOOL)
# Then
self.assertEqual(section, FAKE_TOOL_SECTION)
def test_get_tool_trust_from_config_none(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
trust = launcher.get_tool_trust(FAKE_TOOL)
# Then
self.assertFalse(trust)
def test_get_tool_trust_from_config_overriden_false(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_TRUST_EXIT_CODE, 'false')
launcher = ToolsLauncher(cfg)
# When
trust = launcher.get_tool_trust(FAKE_TOOL)
# Then
self.assertFalse(trust)
def test_get_tool_trust_from_config_overriden_true(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_TRUST_EXIT_CODE, 'true')
launcher = ToolsLauncher(cfg)
# When
trust = launcher.get_tool_trust(FAKE_TOOL)
# Then
self.assertTrue(trust)
def test_get_tool_trust_from_config_overriden_not_boolean(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_TRUST_EXIT_CODE, 'plop')
launcher = ToolsLauncher(cfg)
# When
with self.assertRaises(ValueError):
trust = launcher.get_tool_trust(FAKE_TOOL)
def test_get_tool_trust_from_config_known(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
trust = launcher.get_tool_trust('gen_debug')
# Then
self.assertTrue(trust)
def test_get_tool_extensions_none(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
exts = launcher.get_tool_extensions(FAKE_TOOL)
# Then
self.assertIsNone(exts)
def test_get_tool_extensions_override(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_EXTENSIONS, 'a;b;c')
launcher = ToolsLauncher(cfg)
# When
exts = launcher.get_tool_extensions(FAKE_TOOL)
# Then
self.assertEqual(exts, ['a', 'b', 'c'])
def test_get_tool_extensions_known(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
exts = launcher.get_tool_extensions('java_imports')
# Then
self.assertEqual(exts, ['java'])
def test_get_tool_ignored_extensions_none(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
exts = launcher.get_tool_ignored_extensions(FAKE_TOOL)
# Then
self.assertIsNone(exts)
def test_get_tool_ignored_extensions_override(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_IGNORED_EXTENSIONS, 'a;b;c')
launcher = ToolsLauncher(cfg)
# When
exts = launcher.get_tool_ignored_extensions(FAKE_TOOL)
# Then
self.assertEqual(exts, ['a', 'b', 'c'])
def test_get_tool_path_none(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
path = launcher.get_tool_path(FAKE_TOOL)
# Then
self.assertEqual(path, FAKE_TOOL)
def test_get_tool_path_override(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_PATH, '/path/to/bar')
launcher = ToolsLauncher(cfg)
# When
path = launcher.get_tool_path(FAKE_TOOL)
# Then
self.assertEqual(path, '/path/to/bar')
def test_get_tool_path_known(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
path = launcher.get_tool_path('gen_debug')
# Then
self.assertEqual(path, KNOWN_PATHS['gen_debug'])
def test_get_tool_cmd_none(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
# When
cmd = launcher.get_tool_cmd(FAKE_TOOL)
# Then
self.assertIsNone(cmd)
def test_get_tool_cmd_overriden(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section(FAKE_TOOL_SECTION)
cfg.set(FAKE_TOOL_SECTION, OPT_CMD, 'spam')
launcher = ToolsLauncher(cfg)
# When
cmd = launcher.get_tool_cmd(FAKE_TOOL)
# Then
self.assertEqual(cmd, 'spam')
def test_get_tool_cmd_known(self):
# Given
cfg = ConfigParser()
launcher = ToolsLauncher(cfg)
interpreter = sys.executable
# When
cmd = launcher.get_tool_cmd('gen_debug')
# Then
self.assertEqual(cmd, interpreter + ' ' + KNOWN_PATHS['gen_debug'] + ' -m $MERGED')
def test_get_tool_cmd_known_with_options(self):
# Given
cfg = ConfigParser()
cfg.optionxform = str
cfg.add_section('mergetool "gen_debug"')
cfg.set('mergetool "gen_debug"', 'breakfast', 'bacon')
cfg.set('mergetool "gen_debug"', 'path', '/toto')
cfg.set('mergetool "gen_debug"', 'trustExitCode', 'false')
launcher = ToolsLauncher(cfg)
interpreter = sys.executable
# When
cmd = launcher.get_tool_cmd('gen_debug')
# Then
self.assertEqual(cmd, interpreter + ' /toto -m $MERGED --breakfast bacon')
def test_sanitize_command_simple(self):
# Given
cfg = ConfigParser()
cmd = "foo -o /dev/null/base /dev/null/merged"
# When
tokens = ToolsLauncher.sanitize_command(cmd)
# Then
self.assertEqual(tokens, ['foo', '-o', '/dev/null/base', '/dev/null/merged'])
def test_sanitize_command_with_whitespaces(self):
# Given
cmd = "foo -o \n /dev/null/base \t\t /dev/null/merged"
# When
tokens = ToolsLauncher.sanitize_command(cmd)
# Then
self.assertEqual(tokens, ['foo', '-o', '/dev/null/base', '/dev/null/merged'])
def test_sanitize_command_with_quotes(self):
# Given
cmd = "foo -o '/dev/null/base with space' '/dev/null/mergedwith\"e'"
# When
tokens = ToolsLauncher.sanitize_command(cmd)
# Then
self.assertEqual(tokens,
['foo', '-o', '/dev/null/base with space', '/dev/null/mergedwith\"e'])
def test_sanitize_command_with_double_quotes(self):
# Given
cmd = 'foo -o "/dev/null/base with space" "/dev/null/mergedwith\'e"'
# When
tokens = ToolsLauncher.sanitize_command(cmd)
# Then
self.assertEqual(tokens,
['foo', '-o', '/dev/null/base with space', '/dev/null/mergedwith\'e'])
def test_sanitize_command_weird_syntax(self):
# Given
cmd = 'foo -o="/dev/null/base" -p=\'/dev/null/merged\''
# When
tokens = ToolsLauncher.sanitize_command(cmd)
# Then
self.assertEqual(tokens, ['foo', '-o="/dev/null/base"', '-p=\'/dev/null/merged\''])
if __name__ == '__main__':
unittest.main()
# --- pooling/__init__.py (LabForComputationalVision/pooling-windows) ---
#!/usr/bin/env python3
from . import pooling
from . import utils
from . import sampling
from .pooling_windows import PoolingWindows
# --- test/test_onedspec.py (cylammarco/ASPIRED) ---
import os
import numpy as np
import pytest
from aspired import image_reduction
from aspired import spectral_reduction
from aspired.wavelength_calibration import WavelengthCalibration
from aspired.flux_calibration import FluxCalibration
base_dir = os.path.dirname(__file__)
abs_dir = os.path.abspath(os.path.join(base_dir, '..'))
np.random.seed(0)
def file_len(fname):
    # Count the lines in a file; returns 0 for an empty file instead of
    # raising NameError, and avoids the unused loop variable.
    count = 0
    with open(fname) as f:
        for count, _ in enumerate(f, start=1):
            pass
    return count
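A standalone demonstration of the line-counting helper above (the temporary file and its contents are hypothetical; the helper is restated here, with an empty-file guard, so the snippet runs on its own):

```python
import os
import tempfile


def file_len(fname):
    # Count the lines in a file; returns 0 for an empty file.
    count = 0
    with open(fname) as f:
        for count, _ in enumerate(f, start=1):
            pass
    return count


with tempfile.NamedTemporaryFile('w', suffix='.log', delete=False) as tmp:
    tmp.write('first line\nsecond line\nthird line\n')
n = file_len(tmp.name)
os.remove(tmp.name)
print(n)  # 3
```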
def test_logger():
onedspec_debug = spectral_reduction.OneDSpec(
log_level='DEBUG',
logger_name='onedspec_debug',
log_file_name='onedspec_debug.log',
log_file_folder='test/test_output/')
onedspec_info = spectral_reduction.OneDSpec(
log_level='INFO',
logger_name='onedspec_info',
log_file_name='onedspec_info.log',
log_file_folder='test/test_output/')
onedspec_warning = spectral_reduction.OneDSpec(
log_level='WARNING',
logger_name='onedspec_warning',
log_file_name='onedspec_warning.log',
log_file_folder='test/test_output/')
onedspec_error = spectral_reduction.OneDSpec(
log_level='ERROR',
logger_name='onedspec_error',
log_file_name='onedspec_error.log',
log_file_folder='test/test_output/')
onedspec_critical = spectral_reduction.OneDSpec(
log_level='CRITICAL',
logger_name='onedspec_critical',
log_file_name='onedspec_critical.log',
log_file_folder='test/test_output/')
onedspec_debug.logger.debug('debug: debug mode')
onedspec_debug.logger.info('debug: info mode')
onedspec_debug.logger.warning('debug: warning mode')
onedspec_debug.logger.error('debug: error mode')
onedspec_debug.logger.critical('debug: critical mode')
onedspec_info.logger.debug('info: debug mode')
onedspec_info.logger.info('info: info mode')
onedspec_info.logger.warning('info: warning mode')
onedspec_info.logger.error('info: error mode')
onedspec_info.logger.critical('info: critical mode')
onedspec_warning.logger.debug('warning: debug mode')
onedspec_warning.logger.info('warning: info mode')
onedspec_warning.logger.warning('warning: warning mode')
onedspec_warning.logger.error('warning: error mode')
onedspec_warning.logger.critical('warning: critical mode')
onedspec_error.logger.debug('error: debug mode')
onedspec_error.logger.info('error: info mode')
onedspec_error.logger.warning('error: warning mode')
onedspec_error.logger.error('error: error mode')
onedspec_error.logger.critical('error: critical mode')
onedspec_critical.logger.debug('critical: debug mode')
onedspec_critical.logger.info('critical: info mode')
onedspec_critical.logger.warning('critical: warning mode')
onedspec_critical.logger.error('critical: error mode')
onedspec_critical.logger.critical('critical: critical mode')
debug_debug_length = file_len('test/test_output/onedspec_debug.log')
debug_info_length = file_len('test/test_output/onedspec_info.log')
debug_warning_length = file_len('test/test_output/onedspec_warning.log')
debug_error_length = file_len('test/test_output/onedspec_error.log')
debug_critical_length = file_len('test/test_output/onedspec_critical.log')
assert debug_debug_length == 6, 'Expecting 6 lines in the log file, ' +\
'{} is logged.'.format(debug_debug_length)
assert debug_info_length == 5, 'Expecting 5 lines in the log file, ' +\
'{} is logged.'.format(debug_info_length)
assert debug_warning_length == 3, 'Expecting 3 lines in the log file, ' +\
'{} is logged.'.format(debug_warning_length)
assert debug_error_length == 2, 'Expecting 2 lines in the log file, ' +\
'{} is logged.'.format(debug_error_length)
assert debug_critical_length == 1, 'Expecting 1 lines in the log file, ' +\
'{} is logged.'.format(debug_critical_length)
    for log_name in ('onedspec_debug.log', 'onedspec_info.log',
                     'onedspec_warning.log', 'onedspec_error.log',
                     'onedspec_critical.log'):
        try:
            os.remove('test/test_output/' + log_name)
        except Exception as e:
            print(e)
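The asserted line counts encode standard `logging` level filtering: a logger configured at WARNING records warning, error and critical messages but drops debug and info. A minimal stdlib sketch of that behaviour (the logger name and handler setup here are illustrative, not the aspired API):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('level_demo')
logger.setLevel(logging.WARNING)
logger.addHandler(logging.StreamHandler(stream))

for emit in (logger.debug, logger.info, logger.warning,
             logger.error, logger.critical):
    emit('message')

# Only warning, error and critical pass the WARNING threshold.
print(len(stream.getvalue().splitlines()))  # 3
```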
def test_add_fluxcalibration():
# Create a dummy FluxCalibration
dummy_fluxcal = FluxCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_fluxcalibration(dummy_fluxcal)
@pytest.mark.xfail(raises=TypeError)
def test_add_fluxcalibration_fail():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_fluxcalibration(None)
# science add_wavelengthcalibration
def test_add_wavelengthcalibration_science():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(dummy_wavecal)
onedspec.add_wavelengthcalibration(dummy_wavecal,
spec_id=0,
stype='science')
onedspec.add_wavelengthcalibration([dummy_wavecal])
onedspec.add_wavelengthcalibration([dummy_wavecal],
spec_id=[0],
stype='science')
# science add_wavelengthcalibration to two traces
def test_add_wavelengthcalibration_science_two_spec():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_science_spectrum1D(1)
onedspec.add_wavelengthcalibration(dummy_wavecal,
spec_id=[0, 1],
stype='science')
@pytest.mark.xfail(raises=ValueError)
# science add_wavelengthcalibration to two traces
def test_add_wavelengthcalibration_science_expect_fail():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(dummy_wavecal,
spec_id=[0, 1],
stype='science')
# standard add_wavelengthcalibration
def test_add_wavelengthcalibration_standard():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(dummy_wavecal)
onedspec.add_wavelengthcalibration(dummy_wavecal, stype='standard')
onedspec.add_wavelengthcalibration([dummy_wavecal])
onedspec.add_wavelengthcalibration([dummy_wavecal], stype='standard')
# science
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelengthcalibration_science_fail_type_None():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(None, stype='science')
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelengthcalibration_science_fail_type_list_of_None():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration([None], stype='science')
@pytest.mark.xfail(raises=ValueError)
def test_add_wavelengthcalibration_science_fail_spec_id():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(dummy_wavecal,
spec_id=1,
stype='science')
# standard
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelengthcalibration_standard_fail_type_None():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(None, stype='standard')
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelengthcalibration_standard_fail_type_list_of_None():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration([None], stype='standard')
# spec_id is ignored in standard, so this passes
def test_add_wavelengthcalibration_standard_fail_spec_id():
# Create a dummy WavelengthCalibration
dummy_wavecal = WavelengthCalibration(log_file_name=None)
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelengthcalibration(dummy_wavecal,
spec_id=1,
stype='standard')
# science add_spec
def test_add_spec_science():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=np.arange(100),
count_err=np.arange(100),
count_sky=np.arange(100),
spec_id=0,
stype='science')
onedspec.add_spec(count=[np.arange(200)],
count_err=[np.arange(200)],
count_sky=[np.arange(200)],
spec_id=1,
stype='science')
@pytest.mark.xfail(raises=TypeError)
def test_add_spec_science_fail_count_type():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=0.,
count_err=np.arange(100),
count_sky=np.arange(200),
spec_id=0,
stype='science')
@pytest.mark.xfail(raises=TypeError)
def test_add_spec_science_fail_count_sky_type():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=np.arange(100),
count_err=np.arange(100),
count_sky=0.,
spec_id=0,
stype='science')
@pytest.mark.xfail(raises=TypeError)
def test_add_spec_science_fail_count_err_type():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=np.arange(100),
count_err=0.,
count_sky=np.arange(200),
spec_id=0,
stype='science')
@pytest.mark.xfail(raises=AssertionError)
def test_add_spec_science_fail_count_sky_length_mismatch():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=np.arange(100),
count_err=np.arange(100),
count_sky=np.arange(200),
spec_id=0,
stype='science')
@pytest.mark.xfail(raises=AssertionError)
def test_add_spec_science_fail_count_err_length_mismatch():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=np.arange(100),
count_err=np.arange(200),
count_sky=np.arange(100),
spec_id=0,
stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_spec_science_fail_count_shape_mismatch():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=[np.arange(100)],
count_err=[np.arange(100),
np.arange(100)],
count_sky=[np.arange(100),
np.arange(100)],
spec_id=[0, 1, 2],
stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_spec_science_fail_count_sky_shape_mismatch():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=[np.arange(100), np.arange(100)],
count_err=[np.arange(100)],
count_sky=[np.arange(100),
np.arange(100)],
spec_id=[0, 1, 2],
stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_spec_science_fail_count_err_shape_mismatch():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(count=[np.arange(100), np.arange(100)],
count_err=[np.arange(100),
np.arange(100)],
count_sky=[np.arange(100)],
spec_id=[0, 1, 2],
stype='science')
# science add_wavelength
def test_add_wavelength_science():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
onedspec.add_spec(np.arange(200), spec_id=1, stype='science')
onedspec.add_wavelength(np.arange(100), spec_id=0, stype='science')
onedspec.add_wavelength(np.arange(200), spec_id=1, stype='science')
onedspec.add_wavelength([np.arange(100)], spec_id=0, stype='science')
onedspec.add_wavelength([np.arange(200)], spec_id=1, stype='science')
# science add_wavelengthcalibration to two traces
def test_add_wavelength_science_two_spec():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
onedspec.add_spec(np.arange(100), spec_id=1, stype='science')
onedspec.add_wavelength(np.arange(100), spec_id=[0, 1], stype='science')
@pytest.mark.xfail(raises=ValueError)
# science add_wavelengthcalibration to two traces
def test_add_wavelength_science_expect_fail():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
onedspec.add_wavelength(np.arange(100), spec_id=[0, 1], stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_science_fail_no_science_data():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelength(np.arange(100), stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_science_fail_no_science_data_2():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelength([np.arange(100)], stype='science')
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelength_science_fail_type():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='science')
onedspec.add_wavelength(None, stype='science')
@pytest.mark.xfail(raises=ValueError)
def test_add_wavelength_science_fail_spec_id():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='science')
onedspec.add_wavelength(np.arange(100), spec_id=1, stype='science')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_science_fail_wavelength_size():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='science')
onedspec.add_wavelength(np.arange(10), stype='science')
# standard add_wavelength
def test_add_wavelength_standard():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), spec_id=0, stype='standard')
onedspec.add_wavelength(np.arange(100), spec_id=0, stype='standard')
onedspec.add_wavelength([np.arange(100)], spec_id=0, stype='standard')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_standard_fail_no_standard_data():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelength(np.arange(100), stype='standard')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_standard_fail_no_standard_data_2():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_wavelength([np.arange(100)], stype='standard')
@pytest.mark.xfail(raises=TypeError)
def test_add_wavelength_standard_fail_type():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='standard')
onedspec.add_wavelength(None, stype='standard')
# Note that standard does not care about spec_id, there can only be one
def test_add_wavelength_standard_spec_id():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='standard')
onedspec.add_wavelength(np.arange(100), spec_id=1, stype='standard')
@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_standard_fail_wavelength_size():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), stype='standard')
onedspec.add_wavelength(np.arange(10), stype='standard')
# science add_wavelength_resampled
def test_add_wavelength_resampled_science():
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
onedspec.add_spec(np.arange(200), spec_id=1, stype='science')
onedspec.add_spec(np.arange(100), spec_id=10, stype='science')
onedspec.add_wavelength_resampled(np.arange(100),
spec_id=0,
stype='science')
onedspec.add_wavelength_resampled(np.arange(200),
spec_id=1,
stype='science')
onedspec.add_wavelength_resampled([np.arange(100)],
spec_id=0,
stype='science')
onedspec.add_wavelength_resampled([np.arange(200)],
spec_id=1,
stype='science')
onedspec.add_wavelength_resampled(
[np.arange(100), np.arange(200),
np.arange(100)],
spec_id=[0, 1, 10],
stype='science')
# science add_wavelength_resampled applied to two traces
def test_add_wavelength_resampled_science_two_spec():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.add_spec(np.arange(100), spec_id=1, stype='science')
    onedspec.add_wavelength_resampled(
        np.arange(100), spec_id=[0, 1], stype='science')


# science add_wavelength_resampled applied to two traces when only one exists
@pytest.mark.xfail(raises=ValueError)
def test_add_wavelength_resampled_science_two_spec_expect_fail():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.add_wavelength_resampled(
        np.arange(100), spec_id=[0, 1], stype='science')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_science_fail_no_science_data():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_wavelength_resampled(np.arange(100), stype='science')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_science_fail_no_science_data_2():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_wavelength_resampled([np.arange(100)], stype='science')


@pytest.mark.xfail(raises=TypeError)
def test_add_wavelength_resampled_science_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_wavelength_resampled(None, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_add_wavelength_resampled_science_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_wavelength_resampled(
        np.arange(100), spec_id=1, stype='science')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_science_fail_wavelength_size():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_wavelength_resampled(np.arange(10), stype='science')
# standard add_wavelength_resampled
def test_add_wavelength_resampled_standard():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), spec_id=0, stype='standard')
    onedspec.add_wavelength_resampled(
        np.arange(100), spec_id=0, stype='standard')
    onedspec.add_wavelength_resampled(
        [np.arange(100)], spec_id=0, stype='standard')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_standard_fail_no_standard_data():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_wavelength_resampled(np.arange(100), stype='standard')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_standard_fail_no_standard_data_2():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_wavelength_resampled([np.arange(100)], stype='standard')


@pytest.mark.xfail(raises=TypeError)
def test_add_wavelength_resampled_standard_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_wavelength_resampled(None, stype='standard')


# Note that standard does not care about spec_id, there can only be one,
# so this test passes
def test_add_wavelength_resampled_standard_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_wavelength_resampled(
        np.arange(100), spec_id=1, stype='standard')


@pytest.mark.xfail(raises=RuntimeError)
def test_add_wavelength_resampled_standard_fail_wavelength_size():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_wavelength_resampled(np.arange(10), stype='standard')
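The tests above exercise the bookkeeping around `add_wavelength_resampled`; the numerical operation it ultimately supports is interpolating counts onto a user-supplied wavelength grid. A minimal numpy-only sketch (a stand-in, not aspired's implementation):

```python
import numpy as np

# A native wavelength solution with a Gaussian emission feature, resampled
# onto a denser target grid with simple linear interpolation.
wave_native = np.linspace(4000., 8000., 100)
count_native = np.exp(-0.5 * ((wave_native - 6000.) / 500.) ** 2)
wave_resampled = np.linspace(4000., 8000., 200)
count_resampled = np.interp(wave_resampled, wave_native, count_native)
```

Flux-conserving resampling (as used in real reductions) needs bin-integral weighting rather than point interpolation, but the grid bookkeeping is the same.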
# science arc_spec
def test_add_arc_spec_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_spec(np.arange(100), stype='science')
    onedspec.add_arc_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.add_arc_spec([np.arange(100)], spec_id=0, stype='science')
    onedspec.add_arc_spec(np.arange(200), spec_id=1, stype='science')
    onedspec.add_arc_spec(np.arange(200), spec_id=[1], stype='science')
    onedspec.add_arc_spec(
        [np.arange(100), np.arange(200)], spec_id=[0, 2], stype='science')


# science find_arc_lines
def test_find_arc_lines_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_spec(np.arange(100), stype='science')
    onedspec.add_arc_spec([np.arange(100)], spec_id=0, stype='science')
    onedspec.find_arc_lines(spec_id=0, stype='science')
    onedspec.find_arc_lines(spec_id=[0], stype='science')


# science find_arc_lines fail spec_id
@pytest.mark.xfail(raises=ValueError)
def test_find_arc_lines_science_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_arc_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.find_arc_lines(spec_id=7, stype='science')


# science add_arc_lines
def test_add_arc_lines_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.add_spec(np.arange(200), spec_id=1, stype='science')
    onedspec.add_spec(np.arange(100), spec_id=10, stype='science')
    onedspec.add_arc_lines(np.arange(7), spec_id=0, stype='science')
    onedspec.add_arc_lines(np.arange(5), spec_id=1, stype='science')
    onedspec.add_arc_lines([np.arange(7)], spec_id=0, stype='science')
    onedspec.add_arc_lines([np.arange(15)], spec_id=1, stype='science')
    onedspec.add_arc_lines(
        [np.arange(7), np.arange(15), np.arange(7)],
        spec_id=[0, 1, 10],
        stype='science')


# adding arc lines with a spec_id that was never seen does not raise
def test_add_arc_lines_science_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_arc_lines(np.arange(5), spec_id=0, stype='science')
    onedspec.add_arc_lines(np.arange(5), spec_id=7, stype='science')


@pytest.mark.xfail(raises=TypeError)
def test_add_arc_lines_science_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='science')
    onedspec.add_arc_lines(None, stype='science')


# standard add_arc_lines
def test_add_arc_lines_standard():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_spec(np.arange(100), spec_id=0, stype='standard')
    onedspec.add_arc_lines(np.arange(5), spec_id=0, stype='standard')
    onedspec.add_arc_lines([np.arange(15)], spec_id=0, stype='standard')
    onedspec.add_arc_lines([np.arange(15)], spec_id=[0], stype='standard')


@pytest.mark.xfail(raises=TypeError)
def test_add_arc_lines_standard_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_arc_lines(None, stype='standard')
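Conceptually, `find_arc_lines` locates emission peaks in an arc spectrum; the arrays passed to `add_arc_lines` play the role of such peak positions. A simplified numpy-only illustration of thresholded local-maximum detection (hypothetical values, not the actual detector logic):

```python
import numpy as np

# Three synthetic arc lines in an otherwise flat spectrum.
arc = np.zeros(100)
arc[[20, 45, 80]] = [10., 7., 12.]

# A pixel is a peak if it exceeds a threshold and its two neighbours.
above = arc > 5.
local_max = np.r_[False, (arc[1:-1] > arc[:-2]) & (arc[1:-1] >= arc[2:]), False]
peaks = np.flatnonzero(above & local_max)
```

Real arc-line finders additionally refine each peak to sub-pixel precision (e.g. by centroiding), which is why the downstream wavelength fits can be accurate.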
# science add_trace
def test_add_trace_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_trace(np.arange(100), np.arange(100), stype='science')
    onedspec.add_trace(
        np.arange(100), np.arange(100), spec_id=0, stype='science')
    onedspec.add_trace(
        np.arange(200), np.arange(200), spec_id=1, stype='science')
    onedspec.add_trace(
        np.arange(200), np.arange(200), spec_id=[0, 1], stype='science')
    onedspec.add_trace(
        [np.arange(100), np.arange(200)],
        [np.arange(100), np.arange(200)],
        spec_id=[0, 2],
        stype='science')


@pytest.mark.xfail(raises=TypeError)
def test_add_trace_science_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_trace(np.polyfit, np.ndarray, stype='science')


# standard add_trace
def test_add_trace_standard():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_trace(np.arange(100), np.arange(100), stype='standard')
    onedspec.add_trace([np.arange(100)], [np.arange(100)], stype='standard')


@pytest.mark.xfail(raises=TypeError)
def test_add_trace_standard_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_trace(np.polyfit, np.ndarray, stype='standard')
# science add_fit_coeff
def test_add_fit_coeff_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), spec_id=0, stype='science')
    onedspec.add_spec(np.arange(200), spec_id=1, stype='science')
    onedspec.add_spec(np.arange(100), spec_id=10, stype='science')
    onedspec.add_fit_coeff(np.arange(100), stype='science')
    onedspec.add_fit_coeff([0, 1, 2, 3, 4, 5], stype='science')
    onedspec.add_fit_coeff(np.arange(100), fit_type='leg', stype='science')
    onedspec.add_fit_coeff(np.arange(100), fit_type='cheb', stype='science')
    onedspec.add_fit_coeff(np.arange(100), spec_id=0, stype='science')
    onedspec.add_fit_coeff(np.arange(200), spec_id=1, stype='science')
    onedspec.add_fit_coeff([np.arange(100)], spec_id=0, stype='science')
    onedspec.add_fit_coeff([np.arange(200)], spec_id=1, stype='science')
    onedspec.add_fit_coeff(
        [np.arange(100), np.arange(200), np.arange(100)],
        fit_type=['poly', 'poly', 'poly'],
        spec_id=[0, 1, 10],
        stype='science')
    onedspec.add_fit_coeff(
        [[np.arange(100)], [np.arange(200)], [np.arange(100)]],
        fit_type=[['poly'], ['poly'], ['poly']],
        spec_id=[0, 1, 10],
        stype='science')


@pytest.mark.xfail(raises=TypeError)
def test_add_fit_coeff_science_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(10), stype='science')
    onedspec.add_wavelength_resampled(None, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_add_fit_coeff_science_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(10), stype='science')
    onedspec.add_fit_coeff(np.arange(10), spec_id=1, stype='science')


# standard add_fit_coeff
def test_add_fit_coeff_standard():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(100), stype='standard')
    onedspec.add_fit_coeff(np.arange(100), spec_id=0, stype='standard')
    onedspec.add_fit_coeff([0, 1, 2, 3, 4, 5], stype='standard')
    onedspec.add_fit_coeff(np.arange(100), fit_type='leg', stype='standard')
    onedspec.add_fit_coeff(np.arange(100), fit_type='cheb', stype='standard')
    onedspec.add_fit_coeff(np.arange(100), spec_id=0, stype='standard')
    onedspec.add_fit_coeff([np.arange(100)], spec_id=0, stype='standard')
    onedspec.add_fit_coeff([np.arange(100)], spec_id=[0], stype='standard')


@pytest.mark.xfail(raises=TypeError)
def test_add_fit_coeff_standard_fail_type():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(10), stype='standard')
    onedspec.add_fit_coeff(None, stype='standard')


# Note that standard does not care about spec_id, there can only be one,
# so this test passes
def test_add_fit_coeff_standard_not_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_spec(np.arange(10), stype='standard')
    onedspec.add_fit_coeff(np.arange(10), spec_id=1, stype='standard')
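The coefficient arrays fed to `add_fit_coeff` parameterise a pixel-to-wavelength solution. A short sketch, assuming (as later tests such as the `np.array((4000., 1, 0.2, 0.0071))` case suggest) that `'poly'` coefficients are in increasing order of power, as consumed by `np.polynomial.polynomial.polyval`; the specific coefficient values here are made up:

```python
import numpy as np

# wave = 4000 + 2.5 * pix + 1e-4 * pix**2 over a 100-pixel detector.
fit_coeff = [4000., 2.5, 1.0e-4]
pix = np.arange(100)
wave = np.polynomial.polynomial.polyval(pix, fit_coeff)
```

The `'leg'` and `'cheb'` fit types exercised above would instead be evaluated with `np.polynomial.legendre.legval` and `np.polynomial.chebyshev.chebval`.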
# Note that this is testing the "relay" to the WavelengthCalibrator, but
# not testing the calibrator itself.
def test_calibrator_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.initialise_calibrator(spec_id=0, stype='science')
    onedspec.initialise_calibrator(spec_id=[1], stype='science')
    onedspec.initialise_calibrator(spec_id=[11, 75], stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_calibrator_properties(spec_id=0, stype='science')
    onedspec.set_calibrator_properties(
        spec_id=[1], num_pix=100, stype='science')
    onedspec.set_calibrator_properties(
        spec_id=[11, 75],
        pixel_list=np.arange(100),
        log_level='debug',
        stype='science')
    onedspec.set_hough_properties(num_slopes=1000, stype='science')
    onedspec.set_hough_properties(spec_id=0, num_slopes=1000, stype='science')
    onedspec.set_hough_properties(
        spec_id=[1], num_slopes=1000, stype='science')
    onedspec.set_hough_properties(
        spec_id=[11, 75], num_slopes=1000, stype='science')
    onedspec.set_ransac_properties(filter_close=True, stype='science')
    onedspec.set_ransac_properties(
        spec_id=0, filter_close=True, stype='science')
    onedspec.set_ransac_properties(
        spec_id=[1], filter_close=True, stype='science')
    onedspec.set_ransac_properties(
        spec_id=[11, 75], filter_close=True, stype='science')
    onedspec.add_user_atlas(
        elements=['HeXe'] * 10, wavelengths=np.arange(10), stype='science')
    onedspec.add_user_atlas(
        spec_id=0,
        elements=['HeXe'] * 10,
        wavelengths=np.arange(10),
        stype='science')
    onedspec.add_user_atlas(
        spec_id=[1],
        elements=['HeXe'] * 10,
        wavelengths=np.arange(10),
        stype='science')
    onedspec.add_user_atlas(
        spec_id=[11, 75],
        elements=['HeXe'] * 10,
        wavelengths=np.arange(10),
        stype='science')
    assert len(
        onedspec.science_wavecal[0].spectrum1D.calibrator.atlas_elements
    ) == 20
    onedspec.remove_atlas_lines_range(wavelength=5., tolerance=1.5, spec_id=0)
    assert len(
        onedspec.science_wavecal[0].spectrum1D.calibrator.atlas_elements
    ) == 16
    assert len(
        onedspec.science_wavecal[1].spectrum1D.calibrator.atlas_elements
    ) == 20
    onedspec.remove_atlas_lines_range(wavelength=5., tolerance=1.5)
    assert len(
        onedspec.science_wavecal[1].spectrum1D.calibrator.atlas_elements
    ) == 16
    onedspec.clear_atlas()
    assert onedspec.science_wavecal[
        0].spectrum1D.calibrator.atlas_elements == []
    onedspec.add_atlas(elements=['Ar'] * 10, stype='science')
    onedspec.add_atlas(spec_id=0, elements=['Ar'] * 10, stype='science')
    onedspec.add_atlas(spec_id=[1], elements=['Ar'] * 10, stype='science')
    onedspec.add_atlas(spec_id=[11, 75], elements=['Ar'] * 10, stype='science')
    onedspec.do_hough_transform(stype='science')
    onedspec.do_hough_transform(spec_id=0, stype='science')
    onedspec.do_hough_transform(spec_id=[1], stype='science')
    onedspec.do_hough_transform(spec_id=[11, 75], stype='science')
    onedspec.set_known_pairs(pix=100, wave=4500., stype='science')
    onedspec.set_known_pairs(pix=[100], wave=[4500.], stype='science')
    onedspec.set_known_pairs(
        pix=[100, 200], wave=[4500., 5500.], stype='science')
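Known (pixel, wavelength) pairs like those passed to `set_known_pairs` pin down the dispersion solution; with two pairs a straight-line solution is fully determined. A numpy sketch of that constraint (illustration only, not the calibrator's fitting code):

```python
import numpy as np

# Two anchor points: pixel 100 at 4500 A and pixel 200 at 5500 A.
pix = np.array([100., 200.])
wave = np.array([4500., 5500.])

# Degree-1 fit; coefficients come back in increasing order of power,
# i.e. [intercept, slope].
coeff = np.polynomial.polynomial.polyfit(pix, wave, 1)
# slope = 10 A/pix, intercept = 3500 A
```

In the RANSAC-based calibrator such pairs act as hard constraints while the remaining arc lines vote for the rest of the solution.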
# Fail at the RASCAL initialisation
@pytest.mark.xfail(raises=TypeError)
def test_calibrator_science2():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.initialise_calibrator(spec_id=[1, 5], stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_calibrator_properties_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_hough_properties_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_ransac_properties_science():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_known_pairs_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.set_known_pairs(spec_id=7, stype='science')
@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_add_user_atlas_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_user_atlas(
        elements=['bla'], wavelengths=[1234.], spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_add_atlas_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_remove_atlas_lines_range_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.remove_atlas_lines_range(
        wavelength=6000., spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_clear_atlas_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.clear_atlas(spec_id=0, stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.list_atlas(stype='science')
    onedspec.list_atlas(spec_id=0, stype='science')
    onedspec.clear_atlas(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_list_atlas_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.list_atlas(spec_id=[0, 1], stype='science')
    onedspec.list_atlas(spec_id=7, stype='science')
@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_hough_transform_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.do_hough_transform(spec_id=0, stype='science')
    onedspec.do_hough_transform(spec_id=[1, 11], stype='science')
    onedspec.do_hough_transform(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_fit_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.do_hough_transform(stype='science')
    onedspec.fit(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_refine_fit_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_arc_lines(
        np.arange(5), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.add_arc_spec(
        np.arange(100), spec_id=[0, 1, 11, 75], stype='science')
    onedspec.initialise_calibrator(stype='science')
    onedspec.set_calibrator_properties(stype='science')
    onedspec.set_hough_properties(stype='science')
    onedspec.set_ransac_properties(stype='science')
    onedspec.add_atlas(elements=['Xe'], stype='science')
    onedspec.do_hough_transform(stype='science')
    onedspec.science_wavecal_polynomial_available = True
    onedspec.robust_refit(spec_id=7, stype='science')


@pytest.mark.xfail(raises=ValueError)
def test_calibrator_science_fail_ap_extract_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_fit_coeff([1, 2, 3, 4, 5])
    onedspec.apply_wavelength_calibration(spec_id=7)


# Module-level TwoDSpec shared by the from_twodspec tests below
img = image_reduction.ImageReduction(log_file_name=None)
img.add_filelist(filelist='test/test_data/sprat_LHS6328.list')
img.load_data()
img.reduce()
twodspec = spectral_reduction.TwoDSpec(log_file_name=None)
twodspec.add_data(img)
twodspec.ap_trace()
twodspec.ap_extract(model='lowess')
def test_from_twodspec():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.from_twodspec(twodspec, spec_id=0)


@pytest.mark.xfail()
def test_from_twodspec_fail_spec_id():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.from_twodspec(twodspec, spec_id=10)


def test_extinction_function():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.set_atmospheric_extinction()
    onedspec.set_atmospheric_extinction(
        extinction_func=np.polyfit([0, 1], [0, 1], 1))


def test_standard_library_lookup():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.lookup_standard_libraries(target='cd32d9927')
    onedspec.lookup_standard_libraries(target='agk_81d266_005')
    onedspec.lookup_standard_libraries(target='bd28')
    onedspec.lookup_standard_libraries(target='hr3454')
    onedspec.load_standard(library='esowdstan', target='agk_81d266_005')
    onedspec.inspect_standard(display=False)


def test_sensitivity():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.add_trace(
        np.ones(70) * 37, np.ones(70), spec_id=0, stype='science+standard')
    onedspec.add_spec(np.arange(1, 71), spec_id=0, stype='science+standard')
    onedspec.add_arc_spec(
        np.arange(1, 71), spec_id=0, stype='science+standard')
    onedspec.add_fit_coeff(
        np.array((4000., 1, 0.2, 0.0071)),
        spec_id=0,
        stype='science+standard')
    onedspec.apply_wavelength_calibration(
        wave_bin=100., wave_start=4000., wave_end=9000.)
    onedspec.load_standard(target='cd32d9927')
    onedspec.inspect_standard(
        display=False,
        return_jsonstring=True,
        save_fig=True,
        filename='test/test_output/test_onedspec_inspect_standard')
    coeff = np.polynomial.polynomial.polyfit(
        np.arange(1, 1001), np.random.random(1000) * 20, 2)
    onedspec.add_sensitivity_func(
        lambda x: np.polynomial.polynomial.polyval(x, coeff))
    # Not implemented yet
    # onedspec.save_sensitivity_func(
    #     'test/test_output/test_onedspec_sensitivity_func')
    onedspec.inspect_sensitivity(
        display=False,
        return_jsonstring=True,
        save_fig=True,
        filename='test/test_output/test_onedspec_inspect_sensitivity')


# Module-level onedspec and coeff shared by the telluric and
# miscellaneous tests below
onedspec = spectral_reduction.OneDSpec(log_file_name=None)
onedspec.add_trace(
    np.ones(70) * 37, np.ones(70), spec_id=0, stype='science+standard')
onedspec.add_spec(np.arange(1, 71), spec_id=0, stype='science+standard')
onedspec.add_arc_spec(np.arange(1, 71), spec_id=0, stype='science+standard')
onedspec.add_fit_coeff(
    np.array((4000., 1, 0.2, 0.0071)), spec_id=0, stype='science+standard')
onedspec.apply_wavelength_calibration(
    wave_bin=100., wave_start=4000., wave_end=9000.)
onedspec.load_standard(target='cd32d9927')
coeff = np.polynomial.polynomial.polyfit(
    np.arange(1, 1001), np.random.random(1000) * 20, 2)
onedspec.add_sensitivity_func(
    lambda x: np.polynomial.polynomial.polyval(x, coeff))
# Not implemented yet
# onedspec.save_sensitivity_func(
#     'test/test_output/test_onedspec_sensitivity_func')
# onedspec.get_sensitivity()
onedspec.set_atmospheric_extinction()
onedspec.apply_flux_calibration()
onedspec.apply_atmospheric_extinction_correction(
    science_airmass=1.2, standard_airmass=1.5)
onedspec.create_fits(
    output='trace+count+wavelength+count_resampled+'
    'sensitivity+flux+sensitivity_resampled+'
    'flux_resampled',
    empty_primary_hdu=False)
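The sensitivity-function pattern used above (fit a low-order polynomial to per-wavelength samples, wrap the evaluation in a callable, hand it to `add_sensitivity_func`) can be sketched in isolation. A deterministic stand-in with made-up sample values, so the result is checkable:

```python
import numpy as np

# Sensitivity samples that happen to be exactly linear in wavelength index,
# so a degree-2 least-squares fit recovers them exactly.
wave_sample = np.arange(1, 1001)
sens_sample = 10. + 0.01 * wave_sample

coeff = np.polynomial.polynomial.polyfit(wave_sample, sens_sample, 2)

# The callable shape expected by add_sensitivity_func in the tests above.
sensitivity_func = lambda x: np.polynomial.polynomial.polyval(x, coeff)
value = sensitivity_func(500.)
```

Wrapping the coefficients in a closure keeps the stored object a plain function of wavelength, independent of how it was fitted.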
def test_adding_telluric_function():
    onedspec.add_telluric_function(
        lambda x: np.polynomial.polynomial.polyval(x, coeff))
    onedspec.add_telluric_function(
        lambda x: np.polynomial.polynomial.polyval(x, coeff), spec_id=0)
    onedspec.add_telluric_function(
        lambda x: np.polynomial.polynomial.polyval(x, coeff), spec_id=[0])
    onedspec.add_telluric_function([np.arange(10000), np.arange(10000)])
    onedspec.add_telluric_function(
        [np.arange(10000), np.arange(10000)], spec_id=0)
    onedspec.add_telluric_function(
        [np.arange(10000), np.arange(10000)], spec_id=[0])


def test_getting_telluric_profile():
    onedspec.add_telluric_function([np.arange(10000), np.arange(10000)])
    onedspec.get_telluric_profile()
    onedspec.get_telluric_profile(spec_id=0)
    onedspec.get_telluric_profile(spec_id=[0])
    onedspec.inspect_telluric_profile(
        display=False,
        save_fig=True,
        fig_type='iframe+png+jpg+svg+pdf',
        filename='test/test_output/test_onedspec_inspect_telluric_profile')
    onedspec.inspect_telluric_profile(
        display=False, spec_id=0, return_jsonstring=True)
    onedspec.inspect_telluric_profile(display=False, spec_id=[0])
    onedspec.apply_telluric_correction()
    onedspec.apply_telluric_correction(spec_id=0)
    onedspec.apply_telluric_correction(spec_id=[0])
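Conceptually, a telluric correction divides the observed flux by a smooth atmospheric transmission profile. A simplified numpy illustration with an invented absorption dip (a conceptual stand-in for what `apply_telluric_correction` does with its stored profile, not aspired's code):

```python
import numpy as np

# Flat unit flux with a Gaussian transmission dip near 7600 A,
# roughly where the O2 A-band sits.
wave = np.linspace(6000., 10000., 1000)
flux = np.ones(1000)
transmission = 1. - 0.5 * np.exp(-0.5 * ((wave - 7600.) / 50.) ** 2)

# Dividing out the profile restores the continuum across the dip.
flux_corrected = flux / transmission
```

Because transmission is at most 1, the corrected flux is never below the observed flux; noise in the dip is amplified by the same factor, which is why deep telluric regions stay unreliable even after correction.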
@pytest.mark.xfail(raises=ValueError)
def test_adding_telluric_function_wrong_spec_id():
    onedspec.add_telluric_function(
        lambda x: np.polynomial.polynomial.polyval(x, coeff), spec_id=1000)


@pytest.mark.xfail(raises=ValueError)
def test_getting_telluric_profile_wrong_spec_id():
    onedspec.get_telluric_profile(spec_id=1000)


@pytest.mark.xfail(raises=ValueError)
def test_inspecting_telluric_profile_wrong_spec_id():
    onedspec.inspect_telluric_profile(spec_id=1000)


@pytest.mark.xfail(raises=ValueError)
def test_applying_atmospheric_extinction_correction_wrong_spec_id():
    onedspec.apply_atmospheric_extinction_correction(spec_id=1000)
def test_miscellaneous():
    onedspec.apply_flux_calibration()
    onedspec.apply_flux_calibration(spec_id=0)
    onedspec.apply_flux_calibration(spec_id=[0])
    onedspec.apply_atmospheric_extinction_correction(science_airmass=1.2)
    onedspec.apply_atmospheric_extinction_correction(
        science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=0, science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=[0], science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_flux_calibration()
    onedspec.set_atmospheric_extinction(extinction_func=np.poly1d([1, 2, 3]))
    onedspec.apply_atmospheric_extinction_correction(science_airmass=1.2)
    onedspec.apply_atmospheric_extinction_correction(
        science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=0, science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=[0], science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_flux_calibration()
    onedspec.set_atmospheric_extinction()
    onedspec.apply_atmospheric_extinction_correction(science_airmass=1.2)
    onedspec.apply_atmospheric_extinction_correction(
        science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=0, science_airmass=1.2, standard_airmass=1.5)
    onedspec.apply_atmospheric_extinction_correction(
        spec_id=[0], science_airmass=1.2, standard_airmass=1.5)
    onedspec.inspect_reduced_spectrum(
        display=False,
        filename='test/test_output/test_onedspec_inspect_reduced_spectrum')
    onedspec.inspect_reduced_spectrum(
        display=False,
        spec_id=0,
        renderer='default',
        return_jsonstring=True,
        save_fig=True,
        fig_type='iframe+png',
        filename='test/test_output/test_onedspec_inspect_reduced_spectrum')
    onedspec.inspect_sensitivity(
        display=False,
        renderer='default',
        return_jsonstring=True,
        save_fig=True,
        fig_type='iframe+png',
        filename='test/test_output/test_onedspec_inspect_reduced_spectrum')
    onedspec.create_fits(
        output='trace+count+weight_map+arc_spec+wavecal+'
        'wavelength+count_resampled+sensitivity+flux+'
        'sensitivity_resampled+flux_resampled',
        empty_primary_hdu=False)
    onedspec.create_fits(
        output='trace+count+weight_map+arc_spec+wavecal+'
        'wavelength+count_resampled+sensitivity+flux+'
        'sensitivity_resampled+flux_resampled',
        recreate=True)
    onedspec.create_fits(
        output='trace+count+weight_map+arc_spec+wavecal+'
        'wavelength+count_resampled+sensitivity+flux+'
        'sensitivity_resampled+flux_resampled',
        spec_id=0,
        empty_primary_hdu=False,
        recreate=True)
    onedspec.modify_trace_header(0, 'set', 'COMMENT', 'Hello Trace!')
    onedspec.modify_count_header(0, 'set', 'COMMENT', 'Hello Count!')
    onedspec.modify_weight_map_header('set', 'COMMENT', 'Hello Weight!')
    onedspec.modify_arc_spec_header(0, 'set', 'COMMENT', 'Hello Arc Spec!')
    onedspec.modify_wavecal_header('set', 'COMMENT', 'Hello Wavecal!')
    onedspec.modify_wavelength_header('set', 'COMMENT', 'Hello Wavelength!')
    onedspec.modify_count_resampled_header(
        0, 'set', 'COMMENT', 'Hello Count Resampled!')
    onedspec.modify_sensitivity_header('set', 'COMMENT', 'Hello Sensitivity!')
    onedspec.modify_flux_header(0, 'set', 'COMMENT', 'Hello Flux!')
    onedspec.modify_sensitivity_resampled_header(
        'set', 'COMMENT', 'Hello Sensitivity Resampled!')
    onedspec.modify_flux_resampled_header(
        0, 'set', 'COMMENT', 'Hello Flux Resampled!')
    onedspec.modify_trace_header(
        0, 'set', 'COMMENT', 'Hello Trace!', spec_id=0)
    onedspec.modify_count_header(
        0, 'set', 'COMMENT', 'Hello Count!', spec_id=0)
    onedspec.modify_weight_map_header(
        'set', 'COMMENT', 'Hello Weight!', spec_id=0)
    onedspec.modify_arc_spec_header(
        0, 'set', 'COMMENT', 'Hello Arc Spec!', spec_id=0)
    onedspec.modify_wavecal_header(
        'set', 'COMMENT', 'Hello Wavecal!', spec_id=0)
    onedspec.modify_wavelength_header(
        'set', 'COMMENT', 'Hello Wavelength!', spec_id=0)
    onedspec.modify_count_resampled_header(
        0, 'set', 'COMMENT', 'Hello Count Resampled!', spec_id=0)
    onedspec.modify_sensitivity_header(
        'set', 'COMMENT', 'Hello Sensitivity!', spec_id=0)
    onedspec.modify_flux_header(0, 'set', 'COMMENT', 'Hello Flux!', spec_id=0)
    onedspec.modify_sensitivity_resampled_header(
        'set', 'COMMENT', 'Hello Sensitivity Resampled!', spec_id=0)
    onedspec.modify_flux_resampled_header(
        0, 'set', 'COMMENT', 'Hello Flux Resampled!', spec_id=0)
@pytest.mark.xfail(raises=ValueError)
def test_modify_trace_header_fail_spec_id():
    onedspec.modify_trace_header(
        0, 'set', 'COMMENT', 'Hello Trace!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_count_header_fail_spec_id():
    onedspec.modify_count_header(
        0, 'set', 'COMMENT', 'Hello Count!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_weight_map_header_fail_spec_id():
    onedspec.modify_weight_map_header(
        0, 'set', 'COMMENT', 'Hello Weight!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_arc_spec_header_fail_spec_id():
    onedspec.modify_arc_spec_header(
        0, 'set', 'COMMENT', 'Hello Arc Spec!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_wavecal_header_fail_spec_id():
    onedspec.modify_wavecal_header(
        'set', 'COMMENT', 'Hello Wavecal!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_wavelength_header_fail_spec_id():
    onedspec.modify_wavelength_header(
        'set', 'COMMENT', 'Hello Wavelength!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_count_resampled_header_fail_spec_id():
    onedspec.modify_count_resampled_header(
        0, 'set', 'COMMENT', 'Hello Count Resampled!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_sensitivity_header_fail_spec_id():
    onedspec.modify_sensitivity_header(
        0, 'set', 'COMMENT', 'Hello Sensitivity!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_flux_header_fail_spec_id():
    onedspec.modify_flux_header(0, 'set', 'COMMENT', 'Hello Flux!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_sensitivity_resampled_header_fail_spec_id():
    onedspec.modify_sensitivity_resampled_header(
        0, 'set', 'COMMENT', 'Hello Sensitivity Resampled!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_modify_flux_resampled_header_fail_spec_id():
    onedspec.modify_flux_resampled_header(
        0, 'set', 'COMMENT', 'Hello Flux Resampled!', spec_id=10)


@pytest.mark.xfail(raises=ValueError)
def test_save_fits_fail_output_type():
    onedspec.save_fits(output='wave')


@pytest.mark.xfail(raises=ValueError)
def test_save_fits_fail_stype():
    onedspec.save_fits(stype='sci')


@pytest.mark.xfail(raises=ValueError)
def test_save_fits_fail_spec_id():
    onedspec.save_fits(spec_id=100)


@pytest.mark.xfail(raises=ValueError)
def test_save_csv_fail_output_type():
    onedspec.save_csv(output='wave')


@pytest.mark.xfail(raises=ValueError)
def test_save_csv_fail_stype():
    onedspec.save_csv(stype='sci')


@pytest.mark.xfail(raises=ValueError)
def test_save_csv_fail_spec_id():
    onedspec.save_csv(spec_id=100)


peaks = np.sort(np.random.random(31) * 1000.)

# Remove the closely spaced peaks
distance_mask = np.isclose(peaks[:-1], peaks[1:], atol=5.)
distance_mask = np.insert(distance_mask, 0, False)
peaks = peaks[~distance_mask]

# Line lists
wavelengths_linear = 3000. + 5. * peaks
wavelengths_quadratic = 3000. + 4 * peaks + 1.0e-3 * peaks**2.
elements_linear = ['Linear'] * len(wavelengths_linear)
elements_quadratic = ['Quadratic'] * len(wavelengths_quadratic)
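A self-contained sketch (not part of the original suite) of the de-duplication above: `np.isclose` flags each peak that lies within `atol` units of its predecessor, and the prepended `False` guarantees the first peak is always kept. The `demo` array here is hypothetical.

```python
import numpy as np

# Hypothetical array with two closely spaced pairs (10/12 and 100/103)
demo = np.array([10., 12., 100., 103., 500.])

# Flag peaks within 5 units of their predecessor...
mask = np.isclose(demo[:-1], demo[1:], atol=5.)
# ...and always keep the very first peak
mask = np.insert(mask, 0, False)

filtered = demo[~mask]  # keeps 10., 100. and 500.
```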


def test_linear_fit():
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=1000, range_tolerance=500.,
                                  xbins=200, ybins=200,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_linear,
                            wavelengths=wavelengths_linear)
    onedspec.set_ransac_properties(minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_deg=1, return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          return_solution=True,
                                          robust_refit=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]

    assert np.abs(best_p_robust[1] - 5.) / 5. < 0.01
    assert np.abs(best_p_robust[0] - 3000.) / 3000. < 0.01
    assert peak_utilisation_robust > 0.8
    assert atlas_utilisation_robust > 0.5


def test_manual_refit():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=1000, range_tolerance=500.,
                                  xbins=200, ybins=200,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_linear,
                            wavelengths=wavelengths_linear)
    onedspec.set_ransac_properties(minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_deg=1, return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]

    result_manual = onedspec.manual_refit(matched_peaks_robust,
                                          matched_atlas_robust,
                                          return_solution=True)
    (best_p_manual, rms_manual, residual_manual, peak_utilisation_manual,
     atlas_utilisation_manual) = result_manual['science'][0]

    assert np.abs(best_p_manual[0] - best_p[0]) < 10.
    assert np.abs(best_p_manual[1] - best_p[1]) < 0.1


def test_manual_refit_remove_points():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=1000, range_tolerance=500.,
                                  xbins=200, ybins=200,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_linear,
                            wavelengths=wavelengths_linear)
    onedspec.set_ransac_properties(minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_deg=1, return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]

    onedspec.remove_pix_wave_pair(5)

    result_manual = onedspec.manual_refit(matched_peaks_robust,
                                          matched_atlas_robust,
                                          return_solution=True)
    (best_p_manual, rms_manual, residual_manual, peak_utilisation_manual,
     atlas_utilisation_manual) = result_manual['science'][0]

    assert np.allclose(best_p_manual, best_p, rtol=1e-02)


def test_manual_refit_add_points():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=1000, range_tolerance=500.,
                                  xbins=200, ybins=200,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_linear,
                            wavelengths=wavelengths_linear)
    onedspec.set_ransac_properties(minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_deg=1, return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]

    onedspec.add_pix_wave_pair(2000., 3000. + 4 * 2000. + 1.0e-3 * 2000.**2.)

    result_manual = onedspec.manual_refit(matched_peaks_robust,
                                          matched_atlas_robust,
                                          return_solution=True)
    (best_p_manual, rms_manual, residual_manual, peak_utilisation_manual,
     atlas_utilisation_manual) = result_manual['science'][0]

    assert np.allclose(best_p_manual, best_p, rtol=1e-02)
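The pixel–wavelength pair added in `test_manual_refit_add_points` is generated from the same quadratic model used to build the atlas, wave(p) = 3000 + 4p + 1e-3 p²; a quick arithmetic check of the value being passed in:

```python
# Evaluate the quadratic atlas model at pixel 2000
pix = 2000.
wave = 3000. + 4. * pix + 1.0e-3 * pix ** 2.
# 3000 + 8000 + 4000 = 15000.0
```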


def test_quadratic_fit():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=1000, range_tolerance=500.,
                                  xbins=200, ybins=200,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_quadratic,
                            wavelengths=wavelengths_quadratic)
    onedspec.set_ransac_properties(minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_deg=2, return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]

    assert peak_utilisation_robust > 0.7
    assert atlas_utilisation_robust > 0.5


def test_quadratic_fit_legendre():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=500, range_tolerance=200.,
                                  xbins=100, ybins=100,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_quadratic,
                            wavelengths=wavelengths_quadratic)
    onedspec.set_ransac_properties(sample_size=10, minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_tolerance=5.,
                          candidate_tolerance=2., fit_deg=2,
                          fit_type='legendre', return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]


def test_quadratic_fit_chebyshev():
    # Initialise the calibrator
    onedspec = spectral_reduction.OneDSpec(log_file_name=None)
    onedspec.initialise_calibrator(peaks=peaks)
    onedspec.set_calibrator_properties(num_pix=1000)
    onedspec.set_hough_properties(num_slopes=500, range_tolerance=200.,
                                  xbins=100, ybins=100,
                                  min_wavelength=3000., max_wavelength=8000.)
    onedspec.add_user_atlas(elements=elements_quadratic,
                            wavelengths=wavelengths_quadratic)
    onedspec.set_ransac_properties(sample_size=10, minimum_matches=20)
    onedspec.do_hough_transform(brute_force=False)

    # Run the wavelength calibration
    result = onedspec.fit(max_tries=2000, fit_tolerance=5.,
                          candidate_tolerance=2., fit_deg=2,
                          fit_type='chebyshev', return_solution=True)
    (best_p, matched_peaks, matched_atlas, rms, residual, peak_utilisation,
     atlas_utilisation) = result['science'][0]

    # Refine solution
    result_robust = onedspec.robust_refit(fit_coeff=best_p, refine=False,
                                          robust_refit=True,
                                          return_solution=True)
    (best_p_robust, matched_peaks_robust, matched_atlas_robust, rms_robust,
     residual_robust, peak_utilisation_robust,
     atlas_utilisation_robust) = result_robust['science'][0]


# topside/plumbing/tests/test_plumbing_engine_editing.py
import pytest
import topside as top
import topside.plumbing.invalid_reasons as invalid
import topside.plumbing.exceptions as exceptions
import topside.plumbing.tests.testing_utils as test
import topside.plumbing.plumbing_utils as utils


def test_add_to_empty():
    plumb = top.PlumbingEngine()
    pc = test.create_component(2, utils.CLOSED, 0, 0, 'valve', 'A')
    mapping = {1: 1, 2: 2}
    plumb.add_component(pc, mapping, 'open', {1: (20, False)})

    assert plumb.is_valid()
    assert plumb.time_res == utils.DEFAULT_TIME_RESOLUTION_MICROS
    assert plumb.edges() == [
        (1, 2, 'valve.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(2))}),
        (2, 1, 'valve.A2', {'FC': 0})
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(20)}),
        (2, {'body': top.GenericNode(0)})
    ]
    assert plumb.current_state('valve') == 'open'


def test_add_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})

    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
        (2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0.5))}),
        (3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0.2))}),
        (3, 4, 'valve3.C1', {'FC': utils.teq_to_FC(0)}),
        (4, 3, 'valve3.C2', {'FC': utils.teq_to_FC(utils.s_to_micros(1))})
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(50)})
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve2') == 'open'
    assert plumb.current_state('valve3') == 'closed'


def test_add_component_errors():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    name = 'valve3'
    wrong_node = 3
    right_node = 2
    pc = test.create_component(0, 0, 0, 1, name, 'C')
    mapping = {1: 3, wrong_node: 4}

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    assert str(err.value) ==\
        f"Component '{name}', node {right_node} not found in mapping dict."


def test_add_invalid_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    wrong_node = 5
    pc_states = {
        'open': {
            (1, wrong_node, 'A1'): 0,
            (2, 1, 'A2'): 0
        },
        'closed': {
            (1, 2, 'A1'): 0,
            (2, 1, 'A2'): 0
        }
    }
    pc_edges = [(1, 2, 'A1'), (2, 1, 'A2')]
    invalid_pc = top.PlumbingComponent('valve', pc_states, pc_edges)
    assert not invalid_pc.is_valid()

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.add_component(invalid_pc, {1: 3, 2: 4}, 'open')
    assert str(err.value) ==\
        "Component not valid; all errors must be resolved before loading in."


def test_reset_added_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    plumb.reset(True)

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_removed_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    plumb.remove_component('valve2')
    plumb.reset(True)

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_component_state():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    plumb.set_component_state('valve1', 'open')
    plumb.reset()

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_pressure():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    plumb.set_pressure(2, 150)
    plumb.reset()

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_step_and_solve():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    plumb.step()
    plumb.solve()
    plumb.reset()

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_keep_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    plumb.remove_component('valve2')
    plumb.reset()

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
        (3, 4, 'valve3.C1', {'FC': utils.teq_to_FC(0)}),
        (4, 3, 'valve3.C2', {'FC': utils.teq_to_FC(utils.s_to_micros(1))})
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(0)})
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve3') == 'closed'


def test_reset_fixed_pressure():
    plumb = test.two_valve_setup_fixed(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    plumb.set_pressure(2, 150)
    plumb.reset()

    plumb_initial = test.two_valve_setup_fixed(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()
    assert plumb.fixed_pressures == plumb_initial.fixed_pressures


def test_integration_reset():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.remove_component('valve2')
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    plumb.set_component_state('valve1', 'open')
    plumb.set_pressure(3, 150)
    plumb.reset(True)

    plumb_initial = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == plumb_initial.time_res
    assert plumb.edges() == plumb_initial.edges()
    assert plumb.nodes() == plumb_initial.nodes()
    assert plumb.current_state() == plumb_initial.current_state()


def test_reset_integration_and_keep_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.remove_component('valve2')
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    plumb.set_component_state('valve1', 'open')
    plumb.set_pressure(3, 150)
    plumb.reset()

    assert plumb.time == 0
    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
        (3, 4, 'valve3.C1', {'FC': utils.teq_to_FC(0)}),
        (4, 3, 'valve3.C2', {'FC': utils.teq_to_FC(utils.s_to_micros(1))})
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(0)})
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve3') == 'closed'


def test_remove_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
    plumb.remove_component('valve2')

    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
    ]
    assert plumb.current_state('valve1') == 'closed'


def test_add_remove():
    old_lowest_teq = 0.2
    plumb = test.two_valve_setup(
        0.5, old_lowest_teq, 10, utils.CLOSED, 0.5, old_lowest_teq, 10,
        utils.CLOSED)

    new_lowest_teq = 0.1
    pc = test.create_component(0, 0, 0, new_lowest_teq, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    assert plumb.time_res ==\
        int(utils.s_to_micros(new_lowest_teq) / utils.DEFAULT_RESOLUTION_SCALE)

    plumb.remove_component('valve3')

    assert plumb.is_valid()
    assert plumb.time_res ==\
        int(utils.s_to_micros(old_lowest_teq) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
        (2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0.5))}),
        (3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0.2))}),
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve2') == 'open'


def test_remove_nonexistent_component():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
    nonexistent_component = 'potato'

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.remove_component(nonexistent_component)
    assert str(err.value) ==\
        f"Component with name {nonexistent_component} not found in component dict."
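Every error-message test in this file follows the same `pytest.raises` pattern: trigger the call inside the context manager, then compare `str(err.value)` against the exact message. A minimal self-contained sketch with a toy stand-in function (the names below are hypothetical, not part of topside):

```python
import pytest


class BadInputError(Exception):
    """Toy stand-in for topside's exceptions.BadInputError."""


def remove_component(components, name):
    # Raise with the same message format the real engine is tested against
    if name not in components:
        raise BadInputError(
            f"Component with name {name} not found in component dict.")
    del components[name]


with pytest.raises(BadInputError) as err:
    remove_component({}, 'potato')
assert str(err.value) == "Component with name potato not found in component dict."
```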


def test_remove_add_errors():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    name = 'valve3'
    wrong_node = 3
    right_node = 2
    pc = test.create_component(0, 0, 0, 1, name, 'C')
    mapping = {1: 3, wrong_node: 4}

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.add_component(pc, mapping, 'closed', {4: (50, False)})
    assert str(err.value) ==\
        f"Component '{name}', node {right_node} not found in mapping dict."

    plumb.remove_component(name)

    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 1, 'valve1.A2', {'FC': 0}),
        (2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0.5))}),
        (3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0.2))}),
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve2') == 'open'


def test_remove_errors_wrong_component_name():
    wrong_component_name = 'potato'

    pc1 = test.create_component(0, 0, 0, 0, 'valve1', 'A')
    pc2 = test.create_component(0, 0, 0, 0, 'valve2', 'B')
    component_mapping = {
        'valve1': {
            1: 1,
            2: 2
        },
        'valve2': {
            1: 2,
            2: 3
        }
    }
    pressures = {3: (100, False)}
    default_states = {'valve1': 'closed', 'valve2': 'open'}
    plumb = top.PlumbingEngine(
        {wrong_component_name: pc1, 'valve2': pc2}, component_mapping,
        pressures, default_states)

    assert not plumb.is_valid()
    assert len(plumb.errors()) == 2

    error1 = invalid.InvalidComponentName(
        f"Component with name '{wrong_component_name}' not found in mapping dict.",
        wrong_component_name)
    error2 = invalid.InvalidComponentName(
        f"Component '{wrong_component_name}' state not found in initial states dict.",
        wrong_component_name)
    assert error1 in plumb.errors()
    assert error2 in plumb.errors()

    plumb.remove_component(wrong_component_name)

    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0))}),
        (3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0))}),
    ]
    assert plumb.nodes() == [
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
    ]
    assert plumb.current_state('valve2') == 'open'


def test_reverse_orientation():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
    plumb.reverse_orientation('valve1')

    assert plumb.is_valid()
    assert plumb.time_res == int(
        utils.s_to_micros(0.2) / utils.DEFAULT_RESOLUTION_SCALE)
    assert plumb.edges() == [
        (1, 2, 'valve1.A1', {'FC': 0}),
        (2, 1, 'valve1.A2', {'FC': utils.teq_to_FC(utils.s_to_micros(10))}),
        (2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0.5))}),
        (3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0.2))})
    ]
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
    ]
    assert plumb.current_state('valve1') == 'closed'
    assert plumb.current_state('valve2') == 'open'


def test_reverse_orientation_wrong_component():
    wrong_name = 'potato'
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.reverse_orientation(wrong_name)
    assert str(err.value) ==\
        f"Component '{wrong_name}' not found in component dict."


def test_reverse_orientation_three_edges():
    pc_states = {
        'open': {
            (1, 2, 'A1'): 5,
            (2, 1, 'A2'): 0,
            (1, 3, 'B1'): 3,
            (3, 1, 'B2'): 0,
            (2, 3, 'C1'): 4,
            (3, 2, 'C2'): 5
        }
    }
    pc_edges = [(1, 2, 'A1'), (2, 1, 'A2'), (1, 3, 'B1'), (3, 1, 'B2'),
                (2, 3, 'C1'), (3, 2, 'C2')]
    pc = top.PlumbingComponent('threeway', pc_states, pc_edges)

    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
    mapping = {1: 3, 2: 4, 3: 5}
    plumb.add_component(pc, mapping, 'open')

    with pytest.raises(exceptions.InvalidComponentError) as err:
        plumb.reverse_orientation('threeway')
    assert str(err.value) ==\
        "Component must only have two edges to be automatically reversed.\n" +\
        "Consider adjusting direction manually."


def test_set_pressure():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})

    plumb.set_pressure(1, 200)
    plumb.set_pressure(2, 7000)
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(200)}),
        (2, {'body': top.GenericNode(7000)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(50)})
    ]

    plumb.set_pressure(4, 10)
    plumb.set_pressure(1, 0)
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(7000)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(10)})
    ]


def test_set_pressure_errors():
    plumb = test.two_valve_setup(
        0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)

    pc = test.create_component(0, 0, 0, 1, 'valve3', 'C')
    mapping = {1: 3, 2: 4}
    plumb.add_component(pc, mapping, 'closed', {4: (50, False)})

    pc_vent = test.create_component(0, 0, 0, 0, 'vent', 'D')
    mapping_vent = {1: 4, 2: utils.ATM}
    plumb.add_component(pc_vent, mapping_vent, 'closed')

    negative_pressure = -20
    not_a_number = 'potato'

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.set_pressure(4, negative_pressure)
    assert str(err.value) ==\
        f"Negative pressure {negative_pressure} not allowed."

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.set_pressure(4, not_a_number)
    assert str(err.value) == f"Pressure {not_a_number} must be a number."

    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(50)}),
        (utils.ATM, {'body': top.AtmNode()})
    ]

    nonexistent_node = 5
    with pytest.raises(exceptions.BadInputError) as err:
        plumb.set_pressure(nonexistent_node, 100)
    assert str(err.value) == f"Node {nonexistent_node} not found in graph."

    plumb.set_pressure(4, 100)
    assert plumb.nodes() == [
        (1, {'body': top.GenericNode(0)}),
        (2, {'body': top.GenericNode(0)}),
        (3, {'body': top.GenericNode(100)}),
        (4, {'body': top.GenericNode(100)}),
        (utils.ATM, {'body': top.AtmNode()})
    ]

    with pytest.raises(exceptions.BadInputError) as err:
        plumb.set_pressure(utils.ATM, 100)
    assert str(err.value) ==\
        f"Pressure for atmosphere node ({utils.ATM}) must be 0."
def test_set_teq():
old_lowest_teq = 0.2
plumb = test.two_valve_setup(
0.5, old_lowest_teq, 10, utils.CLOSED, 0.5, old_lowest_teq, 10,
utils.CLOSED)
new_lowest_teq = 0.1
which_edge = {
'closed': {
(2, 1, 'A2'): new_lowest_teq,
(1, 2, 'A1'): 7
},
'open': {
(1, 2, 'A1'): 1
}
}
plumb.set_teq('valve1', which_edge)
assert plumb.time_res ==\
int(utils.s_to_micros(new_lowest_teq) / utils.DEFAULT_RESOLUTION_SCALE)
assert plumb.edges() == [
(1, 2, 'valve1.A1', {'FC': utils.teq_to_FC(utils.s_to_micros(7))}),
(2, 1, 'valve1.A2', {'FC': utils.teq_to_FC(utils.s_to_micros(new_lowest_teq))}),
(2, 3, 'valve2.B1', {'FC': utils.teq_to_FC(utils.s_to_micros(0.5))}),
(3, 2, 'valve2.B2', {'FC': utils.teq_to_FC(utils.s_to_micros(0.2))})
]
assert plumb.component_dict['valve1'].states == {
'open': {
(1, 2, 'A1'): utils.teq_to_FC(utils.s_to_micros(1)),
(2, 1, 'A2'): utils.teq_to_FC(utils.s_to_micros(old_lowest_teq))
},
'closed': {
(1, 2, 'A1'): utils.teq_to_FC(utils.s_to_micros(7)),
(2, 1, 'A2'): utils.teq_to_FC(utils.s_to_micros(new_lowest_teq))
}
}
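The `states` assertion above shows that `set_teq` overwrites only the `(state, edge)` pairs named in `which_edge` and leaves the rest alone. A hedged sketch of that nested-dict update (the conversion function is a placeholder for the library's `teq_to_FC`/`s_to_micros` chain):

```python
def set_teq(states, which_edge, convert):
    # Overwrite only the (state, edge) pairs named in which_edge,
    # converting each new teq with the supplied function; every
    # other entry is left untouched.
    for state, edges in which_edge.items():
        for edge, teq in edges.items():
            states[state][edge] = convert(teq)

states = {
    'open':   {(1, 2, 'A1'): 10.0, (2, 1, 'A2'): 2.0},
    'closed': {(1, 2, 'A1'): 10.0, (2, 1, 'A2'): 2.0},
}
# Update a single closed-state edge; 'open' is not mentioned, so it survives.
set_teq(states, {'closed': {(2, 1, 'A2'): 0.5}}, convert=lambda teq: teq * 2)
```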
def test_set_teq_errors():
plumb = test.two_valve_setup(
0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
which_edge = {
'closed': {
(2, 1, 'A2'): 1,
(1, 2, 'A1'): 7
},
'open': {
(1, 2, 'A1'): 1
}
}
wrong_name = 'potato'
teq_too_low = utils.micros_to_s(utils.TEQ_MIN/2)
bad_teq = {
'closed': {
(2, 1, 'A2'): teq_too_low,
(1, 2, 'A1'): 7
},
'open': {
(1, 2, 'A1'): 1
}
}
bad_state = {
'closed': {
(2, 1, 'A2'): 1,
(1, 2, 'A1'): 7
},
wrong_name: {
(1, 2, 'A1'): 1
}
}
bad_key = {
'closed': {
(2, 1, wrong_name): 1,
(1, 2, 'A1'): 7
},
'open': {
(1, 2, 'A1'): 1
}
}
with pytest.raises(exceptions.BadInputError) as err:
plumb.set_teq('valve1', bad_teq)
assert str(err.value) == f"Provided teq {teq_too_low} (component 'valve1', state 'closed'," +\
f" edge (2, 1, 'A2')) too low. Minimum teq is {utils.micros_to_s(utils.TEQ_MIN)}s."
with pytest.raises(exceptions.BadInputError) as err:
plumb.set_teq(wrong_name, which_edge)
assert str(err.value) == f"Component name '{wrong_name}' not found in component dict."
with pytest.raises(exceptions.BadInputError) as err:
plumb.set_teq('valve1', bad_key)
assert str(err.value) == f"State 'closed', edge (2, 1, '{wrong_name}') not found in" +\
" component valve1's states dict."
with pytest.raises(exceptions.BadInputError) as err:
plumb.set_teq('valve1', bad_state)
assert str(err.value) == f"State '{wrong_name}' not found in component valve1's states dict."
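The `pytest.raises(...) as err` pattern used throughout these tests has a stdlib analogue, `assertRaises` as a context manager, where the captured exception is available afterwards on `ctx.exception`. A small self-contained illustration (using `int('potato')` as an arbitrary error source):

```python
import io
import unittest

class MessageCheck(unittest.TestCase):
    def test_message(self):
        # The context manager captures the exception so its message
        # can be asserted after the block, like err.value in pytest.
        with self.assertRaises(ValueError) as ctx:
            int('potato')
        self.assertIn('potato', str(ctx.exception))

suite = unittest.TestLoader().loadTestsFromTestCase(MessageCheck)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```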
def test_toggle_listing():
plumb = test.two_valve_setup(
0.5, 0.2, 10, utils.CLOSED, 0.5, 0.2, 10, utils.CLOSED)
pc_states = {
'open': {
(1, 2, 'A1'): 0,
(2, 1, 'A2'): 0
}
}
pc_edges = [(1, 2, 'A1'), (2, 1, 'A2')]
pc = top.PlumbingComponent('tank', pc_states, pc_edges)
mapping = {1: 3, 2: 4}
plumb.add_component(pc, mapping, 'open')
toggles = plumb.list_toggles()
assert len(toggles) == 2
assert 'valve1' in toggles
assert 'valve2' in toggles
assert 'tank' not in toggles
def test_unfix_node():
plumb = test.two_valve_setup_fixed(1, 1, 1, 1, 1, 1, 1, 1)
assert plumb.fixed_pressures == {3: 100}
plumb.set_pressure(3, 100, False)
assert plumb.fixed_pressures == {}
def test_fix_node():
plumb = test.two_valve_setup(1, 1, 1, 1, 1, 1, 1, 1)
assert plumb.fixed_pressures == {}
plumb.set_pressure(3, 100, True)
assert plumb.fixed_pressures == {3: 100}
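`test_fix_node` and `test_unfix_node` pin down the bookkeeping around `fixed_pressures`: the third argument of `set_pressure` adds the node to the fixed set or removes it. A hypothetical sketch of that behaviour (names are illustrative, not the library's code):

```python
def set_pressure(pressures, fixed_pressures, node, value, fix=False):
    # Record the pressure; fixing registers the node in
    # fixed_pressures, un-fixing removes it if present.
    pressures[node] = value
    if fix:
        fixed_pressures[node] = value
    else:
        fixed_pressures.pop(node, None)

pressures, fixed = {3: 0}, {}
set_pressure(pressures, fixed, 3, 100, fix=True)
after_fix = dict(fixed)
set_pressure(pressures, fixed, 3, 100, fix=False)
after_unfix = dict(fixed)
```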
| 31.429313 | 100 | 0.590376 | 3,455 | 24,232 | 3.95919 | 0.047467 | 0.087653 | 0.058922 | 0.01696 | 0.80291 | 0.779589 | 0.748447 | 0.723225 | 0.715476 | 0.700855 | 0 | 0.06346 | 0.239807 | 24,232 | 770 | 101 | 31.47013 | 0.679116 | 0 | 0 | 0.627642 | 0 | 0.001626 | 0.099331 | 0.004416 | 0 | 0 | 0 | 0 | 0.214634 | 1 | 0.045528 | false | 0 | 0.009756 | 0 | 0.055285 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9b9b858c6e7372e7b66ebc5ba9e89c4026cd1621 | 7,265 | py | Python | unit_tests/view_modify_land_charge/test_edit_additional_info.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | 1 | 2019-10-03T13:58:29.000Z | 2019-10-03T13:58:29.000Z | unit_tests/view_modify_land_charge/test_edit_additional_info.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | null | null | null | unit_tests/view_modify_land_charge/test_edit_additional_info.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | 1 | 2021-04-11T05:24:57.000Z | 2021-04-11T05:24:57.000Z | from maintain_frontend import main
from flask_testing import TestCase as TestCase
from unit_tests.utilities import Utilities
from unittest.mock import patch
from flask import url_for
from maintain_frontend.dependencies.session_api.session import Session
from maintain_frontend.models import LocalLandChargeItem
from maintain_frontend.constants.permissions import Permissions
ADD_CHARGE_STATE = LocalLandChargeItem()
ADDITIONAL_INFO_PATH = 'maintain_frontend.view_modify_land_charge.edit_additional_info'
VALID_DATA = {
'additional-info': 'some info',
'reference': '',
}
class TestEditAdditionalInfoView(TestCase):
render_templates = False
def create_app(self):
main.app.testing = True
Utilities.mock_session_cookie_flask_test(self)
return main.app
@patch('{}.AddChargeAdditionalInfoValidator'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_post_success(self, mock_validator):
mock_validator.validate.return_value.errors = []
self.mock_session.return_value.add_charge_state = ADD_CHARGE_STATE
ADD_CHARGE_STATE.local_land_charge = 1
self.mock_session.return_value.edited_fields = []
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
response = self.client.post(url_for('modify_land_charge.post_additional_info'), data=VALID_DATA)
self.assertTrue('further_information' in self.mock_session.return_value.edited_fields)
self.assertStatus(response, 302)
self.assertRedirects(response, url_for('modify_land_charge.modify_land_charge', local_land_charge="LLC-1"))
@patch('{}.get_source_information_list'.format(ADDITIONAL_INFO_PATH))
@patch('{}.AddChargeAdditionalInfoValidator'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_post_validation_error(self, mock_validator, get_source_information_list):
mock_validator.validate.return_value.errors = [{'field': 'validation error'}]
self.mock_session.return_value.add_charge_state = ADD_CHARGE_STATE
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
response = self.client.post(url_for('modify_land_charge.post_additional_info'), data={
'additional-info': '',
'reference': 'e',
})
get_source_information_list.assert_not_called()
self.assertStatus(response, 400)
self.assert_template_used('additional_info.html')
@patch('{}.get_source_information_list'.format(ADDITIONAL_INFO_PATH))
@patch('{}.AddChargeAdditionalInfoValidator'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_post_validation_error_with_source_permissions(self, mock_validator,
get_source_information_list):
mock_validator.validate.return_value.errors = [{'field': 'validation error'}]
self.mock_session.return_value.add_charge_state = ADD_CHARGE_STATE
self.mock_session.return_value.user.permissions = [Permissions.vary_llc, Permissions.view_source_information]
self.client.set_cookie('localhost', Session.session_cookie_name, 'cookie_value')
response = self.client.post(url_for('modify_land_charge.post_additional_info'), data={
'additional-info': '',
'reference': 'e',
})
get_source_information_list.assert_called()
self.assertStatus(response, 400)
self.assert_template_used('additional_info.html')
def test_additional_info_post_without_session(self):
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
self.mock_session.return_value.add_charge_state = None
response = self.client.post(url_for('modify_land_charge.post_additional_info'), data=VALID_DATA)
self.assertStatus(response, 302)
self.assert_redirects(response, '/error')
@patch('{}.AddChargeAdditionalInfoValidator'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_post_exception(self, mock_validator):
mock_validator.validate.side_effect = Exception('test exception')
self.mock_session.return_value.add_charge_state = ADD_CHARGE_STATE
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
response = self.client.post(url_for('modify_land_charge.post_additional_info'), data={
'additional-info': '',
'reference': 'e',
})
self.assertStatus(response, 302)
self.assert_redirects(response, '/error')
def test_additional_info_get_with_session(self):
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
response = self.client.get(url_for('modify_land_charge.get_additional_info'))
self.assertStatus(response, 200)
self.assert_template_used('additional_info.html')
@patch('{}.get_source_information_list'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_get_with_session_source_permissions(self, get_source_information_list):
self.client.set_cookie('localhost', Session.session_cookie_name, 'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc, Permissions.view_source_information]
response = self.client.get(url_for('modify_land_charge.get_additional_info'))
get_source_information_list.assert_called()
self.assertStatus(response, 200)
self.assert_template_used('additional_info.html')
def test_additional_info_get_without_session(self):
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
self.mock_session.return_value.add_charge_state = None
response = self.client.get(url_for('modify_land_charge.get_additional_info'))
self.assertStatus(response, 302)
self.assert_redirects(response, '/error')
@patch('{}.render_template'.format(ADDITIONAL_INFO_PATH))
def test_additional_info_render_exception(self, mock_render):
self.mock_session.return_value.add_charge_state = ADD_CHARGE_STATE
mock_render.side_effect = Exception('test exception')
self.client.set_cookie('localhost', Session.session_cookie_name,
'cookie_value')
self.mock_session.return_value.user.permissions = [Permissions.vary_llc]
response = self.client.get(url_for('modify_land_charge.get_additional_info'))
self.assertStatus(response, 302)
self.assertRedirects(response, "/error")
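The `@patch` decorators above hand each test a `MagicMock`, whose attribute chains (such as `validate.return_value.errors`) can be configured before the view runs and inspected afterwards. A minimal standalone illustration of that chain, outside Flask:

```python
from unittest.mock import MagicMock

# Configure the attribute chain the tests rely on: calling
# validate(...) yields an object whose .errors is a list.
mock_validator = MagicMock()
mock_validator.validate.return_value.errors = [{'field': 'validation error'}]

errors = mock_validator.validate({'reference': 'e'}).errors
status = 400 if errors else 302   # mirrors the view's branch on errors
```

Setting `errors = []` instead would drive the success branch, as `test_additional_info_post_success` does.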
| 49.760274 | 117 | 0.720028 | 836 | 7,265 | 5.874402 | 0.12201 | 0.102627 | 0.054979 | 0.07697 | 0.824679 | 0.818774 | 0.783954 | 0.760538 | 0.730808 | 0.711464 | 0 | 0.004897 | 0.184859 | 7,265 | 145 | 118 | 50.103448 | 0.824384 | 0 | 0 | 0.647059 | 0 | 0 | 0.163661 | 0.093049 | 0 | 0 | 0 | 0 | 0.184874 | 1 | 0.084034 | false | 0 | 0.067227 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ba5f10e85667ec38e4fcdeba554382a612f6a88 | 39 | py | Python | src/ONUR_Voice_Assistant/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | 1 | 2020-05-05T12:58:37.000Z | 2020-05-05T12:58:37.000Z | src/ONUR_Voice_Assistant/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | null | null | null | src/ONUR_Voice_Assistant/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | null | null | null | from .ONUR_Voice_Assistant import ONUR
| 19.5 | 38 | 0.871795 | 6 | 39 | 5.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd17137c37f0525640a62db347bb44522aae2225 | 56 | py | Python | tseries_patterns/labelers/__init__.py | tr8dr/patterns | 757a0b9d4936a0c6af633af6f16c0ca8ee676bb0 | [
"MIT"
] | 127 | 2020-07-12T21:48:20.000Z | 2022-03-27T21:12:26.000Z | tseries_patterns/labelers/__init__.py | kumprj/tseries-patterns | 99c5279d1a06e4ab0fe92f2a04102d09ae6300c7 | [
"MIT"
] | 11 | 2020-08-08T05:17:16.000Z | 2022-02-23T13:29:23.000Z | tseries_patterns/labelers/__init__.py | kumprj/tseries-patterns | 99c5279d1a06e4ab0fe92f2a04102d09ae6300c7 | [
"MIT"
] | 46 | 2020-07-22T20:50:55.000Z | 2021-12-16T00:57:50.000Z | from .AmplitudeBasedLabeler import AmplitudeBasedLabeler | 56 | 56 | 0.928571 | 4 | 56 | 13 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 56 | 1 | 56 | 56 | 0.981132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd361d605c0ef0460f5e39f9fd2b27356bbf4d57 | 35 | py | Python | example/data/lib/enemies/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | 8 | 2021-01-25T19:23:27.000Z | 2021-07-19T14:31:54.000Z | example/data/lib/enemies/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | null | null | null | example/data/lib/enemies/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | 2 | 2021-01-25T19:26:15.000Z | 2021-08-07T00:10:02.000Z | from data.lib.enemies.blob import * | 35 | 35 | 0.8 | 6 | 35 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd4ac877162cd82b652dbea4b1e959483e9361bd | 38,361 | py | Python | unittests/test_exodata.py | YangyangFu/MPCPy | c9980cbfe7b5ea21b003c2c0bab800099dccf3f1 | [
"BSD-3-Clause-LBNL"
] | null | null | null | unittests/test_exodata.py | YangyangFu/MPCPy | c9980cbfe7b5ea21b003c2c0bab800099dccf3f1 | [
"BSD-3-Clause-LBNL"
] | null | null | null | unittests/test_exodata.py | YangyangFu/MPCPy | c9980cbfe7b5ea21b003c2c0bab800099dccf3f1 | [
"BSD-3-Clause-LBNL"
] | 1 | 2021-11-20T03:23:13.000Z | 2021-11-20T03:23:13.000Z | """
This module contains the classes for testing the exodata of mpcpy.
"""
from mpcpy import exodata
from mpcpy import utility
from mpcpy import units
from mpcpy import variables
from testing import TestCaseMPCPy
import unittest
import numpy as np
import pickle
import copy
import os
import pandas as pd
#%% Weather Tests
class WeatherFromEPW(TestCaseMPCPy):
'''Test the collection of weather data from an EPW.
'''
def setUp(self):
self.epw_filepath = os.path.join(self.get_unittest_path(), 'resources', 'weather', \
'USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw');
self.weather = exodata.WeatherFromEPW(self.epw_filepath);
def tearDown(self):
del self.weather
def test_instantiate(self):
self.assertEqual(self.weather.name, 'weather_from_epw');
self.assertEqual(self.weather.file_path, self.epw_filepath);
self.assertAlmostEqual(self.weather.lat.display_data(), 41.980, places=4);
self.assertAlmostEqual(self.weather.lon.display_data(), -87.92, places=4);
self.assertEqual(self.weather.tz_name, 'America/Chicago');
def test_collect_data(self):
start_time = '1/1/2015';
final_time = '1/1/2016';
self.weather.collect_data(start_time, final_time);
# Check reference
df_test = self.weather.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_partial(self):
start_time = '10/2/2015 06:00:00';
final_time = '11/13/2015 16:00:00';
self.weather.collect_data(start_time, final_time);
# Check references
df_test = self.weather.display_data();
self.check_df(df_test, 'collect_data_partial_display.csv');
df_test = self.weather.get_base_data();
self.check_df(df_test, 'collect_data_partial_base.csv');
def test_standard_time(self):
start_time = '1/1/2015';
final_time = '1/1/2016';
weather = exodata.WeatherFromEPW(self.epw_filepath, standard_time=True)
weather.collect_data(start_time, final_time);
# Check instantiation
self.assertAlmostEqual(weather.lat.display_data(), 41.980, places=4);
self.assertAlmostEqual(weather.lon.display_data(), -87.92, places=4);
self.assertEqual(weather.tz_name, 'utc');
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_standard_time.csv');
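`collect_data(start_time, final_time)` on a year-long EPW amounts to a label-based time slice of the hourly series. A sketch with a synthetic hourly frame (column name borrowed from the tests; the data is made up):

```python
import pandas as pd

# One synthetic non-leap year of hourly data.
index = pd.date_range('2015-01-01', periods=8760, freq='h')
df = pd.DataFrame({'weaTDryBul': range(8760)}, index=index)

# The partial collection window used in test_collect_data_partial:
window = df.loc['2015-10-02 06:00:00':'2015-11-13 16:00:00']
```

Label-based datetime slicing in pandas is inclusive of both endpoints when they match index labels, which is why the window above contains both the 06:00 start and the 16:00 end.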
class WeatherFromCSV(TestCaseMPCPy):
'''Test the collection of weather data from a CSV file.
'''
def setUp(self):
self.csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'weather', 'BerkeleyCSV.csv');
self.geography = [37.8716, -122.2727];
self.variable_map = {'TemperatureF' : ('weaTDryBul', units.degF), \
'Dew PointF' : ('weaTDewPoi', units.degF), \
'Humidity' : ('weaRelHum', units.percent), \
'Sea Level PressureIn' : ('weaPAtm', units.inHg), \
'WindDirDegrees' : ('weaWinDir', units.deg)};
def tearDown(self):
del self.csv_filepath
del self.geography
del self.variable_map
def test_instantiate(self):
weather = exodata.WeatherFromCSV(self.csv_filepath, \
self.variable_map,
self.geography);
self.assertEqual(weather.name, 'weather_from_csv');
self.assertEqual(weather.file_path, self.csv_filepath);
self.assertAlmostEqual(weather.lat.display_data(), 37.8716, places=4);
self.assertAlmostEqual(weather.lon.display_data(), -122.2727, places=4);
self.assertEqual(weather.tz_name, 'UTC');
def test_instantiate_without_geography(self):
with self.assertRaises(TypeError):
weather = exodata.WeatherFromCSV(self.csv_filepath,
self.variable_map);
def test_collect_data_default_time(self):
start_time = '2016-10-19 19:53:00';
final_time = '2016-10-20 06:53:00';
time_header = 'DateUTC';
# Instantiate weather object
weather = exodata.WeatherFromCSV(self.csv_filepath, \
self.variable_map, \
self.geography, \
time_header = time_header);
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_default_time.csv');
def test_collect_data_local_time_from_geography(self):
start_time = '10/19/2016 12:53:00 PM';
final_time = '10/19/2016 11:53:00 PM';
time_header = 'TimePDT';
# Instantiate weather object
weather = exodata.WeatherFromCSV(self.csv_filepath, \
self.variable_map, \
self.geography, \
time_header = time_header, \
tz_name = 'from_geography');
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_local_time_from_geography.csv');
def test_collect_data_local_time_from_tz_name(self):
start_time = '10/19/2016 12:53:00 PM';
final_time = '10/19/2016 11:53:00 PM';
time_header = 'TimePDT';
# Instantiate weather object
weather = exodata.WeatherFromCSV(self.csv_filepath, \
self.variable_map, \
self.geography, \
time_header = time_header, \
tz_name = 'America/Los_Angeles');
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_local_time_from_tz_name.csv');
def test_collect_data_clean_data(self):
start_time = '2016-10-19 19:53:00';
final_time = '2016-10-20 06:53:00';
time_header = 'DateUTC';
variable_map = {'TemperatureF' : ('weaTDryBul', units.degF), \
'Dew PointF' : ('weaTDewPoi', units.degF), \
'Humidity' : ('weaRelHum', units.percent), \
'Sea Level PressureIn' : ('weaPAtm', units.inHg), \
'WindDirDegrees' : ('weaWinDir', units.deg), \
'Wind SpeedMPH' : ('weaWinSpe', units.mph)};
clean_data = {'Wind SpeedMPH' : {'cleaning_type' : variables.Timeseries.cleaning_replace, \
'cleaning_args' : ('Calm', 0)}};
# Instantiate weather object
weather = exodata.WeatherFromCSV(self.csv_filepath, \
variable_map, \
self.geography, \
time_header = time_header,
clean_data = clean_data);
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_clean_data.csv');
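The `clean_data` option above replaces the sentinel string `'Calm'` with `0` before the wind-speed column is used numerically. In pandas terms, `cleaning_replace` boils down to something like:

```python
import pandas as pd

wind = pd.Series(['5.0', 'Calm', '3.2'], name='Wind SpeedMPH')
# Substitute the sentinel, then convert the column to numbers.
cleaned = pd.to_numeric(wind.replace('Calm', 0))
```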
class WeatherFromDF(TestCaseMPCPy):
'''Test the collection of weather data from a pandas DataFrame object.
'''
def setUp(self):
self.df = pd.read_csv(os.path.join(self.get_unittest_path(), 'resources', 'weather', 'BerkeleyCSV.csv'));
self.geography = [37.8716, -122.2727];
self.variable_map = {'TemperatureF' : ('weaTDryBul', units.degF), \
'Dew PointF' : ('weaTDewPoi', units.degF), \
'Humidity' : ('weaRelHum', units.percent), \
'Sea Level PressureIn' : ('weaPAtm', units.inHg), \
'WindDirDegrees' : ('weaWinDir', units.deg)};
def tearDown(self):
del self.df
del self.geography
del self.variable_map
def test_instantiate(self):
time = pd.to_datetime(self.df['DateUTC']);
self.df.set_index(time, inplace=True);
weather = exodata.WeatherFromDF(self.df, \
self.variable_map,
self.geography);
self.assertEqual(weather.name, 'weather_from_df');
self.assertAlmostEqual(weather.lat.display_data(), 37.8716, places=4);
self.assertAlmostEqual(weather.lon.display_data(), -122.2727, places=4);
self.assertEqual(weather.tz_name, 'UTC');
def test_instantiate_without_geography(self):
with self.assertRaises(TypeError):
weather = exodata.WeatherFromDF(self.df,
self.variable_map)
def test_collect_data_default_time(self):
start_time = '2016-10-19 19:53:00';
final_time = '2016-10-20 06:53:00';
time = pd.to_datetime(self.df['DateUTC']);
self.df.set_index(time, inplace=True);
# Instantiate weather object
weather = exodata.WeatherFromDF(self.df, \
self.variable_map, \
self.geography);
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_default_time.csv');
def test_collect_data_local_time_from_geography(self):
start_time = '10/19/2016 12:53:00 PM';
final_time = '10/19/2016 11:53:00 PM';
time = pd.to_datetime(self.df['TimePDT']);
self.df.set_index(time, inplace=True);
# Instantiate weather object
weather = exodata.WeatherFromDF(self.df, \
self.variable_map, \
self.geography,
tz_name = 'from_geography');
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_local_time_from_geography.csv');
def test_collect_data_local_time_from_tz_name(self):
start_time = '10/19/2016 12:53:00 PM';
final_time = '10/19/2016 11:53:00 PM';
time = pd.to_datetime(self.df['TimePDT']);
self.df.set_index(time, inplace=True);
# Instantiate weather object
weather = exodata.WeatherFromDF(self.df, \
self.variable_map, \
self.geography,
tz_name = 'America/Los_Angeles');
# Get weather data
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_local_time_from_tz_name.csv');
def test_collect_data_tz_handling(self):
start_time = '2016-10-19 19:53:00';
final_time = '2016-10-20 06:53:00';
time = pd.to_datetime(self.df['DateUTC']);
self.df.set_index(time, inplace=True);
# Localize timezone
self.df = self.df.tz_localize('UTC')
# Instantiate weather object
with self.assertRaises(TypeError):
weather = exodata.WeatherFromDF(self.df, \
self.variable_map, \
self.geography);
# Remove timezone
self.df = self.df.tz_convert(None)
# Instantiate weather object
weather = exodata.WeatherFromDF(self.df, \
self.variable_map, \
self.geography);
# Get weather data
weather.collect_data(start_time, final_time);
# Collect twice
weather.collect_data(start_time, final_time);
# Check reference
df_test = weather.display_data();
self.check_df(df_test, 'collect_data_default_time.csv');
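`test_collect_data_tz_handling` relies on the round trip between tz-naive and tz-aware indexes: `tz_localize` attaches a zone to naive timestamps, and `tz_convert(None)` converts to UTC and strips the zone again. A minimal standalone illustration:

```python
import pandas as pd

idx = pd.to_datetime(['2016-10-19 19:53:00', '2016-10-20 06:53:00'])
df = pd.DataFrame({'TemperatureF': [60.1, 55.0]}, index=idx)

aware = df.tz_localize('UTC')    # attach a zone to naive timestamps
naive = aware.tz_convert(None)   # convert to UTC and drop the zone
```

Because the original timestamps were localized to UTC, stripping the zone returns the index to its original wall-clock values, which is why `WeatherFromDF` accepts the frame afterwards.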
#%% Internal Tests
class InternalFromCSV(TestCaseMPCPy):
'''Test the collection of internal data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'internal', 'sampleCSV.csv');
variable_map = {'intRad_wes' : ('wes', 'intRad', units.W_m2), \
'intCon_wes' : ('wes', 'intCon', units.W_m2), \
'intLat_wes' : ('wes', 'intLat', units.W_m2), \
'intRad_hal' : ('hal', 'intRad', units.W_m2), \
'intCon_hal' : ('hal', 'intCon', units.W_m2), \
'intLat_hal' : ('hal', 'intLat', units.W_m2), \
'intRad_eas' : ('eas', 'intRad', units.W_m2), \
'intCon_eas' : ('eas', 'intCon', units.W_m2), \
'intLat_eas' : ('eas', 'intLat', units.W_m2)};
# Instantiate internal object
self.internal = exodata.InternalFromCSV(csv_filepath, \
variable_map);
def tearDown(self):
del self.internal
def test_collect_data(self):
start_time = '1/2/2015';
final_time = '1/3/2015';
# Get internal data
self.internal.collect_data(start_time, final_time);
# Check reference
df_test = self.internal.display_data();
self.check_df(df_test, 'collect_data.csv');
class InternalFromOccupancyModel(TestCaseMPCPy):
'''Test the collection of internal data from an occupancy model.
'''
def setUp(self):
# Time
start_time_occupancy = '4/1/2013';
final_time_occupancy = '4/7/2013 23:55:00';
# Load occupancy models
with open(os.path.join(utility.get_MPCPy_path(), 'unittests', 'references', \
'test_models', 'OccupancyFromQueueing', \
'occupancy_model_estimated.txt'), 'r') as f:
occupancy_model = pickle.load(f);
# Define zones and loads
zone_list = ['wes', 'hal', 'eas'];
load_list = [[0.4,0.4,0.2], [0.4,0.4,0.2], [0.4,0.4,0.2]];
# Simulate occupancy models for each zone
occupancy_model_list = [];
np.random.seed(1); # start with same seed for random number generation
for zone in zone_list:
simulate_options = occupancy_model.get_simulate_options();
simulate_options['iter_num'] = 5;
occupancy_model.simulate(start_time_occupancy, final_time_occupancy)
occupancy_model_list.append(copy.deepcopy(occupancy_model));
# Instantiate internal object
self.internal = exodata.InternalFromOccupancyModel(zone_list, load_list, units.W_m2, occupancy_model_list);
def tearDown(self):
del self.internal
def test_collect_data(self):
start_time = '4/2/2013';
final_time = '4/4/2013';
# Get internal data
self.internal.collect_data(start_time, final_time);
# Check reference
df_test = self.internal.display_data();
self.check_df(df_test, 'collect_data.csv');
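The `setUp` above unpickles one estimated occupancy model and then `copy.deepcopy`s it per zone so each simulated instance is independent. A sketch of that load-and-copy pattern using an in-memory buffer and a plain dict as a stand-in model:

```python
import copy
import io
import pickle

model = {'rate': 0.4, 'zones': ['wes', 'hal', 'eas']}
buf = io.BytesIO()
pickle.dump(model, buf)       # persist the "estimated" model
buf.seek(0)
loaded = pickle.load(buf)     # reload it, as setUp does from disk

# One deep copy per zone keeps the per-zone instances independent:
per_zone = [copy.deepcopy(loaded) for _ in loaded['zones']]
per_zone[0]['rate'] = 9.9     # mutating one copy leaves the others alone
```

Note the file in `setUp` is opened in text mode (`'r'`), which only works for protocol-0 pickles under Python 2; under Python 3 a pickle file must be opened in binary mode, as the `BytesIO` buffer here reflects.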
#%% Control Tests
class ControlFromCSV(TestCaseMPCPy):
'''Test the collection of control data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'building', 'ControlCSV_0.csv');
variable_map = {'conHeat_wes' : ('conHeat_wes', units.unit1), \
'conHeat_hal' : ('conHeat_hal', units.unit1), \
'conHeat_eas' : ('conHeat_eas', units.unit1)};
# Instantiate control object
self.control = exodata.ControlFromCSV(csv_filepath, \
variable_map);
def tearDown(self):
del self.control
def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Get control data
self.control.collect_data(start_time, final_time);
# Check reference
df_test = self.control.display_data();
self.check_df(df_test, 'collect_data.csv');
class ControlFromDF(TestCaseMPCPy):
'''Test the collection of control data from a pandas DataFrame object.
'''
def setUp(self):
self.df = pd.read_csv(os.path.join(self.get_unittest_path(), 'resources', 'building', 'ControlCSV_0.csv'));
time = pd.to_datetime(self.df['Time']);
self.df.set_index(time, inplace=True);
self.variable_map = {'conHeat_wes' : ('conHeat_wes', units.unit1), \
'conHeat_hal' : ('conHeat_hal', units.unit1), \
'conHeat_eas' : ('conHeat_eas', units.unit1)};
def tearDown(self):
del self.df
del self.variable_map
def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Instantiate control object
control = exodata.ControlFromDF(self.df, \
self.variable_map);
# Get control data
control.collect_data(start_time, final_time);
# Check reference
df_test = control.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_tz_handling(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Localize timezone
self.df = self.df.tz_localize('UTC')
# Instantiate weather object
with self.assertRaises(TypeError):
control = exodata.ControlFromDF(self.df, \
self.variable_map);
# Remove timezone
self.df = self.df.tz_convert(None)
# Instantiate weather object
control = exodata.ControlFromDF(self.df, \
self.variable_map);
# Get control data
control.collect_data(start_time, final_time);
# Collect twice
control.collect_data(start_time, final_time);
# Check reference
df_test = control.display_data();
self.check_df(df_test, 'collect_data.csv');
#%% Other Input Tests
class OtherInputFromCSV(TestCaseMPCPy):
'''Test the collection of other input data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'weather', 'Tamb.csv');
variable_map = {'T' : ('Tamb', units.degC)};
# Instantiate other input object
self.otherinput = exodata.OtherInputFromCSV(csv_filepath, \
variable_map);
def tearDown(self):
del self.otherinput
def test_collect_data(self):
start_time = '1/1/2015 00:00:00';
final_time = '1/1/2015 06:00:00';
# Get other input data
self.otherinput.collect_data(start_time, final_time);
# Check reference
df_test = self.otherinput.display_data();
self.check_df(df_test, 'collect_data.csv');
class OtherInputFromDF(TestCaseMPCPy):
'''Test the collection of other input data from a pandas DataFrame object.
'''
def setUp(self):
self.df = pd.read_csv(os.path.join(self.get_unittest_path(), 'resources', 'weather', 'Tamb.csv'));
time = pd.to_datetime(self.df['Time']);
self.df.set_index(time, inplace=True);
self.variable_map = {'T' : ('Tamb', units.degC)};
def tearDown(self):
del self.df
del self.variable_map
def test_collect_data(self):
start_time = '1/1/2015 00:00:00';
final_time = '1/1/2015 06:00:00';
# Instantiate control object
otherinput = exodata.OtherInputFromDF(self.df, \
self.variable_map);
# Get control data
otherinput.collect_data(start_time, final_time);
# Check reference
df_test = otherinput.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_tz_handling(self):
start_time = '1/1/2015 00:00:00';
final_time = '1/1/2015 06:00:00';
# Localize timezone
self.df = self.df.tz_localize('UTC')
# Instantiate weather object
with self.assertRaises(TypeError):
otherinput = exodata.OtherInputFromDF(self.df, \
self.variable_map);
# Remove timezone
self.df = self.df.tz_convert(None)
# Instantiate weather object
otherinput = exodata.OtherInputFromDF(self.df, \
self.variable_map);
# Get control data
otherinput.collect_data(start_time, final_time);
# Collect twice
otherinput.collect_data(start_time, final_time);
# Check reference
df_test = otherinput.display_data();
self.check_df(df_test, 'collect_data.csv');
#%% Parameter Tests
class ParameterFromCSV(TestCaseMPCPy):
'''Test the collection of parameter data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'model', 'LBNL71T_Parameters.csv');
# Instantiate weather object
self.parameters = exodata.ParameterFromCSV(csv_filepath);
# Get parameter data
self.parameters.collect_data()
def tearDown(self):
del self.parameters
def test_collect_data(self):
# Check reference
df_test = self.parameters.display_data();
self.check_df(df_test, 'collect_data.csv', timeseries=False);
class ParameterSet(TestCaseMPCPy):
'''Test setting parameter data.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'model', 'LBNL71T_Parameters.csv');
# Instantiate weather object
self.parameters = exodata.ParameterFromCSV(csv_filepath);
# Get parameter data
self.parameters.collect_data()
def tearDown(self):
del self.parameters
def test_set_data_value(self):
# Set value only
self.parameters.set_data('adjeas.c_bou', value=20000.0)
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_value.csv', timeseries=False);
def test_set_data_free(self):
# Set free only
self.parameters.set_data('adjeas.c_bou', free=False)
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_free.csv', timeseries=False);
def test_set_data_min(self):
# Set min only
self.parameters.set_data('adjeas.c_bou', minimum=10000.0)
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_min.csv', timeseries=False);
def test_set_data_max(self):
# Set max only
self.parameters.set_data('adjeas.c_bou', maximum=30000.0)
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_max.csv', timeseries=False);
def test_set_data_cov(self):
# Set cov only
self.parameters.set_data('adjeas.c_bou', covariance=0.1)
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_cov.csv', timeseries=False);
def test_set_data_name(self):
# Set name only
self.parameters.set_data('adjeas.c_bou', new_name='c_bou')
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_name.csv', timeseries=False);
def test_set_data_all(self):
# Set all data
self.parameters.set_data('adjeas.c_bou',
value=20000.0,
free=False,
minimum=10000.0,
maximum=30000.0,
covariance=0.1,
new_name='c_bou')
df_test = self.parameters.display_data();
self.check_df(df_test, 'set_data_all.csv', timeseries=False);
def test_set_data_keyerror(self):
        # Setting an unknown parameter name should raise KeyError
with self.assertRaises(KeyError):
self.parameters.set_data('c_bou', value=20000.0)
class ParameterAppend(TestCaseMPCPy):
'''Test appending parameter data.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'model', 'LBNL71T_Parameters.csv');
        # Instantiate parameter object
self.parameters = exodata.ParameterFromCSV(csv_filepath);
# Get parameter data
self.parameters.collect_data()
def tearDown(self):
del self.parameters
    def test_append_data_all(self):
        # Append a new parameter with all fields set
self.parameters.append_data('c_bou',
value=20000.0,
free=False,
minimum=10000.0,
maximum=30000.0,
covariance=0.1,
unit=units.J_m2K)
df_test = self.parameters.display_data();
self.check_df(df_test, 'append_data.csv', timeseries=False);
    def test_append_data_keyerror(self):
        # Appending an existing parameter name should raise KeyError
with self.assertRaises(KeyError):
self.parameters.append_data('adjeas.c_bou',
value=20000.0,
free=False,
minimum=10000.0,
maximum=30000.0,
covariance=0.1,
unit=units.J_kgK)
class ParameterFromDF(TestCaseMPCPy):
'''Test the collection of parameter data from a pandas DataFrame object.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'model', 'LBNL71T_Parameters.csv');
df = pd.read_csv(csv_filepath, index_col = 'Name')
        # Instantiate parameter object
self.parameters = exodata.ParameterFromDF(df);
def tearDown(self):
del self.parameters
def test_collect_data(self):
# Get parameter data
self.parameters.collect_data();
# Check reference
df_test = self.parameters.display_data();
self.check_df(df_test, 'collect_data.csv', timeseries=False);
#%% Constraint Tests
class ConstraintFromCSV(TestCaseMPCPy):
'''Test the collection of constraint data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'optimization', 'sampleConstraintCSV_Setback.csv');
variable_map = {'wesTdb_min' : ('wesTdb', 'GTE', units.degC), \
'wesTdb_max' : ('wesTdb', 'LTE', units.degC), \
'easTdb_min' : ('easTdb', 'GTE', units.degC), \
'easTdb_max' : ('easTdb', 'LTE', units.degC), \
'halTdb_min' : ('halTdb', 'GTE', units.degC), \
'halTdb_max' : ('halTdb', 'LTE', units.degC), \
'conHeat_wes_min' : ('conHeat_wes', 'GTE', units.unit1), \
'conHeat_wes_max' : ('conHeat_wes', 'LTE', units.unit1), \
'conHeat_hal_min' : ('conHeat_hal', 'GTE', units.unit1), \
'conHeat_hal_max' : ('conHeat_hal', 'LTE', units.unit1), \
'conHeat_eas_min' : ('conHeat_eas', 'sGTE', units.unit1, 100), \
'conHeat_eas_max' : ('conHeat_eas', 'sLTE', units.unit1, 200)};
        # Instantiate constraint object
self.constraints = exodata.ConstraintFromCSV(csv_filepath, \
variable_map);
def tearDown(self):
del self.constraints
def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Get constraint data
self.constraints.collect_data(start_time, final_time);
# Check reference
df_test = self.constraints.display_data();
self.check_df(df_test, 'collect_data.csv');
class ConstraintFromDF(TestCaseMPCPy):
    '''Test the collection of constraint data from a pandas DataFrame.
'''
def setUp(self):
self.df = pd.read_csv(os.path.join(self.get_unittest_path(), 'resources', 'optimization', 'sampleConstraintCSV_Setback.csv'));
time = pd.to_datetime(self.df['Time']);
self.df.set_index(time, inplace=True);
self.variable_map = {'wesTdb_min' : ('wesTdb', 'GTE', units.degC), \
'wesTdb_max' : ('wesTdb', 'LTE', units.degC), \
'easTdb_min' : ('easTdb', 'GTE', units.degC), \
'easTdb_max' : ('easTdb', 'LTE', units.degC), \
'halTdb_min' : ('halTdb', 'GTE', units.degC), \
'halTdb_max' : ('halTdb', 'LTE', units.degC), \
'conHeat_wes_min' : ('conHeat_wes', 'GTE', units.unit1), \
'conHeat_wes_max' : ('conHeat_wes', 'LTE', units.unit1), \
'conHeat_hal_min' : ('conHeat_hal', 'GTE', units.unit1), \
'conHeat_hal_max' : ('conHeat_hal', 'LTE', units.unit1), \
'conHeat_eas_min' : ('conHeat_eas', 'sGTE', units.unit1, 100), \
'conHeat_eas_max' : ('conHeat_eas', 'sLTE', units.unit1, 200)};
def tearDown(self):
del self.df
del self.variable_map
def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
        # Instantiate constraint object
constraints = exodata.ConstraintFromDF(self.df, \
self.variable_map);
# Get constraint data
constraints.collect_data(start_time, final_time);
# Check reference
df_test = constraints.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_tz_handling(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Localize timezone
self.df = self.df.tz_localize('UTC')
        # Instantiate constraint object
with self.assertRaises(TypeError):
constraints = exodata.ConstraintFromDF(self.df, \
self.variable_map);
# Remove timezone
self.df = self.df.tz_convert(None)
        # Instantiate constraint object
constraints = exodata.ConstraintFromDF(self.df, \
self.variable_map);
        # Get constraint data
constraints.collect_data(start_time, final_time);
# Collect twice
constraints.collect_data(start_time, final_time);
# Check reference
df_test = constraints.display_data();
self.check_df(df_test, 'collect_data.csv');
class ConstraintFromOccupancyModel(TestCaseMPCPy):
'''Test the collection of constraint data from an occupancy model.
'''
def setUp(self):
# Time
start_time_occupancy = '3/1/2012';
final_time_occupancy = '3/7/2012 23:55:00';
# Load occupancy models
        with open(os.path.join(self.get_unittest_path(), 'references', 'test_models',
                               'OccupancyFromQueueing', 'occupancy_model_estimated.txt'), 'rb') as f:
            self.occupancy_model = pickle.load(f);
# Define state variables and values
self.state_variable_list = ['wesTdb', 'wesTdb', 'easTdb', 'easTdb', 'halTdb', 'halTdb'];
self.values_list = [[25,30], [20,15], [25+273.15, 30+273.15], [20+273.15, 15+273.15], [25,30], [20,15]];
self.constraint_type_list = ['LTE', 'GTE', 'LTE', 'GTE', 'LTE', 'GTE'];
self.unit_list = [units.degC, units.degC, units.K, units.K, units.degC, units.degC]
# Simulate occupancy model
simulate_options = self.occupancy_model.get_simulate_options();
simulate_options['iter_num'] = 5;
np.random.seed(1); # start with same seed for random number generation
self.occupancy_model.simulate(start_time_occupancy, final_time_occupancy);
def tearDown(self):
del self.constraints
def test_collect_data(self):
start_time = '3/2/2012';
final_time = '3/4/2012';
# Instantiate constraint object
self.constraints = exodata.ConstraintFromOccupancyModel(self.state_variable_list, self.values_list, self.constraint_type_list, self.unit_list, self.occupancy_model);
# Get constraint data
self.constraints.collect_data(start_time, final_time);
# Check reference
df_test = self.constraints.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_slack_constraints(self):
# Reset constraint data
self.constraint_type_list = ['sLTE', 'GTE', 'LTE', 'GTE', 'LTE', 'GTE'];
# Instantiate constraint object
with self.assertRaises(TypeError):
self.constraints = exodata.ConstraintFromOccupancyModel(self.state_variable_list, self.values_list, self.constraint_type_list, self.unit_list, self.occupancy_model);
self.constraints = None
#%% Prices Tests
class PriceFromCSV(TestCaseMPCPy):
    '''Test the collection of price data from a CSV file.
'''
def setUp(self):
csv_filepath = os.path.join(self.get_unittest_path(), 'resources', 'optimization', 'PriceCSV.csv');
variable_map = {'pi_e' : ('pi_e', units.unit1)};
        # Instantiate price object
self.prices = exodata.PriceFromCSV(csv_filepath, \
variable_map);
def tearDown(self):
del self.prices
    def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Get price data
self.prices.collect_data(start_time, final_time);
# Check reference
df_test = self.prices.display_data();
self.check_df(df_test, 'collect_data.csv');
class PriceFromDF(TestCaseMPCPy):
    '''Test the collection of price data from a pandas DataFrame.
'''
def setUp(self):
self.df = pd.read_csv(os.path.join(self.get_unittest_path(), 'resources', 'optimization', 'PriceCSV.csv'));
time = pd.to_datetime(self.df['Time']);
self.df.set_index(time, inplace=True);
self.variable_map = {'pi_e' : ('pi_e', units.unit1)};
def tearDown(self):
del self.df
del self.variable_map
    def test_collect_data(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
        # Instantiate price object
prices = exodata.PriceFromDF(self.df, \
self.variable_map);
# Get price data
prices.collect_data(start_time, final_time);
# Check reference
df_test = prices.display_data();
self.check_df(df_test, 'collect_data.csv');
def test_collect_data_tz_handling(self):
start_time = '1/1/2015 13:00:00';
final_time = '1/2/2015';
# Localize timezone
self.df = self.df.tz_localize('UTC')
        # Instantiate price object
with self.assertRaises(TypeError):
prices = exodata.PriceFromDF(self.df, \
self.variable_map);
# Remove timezone
self.df = self.df.tz_convert(None)
        # Instantiate price object
prices = exodata.PriceFromDF(self.df, \
self.variable_map);
        # Get price data
prices.collect_data(start_time, final_time);
# Collect twice
prices.collect_data(start_time, final_time);
# Check reference
df_test = prices.display_data();
self.check_df(df_test, 'collect_data.csv');
#%% Source Tests
class Type(TestCaseMPCPy):
'''Test the general methods of a Type object.
'''
def setUp(self):
epw_filepath = os.path.join(self.get_unittest_path(), 'resources', 'weather', 'USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw');
self.weather = exodata.WeatherFromEPW(epw_filepath);
def tearDown(self):
del self.weather
def test_set_time_interval(self):
        '''Test that this method sets the time metrics properly in the exodata source.'''
# Set start and final time
start_time = '1/2/2016';
final_time = '1/4/2016';
# Set known time metrics
elapsed_seconds = 86400*2;
year_start_seconds = 86400;
year_final_seconds = 86400*3;
# Set start and final time in exodata source
self.weather._set_time_interval(start_time, final_time);
# Check time metrics are correct
self.assertAlmostEqual(elapsed_seconds, self.weather.elapsed_seconds, places=3);
self.assertAlmostEqual(year_start_seconds, self.weather.year_start_seconds, places=3);
self.assertAlmostEqual(year_final_seconds, self.weather.year_final_seconds, places=3);
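The expected metrics hard-coded in `test_set_time_interval` follow from plain date arithmetic. A standalone sketch (assuming, as the test implies, that `year_start_seconds` and `year_final_seconds` count seconds from the start of the year):

```python
from datetime import datetime

start = datetime(2016, 1, 2)
final = datetime(2016, 1, 4)
year_start = datetime(2016, 1, 1)

# Seconds between start and final time
elapsed_seconds = (final - start).total_seconds()          # 86400 * 2
# Seconds from the start of the year to each boundary
year_start_seconds = (start - year_start).total_seconds()  # 86400
year_final_seconds = (final - year_start).total_seconds()  # 86400 * 3

print(elapsed_seconds, year_start_seconds, year_final_seconds)
```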
#%% Main
if __name__ == '__main__':
unittest.main() | 42.481728 | 177 | 0.571309 | 4,273 | 38,361 | 4.910133 | 0.073017 | 0.047186 | 0.039321 | 0.026453 | 0.844002 | 0.827892 | 0.808684 | 0.79734 | 0.75983 | 0.721462 | 0 | 0.034778 | 0.317145 | 38,361 | 903 | 178 | 42.481728 | 0.766177 | 0.108339 | 0 | 0.687188 | 0 | 0 | 0.122998 | 0.020819 | 0 | 0 | 0 | 0 | 0.049917 | 1 | 0.138103 | false | 0 | 0.018303 | 0 | 0.18802 | 0.003328 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bd1e657372441c1096f70b6d83006b2f9caa4662 | 38 | py | Python | BACK-END/App/views/__init__.py | PonchoCeniceros/reltery.com | 41463b18bbaa7e419236f47b79f53bd9b7b59144 | [
"MIT"
] | null | null | null | BACK-END/App/views/__init__.py | PonchoCeniceros/reltery.com | 41463b18bbaa7e419236f47b79f53bd9b7b59144 | [
"MIT"
] | null | null | null | BACK-END/App/views/__init__.py | PonchoCeniceros/reltery.com | 41463b18bbaa7e419236f47b79f53bd9b7b59144 | [
"MIT"
] | null | null | null | from .computeByMLP import computeByMLP | 38 | 38 | 0.894737 | 4 | 38 | 8.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd26bcdbfa5c224e2d500c81b1ff7c67ef5844d9 | 994 | py | Python | terrascript/databricks/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/databricks/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/databricks/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/databricks/d.py
# Automatically generated by tools/makecode.py
import warnings
warnings.warn(
"using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)
import terrascript
class databricks_aws_assume_role_policy(terrascript.Data):
pass
class databricks_aws_bucket_policy(terrascript.Data):
pass
class databricks_aws_crossaccount_policy(terrascript.Data):
pass
class databricks_current_user(terrascript.Data):
pass
class databricks_dbfs_file(terrascript.Data):
pass
class databricks_dbfs_file_paths(terrascript.Data):
pass
class databricks_group(terrascript.Data):
pass
class databricks_node_type(terrascript.Data):
pass
class databricks_notebook(terrascript.Data):
pass
class databricks_notebook_paths(terrascript.Data):
pass
class databricks_spark_version(terrascript.Data):
pass
class databricks_user(terrascript.Data):
pass
class databricks_zones(terrascript.Data):
pass
| 16.032258 | 79 | 0.782696 | 117 | 994 | 6.418803 | 0.358974 | 0.259654 | 0.328895 | 0.383489 | 0.641811 | 0.551265 | 0.226365 | 0 | 0 | 0 | 0 | 0.001178 | 0.145875 | 994 | 61 | 80 | 16.295082 | 0.883392 | 0.075453 | 0 | 0.419355 | 1 | 0 | 0.042576 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.419355 | 0.064516 | 0 | 0.483871 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
1fbb7f48c1ba623ff204bcd72a9149c36360059d | 30 | py | Python | src/vbr/utils/helpers/__init__.py | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 1 | 2021-05-26T19:08:29.000Z | 2021-05-26T19:08:29.000Z | src/vbr/utils/helpers/__init__.py | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 7 | 2021-05-04T13:12:39.000Z | 2022-03-09T21:04:33.000Z | src/vbr/utils/helpers/__init__.py | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 2 | 2021-04-20T14:46:52.000Z | 2021-06-07T20:28:28.000Z | from .tapis import get_client
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1fe09a963addd8d92ac69af99c41d8bbf9eed394 | 149 | py | Python | pineboolib/q3widgets/qspinbox.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 2 | 2017-12-10T23:06:16.000Z | 2017-12-10T23:06:23.000Z | pineboolib/q3widgets/qspinbox.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 36 | 2017-11-05T21:13:47.000Z | 2020-08-26T15:56:15.000Z | pineboolib/q3widgets/qspinbox.py | Aulla/pineboo | 3ad6412d365a6ad65c3bb2bdc03f5798d7c37004 | [
"MIT"
] | 8 | 2017-11-05T15:56:31.000Z | 2019-04-25T16:32:28.000Z | """Qspinbox module."""
from PyQt6 import QtWidgets # type: ignore[import]
class QSpinBox(QtWidgets.QSpinBox):
"""QSpinBox class."""
pass
| 16.555556 | 51 | 0.677852 | 16 | 149 | 6.3125 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00813 | 0.174497 | 149 | 8 | 52 | 18.625 | 0.813008 | 0.362416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
1ff555cbb12c62451a2ea27ef1d27114af039c4f | 272 | py | Python | regraph/backends/neo4j/cypher_utils/__init__.py | ismailbennani/ReGraph | bb148a7cbd94e87f622443263e04c3fae2d4d00b | [
"MIT"
] | 54 | 2016-07-20T11:47:07.000Z | 2022-03-28T20:10:29.000Z | regraph/backends/neo4j/cypher_utils/__init__.py | ismailbennani/ReGraph | bb148a7cbd94e87f622443263e04c3fae2d4d00b | [
"MIT"
] | 8 | 2017-02-27T15:36:15.000Z | 2021-01-08T08:37:24.000Z | regraph/backends/neo4j/cypher_utils/__init__.py | ismailbennani/ReGraph | bb148a7cbd94e87f622443263e04c3fae2d4d00b | [
"MIT"
] | 11 | 2016-07-25T13:19:28.000Z | 2022-01-14T07:20:53.000Z | # from regraph.neo4j.cypher_utils.generic import *
# from regraph.neo4j.cypher_utils.rewriting import *
# from regraph.neo4j.cypher_utils.propagation import *
# from regraph.neo4j.cypher_utils.categorical import *
# from regraph.neo4j.cypher_utils.query_analysis import *
| 45.333333 | 57 | 0.816176 | 36 | 272 | 6 | 0.333333 | 0.25463 | 0.37037 | 0.509259 | 0.736111 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0.020243 | 0.091912 | 272 | 5 | 58 | 54.4 | 0.854251 | 0.959559 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
952c917cf5791dfa18b2b57614423aa24686978b | 89 | py | Python | stella/core/automata/__init__.py | xabinapal/stella | ae02055749f997323390d642c99a37b80aa5df68 | [
"MIT"
] | null | null | null | stella/core/automata/__init__.py | xabinapal/stella | ae02055749f997323390d642c99a37b80aa5df68 | [
"MIT"
] | null | null | null | stella/core/automata/__init__.py | xabinapal/stella | ae02055749f997323390d642c99a37b80aa5df68 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .automata import *
from .enfa import *
from .regex import * | 17.8 | 23 | 0.640449 | 12 | 89 | 4.75 | 0.666667 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.191011 | 89 | 5 | 24 | 17.8 | 0.777778 | 0.235955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f16111f2f72cf420b2eb16c2b4da383ff3d20e3 | 100 | py | Python | bazel/workspace-status/foo/status.py | laszlocsomor/projects | fb0f8b62046c0c420dc409d2e44c10728028ea1a | [
"Apache-2.0"
] | 2 | 2018-02-21T13:51:38.000Z | 2019-09-10T19:35:15.000Z | bazel/workspace-status/foo/status.py | laszlocsomor/projects | fb0f8b62046c0c420dc409d2e44c10728028ea1a | [
"Apache-2.0"
] | null | null | null | bazel/workspace-status/foo/status.py | laszlocsomor/projects | fb0f8b62046c0c420dc409d2e44c10728028ea1a | [
"Apache-2.0"
] | 2 | 2018-08-14T11:31:21.000Z | 2020-01-10T15:47:49.000Z | from __future__ import print_function
print("PY_FOO1 py_foo")
print("STABLE_PY_BAR1 stable_py_bar")
| 25 | 37 | 0.84 | 17 | 100 | 4.294118 | 0.647059 | 0.219178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.08 | 100 | 3 | 38 | 33.333333 | 0.771739 | 0 | 0 | 0 | 0 | 0 | 0.42 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
1f49c2b794c6e8e9b96cb255996e2d7fdb82a6d6 | 146 | py | Python | backend/oauth2/oauth2.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | 3 | 2021-04-17T10:20:26.000Z | 2022-03-08T07:36:13.000Z | backend/oauth2/oauth2.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | null | null | null | backend/oauth2/oauth2.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | null | null | null |
def scope_expression_to_symbols(scope):
    """Split a scope expression 'rw_permission.namespace.attr' into its lowercase parts."""
    rw_permission, namespace, attr = scope.lower().split('.')
    return rw_permission, namespace, attr
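A quick usage sketch of the helper above, restated so the snippet runs standalone; the scope string `'Read.PaiPass.Email'` is a hypothetical example, not taken from this codebase. Note that the tuple unpacking raises `ValueError` for expressions without exactly three dot-separated parts.

```python
def scope_expression_to_symbols(scope):
    # Split 'rw_permission.namespace.attr' into its three lowercase components
    rw_permission, namespace, attr = scope.lower().split('.')
    return rw_permission, namespace, attr

# Hypothetical scope string for illustration
print(scope_expression_to_symbols('Read.PaiPass.Email'))  # ('read', 'paipass', 'email')
```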
| 24.333333 | 61 | 0.739726 | 18 | 146 | 5.722222 | 0.666667 | 0.23301 | 0.407767 | 0.485437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143836 | 146 | 5 | 62 | 29.2 | 0.824 | 0 | 0 | 0 | 0 | 0 | 0.006944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f613bf8f2e2c755d1397032e962dc5673ca6cdb | 165 | py | Python | armstrong-number/test.py | shivajivarma/python | ad70a534053a44c8283f388abcd5e76ffee9e42f | [
"MIT"
] | null | null | null | armstrong-number/test.py | shivajivarma/python | ad70a534053a44c8283f388abcd5e76ffee9e42f | [
"MIT"
] | null | null | null | armstrong-number/test.py | shivajivarma/python | ad70a534053a44c8283f388abcd5e76ffee9e42f | [
"MIT"
] | null | null | null | from program import isArmstrong
def test_answer():
assert isArmstrong(153)
assert isArmstrong(371)
assert not isArmstrong(150)
assert not isArmstrong(1150)
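`program.isArmstrong` itself is not part of this chunk; a minimal sketch consistent with the assertions above (an Armstrong number equals the sum of its digits, each raised to the number of digits):

```python
def isArmstrong(n):
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)

assert isArmstrong(153)      # 1**3 + 5**3 + 3**3 == 153
assert not isArmstrong(150)  # 1**3 + 5**3 + 0**3 == 126
```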
| 20.625 | 31 | 0.781818 | 21 | 165 | 6.095238 | 0.619048 | 0.265625 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092857 | 0.151515 | 165 | 7 | 32 | 23.571429 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f67a4a871b14572127bb8a414efbfdca2135c3a | 3,285 | py | Python | tests/spoortakmodel/test_spoortak_model_inspector.py | ProRail-DataLab/openspoor | 548c7ffc12a5a97459bafe5327aa9b0d0546537c | [
"MIT"
] | 7 | 2022-01-28T09:54:10.000Z | 2022-03-25T10:02:08.000Z | tests/spoortakmodel/test_spoortak_model_inspector.py | ProRail-DataLab/openspoor | 548c7ffc12a5a97459bafe5327aa9b0d0546537c | [
"MIT"
] | 11 | 2022-03-17T12:48:30.000Z | 2022-03-25T11:22:39.000Z | tests/spoortakmodel/test_spoortak_model_inspector.py | ProRail-DataLab/openspoor | 548c7ffc12a5a97459bafe5327aa9b0d0546537c | [
"MIT"
] | 2 | 2022-03-16T14:11:48.000Z | 2022-03-25T09:10:30.000Z | import os
import pathlib
import unittest
from unittest import skip
from unittest.mock import patch
from pprint import pprint
from openspoor.spoortakmodel import SpoortakModelInspector, SpoortakModelsData
MODELS_DATA_DIR = str(pathlib.Path(__file__).parents[2].resolve().joinpath('data'))
class TestSpoortakModelInspector(unittest.TestCase):
@patch('openspoor.spoortakmodel.spoortak_model_inspector.pprint')
def test_spoortak_in_modfwe(self, mock_pprint):
expected_spoortak_model_rows = 15
expected_bericht_rows = 3
# For development and debugging we still want to see the prints
mock_pprint.side_effect = pprint
sut = SpoortakModelInspector(SpoortakModelsData(MODELS_DATA_DIR))
sut.inspect('603_89R_2.6')
mock_pprint.assert_called()
self.assertEqual(expected_spoortak_model_rows, len(mock_pprint.call_args_list[0][0][0]), )
self.assertEqual(expected_bericht_rows, len(mock_pprint.call_args_list[1][0][0]))
@patch('openspoor.spoortakmodel.spoortak_model_inspector.pprint')
def test_spoortak_in_dassignname(self, mock_pprint):
expected_spoortak_model_rows = 4
expected_bericht_rows = 9
# For development and debugging we still want to see the prints
mock_pprint.side_effect = pprint
sut = SpoortakModelInspector(SpoortakModelsData(MODELS_DATA_DIR))
sut.inspect('087_1321R_24.1')
mock_pprint.assert_called()
self.assertEqual(expected_spoortak_model_rows, len(mock_pprint.call_args_list[0][0][0]), )
self.assertEqual(expected_bericht_rows, len(mock_pprint.call_args_list[1][0][0]))
@patch('openspoor.spoortakmodel.spoortak_model_inspector.pprint')
def test_spoortak_in_fwename(self, mock_pprint):
expected_spoortak_model_rows = 4
expected_bericht_rows = 1
# For development and debugging we still want to see the prints
mock_pprint.side_effect = pprint
sut = SpoortakModelInspector(SpoortakModelsData(MODELS_DATA_DIR))
sut.inspect('927_3325R_904.6')
mock_pprint.assert_called()
self.assertEqual(expected_spoortak_model_rows, len(mock_pprint.call_args_list[0][0][0]), )
self.assertEqual(expected_bericht_rows, len(mock_pprint.call_args_list[1][0][0]))
@patch('openspoor.spoortakmodel.spoortak_model_inspector.pprint')
def test_spoortak_478_1201V_0_6(self, mock_pprint):
""" Bug test: Gives issues on linux, but works on windows """
expected_spoortak_model_rows = 4
expected_bericht_rows = 1
# For development and debugging we still want to see the prints
mock_pprint.side_effect = pprint
sut = SpoortakModelInspector(SpoortakModelsData(MODELS_DATA_DIR))
sut.inspect('478_1201V_0.6')
mock_pprint.assert_called()
self.assertEqual(expected_spoortak_model_rows, len(mock_pprint.call_args_list[0][0][0]), )
self.assertEqual(expected_bericht_rows, len(mock_pprint.call_args_list[1][0][0]))
@skip('Development only test, comment this skip to enable it')
def test_develop(self):
""" quickly inspect the data of a spoortak"""
sut = SpoortakModelInspector(SpoortakModelsData(MODELS_DATA_DIR))
sut.inspect('518_107BL_63.5')
| 42.662338 | 98 | 0.73516 | 426 | 3,285 | 5.356808 | 0.232394 | 0.087642 | 0.07362 | 0.087642 | 0.785276 | 0.762051 | 0.762051 | 0.744961 | 0.716039 | 0.716039 | 0 | 0.032902 | 0.17656 | 3,285 | 76 | 99 | 43.223684 | 0.810721 | 0.10411 | 0 | 0.566038 | 0 | 0 | 0.117526 | 0.075162 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.09434 | false | 0 | 0.132075 | 0 | 0.245283 | 0.471698 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1f7e89ffe24abfc7d482c981c91e98d83611dc0b | 720 | py | Python | src/dot/entities/dotkog.py | alisonbento/steering-all | 99797f99180dd64189ea5ed85ff71b66bfd9cf6f | [
"MIT"
] | 3 | 2016-10-10T18:34:55.000Z | 2017-08-02T15:18:28.000Z | src/dot/entities/dotkog.py | alisonbento/steering-all | 99797f99180dd64189ea5ed85ff71b66bfd9cf6f | [
"MIT"
] | null | null | null | src/dot/entities/dotkog.py | alisonbento/steering-all | 99797f99180dd64189ea5ed85ff71b66bfd9cf6f | [
"MIT"
] | null | null | null | import src.dot.dotentity
class DotKog(src.dot.dotentity.DotEntity):
def __init__(self):
res = [
"assets/img/black-brick.png",
"assets/img/gray-brick.png",
"assets/img/brown-brick.png"
]
grid = [
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1]
]
src.dot.dotentity.DotEntity.__init__(self, grid, res, 0.5)
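The 9-column bitmap above is easier to check visually. A throwaway sketch that renders the same grid as text (assumption: 1 marks a brick cell, 0 an empty cell):

```python
grid = [
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1, 1],
]
# Render each row as '#' (brick) and '.' (empty)
picture = '\n'.join(''.join('#' if cell else '.' for cell in row) for row in grid)
print(picture)
```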
| 24.827586 | 62 | 0.411111 | 148 | 720 | 1.945946 | 0.141892 | 0.361111 | 0.427083 | 0.416667 | 0.375 | 0.375 | 0.375 | 0.375 | 0.375 | 0.375 | 0 | 0.232068 | 0.341667 | 720 | 28 | 63 | 25.714286 | 0.375527 | 0 | 0 | 0.478261 | 0 | 0 | 0.106944 | 0.106944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.130435 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f7ee7e54207cc591b14758cb7df6af4c767bde3 | 26 | py | Python | jenkins.py | dhirajberi/seopy | a638dfceee0021ffe0ddeeb50949aa4c1c881380 | [
"MIT"
] | null | null | null | jenkins.py | dhirajberi/seopy | a638dfceee0021ffe0ddeeb50949aa4c1c881380 | [
"MIT"
] | null | null | null | jenkins.py | dhirajberi/seopy | a638dfceee0021ffe0ddeeb50949aa4c1c881380 | [
"MIT"
] | null | null | null | print("Jenkins khatam 1")
| 13 | 25 | 0.730769 | 4 | 26 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.115385 | 26 | 1 | 26 | 26 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2f07bec13b761d2548f198d7d159adc120dbb1a7 | 25,035 | py | Python | db/data/2012/12/get_gzs.py | nataliemcmullen/WikiMiney | 562177a74179cb28469604f4fb9d021cebeed0a6 | [
"MIT"
] | 1 | 2020-11-18T19:20:58.000Z | 2020-11-18T19:20:58.000Z | db/data/2012/12/get_gzs.py | nataliemcmullen/WikiMiney | 562177a74179cb28469604f4fb9d021cebeed0a6 | [
"MIT"
] | null | null | null | db/data/2012/12/get_gzs.py | nataliemcmullen/WikiMiney | 562177a74179cb28469604f4fb9d021cebeed0a6 | [
"MIT"
] | null | null | null | urls = [
"pagecounts-20121201-000000.gz",
"pagecounts-20121201-010000.gz",
"pagecounts-20121201-020001.gz",
"pagecounts-20121201-030000.gz",
"pagecounts-20121201-040000.gz",
"pagecounts-20121201-050000.gz",
"pagecounts-20121201-060000.gz",
"pagecounts-20121201-070000.gz",
"pagecounts-20121201-080000.gz",
"pagecounts-20121201-090000.gz",
"pagecounts-20121201-100000.gz",
"pagecounts-20121201-110000.gz",
"pagecounts-20121201-120000.gz",
"pagecounts-20121201-130000.gz",
"pagecounts-20121201-140000.gz",
"pagecounts-20121201-150000.gz",
"pagecounts-20121201-160001.gz",
"pagecounts-20121201-170000.gz",
"pagecounts-20121201-180000.gz",
"pagecounts-20121201-190000.gz",
"pagecounts-20121201-200000.gz",
"pagecounts-20121201-210000.gz",
"pagecounts-20121201-220000.gz",
"pagecounts-20121201-230000.gz",
"pagecounts-20121202-000000.gz",
"pagecounts-20121202-010000.gz",
"pagecounts-20121202-020000.gz",
"pagecounts-20121202-030000.gz",
"pagecounts-20121202-040001.gz",
"pagecounts-20121202-050000.gz",
"pagecounts-20121202-060000.gz",
"pagecounts-20121202-070000.gz",
"pagecounts-20121202-080000.gz",
"pagecounts-20121202-090000.gz",
"pagecounts-20121202-100000.gz",
"pagecounts-20121202-110000.gz",
"pagecounts-20121202-120000.gz",
"pagecounts-20121202-130000.gz",
"pagecounts-20121202-140000.gz",
"pagecounts-20121202-150000.gz",
"pagecounts-20121202-160000.gz",
"pagecounts-20121202-170001.gz",
"pagecounts-20121202-180000.gz",
"pagecounts-20121202-190000.gz",
"pagecounts-20121202-200000.gz",
"pagecounts-20121202-210000.gz",
"pagecounts-20121202-220000.gz",
"pagecounts-20121202-230000.gz",
"pagecounts-20121203-000000.gz",
"pagecounts-20121203-010000.gz",
"pagecounts-20121203-020000.gz",
"pagecounts-20121203-030000.gz",
"pagecounts-20121203-040000.gz",
"pagecounts-20121203-050000.gz",
"pagecounts-20121203-060001.gz",
"pagecounts-20121203-070000.gz",
"pagecounts-20121203-080000.gz",
"pagecounts-20121203-090000.gz",
"pagecounts-20121203-100000.gz",
"pagecounts-20121203-110000.gz",
"pagecounts-20121203-120000.gz",
"pagecounts-20121203-130000.gz",
"pagecounts-20121203-140000.gz",
"pagecounts-20121203-150000.gz",
"pagecounts-20121203-160000.gz",
"pagecounts-20121203-170000.gz",
"pagecounts-20121203-180000.gz",
"pagecounts-20121203-190001.gz",
"pagecounts-20121203-200000.gz",
"pagecounts-20121203-210000.gz",
"pagecounts-20121203-220000.gz",
"pagecounts-20121203-230000.gz",
"pagecounts-20121204-000000.gz",
"pagecounts-20121204-010000.gz",
"pagecounts-20121204-020000.gz",
"pagecounts-20121204-030000.gz",
"pagecounts-20121204-040000.gz",
"pagecounts-20121204-050000.gz",
"pagecounts-20121204-060000.gz",
"pagecounts-20121204-070000.gz",
"pagecounts-20121204-080001.gz",
"pagecounts-20121204-090000.gz",
"pagecounts-20121204-100000.gz",
"pagecounts-20121204-110000.gz",
"pagecounts-20121204-120000.gz",
"pagecounts-20121204-130000.gz",
"pagecounts-20121204-140000.gz",
"pagecounts-20121204-150000.gz",
"pagecounts-20121204-160000.gz",
"pagecounts-20121204-170000.gz",
"pagecounts-20121204-180000.gz",
"pagecounts-20121204-190000.gz",
"pagecounts-20121204-200001.gz",
"pagecounts-20121204-210000.gz",
"pagecounts-20121204-220000.gz",
"pagecounts-20121204-230000.gz",
"pagecounts-20121205-000000.gz",
"pagecounts-20121205-010000.gz",
"pagecounts-20121205-020000.gz",
"pagecounts-20121205-030000.gz",
"pagecounts-20121205-040000.gz",
"pagecounts-20121205-050000.gz",
"pagecounts-20121205-060000.gz",
"pagecounts-20121205-070000.gz",
"pagecounts-20121205-080001.gz",
"pagecounts-20121205-090000.gz",
"pagecounts-20121205-100000.gz",
"pagecounts-20121205-110000.gz",
"pagecounts-20121205-120000.gz",
"pagecounts-20121205-130000.gz",
"pagecounts-20121205-140000.gz",
"pagecounts-20121205-150000.gz",
"pagecounts-20121205-160000.gz",
"pagecounts-20121205-170000.gz",
"pagecounts-20121205-180000.gz",
"pagecounts-20121205-190000.gz",
"pagecounts-20121205-200000.gz",
"pagecounts-20121205-210001.gz",
"pagecounts-20121205-220000.gz",
"pagecounts-20121205-230000.gz",
"pagecounts-20121206-000000.gz",
"pagecounts-20121206-010000.gz",
"pagecounts-20121206-020000.gz",
"pagecounts-20121206-030000.gz",
"pagecounts-20121206-040000.gz",
"pagecounts-20121206-050000.gz",
"pagecounts-20121206-060000.gz",
"pagecounts-20121206-070000.gz",
"pagecounts-20121206-080000.gz",
"pagecounts-20121206-090000.gz",
"pagecounts-20121206-100001.gz",
"pagecounts-20121206-110000.gz",
"pagecounts-20121206-120000.gz",
"pagecounts-20121206-130000.gz",
"pagecounts-20121206-140000.gz",
"pagecounts-20121206-150000.gz",
"pagecounts-20121206-160000.gz",
"pagecounts-20121206-170000.gz",
"pagecounts-20121206-180000.gz",
"pagecounts-20121206-190000.gz",
"pagecounts-20121206-200000.gz",
"pagecounts-20121206-210000.gz",
"pagecounts-20121206-220000.gz",
"pagecounts-20121206-230001.gz",
"pagecounts-20121207-000000.gz",
"pagecounts-20121207-010000.gz",
"pagecounts-20121207-020000.gz",
"pagecounts-20121207-030000.gz",
"pagecounts-20121207-040000.gz",
"pagecounts-20121207-050000.gz",
"pagecounts-20121207-060000.gz",
"pagecounts-20121207-070000.gz",
"pagecounts-20121207-080000.gz",
"pagecounts-20121207-090000.gz",
"pagecounts-20121207-100000.gz",
"pagecounts-20121207-110000.gz",
"pagecounts-20121207-120001.gz",
"pagecounts-20121207-130000.gz",
"pagecounts-20121207-140000.gz",
"pagecounts-20121207-150000.gz",
"pagecounts-20121207-160000.gz",
"pagecounts-20121207-170000.gz",
"pagecounts-20121207-180000.gz",
"pagecounts-20121207-190000.gz",
"pagecounts-20121207-200000.gz",
"pagecounts-20121207-210000.gz",
"pagecounts-20121207-220000.gz",
"pagecounts-20121207-230000.gz",
"pagecounts-20121208-000000.gz",
"pagecounts-20121208-010001.gz",
"pagecounts-20121208-020000.gz",
"pagecounts-20121208-030000.gz",
"pagecounts-20121208-040000.gz",
"pagecounts-20121208-050000.gz",
"pagecounts-20121208-060000.gz",
"pagecounts-20121208-070000.gz",
"pagecounts-20121208-080000.gz",
"pagecounts-20121208-090000.gz",
"pagecounts-20121208-100000.gz",
"pagecounts-20121208-110000.gz",
"pagecounts-20121208-120000.gz",
"pagecounts-20121208-130000.gz",
"pagecounts-20121208-140001.gz",
"pagecounts-20121208-150000.gz",
"pagecounts-20121208-160000.gz",
"pagecounts-20121208-170000.gz",
"pagecounts-20121208-180000.gz",
"pagecounts-20121208-190000.gz",
"pagecounts-20121208-200000.gz",
"pagecounts-20121208-210000.gz",
"pagecounts-20121208-220000.gz",
"pagecounts-20121208-230000.gz",
"pagecounts-20121209-000000.gz",
"pagecounts-20121209-010000.gz",
"pagecounts-20121209-020000.gz",
"pagecounts-20121209-030001.gz",
"pagecounts-20121209-040000.gz",
"pagecounts-20121209-050000.gz",
"pagecounts-20121209-060000.gz",
"pagecounts-20121209-070000.gz",
"pagecounts-20121209-080000.gz",
"pagecounts-20121209-090000.gz",
"pagecounts-20121209-100000.gz",
"pagecounts-20121209-110000.gz",
"pagecounts-20121209-120000.gz",
"pagecounts-20121209-130000.gz",
"pagecounts-20121209-140000.gz",
"pagecounts-20121209-150000.gz",
"pagecounts-20121209-160001.gz",
"pagecounts-20121209-170000.gz",
"pagecounts-20121209-180000.gz",
"pagecounts-20121209-190000.gz",
"pagecounts-20121209-200000.gz",
"pagecounts-20121209-210000.gz",
"pagecounts-20121209-220000.gz",
"pagecounts-20121209-230000.gz",
"pagecounts-20121210-000000.gz",
"pagecounts-20121210-010000.gz",
"pagecounts-20121210-020000.gz",
"pagecounts-20121210-030000.gz",
"pagecounts-20121210-040000.gz",
"pagecounts-20121210-050001.gz",
"pagecounts-20121210-060000.gz",
"pagecounts-20121210-070000.gz",
"pagecounts-20121210-080000.gz",
"pagecounts-20121210-090000.gz",
"pagecounts-20121210-100000.gz",
"pagecounts-20121210-110000.gz",
"pagecounts-20121210-120000.gz",
"pagecounts-20121210-130000.gz",
"pagecounts-20121210-140000.gz",
"pagecounts-20121210-150000.gz",
"pagecounts-20121210-160000.gz",
"pagecounts-20121210-170001.gz",
"pagecounts-20121210-180000.gz",
"pagecounts-20121210-190000.gz",
"pagecounts-20121210-200000.gz",
"pagecounts-20121210-210000.gz",
"pagecounts-20121210-220000.gz",
"pagecounts-20121210-230000.gz",
"pagecounts-20121211-000000.gz",
"pagecounts-20121211-010000.gz",
"pagecounts-20121211-020000.gz",
"pagecounts-20121211-030000.gz",
"pagecounts-20121211-040000.gz",
"pagecounts-20121211-050000.gz",
"pagecounts-20121211-060001.gz",
"pagecounts-20121211-070000.gz",
"pagecounts-20121211-080000.gz",
"pagecounts-20121211-090000.gz",
"pagecounts-20121211-100000.gz",
"pagecounts-20121211-110000.gz",
"pagecounts-20121211-120000.gz",
"pagecounts-20121211-130000.gz",
"pagecounts-20121211-140000.gz",
"pagecounts-20121211-150000.gz",
"pagecounts-20121211-160000.gz",
"pagecounts-20121211-170000.gz",
"pagecounts-20121211-180001.gz",
"pagecounts-20121211-190000.gz",
"pagecounts-20121211-200000.gz",
"pagecounts-20121211-210000.gz",
"pagecounts-20121211-220000.gz",
"pagecounts-20121211-230000.gz",
"pagecounts-20121212-000000.gz",
"pagecounts-20121212-010000.gz",
"pagecounts-20121212-020000.gz",
"pagecounts-20121212-030000.gz",
"pagecounts-20121212-040000.gz",
"pagecounts-20121212-050000.gz",
"pagecounts-20121212-060001.gz",
"pagecounts-20121212-070000.gz",
"pagecounts-20121212-080000.gz",
"pagecounts-20121212-090000.gz",
"pagecounts-20121212-100000.gz",
"pagecounts-20121212-110000.gz",
"pagecounts-20121212-120000.gz",
"pagecounts-20121212-130000.gz",
"pagecounts-20121212-140000.gz",
"pagecounts-20121212-150000.gz",
"pagecounts-20121212-160000.gz",
"pagecounts-20121212-170000.gz",
"pagecounts-20121212-180001.gz",
"pagecounts-20121212-190000.gz",
"pagecounts-20121212-200000.gz",
"pagecounts-20121212-210000.gz",
"pagecounts-20121212-220000.gz",
"pagecounts-20121212-230000.gz",
"pagecounts-20121213-000000.gz",
"pagecounts-20121213-010000.gz",
"pagecounts-20121213-020000.gz",
"pagecounts-20121213-030000.gz",
"pagecounts-20121213-040000.gz",
"pagecounts-20121213-050000.gz",
"pagecounts-20121213-060000.gz",
"pagecounts-20121213-070001.gz",
"pagecounts-20121213-080000.gz",
"pagecounts-20121213-090000.gz",
"pagecounts-20121213-100000.gz",
"pagecounts-20121213-110000.gz",
"pagecounts-20121213-120000.gz",
"pagecounts-20121213-130000.gz",
"pagecounts-20121213-140000.gz",
"pagecounts-20121213-150000.gz",
"pagecounts-20121213-160000.gz",
"pagecounts-20121213-170000.gz",
"pagecounts-20121213-180000.gz",
"pagecounts-20121213-190000.gz",
"pagecounts-20121213-200001.gz",
"pagecounts-20121213-210000.gz",
"pagecounts-20121213-220000.gz",
"pagecounts-20121213-230000.gz",
"pagecounts-20121214-000000.gz",
"pagecounts-20121214-010000.gz",
"pagecounts-20121214-020000.gz",
"pagecounts-20121214-030000.gz",
"pagecounts-20121214-040000.gz",
"pagecounts-20121214-050000.gz",
"pagecounts-20121214-060000.gz",
"pagecounts-20121214-070000.gz",
"pagecounts-20121214-080000.gz",
"pagecounts-20121214-090001.gz",
"pagecounts-20121214-100000.gz",
"pagecounts-20121214-110000.gz",
"pagecounts-20121214-120000.gz",
"pagecounts-20121214-130000.gz",
"pagecounts-20121214-140000.gz",
"pagecounts-20121214-150000.gz",
"pagecounts-20121214-160000.gz",
"pagecounts-20121214-170000.gz",
"pagecounts-20121214-180000.gz",
"pagecounts-20121214-190000.gz",
"pagecounts-20121214-200000.gz",
"pagecounts-20121214-210001.gz",
"pagecounts-20121214-220000.gz",
"pagecounts-20121214-230000.gz",
"pagecounts-20121215-000000.gz",
"pagecounts-20121215-010000.gz",
"pagecounts-20121215-020000.gz",
"pagecounts-20121215-030000.gz",
"pagecounts-20121215-040000.gz",
"pagecounts-20121215-050000.gz",
"pagecounts-20121215-060000.gz",
"pagecounts-20121215-070000.gz",
"pagecounts-20121215-080000.gz",
"pagecounts-20121215-090000.gz",
"pagecounts-20121215-100001.gz",
"pagecounts-20121215-110000.gz",
"pagecounts-20121215-120000.gz",
"pagecounts-20121215-130000.gz",
"pagecounts-20121215-140000.gz",
"pagecounts-20121215-150000.gz",
"pagecounts-20121215-160000.gz",
"pagecounts-20121215-170000.gz",
"pagecounts-20121215-180000.gz",
"pagecounts-20121215-190000.gz",
"pagecounts-20121215-200000.gz",
"pagecounts-20121215-210000.gz",
"pagecounts-20121215-220001.gz",
"pagecounts-20121215-230000.gz",
"pagecounts-20121216-000000.gz",
"pagecounts-20121216-010000.gz",
"pagecounts-20121216-020000.gz",
"pagecounts-20121216-030000.gz",
"pagecounts-20121216-040000.gz",
"pagecounts-20121216-050000.gz",
"pagecounts-20121216-060000.gz",
"pagecounts-20121216-070000.gz",
"pagecounts-20121216-080000.gz",
"pagecounts-20121216-090000.gz",
"pagecounts-20121216-100000.gz",
"pagecounts-20121216-110001.gz",
"pagecounts-20121216-120000.gz",
"pagecounts-20121216-130000.gz",
"pagecounts-20121216-140000.gz",
"pagecounts-20121216-150000.gz",
"pagecounts-20121216-160000.gz",
"pagecounts-20121216-170000.gz",
"pagecounts-20121216-180000.gz",
"pagecounts-20121216-190000.gz",
"pagecounts-20121216-200000.gz",
"pagecounts-20121216-210000.gz",
"pagecounts-20121216-220000.gz",
"pagecounts-20121216-230000.gz",
"pagecounts-20121217-000001.gz",
"pagecounts-20121217-010000.gz",
"pagecounts-20121217-020000.gz",
"pagecounts-20121217-030000.gz",
"pagecounts-20121217-040000.gz",
"pagecounts-20121217-050000.gz",
"pagecounts-20121217-060000.gz",
"pagecounts-20121217-070000.gz",
"pagecounts-20121217-080000.gz",
"pagecounts-20121217-090000.gz",
"pagecounts-20121217-100000.gz",
"pagecounts-20121217-110000.gz",
"pagecounts-20121217-120000.gz",
"pagecounts-20121217-130001.gz",
"pagecounts-20121217-140000.gz",
"pagecounts-20121217-150000.gz",
"pagecounts-20121217-160000.gz",
"pagecounts-20121217-170000.gz",
"pagecounts-20121217-180000.gz",
"pagecounts-20121217-190000.gz",
"pagecounts-20121217-200000.gz",
"pagecounts-20121217-210000.gz",
"pagecounts-20121217-220000.gz",
"pagecounts-20121217-230000.gz",
"pagecounts-20121218-000000.gz",
"pagecounts-20121218-010001.gz",
"pagecounts-20121218-020000.gz",
"pagecounts-20121218-030000.gz",
"pagecounts-20121218-040000.gz",
"pagecounts-20121218-050000.gz",
"pagecounts-20121218-060000.gz",
"pagecounts-20121218-070000.gz",
"pagecounts-20121218-080000.gz",
"pagecounts-20121218-090000.gz",
"pagecounts-20121218-100000.gz",
"pagecounts-20121218-110000.gz",
"pagecounts-20121218-120000.gz",
"pagecounts-20121218-130001.gz",
"pagecounts-20121218-140000.gz",
"pagecounts-20121218-150000.gz",
"pagecounts-20121218-160000.gz",
"pagecounts-20121218-170000.gz",
"pagecounts-20121218-180000.gz",
"pagecounts-20121218-190000.gz",
"pagecounts-20121218-200000.gz",
"pagecounts-20121218-210000.gz",
"pagecounts-20121218-220000.gz",
"pagecounts-20121218-230000.gz",
"pagecounts-20121219-000000.gz",
"pagecounts-20121219-010000.gz",
"pagecounts-20121219-020001.gz",
"pagecounts-20121219-030000.gz",
"pagecounts-20121219-040000.gz",
"pagecounts-20121219-050000.gz",
"pagecounts-20121219-060000.gz",
"pagecounts-20121219-070000.gz",
"pagecounts-20121219-080000.gz",
"pagecounts-20121219-090000.gz",
"pagecounts-20121219-100000.gz",
"pagecounts-20121219-110000.gz",
"pagecounts-20121219-120000.gz",
"pagecounts-20121219-130000.gz",
"pagecounts-20121219-140000.gz",
"pagecounts-20121219-150001.gz",
"pagecounts-20121219-160000.gz",
"pagecounts-20121219-170000.gz",
"pagecounts-20121219-180000.gz",
"pagecounts-20121219-190000.gz",
"pagecounts-20121219-200000.gz",
"pagecounts-20121219-210000.gz",
"pagecounts-20121219-220000.gz",
"pagecounts-20121219-230000.gz",
"pagecounts-20121220-000000.gz",
"pagecounts-20121220-010000.gz",
"pagecounts-20121220-020001.gz",
"pagecounts-20121220-030000.gz",
"pagecounts-20121220-040000.gz",
"pagecounts-20121220-050000.gz",
"pagecounts-20121220-060000.gz",
"pagecounts-20121220-070000.gz",
"pagecounts-20121220-080000.gz",
"pagecounts-20121220-090000.gz",
"pagecounts-20121220-100000.gz",
"pagecounts-20121220-110000.gz",
"pagecounts-20121220-120000.gz",
"pagecounts-20121220-130000.gz",
"pagecounts-20121220-140000.gz",
"pagecounts-20121220-150001.gz",
"pagecounts-20121220-160000.gz",
"pagecounts-20121220-170000.gz",
"pagecounts-20121220-180000.gz",
"pagecounts-20121220-190000.gz",
"pagecounts-20121220-200000.gz",
"pagecounts-20121220-210000.gz",
"pagecounts-20121220-220000.gz",
"pagecounts-20121220-230000.gz",
"pagecounts-20121221-000000.gz",
"pagecounts-20121221-010000.gz",
"pagecounts-20121221-020000.gz",
"pagecounts-20121221-030000.gz",
"pagecounts-20121221-040001.gz",
"pagecounts-20121221-050000.gz",
"pagecounts-20121221-060000.gz",
"pagecounts-20121221-070000.gz",
"pagecounts-20121221-080000.gz",
"pagecounts-20121221-090000.gz",
"pagecounts-20121221-100000.gz",
"pagecounts-20121221-110000.gz",
"pagecounts-20121221-120000.gz",
"pagecounts-20121221-130000.gz",
"pagecounts-20121221-140000.gz",
"pagecounts-20121221-150000.gz",
"pagecounts-20121221-160001.gz",
"pagecounts-20121221-170000.gz",
"pagecounts-20121221-180000.gz",
"pagecounts-20121221-190000.gz",
"pagecounts-20121221-200000.gz",
"pagecounts-20121221-210000.gz",
"pagecounts-20121221-220000.gz",
"pagecounts-20121221-230000.gz",
"pagecounts-20121222-000000.gz",
"pagecounts-20121222-010000.gz",
"pagecounts-20121222-020000.gz",
"pagecounts-20121222-030000.gz",
"pagecounts-20121222-040001.gz",
"pagecounts-20121222-050000.gz",
"pagecounts-20121222-060000.gz",
"pagecounts-20121222-070000.gz",
"pagecounts-20121222-080000.gz",
"pagecounts-20121222-090000.gz",
"pagecounts-20121222-100000.gz",
"pagecounts-20121222-110000.gz",
"pagecounts-20121222-120000.gz",
"pagecounts-20121222-130000.gz",
"pagecounts-20121222-140000.gz",
"pagecounts-20121222-150000.gz",
"pagecounts-20121222-160001.gz",
"pagecounts-20121222-170000.gz",
"pagecounts-20121222-180000.gz",
"pagecounts-20121222-190000.gz",
"pagecounts-20121222-200000.gz",
"pagecounts-20121222-210000.gz",
"pagecounts-20121222-220000.gz",
"pagecounts-20121222-230000.gz",
"pagecounts-20121223-000000.gz",
"pagecounts-20121223-010000.gz",
"pagecounts-20121223-020000.gz",
"pagecounts-20121223-030000.gz",
"pagecounts-20121223-040001.gz",
"pagecounts-20121223-050000.gz",
"pagecounts-20121223-060000.gz",
"pagecounts-20121223-070000.gz",
"pagecounts-20121223-080000.gz",
"pagecounts-20121223-090000.gz",
"pagecounts-20121223-100000.gz",
"pagecounts-20121223-110000.gz",
"pagecounts-20121223-120000.gz",
"pagecounts-20121223-130000.gz",
"pagecounts-20121223-140000.gz",
"pagecounts-20121223-150000.gz",
"pagecounts-20121223-160000.gz",
"pagecounts-20121223-170001.gz",
"pagecounts-20121223-180000.gz",
"pagecounts-20121223-190000.gz",
"pagecounts-20121223-200000.gz",
"pagecounts-20121223-210000.gz",
"pagecounts-20121223-220000.gz",
"pagecounts-20121223-230000.gz",
"pagecounts-20121224-000000.gz",
"pagecounts-20121224-010000.gz",
"pagecounts-20121224-020000.gz",
"pagecounts-20121224-030000.gz",
"pagecounts-20121224-040000.gz",
"pagecounts-20121224-050001.gz",
"pagecounts-20121224-060000.gz",
"pagecounts-20121224-070000.gz",
"pagecounts-20121224-080000.gz",
"pagecounts-20121224-090000.gz",
"pagecounts-20121224-100000.gz",
"pagecounts-20121224-110000.gz",
"pagecounts-20121224-120000.gz",
"pagecounts-20121224-130000.gz",
"pagecounts-20121224-140000.gz",
"pagecounts-20121224-150000.gz",
"pagecounts-20121224-160000.gz",
"pagecounts-20121224-170000.gz",
"pagecounts-20121224-180001.gz",
"pagecounts-20121224-190000.gz",
"pagecounts-20121224-200000.gz",
"pagecounts-20121224-210000.gz",
"pagecounts-20121224-220000.gz",
"pagecounts-20121224-230000.gz",
"pagecounts-20121225-000000.gz",
"pagecounts-20121225-010000.gz",
"pagecounts-20121225-020000.gz",
"pagecounts-20121225-030000.gz",
"pagecounts-20121225-040000.gz",
"pagecounts-20121225-050000.gz",
"pagecounts-20121225-060000.gz",
"pagecounts-20121225-070001.gz",
"pagecounts-20121225-080000.gz",
"pagecounts-20121225-090000.gz",
"pagecounts-20121225-100000.gz",
"pagecounts-20121225-110000.gz",
"pagecounts-20121225-120000.gz",
"pagecounts-20121225-130000.gz",
"pagecounts-20121225-140000.gz",
"pagecounts-20121225-150000.gz",
"pagecounts-20121225-160000.gz",
"pagecounts-20121225-170000.gz",
"pagecounts-20121225-180000.gz",
"pagecounts-20121225-190001.gz",
"pagecounts-20121225-200000.gz",
"pagecounts-20121225-210000.gz",
"pagecounts-20121225-220000.gz",
"pagecounts-20121225-230000.gz",
"pagecounts-20121226-000000.gz",
"pagecounts-20121226-010000.gz",
"pagecounts-20121226-020000.gz",
"pagecounts-20121226-030000.gz",
"pagecounts-20121226-040000.gz",
"pagecounts-20121226-050000.gz",
"pagecounts-20121226-060000.gz",
"pagecounts-20121226-070001.gz",
"pagecounts-20121226-080000.gz",
"pagecounts-20121226-090000.gz",
"pagecounts-20121226-100000.gz",
"pagecounts-20121226-110000.gz",
"pagecounts-20121226-120000.gz",
"pagecounts-20121226-130000.gz",
"pagecounts-20121226-140000.gz",
"pagecounts-20121226-150000.gz",
"pagecounts-20121226-160000.gz",
"pagecounts-20121226-170000.gz",
"pagecounts-20121226-180000.gz",
"pagecounts-20121226-190000.gz",
"pagecounts-20121226-200001.gz",
"pagecounts-20121226-210000.gz",
"pagecounts-20121226-220000.gz",
"pagecounts-20121226-230000.gz",
"pagecounts-20121227-000000.gz",
"pagecounts-20121227-010000.gz",
"pagecounts-20121227-020000.gz",
"pagecounts-20121227-030000.gz",
"pagecounts-20121227-040000.gz",
"pagecounts-20121227-050000.gz",
"pagecounts-20121227-060000.gz",
"pagecounts-20121227-070000.gz",
"pagecounts-20121227-080001.gz",
"pagecounts-20121227-090000.gz",
"pagecounts-20121227-100000.gz",
"pagecounts-20121227-110000.gz",
"pagecounts-20121227-120000.gz",
"pagecounts-20121227-130000.gz",
"pagecounts-20121227-140000.gz",
"pagecounts-20121227-150000.gz",
"pagecounts-20121227-160000.gz",
"pagecounts-20121227-170000.gz",
"pagecounts-20121227-180000.gz",
"pagecounts-20121227-190000.gz",
"pagecounts-20121227-200000.gz",
"pagecounts-20121227-210001.gz",
"pagecounts-20121227-220000.gz",
"pagecounts-20121227-230000.gz",
"pagecounts-20121228-000000.gz",
"pagecounts-20121228-010000.gz",
"pagecounts-20121228-020000.gz",
"pagecounts-20121228-030000.gz",
"pagecounts-20121228-040000.gz",
"pagecounts-20121228-050000.gz",
"pagecounts-20121228-060000.gz",
"pagecounts-20121228-070000.gz",
"pagecounts-20121228-080000.gz",
"pagecounts-20121228-090001.gz",
"pagecounts-20121228-100000.gz",
"pagecounts-20121228-110000.gz",
"pagecounts-20121228-120000.gz",
"pagecounts-20121228-130000.gz",
"pagecounts-20121228-140000.gz",
"pagecounts-20121228-150000.gz",
"pagecounts-20121228-160000.gz",
"pagecounts-20121228-170000.gz",
"pagecounts-20121228-180000.gz",
"pagecounts-20121228-190000.gz",
"pagecounts-20121228-200000.gz",
"pagecounts-20121228-210000.gz",
"pagecounts-20121228-220001.gz",
"pagecounts-20121228-230000.gz",
"pagecounts-20121229-000000.gz",
"pagecounts-20121229-010000.gz",
"pagecounts-20121229-020000.gz",
"pagecounts-20121229-030000.gz",
"pagecounts-20121229-040000.gz",
"pagecounts-20121229-050000.gz",
"pagecounts-20121229-060000.gz",
"pagecounts-20121229-070000.gz",
"pagecounts-20121229-080000.gz",
"pagecounts-20121229-090000.gz",
"pagecounts-20121229-100000.gz",
"pagecounts-20121229-110001.gz",
"pagecounts-20121229-120000.gz",
"pagecounts-20121229-130000.gz",
"pagecounts-20121229-140000.gz",
"pagecounts-20121229-150000.gz",
"pagecounts-20121229-160000.gz",
"pagecounts-20121229-170000.gz",
"pagecounts-20121229-180000.gz",
"pagecounts-20121229-190000.gz",
"pagecounts-20121229-200000.gz",
"pagecounts-20121229-210000.gz",
"pagecounts-20121229-220000.gz",
"pagecounts-20121229-230001.gz",
"pagecounts-20121230-000000.gz",
"pagecounts-20121230-010000.gz",
"pagecounts-20121230-020000.gz",
"pagecounts-20121230-030000.gz",
"pagecounts-20121230-040000.gz",
"pagecounts-20121230-050000.gz",
"pagecounts-20121230-060000.gz",
"pagecounts-20121230-070000.gz",
"pagecounts-20121230-080000.gz",
"pagecounts-20121230-090000.gz",
"pagecounts-20121230-100000.gz",
"pagecounts-20121230-110001.gz",
"pagecounts-20121230-120000.gz",
"pagecounts-20121230-130000.gz",
"pagecounts-20121230-140000.gz",
"pagecounts-20121230-150000.gz",
"pagecounts-20121230-160000.gz",
"pagecounts-20121230-170000.gz",
"pagecounts-20121230-180000.gz",
"pagecounts-20121230-190000.gz",
"pagecounts-20121230-200000.gz",
"pagecounts-20121230-210000.gz",
"pagecounts-20121230-220000.gz",
"pagecounts-20121230-230000.gz",
"pagecounts-20121231-000001.gz",
"pagecounts-20121231-010000.gz",
"pagecounts-20121231-020000.gz",
"pagecounts-20121231-030000.gz",
"pagecounts-20121231-040000.gz",
"pagecounts-20121231-050000.gz",
"pagecounts-20121231-060000.gz",
"pagecounts-20121231-070000.gz",
"pagecounts-20121231-080000.gz",
"pagecounts-20121231-090000.gz",
"pagecounts-20121231-100000.gz",
"pagecounts-20121231-110000.gz",
"pagecounts-20121231-120000.gz",
"pagecounts-20121231-130001.gz",
"pagecounts-20121231-140000.gz",
"pagecounts-20121231-150000.gz",
"pagecounts-20121231-160000.gz",
"pagecounts-20121231-170000.gz",
"pagecounts-20121231-180000.gz",
"pagecounts-20121231-190000.gz",
"pagecounts-20121231-200000.gz",
"pagecounts-20121231-210000.gz",
"pagecounts-20121231-220000.gz",
"pagecounts-20121231-230000.gz",
]
import os

base = "http://dumps.wikimedia.org/other/pagecounts-raw/"
tail = "2012/2012-12/"

# Skip files already present locally under any of the names this workflow
# produces, then fetch the rest with curl.
for i, url in enumerate(urls, start=1):
    one = "en-" + url[:-3]
    two = url[:-3]
    three = url
    if not (os.path.isfile(one) or os.path.isfile(two) or os.path.isfile(three)):
        # os.system("curl --silent -O %s >> /dev/null" % (base + tail + url))
        os.system("curl -O %s" % (base + tail + url))
    print("%d completed of %d total. %d remaining" % (i, len(urls), len(urls) - i))
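The hard-coded list above follows a strict hourly naming scheme (`pagecounts-YYYYMMDD-HH0000.gz`), except for a handful of files stamped one second late (`...0001.gz`). As an illustrative sketch (not part of the original script), the December 2012 hours can be generated instead of listed by hand; the odd `-HH0001` names mean a generated list still has to be reconciled with the server's actual directory listing:

```python
from datetime import datetime, timedelta

def hourly_pagecount_names(start, end):
    """Yield pagecounts-YYYYMMDD-HH0000.gz names for every hour in [start, end)."""
    t = start
    while t < end:
        yield t.strftime("pagecounts-%Y%m%d-%H0000.gz")
        t += timedelta(hours=1)

# All of December 2012: 31 days * 24 hours = 744 hourly files.
names = list(hourly_pagecount_names(datetime(2012, 12, 1), datetime(2013, 1, 1)))
print(len(names))  # 744
```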
2f82847e712a5383cc91d0207f3951fd9cbbcfd5 | 338 | py | Python | website/contact/models.py | phayv/open_project | 2ab2b87683ea120f6f7baa734df9a6920716232b | [
"BSD-3-Clause"
] | null | null | null | website/contact/models.py | phayv/open_project | 2ab2b87683ea120f6f7baa734df9a6920716232b | [
"BSD-3-Clause"
] | 14 | 2020-03-24T15:57:26.000Z | 2022-03-11T23:26:57.000Z | website/contact/models.py | phayv/open_project | 2ab2b87683ea120f6f7baa734df9a6920716232b | [
"BSD-3-Clause"
] | 1 | 2018-08-01T02:17:01.000Z | 2018-08-01T02:17:01.000Z |

from django.db import models
"""
class Contact(models.Model):
first_name = models.CharField(max_length=100, default='')
last_name = models.CharField(max_length=100, default='')
email = models.CharField(max_length=300, default='')
phone = models.CharField(max_length=30, default='')
extra_info = models.TextField()
"""
85cd82ad6359f04307ad63f28f66595bd5121383 | 7,878 | py | Python | hkm/tests/test_clean_unused_data.py | City-of-Helsinki/kuvaselaamo | 3fa9b69e3f5496620852d8b138129d0069339fcd | [
"MIT"
] | 1 | 2017-05-07T10:46:24.000Z | 2017-05-07T10:46:24.000Z | hkm/tests/test_clean_unused_data.py | City-of-Helsinki/kuvaselaamo | 3fa9b69e3f5496620852d8b138129d0069339fcd | [
"MIT"
] | 60 | 2016-10-18T11:18:48.000Z | 2022-02-13T20:04:18.000Z | hkm/tests/test_clean_unused_data.py | City-of-Helsinki/kuvaselaamo | 3fa9b69e3f5496620852d8b138129d0069339fcd | [
"MIT"
] | 9 | 2017-04-18T13:26:26.000Z | 2020-02-13T20:05:13.000Z |

import pytest
from factories import FeedbackFactory, TmpImageFactory, ProductOrderFactory, UserFactory
from hkm.models.models import Feedback, TmpImage, ProductOrder, UserProfile, User, Collection, Record
from django.core.management import call_command
from freezegun import freeze_time
from datetime import datetime, timedelta
from hkm.management.commands.clean_unused_data import DEFAULT_DAYS_UNTIL_REMOVAL, DEFAULT_DAYS_UNTIL_NOTIFICATION
CUSTOM_DAYS_UNTIL_REMOVAL = 10
CUSTOM_DAYS_UNTIL_NOTIFICATION = 5
@pytest.fixture
def day_older_than_removal_date():
return datetime.today() - timedelta(days=DEFAULT_DAYS_UNTIL_REMOVAL + 1)
@pytest.fixture
def day_newer_than_removal_date():
return datetime.today() - timedelta(days=DEFAULT_DAYS_UNTIL_REMOVAL - 1)
@pytest.fixture
def day_within_grace_period():
return datetime.today() - timedelta(days=DEFAULT_DAYS_UNTIL_REMOVAL - DEFAULT_DAYS_UNTIL_NOTIFICATION - 1)
@pytest.fixture
def day_outside_grace_period():
return datetime.today() - timedelta(days=DEFAULT_DAYS_UNTIL_REMOVAL - DEFAULT_DAYS_UNTIL_NOTIFICATION + 1)
@pytest.fixture
def day_older_than_custom_removal_date():
return datetime.today() - timedelta(days=CUSTOM_DAYS_UNTIL_REMOVAL + 1)
@pytest.fixture
def day_newer_than_custom_removal_date():
return datetime.today() - timedelta(days=CUSTOM_DAYS_UNTIL_REMOVAL - 1)
@pytest.fixture
def day_within_custom_grace_period():
return datetime.today() - timedelta(days=CUSTOM_DAYS_UNTIL_REMOVAL - CUSTOM_DAYS_UNTIL_NOTIFICATION - 1)
@pytest.fixture
def day_outside_custom_grace_period():
return datetime.today() - timedelta(days=CUSTOM_DAYS_UNTIL_REMOVAL - CUSTOM_DAYS_UNTIL_NOTIFICATION + 1)
@pytest.mark.django_db
def test_that_old_anonymous_data_is_removed_using_default_date(day_older_than_removal_date,
day_newer_than_removal_date):
_create_anonymous_data(day_older_than_removal_date)
_create_anonymous_data(day_newer_than_removal_date)
_assert_anonymous_data_counts(2)
call_command('clean_unused_data')
_assert_anonymous_data_counts(1)
def _create_anonymous_data(modified_date):
with freeze_time(modified_date):
FeedbackFactory()
TmpImageFactory()
ProductOrderFactory()
@pytest.mark.django_db
def test_that_old_anonymous_data_is_removed_using_custom_date(day_older_than_custom_removal_date,
day_newer_than_custom_removal_date):
_create_anonymous_data(day_older_than_custom_removal_date)
_create_anonymous_data(day_newer_than_custom_removal_date)
_assert_anonymous_data_counts(2)
call_command('clean_unused_data', days_until_removal=CUSTOM_DAYS_UNTIL_REMOVAL,
days_until_notification=CUSTOM_DAYS_UNTIL_NOTIFICATION)
_assert_anonymous_data_counts(1)
def _assert_anonymous_data_counts(count):
assert Feedback.objects.count() == count
assert TmpImage.objects.count() == count
assert ProductOrder.objects.count() == count
@pytest.mark.django_db
def test_that_old_user_gets_deleted_using_default_date(day_older_than_removal_date, day_newer_than_removal_date,
day_outside_grace_period, day_within_grace_period):
not_deleted = [
UserFactory(last_login=day_newer_than_removal_date, profile__removal_notification_sent=None),
UserFactory(last_login=day_newer_than_removal_date, profile__removal_notification_sent=day_within_grace_period)
]
UserFactory(last_login=day_older_than_removal_date, profile__removal_notification_sent=day_outside_grace_period)
_assert_user_data_counts(3)
call_command('clean_unused_data')
_assert_user_data_counts(2)
_assert_users_exist(not_deleted)
@pytest.mark.django_db
def test_that_old_user_gets_deleted_using_custom_date(day_older_than_custom_removal_date,
day_newer_than_custom_removal_date,
day_outside_custom_grace_period):
not_deleted = [UserFactory(last_login=day_newer_than_custom_removal_date)]
UserFactory(last_login=day_older_than_custom_removal_date,
profile__removal_notification_sent=day_outside_custom_grace_period)
_assert_user_data_counts(2)
call_command('clean_unused_data', days_until_removal=CUSTOM_DAYS_UNTIL_REMOVAL,
days_until_notification=CUSTOM_DAYS_UNTIL_NOTIFICATION)
_assert_user_data_counts(1)
_assert_users_exist(not_deleted)
def _assert_user_data_counts(count):
assert User.objects.count() == count
assert UserProfile.objects.count() == count
assert Feedback.objects.count() == count
assert TmpImage.objects.count() == count
assert ProductOrder.objects.count() == count
assert Collection.objects.count() == count
assert Record.objects.count() == count
@pytest.mark.django_db
def test_that_staff_users_dont_get_deleted(day_older_than_removal_date):
not_deleted = [
UserFactory(last_login=day_older_than_removal_date, is_superuser=True),
UserFactory(last_login=day_older_than_removal_date, is_staff=True),
UserFactory(last_login=day_older_than_removal_date, profile__is_museum=True),
UserFactory(last_login=day_older_than_removal_date, profile__is_admin=True)
]
_assert_user_data_counts(4)
call_command('clean_unused_data')
_assert_user_data_counts(4)
_assert_users_exist(not_deleted)
def _assert_users_exist(users):
for user in users:
assert User.objects.get(pk=user.id)
@pytest.mark.django_db
def test_that_user_is_not_deleted_without_notification(day_older_than_removal_date, day_within_grace_period,
day_outside_grace_period):
not_deleted = [
UserFactory(last_login=day_older_than_removal_date, profile__removal_notification_sent=None),
UserFactory(last_login=day_older_than_removal_date, profile__removal_notification_sent=day_within_grace_period)
]
UserFactory(last_login=day_older_than_removal_date, profile__removal_notification_sent=day_outside_grace_period)
_assert_user_data_counts(3)
call_command('clean_unused_data')
_assert_user_data_counts(2)
_assert_users_exist(not_deleted)
@pytest.mark.django_db
def test_that_user_is_not_deleted_without_notification_using_custom_dates(
day_older_than_custom_removal_date, day_within_custom_grace_period,
day_outside_custom_grace_period):
not_deleted = [
UserFactory(last_login=day_older_than_custom_removal_date, profile__removal_notification_sent=None),
UserFactory(last_login=day_older_than_custom_removal_date,
profile__removal_notification_sent=day_within_custom_grace_period)
]
UserFactory(last_login=day_older_than_custom_removal_date,
profile__removal_notification_sent=day_outside_custom_grace_period)
_assert_user_data_counts(3)
call_command('clean_unused_data', days_until_removal=CUSTOM_DAYS_UNTIL_REMOVAL,
days_until_notification=CUSTOM_DAYS_UNTIL_NOTIFICATION)
_assert_user_data_counts(2)
_assert_users_exist(not_deleted)
def test_negative_days_until_removal(capsys):
call_command('clean_unused_data', days_until_removal=-1)
assert "Invalid parameters given." in capsys.readouterr()[0]
def test_negative_days_until_notification(capsys):
call_command('clean_unused_data', days_until_notification=-1)
assert "Invalid parameters given." in capsys.readouterr()[0]
def test_days_until_notification_cant_be_more_than_days_until_removal(capsys):
call_command('clean_unused_data', days_until_notification=10, days_until_removal=9)
assert "Invalid parameters given." in capsys.readouterr()[0]
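Read together, the fixtures above pin down a two-step retention policy: a stale account is removed only after a removal notification has been sent and its grace period has elapsed, and privileged accounts are always exempt. A hypothetical, simplified sketch of that predicate (the real logic lives in the `clean_unused_data` management command, which is not shown here):

```python
from datetime import datetime, timedelta

def due_for_removal(last_login, notification_sent, now,
                    days_until_removal, days_until_notification,
                    is_protected=False):
    """True when a user matches the deletion rule the tests encode."""
    if is_protected:  # staff/superuser/museum/admin accounts are never removed
        return False
    removal_cutoff = now - timedelta(days=days_until_removal)
    # The notification must have gone out before the grace period started.
    grace_cutoff = now - timedelta(days=days_until_removal - days_until_notification)
    return (last_login < removal_cutoff
            and notification_sent is not None
            and notification_sent < grace_cutoff)

now = datetime(2021, 1, 1)
print(due_for_removal(now - timedelta(days=11), now - timedelta(days=6), now, 10, 5))  # True
print(due_for_removal(now - timedelta(days=11), None, now, 10, 5))                     # False
```

With `days_until_removal=10` and `days_until_notification=5`, the first call matches the "outside grace period" fixtures (account stale, notice sent 6 days ago), while a missing or recent notification keeps the account, exactly as the tests assert.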
c86c1c4765c49c445f226a7900b41cb5672068bb | 30 | py | Python | xview3/centernet/models/heads/__init__.py | BloodAxe/xView3-The-First-Place-Solution | 9a9600e7dfbaa24ff5a72c81061fbbbfed865847 | [
"MIT"
] | 39 | 2022-01-11T05:20:29.000Z | 2022-03-09T10:35:47.000Z | xview3/centernet/models/heads/__init__.py | DIUx-xView/xView3_first_place | 63a594601bf71a96909230d31097dca790b40ff2 | [
"MIT"
] | 1 | 2022-03-07T09:50:22.000Z | 2022-03-07T09:50:22.000Z | xview3/centernet/models/heads/__init__.py | BloodAxe/xView3-The-First-Place-Solution | 9a9600e7dfbaa24ff5a72c81061fbbbfed865847 | [
"MIT"
] | 4 | 2022-01-14T16:14:02.000Z | 2022-02-22T02:17:33.000Z | from .decoupled_head import *

# File: django_images/admin.py
# Repo: nickali/pinry (BSD-2-Clause)
from django.contrib import admin
from .models import Image
from .models import Thumbnail
admin.site.register(Image)
admin.site.register(Thumbnail)

# File: qulab_toolbox/test/old/gate/__init__.py
# Repo: weiyangliu/QuLab_toolbox (MIT)
from .clifford import *
from .XYgate import *

# File: src/agents/__init__.py
# Repo: threewisemonkeys-as/drone_automation (MIT)
from base_agent import BaseAgent
from random_agent import RandomAgent

# File: other-apps/alto-function/main.py
# Repo: jama22/alto (Apache-2.0)
from flask import escape, render_template
import functions_framework
@functions_framework.http
def alto_function_http(request):
    return render_template("index.html")

# File: criticalsyncing/core/admin.py
# Repo: martinthenext/criticalsyncing (MIT)
from django.contrib import admin
from .models import SourceTag, Source, Article, Cache
admin.site.register(SourceTag)
admin.site.register(Source)
admin.site.register(Article)
admin.site.register(Cache)

# File: bitmap-mario/main.py
# Repo: luscafter/graphical-computing (MIT)
import matplotlib.pyplot as p
import numpy as n
# https://github.com/luscafter/graphical-computing
# Try the software in the Colab tool:
# https://colab.research.google.com/
R = n.array([
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 0, 1, 1, 1, 1, 1, 0, 1],
[0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1]
], dtype=n.float32)
G = n.array([
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 0, 1, 0, 0, 1, 1, 0, 1],
[0, 0, 1, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1]
], dtype=n.float32)
B = n.array([
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 0, 1, 0, 0, 1, 1, 0, 1],
[0, 0, 1, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1]
], dtype=n.float32)
# Create an 8x9x3 matrix
v = n.zeros((8, 9, 3), dtype=n.float32)
# The array receives the RED channel
v[:,:,0] = R
# The array receives the GREEN channel
v[:,:,1] = G
# The array receives the BLUE channel
v[:,:,2] = B
p.figure(figsize=(3, 3))
image = p.imshow(v)
p.axis("off")
p.show()
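The script assembles the RGB image by writing each channel matrix into a pre-allocated zeros array. As a sanity check, `np.stack` builds the same array in one call; a minimal sketch with toy 2x2 channels (not the Mario bitmap above):

```python
import numpy as np

R = np.array([[1., 0.], [0., 1.]], dtype=np.float32)
G = np.zeros((2, 2), dtype=np.float32)
B = np.ones((2, 2), dtype=np.float32)

# Manual assembly, as in the script above.
v = np.zeros((2, 2, 3), dtype=np.float32)
v[:, :, 0] = R
v[:, :, 1] = G
v[:, :, 2] = B

# Equivalent one-liner: stack the channels along a new last axis.
v2 = np.stack([R, G, B], axis=-1)
assert v2.shape == (2, 2, 3)
assert np.array_equal(v, v2)
```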

# File: torch/utils/benchmark/__init__.py
# Repo: Hacky-DH/pytorch (license: Intel)
from torch.utils.benchmark.utils.common import *  # noqa: F403
from torch.utils.benchmark.utils.timer import * # noqa: F403
from torch.utils.benchmark.utils.compare import * # noqa: F403
from torch.utils.benchmark.utils.fuzzer import * # noqa: F403
from torch.utils.benchmark.utils.valgrind_wrapper.timer_interface import * # noqa: F403
from torch.utils.benchmark.utils.sparse_fuzzer import * # noqa: F403

# File: colossalai/engine/ophooks/__init__.py
# Repo: RichardoLuo/ColossalAI (Apache-2.0)
from .utils import register_ophooks_recursively, BaseOpHook
from ._memtracer_ophook import MemTracerOpHook
__all__ = ["BaseOpHook", "MemTracerOpHook", "register_ophooks_recursively"]

# File: utils/views/confirm.py
# Repo: DiscordGIR/Bloo2 (MIT)
import discord
from discord import ui
from utils import GIRContext
from utils.framework import gatekeeper
class Confirm(ui.View):
    def __init__(self, ctx: GIRContext, true_response=None, false_response=None):
        super(Confirm, self).__init__(timeout=20)
        self.ctx = ctx
        self.value = None
        self.true_response = true_response
        self.false_response = false_response

    async def on_timeout(self) -> None:
        await self.ctx.send_warning("Timed out.")
        return await super().on_timeout()

    async def interaction_check(self, interaction: discord.Interaction) -> bool:
        return interaction.user == self.ctx.author

    # When the confirm button is pressed, set the inner value to `True` and
    # stop the View from listening to more input.
    # We also send the user an ephemeral message that we're confirming their choice.
    @ui.button(label='Yes', style=discord.ButtonStyle.success)
    async def confirm(self, interaction: discord.Interaction, _: ui.Button):
        self.ctx.interaction = interaction
        self.value = True
        self.stop()

    # This one is similar to the confirmation button except sets the inner value to `False`
    @ui.button(label='No', style=discord.ButtonStyle.grey)
    async def cancel(self, interaction: discord.Interaction, _: ui.Button):
        self.ctx.interaction = interaction
        if self.false_response is not None:
            await self.ctx.send_warning(description=self.false_response)
        self.value = False
        self.stop()


class SecondStaffConfirm(ui.View):
    def __init__(self, ctx: GIRContext, og_mod: discord.Member):
        super(SecondStaffConfirm, self).__init__(timeout=20)
        self.ctx = ctx
        self.value = None
        self.og_mod = og_mod

    async def on_timeout(self) -> None:
        await self.ctx.send_warning("Timed out.")
        return await super().on_timeout()

    async def interaction_check(self, interaction: discord.Interaction) -> bool:
        return interaction.user.id != self.og_mod.id and gatekeeper.has(self.ctx.guild, interaction.user, 5)

    # When the confirm button is pressed, set the inner value to `True` and
    # stop the View from listening to more input.
    # We also send the user an ephemeral message that we're confirming their choice.
    @ui.button(label='Yes', style=discord.ButtonStyle.success)
    async def confirm(self, interaction: discord.Interaction, _: ui.Button):
        self.ctx.interaction = interaction
        self.value = True
        self.stop()

    # This one is similar to the confirmation button except sets the inner value to `False`
    @ui.button(label='No', style=discord.ButtonStyle.grey)
    async def cancel(self, interaction: discord.Interaction, _: ui.Button):
        self.ctx.interaction = interaction
        self.value = False
        self.stop()
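Both views rely on `timeout=20`: if neither button is pressed, `on_timeout` fires and `self.value` stays `None`. The same answered-versus-timed-out pattern, sketched without discord using plain `asyncio.wait_for` (the `confirm` helper here is a hypothetical stand-in, with sleeps simulating the user's response time):

```python
import asyncio


async def confirm(timeout, answer_delay):
    # Wait for an "answer" (simulated by a sleep) but give up after
    # `timeout` seconds, mirroring the View's timeout behavior.
    try:
        await asyncio.wait_for(asyncio.sleep(answer_delay), timeout=timeout)
        return True       # answered in time
    except asyncio.TimeoutError:
        return None       # timed out, like Confirm.value staying None


assert asyncio.run(confirm(0.05, 0.01)) is True
assert asyncio.run(confirm(0.01, 0.05)) is None
```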

# File: network.py
# Repo: jjandnn/Fully-Automatic-Video-Colorization-with-Self-Regularization-and-Diversity (MIT)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os,time,cv2,scipy.io
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
import subprocess
def identity_initializer():
    def _initializer(shape, dtype=tf.float32, partition_info=None):
        array = np.zeros(shape, dtype=float)
        cx, cy = shape[0]//2, shape[1]//2
        for i in range(np.minimum(shape[2], shape[3])):
            array[cx, cy, i, i] = 1
        return tf.constant(array, dtype=dtype)
    return _initializer


def lrelu(x):
    return tf.maximum(x*0.2, x)


def bilinear_up_and_concat(x1, x2, output_channels, in_channels, scope):
    with tf.variable_scope(scope):
        upconv = tf.image.resize_images(x1, [tf.shape(x1)[1]*2, tf.shape(x1)[2]*2])
        upconv.set_shape([None, None, None, in_channels])
        upconv = slim.conv2d(upconv, output_channels, [3, 3], rate=1, activation_fn=None, weights_initializer=tf.contrib.layers.xavier_initializer(), scope='up_conv1')
        upconv_output = tf.concat([upconv, x2], axis=3)
        upconv_output.set_shape([None, None, None, output_channels*2])
    return upconv_output


def VCN(input, channel=32, output_channel=3, reuse=False, ext="", div_num=4):
    if reuse:
        tf.get_variable_scope().reuse_variables()
    conv1 = slim.conv2d(input, channel, [1, 1], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv1_1')
    conv1 = slim.conv2d(conv1, channel, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv1_2')
    pool1 = slim.max_pool2d(conv1, [2, 2], padding='SAME')
    conv2 = slim.conv2d(pool1, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv2_1')
    conv2 = slim.conv2d(conv2, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv2_2')
    pool2 = slim.max_pool2d(conv2, [2, 2], padding='SAME')
    conv3 = slim.conv2d(pool2, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv3_1')
    conv3 = slim.conv2d(conv3, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv3_2')
    pool3 = slim.max_pool2d(conv3, [2, 2], padding='SAME')
    conv4 = slim.conv2d(pool3, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv4_1')
    conv4 = slim.conv2d(conv4, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv4_2')
    pool4 = slim.max_pool2d(conv4, [2, 2], padding='SAME')
    conv5 = slim.conv2d(pool4, channel*16, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv5_1')
    conv5 = slim.conv2d(conv5, channel*16, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv5_2')
    up6 = bilinear_up_and_concat(conv5, conv4, channel*8, channel*16, scope=ext+"g_up_1")
    conv6 = slim.conv2d(up6, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv6_1')
    conv6 = slim.conv2d(conv6, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv6_2')
    up7 = bilinear_up_and_concat(conv6, conv3, channel*4, channel*8, scope=ext+"g_up_2")
    conv7 = slim.conv2d(up7, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv7_1')
    conv7 = slim.conv2d(conv7, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv7_2')
    up8 = bilinear_up_and_concat(conv7, conv2, channel*2, channel*4, scope=ext+"g_up_3")
    conv8 = slim.conv2d(up8, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv8_1')
    conv8 = slim.conv2d(conv8, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv8_2')
    up9 = bilinear_up_and_concat(conv8, conv1, channel, channel*2, scope=ext+"g_up_4")
    conv9 = slim.conv2d(up9, channel, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv9_1')
    conv9 = slim.conv2d(conv9, output_channel*div_num, [3, 3], rate=1, activation_fn=None, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'g_conv9_2')
    return conv9


def VCRN(input, channel=32, output_channel=3, reuse=False, ext="VCRN"):
    if reuse:
        tf.get_variable_scope().reuse_variables()
    conv1 = slim.conv2d(input, channel, [1, 1], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv1_1')
    conv1 = slim.conv2d(conv1, channel, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv1_2')
    pool1 = slim.max_pool2d(conv1, [2, 2], padding='SAME')
    conv2 = slim.conv2d(pool1, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv2_1')
    conv2 = slim.conv2d(conv2, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv2_2')
    pool2 = slim.max_pool2d(conv2, [2, 2], padding='SAME')
    conv3 = slim.conv2d(pool2, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv3_1')
    conv3 = slim.conv2d(conv3, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv3_2')
    pool3 = slim.max_pool2d(conv3, [2, 2], padding='SAME')
    conv4 = slim.conv2d(pool3, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv4_1')
    conv4 = slim.conv2d(conv4, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv4_2')
    pool4 = slim.max_pool2d(conv4, [2, 2], padding='SAME')
    conv5 = slim.conv2d(pool4, channel*16, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv5_1')
    conv5 = slim.conv2d(conv5, channel*16, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv5_2')
    up6 = bilinear_up_and_concat(conv5, conv4, channel*8, channel*16, scope=ext+"r_up_1")
    conv6 = slim.conv2d(up6, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv6_1')
    conv6 = slim.conv2d(conv6, channel*8, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv6_2')
    up7 = bilinear_up_and_concat(conv6, conv3, channel*4, channel*8, scope=ext+"r_up_2")
    conv7 = slim.conv2d(up7, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv7_1')
    conv7 = slim.conv2d(conv7, channel*4, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv7_2')
    up8 = bilinear_up_and_concat(conv7, conv2, channel*2, channel*4, scope=ext+"r_up_3")
    conv8 = slim.conv2d(up8, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv8_1')
    conv8 = slim.conv2d(conv8, channel*2, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv8_2')
    up9 = bilinear_up_and_concat(conv8, conv1, channel, channel*2, scope=ext+"r_up_4")
    conv9 = slim.conv2d(up9, channel, [3, 3], rate=1, activation_fn=lrelu, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv9_1')
    conv9 = slim.conv2d(conv9, output_channel, [3, 3], rate=1, activation_fn=None, weights_initializer=tf.contrib.layers.xavier_initializer(), scope=ext+'r_conv9_2')
    return conv9
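The `lrelu` activation used throughout network.py is just `tf.maximum(x*0.2, x)`. The same leaky-ReLU behavior can be checked in plain NumPy, independent of the TF1/slim stack (a minimal sketch; `lrelu_np` is a hypothetical name):

```python
import numpy as np


def lrelu_np(x, alpha=0.2):
    # Leaky ReLU: pass positives through unchanged, scale negatives by alpha.
    return np.maximum(alpha * x, x)


x = np.array([-10.0, -1.0, 0.0, 2.5])
y = lrelu_np(x)
assert np.allclose(y, [-2.0, -0.2, 0.0, 2.5])
```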

# File: core/settings/__init__.py
# Repo: fj-fj-fj/tech-store (MIT)
from core.settings.dev import Development  # noqa: F401
from core.settings.prod import Production # noqa: F401

# File: tests/test_exceptions.py
# Repo: GamePad64/jsonrpcclient (MIT)
import pytest
from jsonrpcclient import exceptions
def test_non_2xx_status_code_error():
    with pytest.raises(exceptions.ReceivedNon2xxResponseError):
        raise exceptions.ReceivedNon2xxResponseError(404)

# File: USA/paths_usa.py
# Repo: KatharinaGruber/windpower_GWA (MIT)
usa_path = "/data/users/kgruber/other-data/USA"
results_path = "/data/users/kgruber/results/USA"
mer_path = "/data/users/kgruber/Data/MERRA/USA"
era_path = "/data/users/kgruber/Data/era5/USA"
script_path = "/data/users/kgruber/Skripts/USA"

# File: main.py
# Repo: 100dlswjd/draw_helper (MIT)
import sys
import time
import win32api
import win32con
import keyboard_tool
from threading import Thread, Event
from PySide6.QtWidgets import QApplication, QMainWindow, QLabel, QDialog
from PySide6.QtGui import QCloseEvent
from main_form import Ui_draw_helper
from range_set_form import Ui_range_set
class range_set(QDialog, Ui_range_set):
def __init__(self):
super(range_set, self).__init__()
self.setupUi(self)
self.flag = False
self.range = 5
self.btn_ok.clicked.connect(self.btn_ok_click)
self.lineEdit.textChanged.connect(self.change_text)
self.exec()
def change_text(self, text):
self.lineEdit.setText(text)
self.range = int(text)
def btn_ok_click(self):
self.flag = True
self.close()
def setting(self):
return self.flag, self.range
class Mainwindow(QMainWindow, Ui_draw_helper):
def __init__(self):
super(Mainwindow, self).__init__()
self.setupUi(self)
self.range = 5
self.action_rangesetting.triggered.connect(self.rangesetting_click)
self.action_exit.triggered.connect(self.exit_click)
self.label_2.setText(f"현재 설정 값 : {self.range}")
self.exit_event = Event()
self.exit_event.clear()
self.worker = Thread(target = self.main_thread_proc)
self.worker.start()
def main_thread_proc(self):
while self.exit_event.is_set() == False:
Pos = win32api.GetCursorPos()
temPos = [0,0]
if win32api.GetAsyncKeyState(win32con.VK_NUMPAD1) & 0x8000:
keyboard_tool.pressAndHold("alt")
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
time.sleep(.1)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
keyboard_tool.release("alt")
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
time.sleep(.05)
for idx in range(self.range):
temPos[0] = Pos[0] - idx
temPos[1] = Pos[1] + idx
win32api.SetCursorPos((temPos[0], temPos[1]))
time.sleep(.001)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
win32api.SetCursorPos(Pos)
elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD2) & 0x8000:
keyboard_tool.pressAndHold("alt")
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
time.sleep(.1)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
keyboard_tool.release("alt")
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
time.sleep(.05)
for idx in range(self.range):
temPos[0] = Pos[0]
temPos[1] = Pos[1] + idx
win32api.SetCursorPos((temPos[0], temPos[1]))
time.sleep(.001)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
win32api.SetCursorPos(Pos)
elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD3) & 0x8000:
keyboard_tool.pressAndHold("alt")
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            # Drag diagonally down-right by self.range pixels, then release.
            for idx in range(self.range):
                temPos[0] = Pos[0] + idx
                temPos[1] = Pos[1] + idx
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
        elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD4) & 0x8000:
            # Alt+click at Pos, then drag left by self.range pixels.
            keyboard_tool.pressAndHold("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            for idx in range(self.range):
                temPos[0] = Pos[0] - idx
                temPos[1] = Pos[1]
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
        elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD6) & 0x8000:
            # Alt+click at Pos, then drag right by self.range pixels.
            keyboard_tool.pressAndHold("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            for idx in range(self.range):
                temPos[0] = Pos[0] + idx
                temPos[1] = Pos[1]
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
        elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD7) & 0x8000:
            # Alt+click at Pos, then drag diagonally up-left by self.range pixels.
            keyboard_tool.pressAndHold("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            for idx in range(self.range):
                temPos[0] = Pos[0] - idx
                temPos[1] = Pos[1] - idx
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
        elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD8) & 0x8000:
            # Alt+click at Pos, then drag up by self.range pixels.
            keyboard_tool.pressAndHold("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            for idx in range(self.range):
                temPos[0] = Pos[0]
                temPos[1] = Pos[1] - idx
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
        elif win32api.GetAsyncKeyState(win32con.VK_NUMPAD9) & 0x8000:
            # Alt+click at Pos, then drag diagonally up-right by self.range pixels.
            keyboard_tool.pressAndHold("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.1)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            keyboard_tool.release("alt")
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Pos[0], Pos[1], 0, 0)
            time.sleep(.05)
            for idx in range(self.range):
                temPos[0] = Pos[0] + idx
                temPos[1] = Pos[1] - idx
                win32api.SetCursorPos((temPos[0], temPos[1]))
                time.sleep(.001)
            win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Pos[0], Pos[1], 0, 0)
            win32api.SetCursorPos(Pos)
    def rangesetting_click(self):
        print("Range setting clicked!")
        flag, self.range = range_set.setting(range_set())
        if flag:
            self.label_2.setText(f"Current setting value: {self.range}")

    def exit_click(self):
        print("Exit clicked")
        self.close()

    def closeEvent(self, event: QCloseEvent) -> None:
        self.exit_event.set()
        return super().closeEvent(event)
if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = Mainwindow()
    window.show()
    app.exec()
# --- src/atcoder/abc212/g/sol_11.py (kagemeka/competitive-programming, MIT) ---
# TODO implement map with red black tree.
# --- narwhallet/core/ksc/__init__.py (Snider/narwhallet, MIT) ---
from narwhallet.core.ksc.scripts import Scripts
# --- applaud/endpoints/ci_xcode_versions.py (codinn/applaud, MIT) ---
from __future__ import annotations
from .base import Endpoint, IDEndpoint, SortOrder, endpoint
from ..fields import *
from typing import Union
from ..schemas.models import *
from ..schemas.responses import *
from ..schemas.requests import *
from ..schemas.enums import *
class CiXcodeVersionsEndpoint(Endpoint):
    path = '/v1/ciXcodeVersions'

    def fields(self, *, ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]]=None, ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]]=None) -> CiXcodeVersionsEndpoint:
        '''Fields to return for included related types.

        :param ci_xcode_version: the fields to include for returned resources of type ciXcodeVersions
        :type ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]] = None
        :param ci_mac_os_version: the fields to include for returned resources of type ciMacOsVersions
        :type ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]] = None
        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionsEndpoint
        '''
        if ci_xcode_version: self._set_fields('ciXcodeVersions', ci_xcode_version if type(ci_xcode_version) is list else [ci_xcode_version])
        if ci_mac_os_version: self._set_fields('ciMacOsVersions', ci_mac_os_version if type(ci_mac_os_version) is list else [ci_mac_os_version])
        return self

    class Include(StringEnum):
        MAC_OS_VERSIONS = 'macOsVersions'

    def include(self, relationship: Union[Include, list[Include]]) -> CiXcodeVersionsEndpoint:
        '''Relationship data to include in the response.

        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionsEndpoint
        '''
        if relationship: self._set_includes(relationship if type(relationship) is list else [relationship])
        return self

    def limit(self, number: int=None, *, mac_os_versions: int=None) -> CiXcodeVersionsEndpoint:
        '''Number of resources or included related resources to return.

        :param number: maximum resources per page. The maximum limit is 200
        :type number: int = None
        :param mac_os_versions: maximum number of related macOsVersions returned (when they are included). The maximum limit is 50
        :type mac_os_versions: int = None
        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionsEndpoint
        '''
        if number and number > 200:
            raise ValueError('The maximum limit of number is 200')
        if number: self._set_limit(number)
        if mac_os_versions and mac_os_versions > 50:
            raise ValueError('The maximum limit of mac_os_versions is 50')
        if mac_os_versions: self._set_limit(mac_os_versions, 'macOsVersions')
        return self

    def get(self) -> CiXcodeVersionsResponse:
        '''Get one or more resources.

        :returns: List of CiXcodeVersions
        :rtype: CiXcodeVersionsResponse
        :raises: :py:class:`applaud.schemas.responses.ErrorResponse`: if an error response is returned.
                 :py:class:`requests.RequestException`: if a connection or an HTTP error occurred.
        '''
        json = super()._perform_get()
        return CiXcodeVersionsResponse.parse_obj(json)
class CiXcodeVersionEndpoint(IDEndpoint):
    path = '/v1/ciXcodeVersions/{id}'

    @endpoint('/v1/ciXcodeVersions/{id}/macOsVersions')
    def mac_os_versions(self) -> MacOsVersionsOfCiXcodeVersionEndpoint:
        return MacOsVersionsOfCiXcodeVersionEndpoint(self.id, self.session)

    def fields(self, *, ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]]=None, ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]]=None) -> CiXcodeVersionEndpoint:
        '''Fields to return for included related types.

        :param ci_xcode_version: the fields to include for returned resources of type ciXcodeVersions
        :type ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]] = None
        :param ci_mac_os_version: the fields to include for returned resources of type ciMacOsVersions
        :type ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]] = None
        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionEndpoint
        '''
        if ci_xcode_version: self._set_fields('ciXcodeVersions', ci_xcode_version if type(ci_xcode_version) is list else [ci_xcode_version])
        if ci_mac_os_version: self._set_fields('ciMacOsVersions', ci_mac_os_version if type(ci_mac_os_version) is list else [ci_mac_os_version])
        return self

    class Include(StringEnum):
        MAC_OS_VERSIONS = 'macOsVersions'

    def include(self, relationship: Union[Include, list[Include]]) -> CiXcodeVersionEndpoint:
        '''Relationship data to include in the response.

        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionEndpoint
        '''
        if relationship: self._set_includes(relationship if type(relationship) is list else [relationship])
        return self

    def limit(self, *, mac_os_versions: int=None) -> CiXcodeVersionEndpoint:
        '''Number of included related resources to return.

        :param mac_os_versions: maximum number of related macOsVersions returned (when they are included). The maximum limit is 50
        :type mac_os_versions: int = None
        :returns: self
        :rtype: applaud.endpoints.CiXcodeVersionEndpoint
        '''
        if mac_os_versions and mac_os_versions > 50:
            raise ValueError('The maximum limit of mac_os_versions is 50')
        if mac_os_versions: self._set_limit(mac_os_versions, 'macOsVersions')
        return self

    def get(self) -> CiXcodeVersionResponse:
        '''Get the resource.

        :returns: Single CiXcodeVersion
        :rtype: CiXcodeVersionResponse
        :raises: :py:class:`applaud.schemas.responses.ErrorResponse`: if an error response is returned.
                 :py:class:`requests.RequestException`: if a connection or an HTTP error occurred.
        '''
        json = super()._perform_get()
        return CiXcodeVersionResponse.parse_obj(json)
class MacOsVersionsOfCiXcodeVersionEndpoint(IDEndpoint):
    path = '/v1/ciXcodeVersions/{id}/macOsVersions'

    def fields(self, *, ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]]=None, ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]]=None) -> MacOsVersionsOfCiXcodeVersionEndpoint:
        '''Fields to return for included related types.

        :param ci_xcode_version: the fields to include for returned resources of type ciXcodeVersions
        :type ci_xcode_version: Union[CiXcodeVersionField, list[CiXcodeVersionField]] = None
        :param ci_mac_os_version: the fields to include for returned resources of type ciMacOsVersions
        :type ci_mac_os_version: Union[CiMacOsVersionField, list[CiMacOsVersionField]] = None
        :returns: self
        :rtype: applaud.endpoints.MacOsVersionsOfCiXcodeVersionEndpoint
        '''
        if ci_xcode_version: self._set_fields('ciXcodeVersions', ci_xcode_version if type(ci_xcode_version) is list else [ci_xcode_version])
        if ci_mac_os_version: self._set_fields('ciMacOsVersions', ci_mac_os_version if type(ci_mac_os_version) is list else [ci_mac_os_version])
        return self

    class Include(StringEnum):
        XCODE_VERSIONS = 'xcodeVersions'

    def include(self, relationship: Union[Include, list[Include]]) -> MacOsVersionsOfCiXcodeVersionEndpoint:
        '''Relationship data to include in the response.

        :returns: self
        :rtype: applaud.endpoints.MacOsVersionsOfCiXcodeVersionEndpoint
        '''
        if relationship: self._set_includes(relationship if type(relationship) is list else [relationship])
        return self

    def limit(self, number: int=None, *, xcode_versions: int=None) -> MacOsVersionsOfCiXcodeVersionEndpoint:
        '''Number of resources or included related resources to return.

        :param number: maximum resources per page. The maximum limit is 200
        :type number: int = None
        :param xcode_versions: maximum number of related xcodeVersions returned (when they are included). The maximum limit is 50
        :type xcode_versions: int = None
        :returns: self
        :rtype: applaud.endpoints.MacOsVersionsOfCiXcodeVersionEndpoint
        '''
        if number and number > 200:
            raise ValueError('The maximum limit of number is 200')
        if number: self._set_limit(number)
        if xcode_versions and xcode_versions > 50:
            raise ValueError('The maximum limit of xcode_versions is 50')
        if xcode_versions: self._set_limit(xcode_versions, 'xcodeVersions')
        return self

    def get(self) -> CiMacOsVersionsResponse:
        '''Get one or more resources.

        :returns: List of related resources
        :rtype: CiMacOsVersionsResponse
        :raises: :py:class:`applaud.schemas.responses.ErrorResponse`: if an error response is returned.
                 :py:class:`requests.RequestException`: if a connection or an HTTP error occurred.
        '''
        json = super()._perform_get()
        return CiMacOsVersionsResponse.parse_obj(json)
# --- src/mh_en_exec/connection/__init__.py (kkopus/ExternalNodes, MIT) ---
from .connection_base import ConnectionListenerBase, ConnectionResourceListenerBase
from .connection_grpc import ConnectionGrpc
from .connection_pb2_grpc import ConnectionGrpcStub
# --- lightning_baselines3/off_policy_models/__init__.py (HenryJia/lightning-baselines3, MIT) ---
from lightning_baselines3.off_policy_models.off_policy_model import OffPolicyModel
from lightning_baselines3.off_policy_models.dqn import DQN
from lightning_baselines3.off_policy_models.td3 import TD3
from lightning_baselines3.off_policy_models.ddpg import DDPG
from lightning_baselines3.off_policy_models.sac import SAC
# --- .venv/lib/python3.8/site-packages/aws_cdk/aws_ecr/__init__.py (sandipganguly/cdkpipeline, MIT-0) ---
"""
## Amazon ECR Construct Library
<!--BEGIN STABILITY BANNER-->---


---
<!--END STABILITY BANNER-->
This package contains constructs for working with Amazon Elastic Container Registry.
### Repositories
Define a repository by creating a new instance of `Repository`. A repository
holds multiple versions of a single container image.
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
repository = ecr.Repository(self, "Repository")
```
### Image scanning
Amazon ECR image scanning helps in identifying software vulnerabilities in your container images. You can manually scan container images stored in Amazon ECR, or you can configure your repositories to scan images when you push them to a repository. To create a new repository to scan on push, simply enable `imageScanOnPush` in the properties
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
repository = ecr.Repository(stack, "Repo",
image_scan_on_push=True
)
```
To create an `onImageScanCompleted` event rule and trigger the event target
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
repository.on_image_scan_completed("ImageScanComplete").add_target(...)
```
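
Under the hood, `on_image_scan_completed` wires up an EventBridge rule that matches ECR's scan-completed event. The sketch below is plain Python, not aws_cdk code; the field names follow the documented event shape, but treat the matcher and sample event as illustrative:

```python
# Sketch of the event shape that on_image_scan_completed() matches.
SCAN_COMPLETE_PATTERN = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Scan"],
}

def matches(pattern: dict, event: dict) -> bool:
    """Minimal top-level EventBridge matching: every pattern key must be
    present in the event, with a value from the pattern's allowed list."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

sample_event = {
    "source": "aws.ecr",
    "detail-type": "ECR Image Scan",
    "detail": {"scan-status": "COMPLETE", "repository-name": "Repository"},
}
```

Targets you attach via `add_target(...)` are invoked only for events that satisfy this pattern.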
### Automatically clean up repositories
You can set life cycle rules to automatically clean up old images from your
repository. The first life cycle rule that matches an image will be applied
against that image. For example, the following deletes images older than
30 days, while keeping all images tagged with prod (note that the order
is important here):
```python
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
repository.add_lifecycle_rule(tag_prefix_list=["prod"], max_image_count=9999)
repository.add_lifecycle_rule(max_image_age=cdk.Duration.days(30))
```
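
The "first matching rule wins" ordering can be illustrated with a small pure-Python sketch (hypothetical helper names, not the aws_cdk or ECR API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Rule:
    max_image_age_days: Optional[int] = None
    max_image_count: Optional[int] = None
    tag_prefix_list: List[str] = field(default_factory=list)

def first_matching_rule(rules: List[Rule], image_tags: List[str]) -> Optional[Rule]:
    """Return the first rule whose tag filter matches the image, mirroring
    ECR's 'first matching lifecycle rule is applied' evaluation order."""
    for rule in rules:
        # A rule with no tag prefixes matches every image.
        if not rule.tag_prefix_list or any(
            tag.startswith(prefix)
            for prefix in rule.tag_prefix_list
            for tag in image_tags
        ):
            return rule
    return None

rules = [
    Rule(tag_prefix_list=["prod"], max_image_count=9999),  # effectively keep all prod images
    Rule(max_image_age_days=30),                           # expire everything else after 30 days
]
```

Because the `prod` rule comes first, an old prod-tagged image is evaluated against the keep-9999 rule and never reaches the 30-day expiry rule.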
"""
import abc
import builtins
import datetime
import enum
import typing
import jsii
import jsii.compat
import publication
from ._jsii import *
import aws_cdk.aws_events
import aws_cdk.aws_iam
import aws_cdk.core
@jsii.implements(aws_cdk.core.IInspectable)
class CfnRepository(
    aws_cdk.core.CfnResource,
    metaclass=jsii.JSIIMeta,
    jsii_type="@aws-cdk/aws-ecr.CfnRepository",
):
    """A CloudFormation ``AWS::ECR::Repository``.

    see
    :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html
    cloudformationResource:
    :cloudformationResource:: AWS::ECR::Repository
    """

    def __init__(
        self,
        scope: aws_cdk.core.Construct,
        id: str,
        *,
        lifecycle_policy: typing.Optional[
            typing.Union["LifecyclePolicyProperty", aws_cdk.core.IResolvable]
        ] = None,
        repository_name: typing.Optional[str] = None,
        repository_policy_text: typing.Any = None,
        tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]] = None,
    ) -> None:
        """Create a new ``AWS::ECR::Repository``.

        :param scope: - scope in which this resource is defined.
        :param id: - scoped id of the resource.
        :param lifecycle_policy: ``AWS::ECR::Repository.LifecyclePolicy``.
        :param repository_name: ``AWS::ECR::Repository.RepositoryName``.
        :param repository_policy_text: ``AWS::ECR::Repository.RepositoryPolicyText``.
        :param tags: ``AWS::ECR::Repository.Tags``.
        """
        props = CfnRepositoryProps(
            lifecycle_policy=lifecycle_policy,
            repository_name=repository_name,
            repository_policy_text=repository_policy_text,
            tags=tags,
        )

        jsii.create(CfnRepository, self, [scope, id, props])

    @jsii.member(jsii_name="fromCloudFormation")
    @builtins.classmethod
    def from_cloud_formation(
        cls,
        scope: aws_cdk.core.Construct,
        id: str,
        resource_attributes: typing.Any,
        *,
        finder: aws_cdk.core.ICfnFinder,
    ) -> "CfnRepository":
        """A factory method that creates a new instance of this class from an object containing the CloudFormation properties of this resource.

        Used in the @aws-cdk/cloudformation-include module.

        :param scope: -
        :param id: -
        :param resource_attributes: -
        :param finder: The finder interface used to resolve references across the template.

        stability
        :stability: experimental
        """
        options = aws_cdk.core.FromCloudFormationOptions(finder=finder)
        return jsii.sinvoke(
            cls, "fromCloudFormation", [scope, id, resource_attributes, options]
        )

    @jsii.member(jsii_name="inspect")
    def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
        """Examines the CloudFormation resource and discloses attributes.

        :param inspector: - tree inspector to collect and process attributes.

        stability
        :stability: experimental
        """
        return jsii.invoke(self, "inspect", [inspector])

    @jsii.member(jsii_name="renderProperties")
    def _render_properties(
        self, props: typing.Mapping[str, typing.Any]
    ) -> typing.Mapping[str, typing.Any]:
        """
        :param props: -
        """
        return jsii.invoke(self, "renderProperties", [props])

    @jsii.python.classproperty
    @jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
    def CFN_RESOURCE_TYPE_NAME(cls) -> str:
        """The CloudFormation resource type name for this resource class."""
        return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")

    @builtins.property
    @jsii.member(jsii_name="attrArn")
    def attr_arn(self) -> str:
        """
        cloudformationAttribute:
        :cloudformationAttribute:: Arn
        """
        return jsii.get(self, "attrArn")

    @builtins.property
    @jsii.member(jsii_name="cfnProperties")
    def _cfn_properties(self) -> typing.Mapping[str, typing.Any]:
        return jsii.get(self, "cfnProperties")

    @builtins.property
    @jsii.member(jsii_name="tags")
    def tags(self) -> aws_cdk.core.TagManager:
        """``AWS::ECR::Repository.Tags``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-tags
        """
        return jsii.get(self, "tags")

    @builtins.property
    @jsii.member(jsii_name="repositoryPolicyText")
    def repository_policy_text(self) -> typing.Any:
        """``AWS::ECR::Repository.RepositoryPolicyText``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-repositorypolicytext
        """
        return jsii.get(self, "repositoryPolicyText")

    @repository_policy_text.setter
    def repository_policy_text(self, value: typing.Any) -> None:
        jsii.set(self, "repositoryPolicyText", value)

    @builtins.property
    @jsii.member(jsii_name="lifecyclePolicy")
    def lifecycle_policy(
        self,
    ) -> typing.Optional[
        typing.Union["LifecyclePolicyProperty", aws_cdk.core.IResolvable]
    ]:
        """``AWS::ECR::Repository.LifecyclePolicy``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-lifecyclepolicy
        """
        return jsii.get(self, "lifecyclePolicy")

    @lifecycle_policy.setter
    def lifecycle_policy(
        self,
        value: typing.Optional[
            typing.Union["LifecyclePolicyProperty", aws_cdk.core.IResolvable]
        ],
    ) -> None:
        jsii.set(self, "lifecyclePolicy", value)

    @builtins.property
    @jsii.member(jsii_name="repositoryName")
    def repository_name(self) -> typing.Optional[str]:
        """``AWS::ECR::Repository.RepositoryName``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-repositoryname
        """
        return jsii.get(self, "repositoryName")

    @repository_name.setter
    def repository_name(self, value: typing.Optional[str]) -> None:
        jsii.set(self, "repositoryName", value)
    @jsii.data_type(
        jsii_type="@aws-cdk/aws-ecr.CfnRepository.LifecyclePolicyProperty",
        jsii_struct_bases=[],
        name_mapping={
            "lifecycle_policy_text": "lifecyclePolicyText",
            "registry_id": "registryId",
        },
    )
    class LifecyclePolicyProperty:
        def __init__(
            self,
            *,
            lifecycle_policy_text: typing.Optional[str] = None,
            registry_id: typing.Optional[str] = None,
        ) -> None:
            """
            :param lifecycle_policy_text: ``CfnRepository.LifecyclePolicyProperty.LifecyclePolicyText``.
            :param registry_id: ``CfnRepository.LifecyclePolicyProperty.RegistryId``.

            see
            :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecr-repository-lifecyclepolicy.html
            """
            self._values = {}
            if lifecycle_policy_text is not None:
                self._values["lifecycle_policy_text"] = lifecycle_policy_text
            if registry_id is not None:
                self._values["registry_id"] = registry_id

        @builtins.property
        def lifecycle_policy_text(self) -> typing.Optional[str]:
            """``CfnRepository.LifecyclePolicyProperty.LifecyclePolicyText``.

            see
            :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecr-repository-lifecyclepolicy.html#cfn-ecr-repository-lifecyclepolicy-lifecyclepolicytext
            """
            return self._values.get("lifecycle_policy_text")

        @builtins.property
        def registry_id(self) -> typing.Optional[str]:
            """``CfnRepository.LifecyclePolicyProperty.RegistryId``.

            see
            :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecr-repository-lifecyclepolicy.html#cfn-ecr-repository-lifecyclepolicy-registryid
            """
            return self._values.get("registry_id")

        def __eq__(self, rhs) -> bool:
            return isinstance(rhs, self.__class__) and rhs._values == self._values

        def __ne__(self, rhs) -> bool:
            return not (rhs == self)

        def __repr__(self) -> str:
            return "LifecyclePolicyProperty(%s)" % ", ".join(
                k + "=" + repr(v) for k, v in self._values.items()
            )
@jsii.data_type(
    jsii_type="@aws-cdk/aws-ecr.CfnRepositoryProps",
    jsii_struct_bases=[],
    name_mapping={
        "lifecycle_policy": "lifecyclePolicy",
        "repository_name": "repositoryName",
        "repository_policy_text": "repositoryPolicyText",
        "tags": "tags",
    },
)
class CfnRepositoryProps:
    def __init__(
        self,
        *,
        lifecycle_policy: typing.Optional[
            typing.Union[
                "CfnRepository.LifecyclePolicyProperty", aws_cdk.core.IResolvable
            ]
        ] = None,
        repository_name: typing.Optional[str] = None,
        repository_policy_text: typing.Any = None,
        tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]] = None,
    ) -> None:
        """Properties for defining an ``AWS::ECR::Repository``.

        :param lifecycle_policy: ``AWS::ECR::Repository.LifecyclePolicy``.
        :param repository_name: ``AWS::ECR::Repository.RepositoryName``.
        :param repository_policy_text: ``AWS::ECR::Repository.RepositoryPolicyText``.
        :param tags: ``AWS::ECR::Repository.Tags``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html
        """
        self._values = {}
        if lifecycle_policy is not None:
            self._values["lifecycle_policy"] = lifecycle_policy
        if repository_name is not None:
            self._values["repository_name"] = repository_name
        if repository_policy_text is not None:
            self._values["repository_policy_text"] = repository_policy_text
        if tags is not None:
            self._values["tags"] = tags

    @builtins.property
    def lifecycle_policy(
        self,
    ) -> typing.Optional[
        typing.Union["CfnRepository.LifecyclePolicyProperty", aws_cdk.core.IResolvable]
    ]:
        """``AWS::ECR::Repository.LifecyclePolicy``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-lifecyclepolicy
        """
        return self._values.get("lifecycle_policy")

    @builtins.property
    def repository_name(self) -> typing.Optional[str]:
        """``AWS::ECR::Repository.RepositoryName``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-repositoryname
        """
        return self._values.get("repository_name")

    @builtins.property
    def repository_policy_text(self) -> typing.Any:
        """``AWS::ECR::Repository.RepositoryPolicyText``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-repositorypolicytext
        """
        return self._values.get("repository_policy_text")

    @builtins.property
    def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
        """``AWS::ECR::Repository.Tags``.

        see
        :see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html#cfn-ecr-repository-tags
        """
        return self._values.get("tags")

    def __eq__(self, rhs) -> bool:
        return isinstance(rhs, self.__class__) and rhs._values == self._values

    def __ne__(self, rhs) -> bool:
        return not (rhs == self)

    def __repr__(self) -> str:
        return "CfnRepositoryProps(%s)" % ", ".join(
            k + "=" + repr(v) for k, v in self._values.items()
        )
@jsii.interface(jsii_type="@aws-cdk/aws-ecr.IRepository")
class IRepository(aws_cdk.core.IResource, jsii.compat.Protocol):
    """Represents an ECR repository."""

    @builtins.staticmethod
    def __jsii_proxy_class__():
        return _IRepositoryProxy

    @builtins.property
    @jsii.member(jsii_name="repositoryArn")
    def repository_arn(self) -> str:
        """The ARN of the repository.

        attribute:
        :attribute:: true
        """
        ...

    @builtins.property
    @jsii.member(jsii_name="repositoryName")
    def repository_name(self) -> str:
        """The name of the repository.

        attribute:
        :attribute:: true
        """
        ...

    @builtins.property
    @jsii.member(jsii_name="repositoryUri")
    def repository_uri(self) -> str:
        """The URI of this repository (represents the latest image):.

        ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY

        attribute:
        :attribute:: true
        """
        ...

    @jsii.member(jsii_name="addToResourcePolicy")
    def add_to_resource_policy(
        self, statement: aws_cdk.aws_iam.PolicyStatement
    ) -> aws_cdk.aws_iam.AddToResourcePolicyResult:
        """Add a policy statement to the repository's resource policy.

        :param statement: -
        """
        ...

    @jsii.member(jsii_name="grant")
    def grant(
        self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
    ) -> aws_cdk.aws_iam.Grant:
        """Grant the given principal identity permissions to perform the actions on this repository.

        :param grantee: -
        :param actions: -
        """
        ...

    @jsii.member(jsii_name="grantPull")
    def grant_pull(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
        """Grant the given identity permissions to pull images in this repository.

        :param grantee: -
        """
        ...

    @jsii.member(jsii_name="grantPullPush")
    def grant_pull_push(
        self, grantee: aws_cdk.aws_iam.IGrantable
    ) -> aws_cdk.aws_iam.Grant:
        """Grant the given identity permissions to pull and push images to this repository.

        :param grantee: -
        """
        ...

    @jsii.member(jsii_name="onCloudTrailEvent")
    def on_cloud_trail_event(
        self,
        id: str,
        *,
        description: typing.Optional[str] = None,
        event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
        rule_name: typing.Optional[str] = None,
        target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
    ) -> aws_cdk.aws_events.Rule:
        """Define a CloudWatch event that triggers when something happens to this repository.

        Requires that there exists at least one CloudTrail Trail in your account
        that captures the event. This method will not create the Trail.

        :param id: The id of the rule.
        :param description: A description of the rule's purpose. Default: - No description
        :param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
        :param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
        :param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
        """
        ...

    @jsii.member(jsii_name="onCloudTrailImagePushed")
    def on_cloud_trail_image_pushed(
        self,
        id: str,
        *,
        image_tag: typing.Optional[str] = None,
        description: typing.Optional[str] = None,
        event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
        rule_name: typing.Optional[str] = None,
        target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
    ) -> aws_cdk.aws_events.Rule:
        """Defines an AWS CloudWatch event rule that can trigger a target when an image is pushed to this repository.

        Requires that there exists at least one CloudTrail Trail in your account
        that captures the event. This method will not create the Trail.

        :param id: The id of the rule.
        :param image_tag: Only watch changes to this image tag. Default: - Watch changes to all tags
        :param description: A description of the rule's purpose. Default: - No description
        :param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
        :param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
        :param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
        """
        ...

    @jsii.member(jsii_name="onEvent")
    def on_event(
        self,
        id: str,
        *,
        description: typing.Optional[str] = None,
        event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
        rule_name: typing.Optional[str] = None,
        target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
    ) -> aws_cdk.aws_events.Rule:
        """Defines a CloudWatch event rule which triggers for repository events.

        Use
        ``rule.addEventPattern(pattern)`` to specify a filter.

        :param id: -
        :param description: A description of the rule's purpose. Default: - No description
        :param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
        :param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
...
@jsii.member(jsii_name="onImageScanCompleted")
def on_image_scan_completed(
self,
id: str,
*,
image_tags: typing.Optional[typing.List[str]] = None,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines an AWS CloudWatch event rule that can trigger a target when the image scan is completed.
:param id: The id of the rule.
:param image_tags: Only watch changes to the image tags specified. Leave it undefined to watch the full repository. Default: - Watch the changes to the repository with all image tags
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
...
@jsii.member(jsii_name="repositoryUriForTag")
def repository_uri_for_tag(self, tag: typing.Optional[str] = None) -> str:
"""Returns the URI of the repository for a certain tag. Can be used in ``docker push/pull``.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY[:TAG]
:param tag: Image tag to use (tools usually default to "latest" if omitted).
"""
...
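The three grant methods above differ only in which ECR actions they attach to the grantee's policy, with ``grantPullPush`` a strict superset of ``grantPull``. A plain-Python sketch of that layering (the action names here are assumptions based on common ECR policies, not read from the aws-ecr implementation):

```python
# Illustrative model of how grant/grantPull/grantPullPush layer ECR
# actions onto a grantee's policy. Action names are assumptions.
PULL_ACTIONS = [
    "ecr:BatchCheckLayerAvailability",
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage",
]
PUSH_ACTIONS = [
    "ecr:PutImage",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
]

def grant(policy: set, *actions: str) -> set:
    """Add arbitrary repository actions, like IRepository.grant."""
    policy.update(actions)
    return policy

def grant_pull(policy: set) -> set:
    return grant(policy, *PULL_ACTIONS)

def grant_pull_push(policy: set) -> set:
    # Pull-and-push always includes the pull actions as well.
    return grant(grant_pull(policy), *PUSH_ACTIONS)
```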
class _IRepositoryProxy(jsii.proxy_for(aws_cdk.core.IResource)):
"""Represents an ECR repository."""
__jsii_type__ = "@aws-cdk/aws-ecr.IRepository"
@builtins.property
@jsii.member(jsii_name="repositoryArn")
def repository_arn(self) -> str:
"""The ARN of the repository.
attribute:
:attribute:: true
"""
return jsii.get(self, "repositoryArn")
@builtins.property
@jsii.member(jsii_name="repositoryName")
def repository_name(self) -> str:
"""The name of the repository.
attribute:
:attribute:: true
"""
return jsii.get(self, "repositoryName")
@builtins.property
@jsii.member(jsii_name="repositoryUri")
def repository_uri(self) -> str:
"""The URI of this repository (represents the latest image):.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY
attribute:
:attribute:: true
"""
return jsii.get(self, "repositoryUri")
@jsii.member(jsii_name="addToResourcePolicy")
def add_to_resource_policy(
self, statement: aws_cdk.aws_iam.PolicyStatement
) -> aws_cdk.aws_iam.AddToResourcePolicyResult:
"""Add a policy statement to the repository's resource policy.
:param statement: -
"""
return jsii.invoke(self, "addToResourcePolicy", [statement])
@jsii.member(jsii_name="grant")
def grant(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Grant the given principal identity permissions to perform the actions on this repository.
:param grantee: -
:param actions: -
"""
return jsii.invoke(self, "grant", [grantee, *actions])
@jsii.member(jsii_name="grantPull")
def grant_pull(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant the given identity permissions to pull images in this repository.
:param grantee: -
"""
return jsii.invoke(self, "grantPull", [grantee])
@jsii.member(jsii_name="grantPullPush")
def grant_pull_push(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Grant the given identity permissions to pull and push images to this repository.
:param grantee: -
"""
return jsii.invoke(self, "grantPullPush", [grantee])
@jsii.member(jsii_name="onCloudTrailEvent")
def on_cloud_trail_event(
self,
id: str,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Define a CloudWatch event that triggers when something happens to this repository.
Requires that there exists at least one CloudTrail Trail in your account
that captures the event. This method will not create the Trail.
:param id: The id of the rule.
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = aws_cdk.aws_events.OnEventOptions(
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onCloudTrailEvent", [id, options])
@jsii.member(jsii_name="onCloudTrailImagePushed")
def on_cloud_trail_image_pushed(
self,
id: str,
*,
image_tag: typing.Optional[str] = None,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines an AWS CloudWatch event rule that can trigger a target when an image is pushed to this repository.
Requires that there exists at least one CloudTrail Trail in your account
that captures the event. This method will not create the Trail.
:param id: The id of the rule.
:param image_tag: Only watch changes to this image tag. Default: - Watch changes to all tags
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = OnCloudTrailImagePushedOptions(
image_tag=image_tag,
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onCloudTrailImagePushed", [id, options])
@jsii.member(jsii_name="onEvent")
def on_event(
self,
id: str,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines a CloudWatch event rule which triggers for repository events.
Use
``rule.addEventPattern(pattern)`` to specify a filter.
:param id: -
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = aws_cdk.aws_events.OnEventOptions(
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onEvent", [id, options])
@jsii.member(jsii_name="onImageScanCompleted")
def on_image_scan_completed(
self,
id: str,
*,
image_tags: typing.Optional[typing.List[str]] = None,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines an AWS CloudWatch event rule that can trigger a target when the image scan is completed.
:param id: The id of the rule.
:param image_tags: Only watch changes to the image tags specified. Leave it undefined to watch the full repository. Default: - Watch the changes to the repository with all image tags
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = OnImageScanCompletedOptions(
image_tags=image_tags,
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onImageScanCompleted", [id, options])
@jsii.member(jsii_name="repositoryUriForTag")
def repository_uri_for_tag(self, tag: typing.Optional[str] = None) -> str:
"""Returns the URI of the repository for a certain tag. Can be used in ``docker push/pull``.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY[:TAG]
:param tag: Image tag to use (tools usually default to "latest" if omitted).
"""
return jsii.invoke(self, "repositoryUriForTag", [tag])
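The URI shape documented for ``repositoryUri`` and ``repositoryUriForTag`` is mechanical, so it can be sketched in plain Python (the account, region, and repository values in the test are placeholders):

```python
from typing import Optional

def repository_uri_for_tag(account: str, region: str, repository: str,
                           tag: Optional[str] = None) -> str:
    """Build ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY[:TAG],
    the URI shape documented for repositoryUriForTag. Omitting the
    tag yields the plain repository URI; docker then defaults to
    "latest"."""
    uri = f"{account}.dkr.ecr.{region}.amazonaws.com/{repository}"
    return f"{uri}:{tag}" if tag else uri
```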
@jsii.data_type(
jsii_type="@aws-cdk/aws-ecr.LifecycleRule",
jsii_struct_bases=[],
name_mapping={
"description": "description",
"max_image_age": "maxImageAge",
"max_image_count": "maxImageCount",
"rule_priority": "rulePriority",
"tag_prefix_list": "tagPrefixList",
"tag_status": "tagStatus",
},
)
class LifecycleRule:
def __init__(
self,
*,
description: typing.Optional[str] = None,
max_image_age: typing.Optional[aws_cdk.core.Duration] = None,
max_image_count: typing.Optional[jsii.Number] = None,
rule_priority: typing.Optional[jsii.Number] = None,
tag_prefix_list: typing.Optional[typing.List[str]] = None,
tag_status: typing.Optional["TagStatus"] = None,
) -> None:
"""An ECR life cycle rule.
:param description: Describes the purpose of the rule. Default: No description
:param max_image_age: The maximum age of images to retain. The value must represent a number of days. Specify exactly one of maxImageCount and maxImageAge.
:param max_image_count: The maximum number of images to retain. Specify exactly one of maxImageCount and maxImageAge.
:param rule_priority: Controls the order in which rules are evaluated (low to high). All rules must have a unique priority, where lower numbers have higher precedence. The first rule that matches is applied to an image. There can only be one rule with a tagStatus of Any, and it must have the highest rulePriority. All rules without a specified priority will have incrementing priorities automatically assigned to them, higher than any rules that DO have priorities. Default: Automatically assigned
:param tag_prefix_list: Select images that have ALL the given prefixes in their tag. Only if tagStatus == TagStatus.Tagged
:param tag_status: Select images based on tags. Only one rule is allowed to select untagged images, and it must have the highest rulePriority. Default: TagStatus.Tagged if tagPrefixList is given, TagStatus.Any otherwise
"""
self._values = {}
if description is not None:
self._values["description"] = description
if max_image_age is not None:
self._values["max_image_age"] = max_image_age
if max_image_count is not None:
self._values["max_image_count"] = max_image_count
if rule_priority is not None:
self._values["rule_priority"] = rule_priority
if tag_prefix_list is not None:
self._values["tag_prefix_list"] = tag_prefix_list
if tag_status is not None:
self._values["tag_status"] = tag_status
@builtins.property
def description(self) -> typing.Optional[str]:
"""Describes the purpose of the rule.
default
:default: No description
"""
return self._values.get("description")
@builtins.property
def max_image_age(self) -> typing.Optional[aws_cdk.core.Duration]:
"""The maximum age of images to retain. The value must represent a number of days.
Specify exactly one of maxImageCount and maxImageAge.
"""
return self._values.get("max_image_age")
@builtins.property
def max_image_count(self) -> typing.Optional[jsii.Number]:
"""The maximum number of images to retain.
Specify exactly one of maxImageCount and maxImageAge.
"""
return self._values.get("max_image_count")
@builtins.property
def rule_priority(self) -> typing.Optional[jsii.Number]:
"""Controls the order in which rules are evaluated (low to high).
All rules must have a unique priority, where lower numbers have
higher precedence. The first rule that matches is applied to an image.
There can only be one rule with a tagStatus of Any, and it must have
the highest rulePriority.
All rules without a specified priority will have incrementing priorities
automatically assigned to them, higher than any rules that DO have priorities.
default
:default: Automatically assigned
"""
return self._values.get("rule_priority")
@builtins.property
def tag_prefix_list(self) -> typing.Optional[typing.List[str]]:
"""Select images that have ALL the given prefixes in their tag.
Only if tagStatus == TagStatus.Tagged
"""
return self._values.get("tag_prefix_list")
@builtins.property
def tag_status(self) -> typing.Optional["TagStatus"]:
"""Select images based on tags.
Only one rule is allowed to select untagged images, and it must
have the highest rulePriority.
default
:default: TagStatus.Tagged if tagPrefixList is given, TagStatus.Any otherwise
"""
return self._values.get("tag_status")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "LifecycleRule(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
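The ``rulePriority`` contract above (lower numbers win, the first matching rule applies, and unprioritized rules are auto-assigned incrementing priorities above every explicit one) can be sketched as a small ordering function. Rule names and the tuple encoding are illustrative, not part of the aws-ecr API:

```python
from typing import List, Optional, Tuple

def order_rules(rules: List[Tuple[Optional[int], str]]) -> List[str]:
    """Return rule names in evaluation order. Rules with an explicit
    priority come first (ascending); rules without one are assigned
    incrementing priorities higher than any explicit priority."""
    explicit = [(p, name) for p, name in rules if p is not None]
    implicit = [name for p, name in rules if p is None]
    base = max((p for p, _ in explicit), default=0)
    auto = [(base + i + 1, name) for i, name in enumerate(implicit)]
    return [name for _, name in sorted(explicit + auto)]
```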
@jsii.data_type(
jsii_type="@aws-cdk/aws-ecr.OnCloudTrailImagePushedOptions",
jsii_struct_bases=[aws_cdk.aws_events.OnEventOptions],
name_mapping={
"description": "description",
"event_pattern": "eventPattern",
"rule_name": "ruleName",
"target": "target",
"image_tag": "imageTag",
},
)
class OnCloudTrailImagePushedOptions(aws_cdk.aws_events.OnEventOptions):
def __init__(
self,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
image_tag: typing.Optional[str] = None,
) -> None:
"""Options for the onCloudTrailImagePushed method.
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
:param image_tag: Only watch changes to this image tag. Default: - Watch changes to all tags
"""
if isinstance(event_pattern, dict):
event_pattern = aws_cdk.aws_events.EventPattern(**event_pattern)
self._values = {}
if description is not None:
self._values["description"] = description
if event_pattern is not None:
self._values["event_pattern"] = event_pattern
if rule_name is not None:
self._values["rule_name"] = rule_name
if target is not None:
self._values["target"] = target
if image_tag is not None:
self._values["image_tag"] = image_tag
@builtins.property
def description(self) -> typing.Optional[str]:
"""A description of the rule's purpose.
default
:default: - No description
"""
return self._values.get("description")
@builtins.property
def event_pattern(self) -> typing.Optional[aws_cdk.aws_events.EventPattern]:
"""Additional restrictions for the event to route to the specified target.
The method that generates the rule probably imposes some type of event
filtering. The filtering implied by what you pass here is added
on top of that filtering.
default
:default: - No additional filtering based on an event pattern.
see
:see: https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html
"""
return self._values.get("event_pattern")
@builtins.property
def rule_name(self) -> typing.Optional[str]:
"""A name for the rule.
default
:default: AWS CloudFormation generates a unique physical ID.
"""
return self._values.get("rule_name")
@builtins.property
def target(self) -> typing.Optional[aws_cdk.aws_events.IRuleTarget]:
"""The target to register for the event.
default
:default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
return self._values.get("target")
@builtins.property
def image_tag(self) -> typing.Optional[str]:
"""Only watch changes to this image tag.
default
:default: - Watch changes to all tags
"""
return self._values.get("image_tag")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "OnCloudTrailImagePushedOptions(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
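The ``name_mapping`` dictionaries in these ``@jsii.data_type`` declarations pair each Python snake_case key with its JavaScript camelCase counterpart. The conversion itself is mechanical, as this small sketch shows:

```python
def to_camel(snake: str) -> str:
    """Convert a snake_case property name to the camelCase key used
    in a jsii name_mapping (e.g. image_tag -> imageTag)."""
    head, *rest = snake.split("_")
    return head + "".join(part.capitalize() for part in rest)

# Rebuild the mapping for OnCloudTrailImagePushedOptions.
name_mapping = {k: to_camel(k) for k in
                ("description", "event_pattern", "rule_name",
                 "target", "image_tag")}
```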
@jsii.data_type(
jsii_type="@aws-cdk/aws-ecr.OnImageScanCompletedOptions",
jsii_struct_bases=[aws_cdk.aws_events.OnEventOptions],
name_mapping={
"description": "description",
"event_pattern": "eventPattern",
"rule_name": "ruleName",
"target": "target",
"image_tags": "imageTags",
},
)
class OnImageScanCompletedOptions(aws_cdk.aws_events.OnEventOptions):
def __init__(
self,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
image_tags: typing.Optional[typing.List[str]] = None,
) -> None:
"""Options for the OnImageScanCompleted method.
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
:param image_tags: Only watch changes to the image tags specified. Leave it undefined to watch the full repository. Default: - Watch the changes to the repository with all image tags
"""
if isinstance(event_pattern, dict):
event_pattern = aws_cdk.aws_events.EventPattern(**event_pattern)
self._values = {}
if description is not None:
self._values["description"] = description
if event_pattern is not None:
self._values["event_pattern"] = event_pattern
if rule_name is not None:
self._values["rule_name"] = rule_name
if target is not None:
self._values["target"] = target
if image_tags is not None:
self._values["image_tags"] = image_tags
@builtins.property
def description(self) -> typing.Optional[str]:
"""A description of the rule's purpose.
default
:default: - No description
"""
return self._values.get("description")
@builtins.property
def event_pattern(self) -> typing.Optional[aws_cdk.aws_events.EventPattern]:
"""Additional restrictions for the event to route to the specified target.
The method that generates the rule probably imposes some type of event
filtering. The filtering implied by what you pass here is added
on top of that filtering.
default
:default: - No additional filtering based on an event pattern.
see
:see: https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html
"""
return self._values.get("event_pattern")
@builtins.property
def rule_name(self) -> typing.Optional[str]:
"""A name for the rule.
default
:default: AWS CloudFormation generates a unique physical ID.
"""
return self._values.get("rule_name")
@builtins.property
def target(self) -> typing.Optional[aws_cdk.aws_events.IRuleTarget]:
"""The target to register for the event.
default
:default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
return self._values.get("target")
@builtins.property
def image_tags(self) -> typing.Optional[typing.List[str]]:
"""Only watch changes to the image tags specified.
Leave it undefined to watch the full repository.
default
:default: - Watch the changes to the repository with all image tags
"""
return self._values.get("image_tags")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "OnImageScanCompletedOptions(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
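The ``event_pattern`` docstrings stress that a user-supplied pattern is layered on top of the filtering the rule-generating method already imposes: the pattern can only narrow, never widen, what matches. A toy model of that layering (flat dicts standing in for real EventBridge patterns, which also support nesting):

```python
def matches(event: dict, pattern: dict) -> bool:
    """True if every pattern key lists the event's value. This is a
    simplified EventBridge-style match for illustration only."""
    return all(event.get(k) in allowed for k, allowed in pattern.items())

def combined_match(event: dict, method_pattern: dict,
                   user_pattern: dict) -> bool:
    # An event must satisfy both the method's built-in filtering and
    # the additional user-supplied restrictions.
    return matches(event, method_pattern) and matches(event, user_pattern)
```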
@jsii.data_type(
jsii_type="@aws-cdk/aws-ecr.RepositoryAttributes",
jsii_struct_bases=[],
name_mapping={
"repository_arn": "repositoryArn",
"repository_name": "repositoryName",
},
)
class RepositoryAttributes:
def __init__(self, *, repository_arn: str, repository_name: str) -> None:
"""
:param repository_arn:
:param repository_name:
"""
self._values = {
"repository_arn": repository_arn,
"repository_name": repository_name,
}
@builtins.property
def repository_arn(self) -> str:
return self._values.get("repository_arn")
@builtins.property
def repository_name(self) -> str:
return self._values.get("repository_name")
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return "RepositoryAttributes(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
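``RepositoryAttributes``, like every struct in this module, gets value semantics from a single ``_values`` dict: ``__eq__`` compares the dicts, ``__repr__`` renders them, and omitted (``None``) keys are simply absent. A stripped-down sketch of the pattern:

```python
class Struct:
    """Minimal version of the jsii struct pattern used above."""

    def __init__(self, **kwargs):
        # None-valued keys are dropped, so structs built with the same
        # effective values compare equal regardless of omitted fields.
        self._values = {k: v for k, v in kwargs.items() if v is not None}

    def __eq__(self, rhs) -> bool:
        return isinstance(rhs, self.__class__) and rhs._values == self._values

    def __ne__(self, rhs) -> bool:
        return not (rhs == self)

    def __repr__(self) -> str:
        return "Struct(%s)" % ", ".join(
            f"{k}={v!r}" for k, v in self._values.items())
```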
@jsii.implements(IRepository)
class RepositoryBase(
aws_cdk.core.Resource,
metaclass=jsii.JSIIAbstractClass,
jsii_type="@aws-cdk/aws-ecr.RepositoryBase",
):
"""Base class for ECR repository.
Reused between imported repositories and owned repositories.
"""
@builtins.staticmethod
def __jsii_proxy_class__():
return _RepositoryBaseProxy
def __init__(
self,
scope: aws_cdk.core.Construct,
id: str,
*,
physical_name: typing.Optional[str] = None,
) -> None:
"""
:param scope: -
:param id: -
:param physical_name: The value passed in by users to the physical name prop of the resource. - ``undefined`` implies that a physical name will be allocated by CloudFormation during deployment. - a concrete value implies a specific physical name - ``PhysicalName.GENERATE_IF_NEEDED`` is a marker that indicates that a physical name will only be generated by the CDK if it is needed for cross-environment references. Otherwise, it will be allocated by CloudFormation. Default: - The physical name will be allocated by CloudFormation at deployment time
"""
props = aws_cdk.core.ResourceProps(physical_name=physical_name)
jsii.create(RepositoryBase, self, [scope, id, props])
@jsii.member(jsii_name="addToResourcePolicy")
@abc.abstractmethod
def add_to_resource_policy(
self, statement: aws_cdk.aws_iam.PolicyStatement
) -> aws_cdk.aws_iam.AddToResourcePolicyResult:
"""Add a policy statement to the repository's resource policy.
:param statement: -
"""
...
@jsii.member(jsii_name="grant")
def grant(
self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str
) -> aws_cdk.aws_iam.Grant:
"""Grant the given principal identity permissions to perform the actions on this repository.
:param grantee: -
:param actions: -
"""
return jsii.invoke(self, "grant", [grantee, *actions])
@jsii.member(jsii_name="grantPull")
def grant_pull(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant the given identity permissions to pull images in this repository.
:param grantee: -
"""
return jsii.invoke(self, "grantPull", [grantee])
@jsii.member(jsii_name="grantPullPush")
def grant_pull_push(
self, grantee: aws_cdk.aws_iam.IGrantable
) -> aws_cdk.aws_iam.Grant:
"""Grant the given identity permissions to pull and push images to this repository.
:param grantee: -
"""
return jsii.invoke(self, "grantPullPush", [grantee])
@jsii.member(jsii_name="onCloudTrailEvent")
def on_cloud_trail_event(
self,
id: str,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Define a CloudWatch event that triggers when something happens to this repository.
Requires that there exists at least one CloudTrail Trail in your account
that captures the event. This method will not create the Trail.
:param id: The id of the rule.
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = aws_cdk.aws_events.OnEventOptions(
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onCloudTrailEvent", [id, options])
@jsii.member(jsii_name="onCloudTrailImagePushed")
def on_cloud_trail_image_pushed(
self,
id: str,
*,
image_tag: typing.Optional[str] = None,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines an AWS CloudWatch event rule that can trigger a target when an image is pushed to this repository.
Requires that there exists at least one CloudTrail Trail in your account
that captures the event. This method will not create the Trail.
:param id: The id of the rule.
:param image_tag: Only watch changes to this image tag. Default: - Watch changes to all tags
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = OnCloudTrailImagePushedOptions(
image_tag=image_tag,
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onCloudTrailImagePushed", [id, options])
@jsii.member(jsii_name="onEvent")
def on_event(
self,
id: str,
*,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines a CloudWatch event rule which triggers for repository events.
Use
``rule.addEventPattern(pattern)`` to specify a filter.
:param id: -
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = aws_cdk.aws_events.OnEventOptions(
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onEvent", [id, options])
@jsii.member(jsii_name="onImageScanCompleted")
def on_image_scan_completed(
self,
id: str,
*,
image_tags: typing.Optional[typing.List[str]] = None,
description: typing.Optional[str] = None,
event_pattern: typing.Optional[aws_cdk.aws_events.EventPattern] = None,
rule_name: typing.Optional[str] = None,
target: typing.Optional[aws_cdk.aws_events.IRuleTarget] = None,
) -> aws_cdk.aws_events.Rule:
"""Defines an AWS CloudWatch event rule that can trigger a target when an image scan is completed.
:param id: The id of the rule.
:param image_tags: Only watch changes to the image tags spedified. Leave it undefined to watch the full repository. Default: - Watch the changes to the repository with all image tags
:param description: A description of the rule's purpose. Default: - No description
:param event_pattern: Additional restrictions for the event to route to the specified target. The method that generates the rule probably imposes some type of event filtering. The filtering implied by what you pass here is added on top of that filtering. Default: - No additional filtering based on an event pattern.
:param rule_name: A name for the rule. Default: AWS CloudFormation generates a unique physical ID.
:param target: The target to register for the event. Default: - No target is added to the rule. Use ``addTarget()`` to add a target.
"""
options = OnImageScanCompletedOptions(
image_tags=image_tags,
description=description,
event_pattern=event_pattern,
rule_name=rule_name,
target=target,
)
return jsii.invoke(self, "onImageScanCompleted", [id, options])
@jsii.member(jsii_name="repositoryUriForTag")
def repository_uri_for_tag(self, tag: typing.Optional[str] = None) -> str:
"""Returns the URL of the repository. Can be used in ``docker push/pull``.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY[:TAG]
:param tag: Optional image tag.
"""
return jsii.invoke(self, "repositoryUriForTag", [tag])
@builtins.property
@jsii.member(jsii_name="repositoryArn")
@abc.abstractmethod
def repository_arn(self) -> str:
"""The ARN of the repository."""
...
@builtins.property
@jsii.member(jsii_name="repositoryName")
@abc.abstractmethod
def repository_name(self) -> str:
"""The name of the repository."""
...
@builtins.property
@jsii.member(jsii_name="repositoryUri")
def repository_uri(self) -> str:
"""The URI of this repository (represents the latest image):.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPOSITORY
"""
return jsii.get(self, "repositoryUri")


class _RepositoryBaseProxy(RepositoryBase, jsii.proxy_for(aws_cdk.core.Resource)):
    @jsii.member(jsii_name="addToResourcePolicy")
    def add_to_resource_policy(
        self, statement: aws_cdk.aws_iam.PolicyStatement
    ) -> aws_cdk.aws_iam.AddToResourcePolicyResult:
        """Add a policy statement to the repository's resource policy.

        :param statement: -
        """
        return jsii.invoke(self, "addToResourcePolicy", [statement])

    @builtins.property
    @jsii.member(jsii_name="repositoryArn")
    def repository_arn(self) -> str:
        """The ARN of the repository."""
        return jsii.get(self, "repositoryArn")

    @builtins.property
    @jsii.member(jsii_name="repositoryName")
    def repository_name(self) -> str:
        """The name of the repository."""
        return jsii.get(self, "repositoryName")


@jsii.data_type(
    jsii_type="@aws-cdk/aws-ecr.RepositoryProps",
    jsii_struct_bases=[],
    name_mapping={
        "image_scan_on_push": "imageScanOnPush",
        "lifecycle_registry_id": "lifecycleRegistryId",
        "lifecycle_rules": "lifecycleRules",
        "removal_policy": "removalPolicy",
        "repository_name": "repositoryName",
    },
)
class RepositoryProps:
    def __init__(
        self,
        *,
        image_scan_on_push: typing.Optional[bool] = None,
        lifecycle_registry_id: typing.Optional[str] = None,
        lifecycle_rules: typing.Optional[typing.List["LifecycleRule"]] = None,
        removal_policy: typing.Optional[aws_cdk.core.RemovalPolicy] = None,
        repository_name: typing.Optional[str] = None,
    ) -> None:
        """
        :param image_scan_on_push: Enable the scan on push when creating the repository. Default: false
        :param lifecycle_registry_id: The AWS account ID associated with the registry that contains the repository. Default: The default registry is assumed.
        :param lifecycle_rules: Life cycle rules to apply to this registry. Default: No life cycle rules
        :param removal_policy: Determine what happens to the repository when the resource/stack is deleted. Default: RemovalPolicy.Retain
        :param repository_name: Name for this repository. Default: Automatically generated name.
        """
        self._values = {}
        if image_scan_on_push is not None:
            self._values["image_scan_on_push"] = image_scan_on_push
        if lifecycle_registry_id is not None:
            self._values["lifecycle_registry_id"] = lifecycle_registry_id
        if lifecycle_rules is not None:
            self._values["lifecycle_rules"] = lifecycle_rules
        if removal_policy is not None:
            self._values["removal_policy"] = removal_policy
        if repository_name is not None:
            self._values["repository_name"] = repository_name

    @builtins.property
    def image_scan_on_push(self) -> typing.Optional[bool]:
        """Enable the scan on push when creating the repository.

        :default: false
        """
        return self._values.get("image_scan_on_push")

    @builtins.property
    def lifecycle_registry_id(self) -> typing.Optional[str]:
        """The AWS account ID associated with the registry that contains the repository.

        :default: The default registry is assumed.

        :see: https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_PutLifecyclePolicy.html
        """
        return self._values.get("lifecycle_registry_id")

    @builtins.property
    def lifecycle_rules(self) -> typing.Optional[typing.List["LifecycleRule"]]:
        """Life cycle rules to apply to this registry.

        :default: No life cycle rules
        """
        return self._values.get("lifecycle_rules")

    @builtins.property
    def removal_policy(self) -> typing.Optional[aws_cdk.core.RemovalPolicy]:
        """Determine what happens to the repository when the resource/stack is deleted.

        :default: RemovalPolicy.Retain
        """
        return self._values.get("removal_policy")

    @builtins.property
    def repository_name(self) -> typing.Optional[str]:
        """Name for this repository.

        :default: Automatically generated name.
        """
        return self._values.get("repository_name")

    def __eq__(self, rhs) -> bool:
        return isinstance(rhs, self.__class__) and rhs._values == self._values

    def __ne__(self, rhs) -> bool:
        return not (rhs == self)

    def __repr__(self) -> str:
        return "RepositoryProps(%s)" % ", ".join(
            k + "=" + repr(v) for k, v in self._values.items()
        )
@jsii.enum(jsii_type="@aws-cdk/aws-ecr.TagStatus")
class TagStatus(enum.Enum):
"""Select images based on tags."""
ANY = "ANY"
"""Rule applies to all images."""
TAGGED = "TAGGED"
"""Rule applies to tagged images."""
UNTAGGED = "UNTAGGED"
"""Rule applies to untagged images."""


class Repository(
    RepositoryBase, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-ecr.Repository"
):
    """Define an ECR repository."""

    def __init__(
        self,
        scope: aws_cdk.core.Construct,
        id: str,
        *,
        image_scan_on_push: typing.Optional[bool] = None,
        lifecycle_registry_id: typing.Optional[str] = None,
        lifecycle_rules: typing.Optional[typing.List["LifecycleRule"]] = None,
        removal_policy: typing.Optional[aws_cdk.core.RemovalPolicy] = None,
        repository_name: typing.Optional[str] = None,
    ) -> None:
        """
        :param scope: -
        :param id: -
        :param image_scan_on_push: Enable the scan on push when creating the repository. Default: false
        :param lifecycle_registry_id: The AWS account ID associated with the registry that contains the repository. Default: The default registry is assumed.
        :param lifecycle_rules: Life cycle rules to apply to this registry. Default: No life cycle rules
        :param removal_policy: Determine what happens to the repository when the resource/stack is deleted. Default: RemovalPolicy.Retain
        :param repository_name: Name for this repository. Default: Automatically generated name.
        """
        props = RepositoryProps(
            image_scan_on_push=image_scan_on_push,
            lifecycle_registry_id=lifecycle_registry_id,
            lifecycle_rules=lifecycle_rules,
            removal_policy=removal_policy,
            repository_name=repository_name,
        )
        jsii.create(Repository, self, [scope, id, props])

    @jsii.member(jsii_name="arnForLocalRepository")
    @builtins.classmethod
    def arn_for_local_repository(
        cls, repository_name: str, scope: aws_cdk.core.IConstruct
    ) -> str:
        """Returns an ECR ARN for a repository that resides in the same account/region as the current stack.

        :param repository_name: -
        :param scope: -
        """
        return jsii.sinvoke(cls, "arnForLocalRepository", [repository_name, scope])

    @jsii.member(jsii_name="fromRepositoryArn")
    @builtins.classmethod
    def from_repository_arn(
        cls, scope: aws_cdk.core.Construct, id: str, repository_arn: str
    ) -> "IRepository":
        """
        :param scope: -
        :param id: -
        :param repository_arn: -
        """
        return jsii.sinvoke(cls, "fromRepositoryArn", [scope, id, repository_arn])

    @jsii.member(jsii_name="fromRepositoryAttributes")
    @builtins.classmethod
    def from_repository_attributes(
        cls,
        scope: aws_cdk.core.Construct,
        id: str,
        *,
        repository_arn: str,
        repository_name: str,
    ) -> "IRepository":
        """Import a repository.

        :param scope: -
        :param id: -
        :param repository_arn: The ARN of the repository to import.
        :param repository_name: The name of the repository to import.
        """
        attrs = RepositoryAttributes(
            repository_arn=repository_arn, repository_name=repository_name
        )
        return jsii.sinvoke(cls, "fromRepositoryAttributes", [scope, id, attrs])

    @jsii.member(jsii_name="fromRepositoryName")
    @builtins.classmethod
    def from_repository_name(
        cls, scope: aws_cdk.core.Construct, id: str, repository_name: str
    ) -> "IRepository":
        """
        :param scope: -
        :param id: -
        :param repository_name: -
        """
        return jsii.sinvoke(cls, "fromRepositoryName", [scope, id, repository_name])
@jsii.member(jsii_name="addLifecycleRule")
def add_lifecycle_rule(
self,
*,
description: typing.Optional[str] = None,
max_image_age: typing.Optional[aws_cdk.core.Duration] = None,
max_image_count: typing.Optional[jsii.Number] = None,
rule_priority: typing.Optional[jsii.Number] = None,
tag_prefix_list: typing.Optional[typing.List[str]] = None,
tag_status: typing.Optional["TagStatus"] = None,
) -> None:
"""Add a life cycle rule to the repository.
Life cycle rules automatically expire images from the repository that match
certain conditions.
:param description: Describes the purpose of the rule. Default: No description
:param max_image_age: The maximum age of images to retain. The value must represent a number of days. Specify exactly one of maxImageCount and maxImageAge.
:param max_image_count: The maximum number of images to retain. Specify exactly one of maxImageCount and maxImageAge.
:param rule_priority: Controls the order in which rules are evaluated (low to high). All rules must have a unique priority, where lower numbers have higher precedence. The first rule that matches is applied to an image. There can only be one rule with a tagStatus of Any, and it must have the highest rulePriority. All rules without a specified priority will have incrementing priorities automatically assigned to them, higher than any rules that DO have priorities. Default: Automatically assigned
:param tag_prefix_list: Select images that have ALL the given prefixes in their tag. Only if tagStatus == TagStatus.Tagged
:param tag_status: Select images based on tags. Only one rule is allowed to select untagged images, and it must have the highest rulePriority. Default: TagStatus.Tagged if tagPrefixList is given, TagStatus.Any otherwise
"""
rule = LifecycleRule(
description=description,
max_image_age=max_image_age,
max_image_count=max_image_count,
rule_priority=rule_priority,
tag_prefix_list=tag_prefix_list,
tag_status=tag_status,
)
return jsii.invoke(self, "addLifecycleRule", [rule])
@jsii.member(jsii_name="addToResourcePolicy")
def add_to_resource_policy(
self, statement: aws_cdk.aws_iam.PolicyStatement
) -> aws_cdk.aws_iam.AddToResourcePolicyResult:
"""Add a policy statement to the repository's resource policy.
:param statement: -
"""
return jsii.invoke(self, "addToResourcePolicy", [statement])
@builtins.property
@jsii.member(jsii_name="repositoryArn")
def repository_arn(self) -> str:
"""The ARN of the repository."""
return jsii.get(self, "repositoryArn")
@builtins.property
@jsii.member(jsii_name="repositoryName")
def repository_name(self) -> str:
"""The name of the repository."""
return jsii.get(self, "repositoryName")


__all__ = [
    "CfnRepository",
    "CfnRepositoryProps",
    "IRepository",
    "LifecycleRule",
    "OnCloudTrailImagePushedOptions",
    "OnImageScanCompletedOptions",
    "Repository",
    "RepositoryAttributes",
    "RepositoryBase",
    "RepositoryProps",
    "TagStatus",
]

publication.publish()
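The `RepositoryProps` class above illustrates the struct pattern that jsii generates for every `*Props` data type: keyword-only optional fields stored in a private `_values` dict, exposed through read-only properties, with value-based equality and a generated `repr`. A minimal stdlib-only sketch of the same pattern (the `DemoProps` name and its fields are illustrative, not part of the CDK API):

```python
import typing


class DemoProps:
    """Keyword-only struct in the style of the jsii-generated props classes."""

    def __init__(self, *, name: typing.Optional[str] = None,
                 count: typing.Optional[int] = None) -> None:
        # Only fields that were actually passed are stored.
        self._values = {}
        if name is not None:
            self._values["name"] = name
        if count is not None:
            self._values["count"] = count

    @property
    def name(self) -> typing.Optional[str]:
        return self._values.get("name")

    @property
    def count(self) -> typing.Optional[int]:
        return self._values.get("count")

    def __eq__(self, rhs) -> bool:
        return isinstance(rhs, self.__class__) and rhs._values == self._values

    def __ne__(self, rhs) -> bool:
        return not (rhs == self)

    def __repr__(self) -> str:
        return "DemoProps(%s)" % ", ".join(
            k + "=" + repr(v) for k, v in self._values.items()
        )
```

Because unset fields are simply absent from `_values`, equality compares only the fields that were actually provided, and an unset field reads back as `None`.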


# === producers/short-texts-clustering/src/clustering/__init__.py (gasparian/ml-serving-template, MIT) ===
from .clustering import *
from .preprocessing import *
from .feature_extractors import *
from .config import ClusteringConfig


# === toolbox/utils/__init__.py (avatao-content/challenge-engine, Apache-2.0) ===
from .config import *
from .deploy import *
from .utils import *


# === __init__.py (VolkerGast/GraphPynt, MIT) ===
from GraphPynt.GrnoCorp import GrnoCorp
from GraphPynt.GrnoFile import GrnoFile
from GraphPynt.Grno import Grno


# === tests/unit/mmap_file_test.py (vsinitsyn/ixy.py, MIT) ===
import mmap


def test_write_to_mmap_file(tmpdir):
    # GIVEN
    tmp_file = tmpdir.join('mmap_file.txt')
    tmp_file.write('This is the content')
    fd = tmp_file.open(mode='r+b')

    # WHEN
    mm = mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_WRITE)
    mm[:5] = b'That '

    # THEN
    assert tmp_file.read() == 'That is the content'


def test_memoryview_on_mmapped_file(tmpdir):
    # GIVEN
    tmp_file = tmpdir.join('mmap_file.txt')
    tmp_file.write('This is the content')
    fd = tmp_file.open(mode='r+b')

    # WHEN
    mm = memoryview(mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_WRITE))
    mm[:5] = b'That '

    # THEN
    assert tmp_file.read() == 'That is the content'
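Both tests above use `mmap.ACCESS_WRITE`, where a slice assignment writes through to the underlying file. As a contrast, here is a hedged sketch of `mmap.ACCESS_COPY`, which allows the same slice assignment but keeps the change private to the mapping (the helper name and temp-file setup are illustrative):

```python
import mmap
import os
import tempfile


def copy_on_write_demo():
    """Map a temp file with ACCESS_COPY: writes hit the mapping, not the file."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "r+b") as f:
            f.write(b"This is the content")
            f.flush()                       # make the size visible before mapping
            mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
            mm[:5] = b"That "               # visible through the mapping...
            in_mapping = bytes(mm[:19])
            mm.close()
        with open(path, "rb") as f:
            on_disk = f.read()              # ...but the file itself is unchanged
        return in_mapping, on_disk
    finally:
        os.remove(path)
```

This is the copy-on-write counterpart to the write-through behaviour the tests assert on.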


# === pieceClasses.py (victorsongyw/land-battle-chess-game, MIT) ===
######## This file contains the classes of all key components of the chess board
######## including all the building blocks of the board and all types of game pieces
# This dictionary maps the coordinate of a post to its x and y location on canvas
getCoord = {}
for x in range(5):
for y in range(12):
if y < 6:
getCoord[(y, x)] = (50 + 125 * x, 50 + 60 * y)
else:
getCoord[(y, x)] = (50 + 125 * x, 450 + 60 * (y - 6))
# This is a reversed dictionary for multiplayer viewing (for PlayerA)
getReversedCoord = {}
for x in range(5):
for y in range(12):
if y < 6:
getReversedCoord[(11-y, 4-x)] = (50 + 125 * x, 50 + 60 * y)
else:
getReversedCoord[(11-y, 4-x)] = (50 + 125 * x, 450 + 60 * (y - 6))
# get the coordinate of a post given its location on canvas
def getLocation(x, y):
for (i, j) in getCoord:
(a, b) = getCoord[(i, j)]
if a - 25 < x < a + 25 and b - 12 < y < b + 12:
return (i, j)


class Post():
    def __init__(self, x, y, piece=None):  # x, y are center coordinates
        self.x, self.y = getCoord[(x, y)]
        self.reversedX, self.reversedY = getReversedCoord[(x, y)]
        self.w = 25
        self.h = 12
        self.piece = piece
        self.selected = False
        self.highlighted = False

    def select(self):
        self.selected = not self.selected

    def highlight(self):
        self.highlighted = not self.highlighted

    # drawing the Post with the piece
    def draw(self, canvas):
        self.drawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color)
            canvas.create_text(self.x, self.y, text=self.piece.name)

    # drawing the Post and the piece in Dark Mode
    def drawDark(self, canvas):
        self.drawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color)

    # drawing the Post without the piece
    def drawSkeleton(self, canvas):
        if self.highlighted:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3", outline="lawn green", width=4)
        elif self.selected:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3", outline="red3", width=4)
        else:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3")

    # drawing the Post with the piece in reversed position
    def reversedDraw(self, canvas):
        self.reversedDrawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color)
            canvas.create_text(self.reversedX, self.reversedY, text=self.piece.name)

    # drawing the Post and the piece in reversed position in Dark Mode
    def reversedDrawDark(self, canvas):
        self.reversedDrawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color)

    # drawing the Post without the piece in reversed position
    def reversedDrawSkeleton(self, canvas):
        if self.highlighted:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3", outline="lawn green", width=4)
        elif self.selected:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3", outline="red3", width=4)
        else:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3")


class Camp(Post):
    # Camps inherit the __init__ function from Post; the drawing functions behave
    # similarly but render the post as an oval instead of a rectangle
    def draw(self, canvas):
        self.drawSkeleton(canvas)
        if self.piece != None:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill=self.piece.color)
            canvas.create_text(self.x, self.y, text=self.piece.name)

    def drawDark(self, canvas):
        self.drawSkeleton(canvas)
        if self.piece != None:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill=self.piece.color)

    def drawSkeleton(self, canvas):
        if self.highlighted:
            canvas.create_oval(self.x - self.w, self.y - self.w + 7,
                               self.x + self.w, self.y + self.w - 7,
                               outline="lawn green", fill="PaleGreen3", width=3)
        elif self.selected:
            canvas.create_oval(self.x - self.w, self.y - self.w + 7,
                               self.x + self.w, self.y + self.w - 7,
                               outline="red3", fill="PaleGreen3", width=3)
        else:
            canvas.create_oval(self.x - self.w, self.y - self.w + 7,
                               self.x + self.w, self.y + self.w - 7,
                               fill="PaleGreen3")

    def reversedDraw(self, canvas):
        self.reversedDrawSkeleton(canvas)
        if self.piece != None:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill=self.piece.color)
            canvas.create_text(self.reversedX, self.reversedY, text=self.piece.name)

    def reversedDrawDark(self, canvas):
        self.reversedDrawSkeleton(canvas)
        if self.piece != None:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill=self.piece.color)

    def reversedDrawSkeleton(self, canvas):
        if self.highlighted:
            canvas.create_oval(self.reversedX - self.w, self.reversedY - self.w + 7,
                               self.reversedX + self.w, self.reversedY + self.w - 7,
                               outline="lawn green", fill="PaleGreen3", width=3)
        elif self.selected:
            canvas.create_oval(self.reversedX - self.w, self.reversedY - self.w + 7,
                               self.reversedX + self.w, self.reversedY + self.w - 7,
                               outline="red3", fill="PaleGreen3", width=3)
        else:
            canvas.create_oval(self.reversedX - self.w, self.reversedY - self.w + 7,
                               self.reversedX + self.w, self.reversedY + self.w - 7,
                               fill="PaleGreen3")


class Headquarters(Post):
    # Headquarters inherit the __init__ function from Post; the drawing functions
    # behave similarly but add two black ovals behind the post rectangle
    def draw(self, canvas):
        self.drawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                        self.x + self.w, self.y + self.h,
                                        fill=self.piece.color)
            canvas.create_text(self.x, self.y, text=self.piece.name)

    def drawSkeleton(self, canvas):
        canvas.create_oval(self.x - self.w / 2, self.y - 2 * self.h,
                           self.x + self.w / 2, self.y, fill="black")
        canvas.create_oval(self.x - self.w / 2, self.y,
                           self.x + self.w / 2, self.y + 2 * self.h, fill="black")
        if self.highlighted:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3", outline="lawn green", width=3)
        elif self.selected:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3", outline="red3", width=3)
        else:
            canvas.create_rectangle(self.x - self.w, self.y - self.h,
                                    self.x + self.w, self.y + self.h,
                                    fill="PaleGreen3")

    def reversedDraw(self, canvas):
        self.reversedDrawSkeleton(canvas)
        if self.piece != None:
            if self.highlighted or self.selected:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color, width=0)
            else:
                canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                        self.reversedX + self.w, self.reversedY + self.h,
                                        fill=self.piece.color)
            canvas.create_text(self.reversedX, self.reversedY, text=self.piece.name)

    def reversedDrawSkeleton(self, canvas):
        canvas.create_oval(self.reversedX - self.w / 2, self.reversedY - 2 * self.h,
                           self.reversedX + self.w / 2, self.reversedY, fill="black")
        canvas.create_oval(self.reversedX - self.w / 2, self.reversedY,
                           self.reversedX + self.w / 2, self.reversedY + 2 * self.h,
                           fill="black")
        if self.highlighted:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3", outline="lawn green", width=3)
        elif self.selected:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3", outline="red3", width=3)
        else:
            canvas.create_rectangle(self.reversedX - self.w, self.reversedY - self.h,
                                    self.reversedX + self.w, self.reversedY + self.h,
                                    fill="PaleGreen3")


#### classes of different pieces
class Mar():
    def __init__(self, side):
        self.order = 9
        self.value = 50
        self.name = "Mar10"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Gen():
    def __init__(self, side):
        self.order = 8
        self.value = 40
        self.name = "Gen9"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class MGen():
    def __init__(self, side):
        self.order = 7
        self.value = 25
        self.name = "MGen8"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class BGen():
    def __init__(self, side):
        self.order = 6
        self.value = 15
        self.name = "BGen7"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Col():
    def __init__(self, side):
        self.order = 5
        self.value = 8
        self.name = "Col6"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Maj():
    def __init__(self, side):
        self.order = 4
        self.value = 5
        self.name = "Maj5"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Capt():
    def __init__(self, side):
        self.order = 3
        self.value = 2
        self.name = "Capt4"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Lt():
    def __init__(self, side):
        self.order = 2
        self.value = 1
        self.name = "Lt3"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Spr():
    def __init__(self, side):
        self.order = 1
        self.value = 10
        self.name = "Spr2"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Bomb():
    # Bombs do not have fixed values:
    # their values are calculated as 1/3 of the opponent's most valuable piece
    # (excluding the Flag)
    def __init__(self, side):
        self.order = None
        self.name = "Bomb"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class LMN():
    def __init__(self, side):
        self.order = 10
        self.value = 70
        self.name = "LMN"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


class Flag():
    def __init__(self, side):
        self.order = 0
        self.value = 2000
        self.name = "Flag1"
        self.side = side
        if self.side == "A":
            self.color = "orange"
        elif self.side == "B":
            self.color = "deep sky blue"


# === iridauploader/config/__init__.py (COMBAT-SARS-COV-2/irida-uploader, Apache-2.0) ===
from iridauploader.config.config import (
    setup,
    read_config_option,
    set_config_options,
    write_config_options_to_file,
    set_config_file,
)


# === handlers/__init__.py (IlyaLyalin/english_audience, Apache-2.0) ===
from handlers.group.handlers import dp
from handlers.user.handlers import dp  # note: rebinds dp, shadowing the group import above

__all__ = ["dp"]


# === tests/_testsite/importerror_app/forum/dummy.py (BrendaH/django-machina, BSD-3-Clause) ===
from x import bad_import  # noqa


class Dummy(object):
    pass
| 10.833333 | 32 | 0.692308 | 10 | 65 | 4.4 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246154 | 65 | 5 | 33 | 13 | 0.897959 | 0.061538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
27147f36959f845359d3196a1b45752557d445b3 | 132 | py | Python | collect_timelines.py | Teebr/Teebr | a10cd66191452a406f8f90ba5cd5b22c5a02dc42 | [
"MIT"
] | 1 | 2015-11-11T14:26:40.000Z | 2015-11-11T14:26:40.000Z | collect_timelines.py | Teebr/Teebr | a10cd66191452a406f8f90ba5cd5b22c5a02dc42 | [
"MIT"
] | 6 | 2019-10-21T08:39:51.000Z | 2019-10-21T08:40:14.000Z | collect_timelines.py | Teebr/Teebr | a10cd66191452a406f8f90ba5cd5b22c5a02dc42 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from teebr.data_imports import fetch_user_timelines
if __name__ == '__main__':
fetch_user_timelines()
| 18.857143 | 51 | 0.719697 | 17 | 132 | 4.823529 | 0.823529 | 0.219512 | 0.439024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008929 | 0.151515 | 132 | 6 | 52 | 22 | 0.723214 | 0.159091 | 0 | 0 | 0 | 0 | 0.073395 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
272a8e2e7bc5ce7007321854a3d2948f8ad690e9 | 45 | py | Python | main.py | mdonath/micropython-maxmatrix | f675058e55a9ea9ce51d47c156ee28f36b4e356b | [
"MIT"
] | 1 | 2019-02-09T05:19:10.000Z | 2019-02-09T05:19:10.000Z | main.py | mdonath/micropython-maxmatrix | f675058e55a9ea9ce51d47c156ee28f36b4e356b | [
"MIT"
] | null | null | null | main.py | mdonath/micropython-maxmatrix | f675058e55a9ea9ce51d47c156ee28f36b4e356b | [
"MIT"
] | null | null | null | from show_mqtt import show_mqtt
show_mqtt()
| 11.25 | 31 | 0.822222 | 8 | 45 | 4.25 | 0.5 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 3 | 32 | 15 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
273a6eda20cc912f93d86e115b5fe7756b182962 | 98 | py | Python | kaolin/graphics/dib_renderer/utils/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 90 | 2020-08-15T16:14:45.000Z | 2022-01-22T10:24:13.000Z | kaolin/graphics/dib_renderer/utils/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2020-09-07T17:31:18.000Z | 2021-11-25T12:07:30.000Z | kaolin/graphics/dib_renderer/utils/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 13 | 2020-09-03T04:25:50.000Z | 2021-12-23T08:23:33.000Z | from .utils import *
from .mesh import *
from .perspective import *
from .sphericalcoord import *
| 19.6 | 29 | 0.755102 | 12 | 98 | 6.166667 | 0.5 | 0.405405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 98 | 4 | 30 | 24.5 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
274c6264700495c50d43ac41626ad933f66cefe5 | 207 | py | Python | about/views.py | TrueDi1905/yatube | 074fac97a47332933f35350a95f661903aac014f | [
"BSD-3-Clause"
] | null | null | null | about/views.py | TrueDi1905/yatube | 074fac97a47332933f35350a95f661903aac014f | [
"BSD-3-Clause"
] | null | null | null | about/views.py | TrueDi1905/yatube | 074fac97a47332933f35350a95f661903aac014f | [
"BSD-3-Clause"
] | null | null | null | from django.views.generic.base import TemplateView
class StaticPageAuthor(TemplateView):
template_name = 'about_author.html'
class StaticPageTech(TemplateView):
template_name = 'about_tech.html'
| 20.7 | 50 | 0.792271 | 23 | 207 | 6.956522 | 0.695652 | 0.25 | 0.3 | 0.3625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125604 | 207 | 9 | 51 | 23 | 0.883978 | 0 | 0 | 0 | 0 | 0 | 0.154589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
27ef3db1e3cb7ef4fc56913ea3c91b8de3c4ad8d | 11,944 | py | Python | tests/unit/test_change.py | besbes/formica | 94b43a11ed534ee7afa6a4f45848842bb163bbb6 | [
"MIT"
] | 50 | 2017-02-14T13:26:04.000Z | 2019-02-05T08:02:45.000Z | tests/unit/test_change.py | besbes/formica | 94b43a11ed534ee7afa6a4f45848842bb163bbb6 | [
"MIT"
] | 54 | 2017-02-06T11:06:33.000Z | 2019-02-07T16:55:08.000Z | tests/unit/test_change.py | besbes/formica | 94b43a11ed534ee7afa6a4f45848842bb163bbb6 | [
"MIT"
] | 7 | 2017-03-20T10:29:46.000Z | 2018-08-02T12:41:31.000Z | import pytest
from formica import cli
from tests.unit.constants import REGION, PROFILE, STACK, TEMPLATE, ROLE_ARN, ACCOUNT_ID
from botocore.exceptions import ClientError
def test_change_creates_update_change_set(change_set, loader, aws_client):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--profile', PROFILE, '--region', REGION])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, role_arn=None, s3=False,
resource_types=False)
change_set.return_value.describe.assert_called_once()
def test_change_uses_parameters_for_update(change_set, aws_client, loader):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--parameters', 'A=B', 'C=D', '--profile', PROFILE, '--region', REGION])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={'A': 'B', 'C': 'D'}, tags={},
capabilities=None, role_arn=None, s3=False,
resource_types=False)
change_set.return_value.describe.assert_called_once()
def test_change_tests_parameter_format(capsys):
with pytest.raises(SystemExit) as pytest_wrapped_e:
cli.main(['change', '--stack', STACK, '--parameter', 'A=B', 'CD', '--profile', PROFILE, '--region', REGION])
out, err = capsys.readouterr()
assert 'needs to be in format KEY=VALUE' in err
assert pytest_wrapped_e.value.code == 2
def test_change_uses_tags_for_creation(change_set, aws_client, loader):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--tags', 'A=B', 'C=D', '--profile', PROFILE, '--region', REGION])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={}, tags={'A': 'B', 'C': 'D'},
capabilities=None, role_arn=None, s3=False,
resource_types=False)
def test_change_tests_tag_format(capsys):
with pytest.raises(SystemExit) as pytest_wrapped_e:
cli.main(['change', '--stack', STACK, '--parameters', 'A=B', '--profile', PROFILE, '--region', REGION,
'--tags', 'CD'])
out, err = capsys.readouterr()
assert "argument --tags: CD needs to be in format KEY=VALUE" in err
assert pytest_wrapped_e.value.code == 2
def test_change_uses_capabilities_for_creation(change_set, aws_client, loader):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--capabilities', 'A', 'B'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=['A', 'B'], role_arn=None, s3=False,
resource_types=False)
def test_change_sets_s3_flag(change_set, aws_client, loader):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--s3'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, role_arn=None, s3=True,
resource_types=False)
def test_change_with_role_arn(change_set, aws_client, loader):
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--role-arn', ROLE_ARN])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, role_arn=ROLE_ARN, s3=False,
resource_types=False)
def test_change_with_role_name(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--role-name', 'some-stack-role'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, role_arn=ROLE_ARN, s3=False,
resource_types=False)
def test_change_with_role_name_and_arn(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--role-name', 'UnusedRole', '--role-arn', ROLE_ARN])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, role_arn=ROLE_ARN, s3=False,
resource_types=False)
def test_change_with_resource_types(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--resource-types'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, s3=False, resource_types=True,
role_arn=None)
def test_change_create_if_missing_without_parameter(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK])
aws_client.describe_stacks.assert_not_called()
def test_change_create_if_missing(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
exception = ClientError(
dict(Error={'Code': 'ValidationError', 'Message': 'Stack with id teststack does not exist'}), "DescribeStack")
aws_client.describe_stacks.side_effect = exception
loader.return_value.template.return_value = TEMPLATE
cli.main(['change', '--stack', STACK, '--create-missing'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(template=TEMPLATE, change_set_type='CREATE',
parameters={},
tags={}, capabilities=None, s3=False, resource_types=False,
role_arn=None)
def test_change_create_if_missing_exception_handling(change_set, aws_client, loader):
aws_client.get_caller_identity.return_value = {'Account': ACCOUNT_ID}
exception = ClientError(
dict(Error={'Code': 'OtherError', 'Message': 'Stack with id teststack does not exist'}), "DescribeStack")
aws_client.describe_stacks.side_effect = exception
loader.return_value.template.return_value = TEMPLATE
with pytest.raises(SystemExit):
cli.main(['change', '--stack', STACK, '--create-missing'])
exception = ClientError(
dict(Error={'Code': 'ValidationError', 'Message': 'Other Error Message'}), "DescribeStack")
aws_client.describe_stacks.side_effect = exception
with pytest.raises(SystemExit):
cli.main(['change', '--stack', STACK, '--create-missing'])
def test_allow_previous_template_usage(change_set, aws_client):
cli.main(['change', '--stack', STACK, '--use-previous-template'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, resource_types=False,
role_arn=None, s3=False, use_previous_template=True)
def test_use_previous_parameters(change_set, aws_client):
cli.main(['change', '--stack', STACK, '--use-previous-parameters', '--use-previous-template', '--parameters',
'FGHIJ=12345'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(change_set_type='UPDATE',
parameters={'FGHIJ': '12345'},
tags={}, capabilities=None, resource_types=False,
role_arn=None, s3=False, use_previous_template=True,
use_previous_parameters=True)
def test_upload_artifacts(change_set, aws_client, temp_bucket_cli):
cli.main(['change', '--use-previous-template', '--stack', STACK, '--upload-artifacts', '--artifacts', 'testfile'])
change_set.assert_called_with(stack=STACK, nested_change_sets=False)
change_set.return_value.create.assert_called_once_with(change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, resource_types=False,
role_arn=None, s3=False, use_previous_template=True)
temp_bucket_cli.add_file.assert_called_once_with('testfile')
temp_bucket_cli.upload.assert_called_once()
def test_nested_change_sets(change_set, aws_client):
cli.main(['change', '--stack', STACK, '--nested-change-sets', '--use-previous-template'])
change_set.assert_called_with(stack=STACK, nested_change_sets=True)
change_set.return_value.create.assert_called_once_with(change_set_type='UPDATE',
parameters={},
tags={}, capabilities=None, resource_types=False,
role_arn=None, s3=False, use_previous_template=True)
| 60.938776 | 118 | 0.596701 | 1,279 | 11,944 | 5.245504 | 0.091478 | 0.08183 | 0.067968 | 0.048293 | 0.866299 | 0.85035 | 0.844537 | 0.832166 | 0.804442 | 0.786704 | 0 | 0.003317 | 0.293285 | 11,944 | 195 | 119 | 61.251282 | 0.791494 | 0 | 0 | 0.615385 | 0 | 0 | 0.099799 | 0.009796 | 0 | 0 | 0 | 0 | 0.237179 | 1 | 0.115385 | false | 0 | 0.025641 | 0 | 0.141026 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd8389ea8870a190c07123d896c3f3837fecbefa | 32 | py | Python | magnet/data/__init__.py | ysglh/magnet | cd37f0ebe46a17e0948158795c6715a60c34b9db | [
"MIT"
] | 343 | 2018-09-03T09:59:36.000Z | 2022-02-08T11:32:34.000Z | magnet/data/__init__.py | ysglh/magnet | cd37f0ebe46a17e0948158795c6715a60c34b9db | [
"MIT"
] | 7 | 2018-09-04T07:03:11.000Z | 2019-03-21T07:17:14.000Z | magnet/data/__init__.py | ysglh/magnet | cd37f0ebe46a17e0948158795c6715a60c34b9db | [
"MIT"
] | 23 | 2018-09-03T19:12:04.000Z | 2021-02-20T09:23:30.000Z | from .data import DIR_DATA, Data | 32 | 32 | 0.8125 | 6 | 32 | 4.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b4f1f3e6d99e827ac2f1de81937d8070111bc7c | 11,153 | py | Python | services/resource/project/tests/test_article.py | spruce-cq/sblog | 287571bffcf19c224d3b4ad4e4e9347225245350 | [
"MIT"
] | null | null | null | services/resource/project/tests/test_article.py | spruce-cq/sblog | 287571bffcf19c224d3b4ad4e4e9347225245350 | [
"MIT"
] | 7 | 2020-09-07T15:06:12.000Z | 2022-02-26T19:09:01.000Z | services/resource/project/tests/test_article.py | spruce-cq/sblog | 287571bffcf19c224d3b4ad4e4e9347225245350 | [
"MIT"
] | null | null | null | # services/resource/project/tests/test_article.py
import unittest
from project.tests.base import BaseTestCase
from project.tests.utils import get_token, add_category, add_article
from project.api.blog.models import Article
class TestArticleService(BaseTestCase):
"""Test for article service."""
def test_add_article_with_user(self):
add_category('default')
token = get_token(self.client)
response = self.client.post(
'/articles',
json={
'title': 'test title',
'body': 'test body.',
'category': 1
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(data['message'], 'article: test title was added.')
self.assertEqual(response.status_code, 201)
def test_add_article_without_user(self):
add_category('default')
response = self.client.post(
'/articles',
json={
'title': 'test title',
'body': 'test body.',
'category': 1
},
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(
data['message'], 'Invalid payload. Please log in again.')
self.assertEqual(response.status_code, 403)
def test_add_article_no_title(self):
add_category('default')
token = get_token(self.client)
response = self.client.post(
'/articles',
json={
'body': 'test body.',
'category': 1
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid payload.')
self.assertEqual(response.status_code, 400)
def test_add_article_no_body(self):
add_category('default')
token = get_token(self.client)
response = self.client.post(
'/articles',
json={
'title': 'test title',
'category': 1
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid payload.')
self.assertEqual(response.status_code, 400)
def test_add_article_no_category(self):
add_category('default')
token = get_token(self.client)
response = self.client.post(
'/articles',
json={
'title': 'test title',
'body': 'test body'
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(data['message'], 'article: test title was added.')
self.assertEqual(response.status_code, 201)
def test_get_all_article(self):
cate1 = add_category('default')
cate2 = add_category('test')
add_article('default title', 'default body', cate1)
add_article('test title', 'test body', cate2)
response = self.client.get('/articles')
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(len(data['data']), 2)
self.assertEqual(data['data'][0]['title'], 'default title')
self.assertEqual(data['data'][0]['body'], 'default body')
self.assertIsInstance(data['data'][0]['id'], int)
self.assertIsInstance(data['data'][0]['category'][0], int)
self.assertEqual(data['data'][0]['category'][1], 'default')
self.assertTrue(data['data'][0]['timestamp'])
self.assertEqual(data['data'][1]['title'], 'test title')
self.assertEqual(data['data'][1]['body'], 'test body')
self.assertIsInstance(data['data'][1]['id'], int)
self.assertIsInstance(data['data'][1]['category'][0], int)
self.assertEqual(data['data'][1]['category'][1], 'test')
self.assertTrue(data['data'][1]['timestamp'])
self.assertEqual(response.status_code, 200)
def test_get_single_article(self):
cate1 = add_category('default')
article = add_article('test title', 'test body', cate1)
response = self.client.get(f'/articles/{article.id}')
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(data['data']['id'], article.id)
self.assertEqual(data['data']['title'], article.title)
self.assertEqual(data['data']['body'], article.body)
self.assertTrue(data['data']['timestamp'])
self.assertEqual(data['data']['category'][0], cate1.id)
self.assertEqual(data['data']['category'][1], cate1.name)
self.assertEqual(response.status_code, 200)
def test_get_single_article_incorrect_id(self):
cate1 = add_category('default')
add_article('test title', 'test body', cate1)
response = self.client.get(f'/articles/3')
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid path params.')
self.assertEqual(response.status_code, 404)
def test_get_single_article_invalid_id(self):
cate1 = add_category('default')
add_article('test title', 'test body', cate1)
response = self.client.get(f'/articles/jianxin')
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid path params.')
self.assertEqual(response.status_code, 404)
def test_delete_single_article(self):
cate = add_category('default')
article = add_article('default title', 'default body', cate)
token = get_token(self.client, admin=True)
response = self.client.delete(
f'/articles/{article.id}',
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(
data['message'],
f'{article.id}: {article.title} is already deleted.')
self.assertEqual(response.status_code, 202)
def test_delete_single_article_incorrect_id(self):
cate = add_category('default')
article = add_article('default title', 'default body', cate)
token = get_token(self.client, admin=True)
response = self.client.delete(
f'/articles/{article.id + 1}',
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid path params.')
self.assertEqual(response.status_code, 404)
def test_delete_single_article_without_user(self):
cate = add_category('default')
article = add_article('default title', 'default body', cate)
response = self.client.delete(
f'/articles/{article.id}',
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(
data['message'], 'Invalid payload. Please log in again.')
self.assertEqual(response.status_code, 403)
def test_update_single_article(self):
cate1 = add_category('default')
cate2 = add_category('test')
article = add_article('title', 'body', cate1)
token = get_token(self.client)
response = self.client.put(
'/articles',
json={
'aid': f'{article.id}',
'title': 'new title',
'body': 'new body',
'category': f'{cate2.id}'
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(
data['message'], f'{article.id}: article is already updated.')
self.assertEqual(response.status_code, 200)
def test_update_single_article_without_user(self):
cate1 = add_category('default')
cate2 = add_category('test')
article = add_article('title', 'body', cate1)
response = self.client.put(
'/articles',
json={
'aid': f'{article.id}',
'title': 'new title',
'body': 'new body',
'category': f'{cate2.id}'
},
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(
data['message'], 'Invalid payload. Please log in again.')
self.assertEqual(response.status_code, 403)
    def test_update_single_article_no_aid(self):
cate1 = add_category('default')
cate2 = add_category('test')
add_article('title', 'body', cate1)
token = get_token(self.client)
response = self.client.put(
'/articles',
json={
'title': 'new title',
'body': 'new body',
'category': f'{cate2.id}'
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid payload.')
self.assertEqual(response.status_code, 404)
    def test_update_single_article_no_title(self):
cate1 = add_category('default')
cate2 = add_category('test')
article = add_article('title', 'body', cate1)
token = get_token(self.client)
response = self.client.put(
'/articles',
json={
'aid': f'{article.id}',
'body': 'new body',
'category': f'{cate2.id}'
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'fail')
self.assertEqual(data['message'], 'Invalid payload.')
self.assertEqual(response.status_code, 400)
def test_update_single_article_no_category(self):
add_category('default')
cate2 = add_category('test')
article = add_article('title', 'body', cate2)
token = get_token(self.client)
response = self.client.put(
'/articles',
json={
'aid': f'{article.id}',
'title': 'new title',
'body': 'new body',
},
headers={'Authorization': f'Bearer {token}'}
)
data = response.get_json()
self.assertEqual(data['status'], 'success')
self.assertEqual(
data['message'], f'{article.id}: article is already updated.')
self.assertEqual(response.status_code, 200)
self.assertEqual(Article.query.get(article.id).category.id, 1)
if __name__ == "__main__":
unittest.main()
| 38.064846 | 75 | 0.572761 | 1,180 | 11,153 | 5.274576 | 0.083051 | 0.149422 | 0.131266 | 0.051896 | 0.892031 | 0.810572 | 0.784062 | 0.762853 | 0.756266 | 0.756266 | 0 | 0.013335 | 0.280552 | 11,153 | 292 | 76 | 38.195205 | 0.762338 | 0.006635 | 0 | 0.69434 | 0 | 0 | 0.200217 | 0.007857 | 0 | 0 | 0 | 0 | 0.260377 | 1 | 0.064151 | false | 0 | 0.015094 | 0 | 0.083019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8b51c3bf06d0a3256a5e88666b2fbd14324595a4 | 199 | py | Python | python_files/Admin.py | KJHJason/App-Development-Project | 6fb027055ba0d1b469721aae46aa61f7bbd15f74 | [
"MIT"
] | 3 | 2022-02-13T11:07:01.000Z | 2022-02-22T15:27:37.000Z | python_files/Admin.py | nekoney/App-Development-Project | 1563f14bf9f85eb21eaea874a13c6397234d6b95 | [
"MIT"
] | null | null | null | python_files/Admin.py | nekoney/App-Development-Project | 1563f14bf9f85eb21eaea874a13c6397234d6b95 | [
"MIT"
] | 1 | 2022-03-01T07:52:24.000Z | 2022-03-01T07:52:24.000Z | from .User import User
# Done by Jason
class Admin(User):
def __init__(self, user_id, username, email, password):
super().__init__(user_id, username, email, password, "Admin", "Active") | 28.428571 | 79 | 0.693467 | 27 | 199 | 4.740741 | 0.62963 | 0.09375 | 0.21875 | 0.296875 | 0.421875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175879 | 199 | 7 | 79 | 28.428571 | 0.780488 | 0.065327 | 0 | 0 | 0 | 0 | 0.059459 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.5 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
8b682b4dd40c542e67f93f2ccb99152cc3ba0251 | 13,650 | py | Python | src/grassfire/test/test_parallel_simple.py | bmmeijers/grassfire | 20995d97ca41763d40b23f0fadb6e0581dff859b | [
"MIT"
] | 1 | 2021-07-09T14:53:34.000Z | 2021-07-09T14:53:34.000Z | src/grassfire/test/test_parallel_simple.py | bmmeijers/grassfire | 20995d97ca41763d40b23f0fadb6e0581dff859b | [
"MIT"
] | null | null | null | src/grassfire/test/test_parallel_simple.py | bmmeijers/grassfire | 20995d97ca41763d40b23f0fadb6e0581dff859b | [
"MIT"
] | null | null | null | import unittest
from tri.delaunay import ToPointsAndSegments
from grassfire import calc_skel
from grassfire.events import at_same_location
PAUSE = False
OUTPUT = False
LOGGING = False
class TestSimpleParallelEvents(unittest.TestCase):
def setUp(self):
pass
def test_dent(self):
"""Simple parallel event
"""
conv = ToPointsAndSegments()
lines = [
[[0., 0.], [10., 0.]],
[[10., 0.], [10., 10.]],
[[10., 10.], [1., 10.]],
[[1., 10.], [1., 7.]],
[[1., 7.], [3., 7.]],
[[3., 7.], [3. , 6.5]],
[[3., 6.5], [0., 6.5]],
[[0., 6.5], [0., 0. ]]
]
for line in lines:
start, end = map(tuple, line)
conv.add_point(start)
conv.add_point(end)
conv.add_segment(start, end)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT, shrink=True)
# check the amount of skeleton nodes
assert len(skel.sk_nodes) == 16, len(skel.sk_nodes)
# check the amount of segments in the skeleton
assert len(skel.segments()) == 23, len(skel.segments())
# check the amount of kinetic vertices that are (not) stopped
        not_stopped = list(filter(lambda v: v.stops_at is None, skel.vertices))
        stopped = list(filter(lambda v: v.stops_at is not None, skel.vertices))
        assert len(not_stopped) == 6, len(not_stopped)
        assert len(stopped) == 17, len(stopped)
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), \
"{} {} {}".format(id(v),
v.stop_node.pos,
v.position_at(v.stops_at) )
def test_handle_fan_cw(self):
import json
s = """{
"type": "FeatureCollection",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:EPSG::28992" } },
"features": [
{ "type": "Feature", "properties": { "id": 140092307709904.000000, "side": 1 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.95871967752, -0.627450761189 ], [ -0.98800624377, -0.432206986189 ] ] } },
{ "type": "Feature", "properties": { "id": 140092307712976.000000, "side": 2 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.98800624377, -0.432206986189 ], [ -0.985872786033, -0.431940303972 ] ] } },
{ "type": "Feature", "properties": { "id": 140092307713104.000000, "side": 1 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.993971487578, -0.378572900681 ], [ -1.02090042121, -0.181094054061 ] ] } },
{ "type": "Feature", "properties": { "id": 140092307782672.000000, "side": 1 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.985872786033, -0.431940303972 ], [ -0.991522629252, -0.37826679339 ] ] } },
{ "type": "Feature", "properties": { "id": 140092307782672.000000, "side": 2 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.991522629252, -0.37826679339 ], [ -0.993971487578, -0.378572900681 ] ] } },
{ "type": "Feature", "properties": { "id": 140092307713104.000000, "side": 1 }, "geometry": { "type": "LineString", "coordinates": [ [ -1.02090042121, -0.181094054061 ], [5, 0] ] } },
{ "type": "Feature", "properties": { "id": 140092307709904.000000, "side": 1 }, "geometry": { "type": "LineString", "coordinates": [ [ -0.95871967752, -0.627450761189 ], [5,0]] } }
]
}"""
x = json.loads(s)
# parse segments from geo-json
segments = []
for y in x['features']:
segments.append(tuple(map(tuple, y['geometry']['coordinates'])))
# convert to triangulation input
conv = ToPointsAndSegments()
for line in segments:
conv.add_point(line[0])
conv.add_point(line[1])
conv.add_segment(*line)
# skeletonize / offset
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT)
# check the amount of skeleton nodes
assert len(skel.sk_nodes) == 14, len(skel.sk_nodes)
# check the amount of segments in the skeleton
assert len(skel.segments()) == 20, len(skel.segments())
# check the amount of kinetic vertices that are (not) stopped
        not_stopped = list(filter(lambda v: v.stops_at is None, skel.vertices))
        stopped = list(filter(lambda v: v.stops_at is not None, skel.vertices))
        assert len(not_stopped) == 5, len(not_stopped)
        assert len(stopped) == 15, len(stopped)
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), \
"{} {} {}".format(id(v),
v.stop_node.pos,
v.position_at(v.stops_at) )
def test_dent_unequal_wavefront_side(self):
"""Simple parallel event, starting from wavefront side
"""
conv = ToPointsAndSegments()
lines = [
[[51046.4, 391515.7], [51046.3, 391516.65]],
[[51047.95, 391513.05], [51047.55, 391515.85]],
[[51047.55, 391515.85], [51046.4, 391515.7]],
[[51047.45, 391516.8], [51046.9, 391520.8]],
[[51046.3, 391516.65], [51047.45, 391516.8]],
[[51055, 391521], [51057, 391514]],
[[51046.9, 391520.8, ], [51055, 391521]],
[[51047.95, 391513.05], [51057, 391514]]]
for line in lines:
start, end = map(tuple, line)
conv.add_point(start)
conv.add_point(end)
conv.add_segment(start, end)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT, shrink=True)
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 15, len(skel.sk_nodes)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == 22, len(skel.segments())
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 6
        assert len(list(
            filter(lambda v: v.stops_at is not None, skel.vertices))) == 13 + 4
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), "{} {} {}".format(id(v), v.stop_node.pos, v.position_at(v.stops_at) )
def test_dent_unequal_wavefront_side_flipped_y(self):
"""Simple parallel event, starting from wavefront side
"""
def flip_y(pt):
return (pt[0], -pt[1])
conv = ToPointsAndSegments()
lines = [
[[51046.4, 391515.7], [51046.3, 391516.65]],
[[51047.95, 391513.05], [51047.55, 391515.85]],
[[51047.55, 391515.85], [51046.4, 391515.7]],
[[51047.45, 391516.8], [51046.9, 391520.8]],
[[51046.3, 391516.65], [51047.45, 391516.8]],
[[51055, 391521], [51057, 391514]],
[[51046.9, 391520.8, ], [51055, 391521]],
[[51047.95, 391513.05], [51057, 391514]]]
for line in lines:
start, end = map(tuple, map(flip_y, line))
conv.add_point(start)
conv.add_point(end)
conv.add_segment(start, end)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT, shrink=True)
#
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 16, len(skel.sk_nodes)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == 23, len(skel.segments())
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 6
        assert len(list(
            filter(lambda v: v.stops_at is not None, skel.vertices))) == 13 + 4
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), "{} {} {}".format(id(v), v.stop_node.pos, v.position_at(v.stops_at) )
def test_square(self):
conv = ToPointsAndSegments()
polygon = [[(0,0), (10,0), (10,10), (0,10), (0,0)]]
conv.add_polygon(polygon)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == (4 + 4), len(skel.segments())
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 5, len(skel.sk_nodes)
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 4
        assert len(list(filter(lambda v: v.stops_at is not None, skel.vertices))) == 4
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None:
assert at_same_location((v.stop_node, v), v.stops_at)
def test_rectangle(self):
conv = ToPointsAndSegments()
polygon = [[(0,0), (10,0), (10,5), (0,5), (0,0)]]
conv.add_polygon(polygon)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == (5 + 4), len(skel.segments())
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 6, len(skel.sk_nodes)
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 4
        assert len(list(filter(lambda v: v.stops_at is not None, skel.vertices))) == 5
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), "{} {} {}".format(id(v), v.stop_node.pos, v.position_at(v.stops_at) )
def test_dent_unequal_top(self):
conv = ToPointsAndSegments()
polygon = [[(0, 0), (10., 0), (10,20), (-0.5,20.), (-0.5,11.), (-1,11), (-1,10), (0,10), (0,0)]]
conv.add_polygon(polygon)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == (12 + 8), len(skel.segments())
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 13, len(skel.sk_nodes)
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 8
        assert len(list(filter(lambda v: v.stops_at is not None, skel.vertices))) == 12
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), \
"{} {} {}".format(id(v),
v.stop_node.pos,
v.position_at(v.stops_at) )
def test_dent_unequal_bottom(self):
conv = ToPointsAndSegments()
polygon = [[(-0.5, 0), (10., 0), (10,20), (0,20.), (0,11.), (-1,11), (-1,10), (-0.5,10), (-0.5,0)]]
conv.add_polygon(polygon)
skel = calc_skel(conv, pause=PAUSE, output=OUTPUT)
        # check the number of segments in the skeleton
        assert len(skel.segments()) == (13 + 8), len(skel.segments())
        # check the number of skeleton nodes
        assert len(skel.sk_nodes) == 14, len(skel.sk_nodes)
        # check the number of kinetic vertices that are (not) stopped
        assert len(list(filter(lambda v: v.stops_at is None, skel.vertices))) == 8
        assert len(list(filter(lambda v: v.stops_at is not None, skel.vertices))) == 13
# check cross relationship between kinetic vertices and skeleton nodes
for v in skel.vertices:
assert at_same_location((v.start_node, v), v.starts_at)
if v.stops_at is not None and not v.inf_fast:
assert at_same_location((v.stop_node, v), v.stops_at), \
"{} {} {}".format(id(v),
v.stop_node.pos,
v.position_at(v.stops_at) )
if __name__ == "__main__":
if LOGGING:
import logging
import sys
root = logging.getLogger()
root.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(message)s')
ch.setFormatter(formatter)
root.addHandler(ch)
unittest.main(verbosity=5)
| 49.817518 | 213 | 0.575165 | 1,784 | 13,650 | 4.290359 | 0.107623 | 0.010191 | 0.040763 | 0.05017 | 0.833159 | 0.826104 | 0.813823 | 0.803893 | 0.773582 | 0.757512 | 0 | 0.110169 | 0.282491 | 13,650 | 273 | 214 | 50 | 0.671329 | 0.139634 | 0 | 0.566502 | 0 | 0.039409 | 0.15428 | 0.016249 | 0 | 0 | 0 | 0 | 0.236453 | 1 | 0.049261 | false | 0.004926 | 0.034483 | 0.004926 | 0.093596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
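The GeoJSON parsing step used throughout the tests above can be exercised on its own with only the standard library; a minimal sketch (the single feature below is made up for illustration, and the project-specific `ToPointsAndSegments`/`calc_skel` calls are deliberately left out):

```python
import json

# Collect each LineString feature as a tuple of coordinate tuples,
# mirroring the segment-extraction loop in the tests above.
s = """{ "type": "FeatureCollection", "features": [
  { "type": "Feature", "properties": { "id": 1, "side": 1 },
    "geometry": { "type": "LineString",
                  "coordinates": [ [ -1.0, -0.4 ], [ 5, 0 ] ] } }
] }"""
x = json.loads(s)
segments = []
for y in x['features']:
    segments.append(tuple(map(tuple, y['geometry']['coordinates'])))
print(segments)  # [((-1.0, -0.4), (5, 0))]
```

The resulting `(start, end)` tuples are exactly the shape the tests feed into `conv.add_point` / `conv.add_segment`.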
8b7cbfe16b95064af2cd615f3909720dd8d532dc | 45 | py | Python | tests/python/doufo/tensor/impls/test_construct.py | Hong-Xiang/doufo | 3d375fef30670597768a6eef809b75b4b1b5a3fd | [
"Apache-2.0"
] | 3 | 2018-08-05T07:16:34.000Z | 2018-08-10T05:28:24.000Z | tests/python/doufo/tensor/impls/test_construct.py | tech-pi/doufo | 3d375fef30670597768a6eef809b75b4b1b5a3fd | [
"Apache-2.0"
] | 10 | 2018-09-16T15:44:19.000Z | 2018-10-06T10:39:59.000Z | tests/python/doufo/tensor/impls/test_construct.py | tech-pi/doufo | 3d375fef30670597768a6eef809b75b4b1b5a3fd | [
"Apache-2.0"
] | 1 | 2018-08-04T08:13:50.000Z | 2018-08-04T08:13:50.000Z | from doufo.tensor.impls.construct import *
| 11.25 | 42 | 0.777778 | 6 | 45 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 3 | 43 | 15 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b9fa49a045b65b93df616a57fc2c3b7bd7bb27c | 169 | py | Python | pokebattle/admin.py | raymundosaraiva/pokemon | fc1b1c054ce896c395b706958192a8f0723d1b0c | [
"MIT"
] | null | null | null | pokebattle/admin.py | raymundosaraiva/pokemon | fc1b1c054ce896c395b706958192a8f0723d1b0c | [
"MIT"
] | 6 | 2020-06-06T00:49:56.000Z | 2021-09-22T18:04:13.000Z | pokebattle/admin.py | raymundosaraiva/pokemon | fc1b1c054ce896c395b706958192a8f0723d1b0c | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import *
admin.site.register(Trainer)
admin.site.register(Pokemon)
admin.site.register(Battle)
admin.site.register(Game)
| 18.777778 | 32 | 0.804734 | 24 | 169 | 5.666667 | 0.5 | 0.264706 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08284 | 169 | 8 | 33 | 21.125 | 0.877419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9a01719580de1a8c8b89a03f4715df95b7ac8a81 | 48 | py | Python | hydrobox/io/__init__.py | joergmeyer-kit/hydrobox | af75a5ba87147e00656435c170535c69fc3298a8 | [
"MIT"
] | 1 | 2019-01-15T13:32:56.000Z | 2019-01-15T13:32:56.000Z | hydrobox/io/__init__.py | joergmeyer-kit/hydrobox | af75a5ba87147e00656435c170535c69fc3298a8 | [
"MIT"
] | null | null | null | hydrobox/io/__init__.py | joergmeyer-kit/hydrobox | af75a5ba87147e00656435c170535c69fc3298a8 | [
"MIT"
] | null | null | null | from .random import timeseries_from_distribution | 48 | 48 | 0.916667 | 6 | 48 | 7 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a1223638e78a8f72976fd8c3a1b6cae88f97bac | 193 | py | Python | backend/users/admin.py | mnieber/taskboard | 7925342751e2782bd0a0258eb2d43d9ec90ce9d8 | [
"MIT"
] | null | null | null | backend/users/admin.py | mnieber/taskboard | 7925342751e2782bd0a0258eb2d43d9ec90ce9d8 | [
"MIT"
] | null | null | null | backend/users/admin.py | mnieber/taskboard | 7925342751e2782bd0a0258eb2d43d9ec90ce9d8 | [
"MIT"
] | null | null | null | from django.contrib import admin
from users import models
class ProfileAdmin(admin.ModelAdmin):
pass
admin.site.register(models.User)
admin.site.register(models.Profile, ProfileAdmin)
| 16.083333 | 49 | 0.797927 | 25 | 193 | 6.16 | 0.6 | 0.116883 | 0.220779 | 0.298701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119171 | 193 | 11 | 50 | 17.545455 | 0.905882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
9a3dae651453ab8c1874e676c121e42f2a020cce | 7,554 | py | Python | tests/__init__.py | carlmatt/personal-data-anonymizer | 68e777dde3a45df2135e23ebcb425842c999eede | [
"MIT"
] | null | null | null | tests/__init__.py | carlmatt/personal-data-anonymizer | 68e777dde3a45df2135e23ebcb425842c999eede | [
"MIT"
] | null | null | null | tests/__init__.py | carlmatt/personal-data-anonymizer | 68e777dde3a45df2135e23ebcb425842c999eede | [
"MIT"
] | null | null | null | import unittest
from personal_data_anonymizer import PersonalDataAnonymizer
from personal_data_anonymizer.utils.finnish import conjugate
class TestPersonalDataAnonymizer(unittest.TestCase):
def test_check_text_corpus(self):
text = 123
app = PersonalDataAnonymizer()
self.assertRaises(TypeError, app.check_text_corpus, text)
def test_anonymize_social_security_number(self):
text = 'Nimeni on Matti Seppälä ja henkilötunnukseni on 010101-123P.'
app = PersonalDataAnonymizer()
actual = app.anonymize_social_security_number(text)
expected = ['Nimeni on Matti Seppälä ja henkilötunnukseni on [redacted].']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_social_security_number_21st_century(self):
text = 'Henkilötunnukseni on 311214A123P.'
app = PersonalDataAnonymizer()
actual = app.anonymize_social_security_number(text)
expected = ['Henkilötunnukseni on [redacted].']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_social_security_number_19th_century(self):
text = 'Henkilötunnukseni on 311299+0010.'
app = PersonalDataAnonymizer()
actual = app.anonymize_social_security_number(text)
expected = ['Henkilötunnukseni on [redacted].']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_first_name(self):
text = ['Nimeni on matti seppälä, ja henkilötunnukseni on "010101-123P".']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='first_names_finland')
expected = ['Nimeni on [redacted] seppälä, ja henkilötunnukseni on "010101-123P".']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_first_name_conjugated(self):
text = ['Matilla on koira. Sarilla on kissa.']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='first_names_finland')
expected = ['[redacted] on koira. [redacted] on kissa.']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_last_name(self):
text = ['Nimeni on matti seppälä, ja henkilötunnukseni on 010101-123P.']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='last_names_finland')
expected = ['Nimeni on matti [redacted], ja henkilötunnukseni on 010101-123P.']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_last_name_conjugated(self):
text = ['Menemme Virtaselle. Tulemme Perältä.']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='last_names_finland')
expected = ['Menemme [redacted]. Tulemme [redacted].']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_first_name_case_sensitive(self):
text = ['Nimeni on Pekka Haapa-aho ja henkilötunnukseni on 010101-123P.']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='first_names_finland', case_sensitive=True)
expected = ['Nimeni on [redacted] Haapa-aho ja henkilötunnukseni on 010101-123P.']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_last_name_case_sensitive(self):
text = ['Nimeni on Pekka Haapa-aho ja henkilötunnukseni on 010101-123P.']
app = PersonalDataAnonymizer()
actual = app.anonymize_name(text, names='last_names_finland', case_sensitive=True)
expected = ['Nimeni on Pekka [redacted] ja henkilötunnukseni on 010101-123P.']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_name_type_error(self):
text = ['Nimeni on matti seppälä, ja henkilötunnukseni on "010101-123P".']
app = PersonalDataAnonymizer()
self.assertRaises(TypeError, app.anonymize_name, text, 123)
def test_anonymize_phone_number(self):
text = 'Puhelinnumeroni on 0401234567'
app = PersonalDataAnonymizer()
actual = app.anonymize_phone_number(text)
expected = ['Puhelinnumeroni on [redacted]']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_email_address(self):
text = 'Säköpostiosoitteeni on mari.eskola@luukku.com.'
app = PersonalDataAnonymizer()
actual = app.anonmyize_email_address(text)
expected = ['Säköpostiosoitteeni on [redacted]']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_everything(self):
text = ['Nimeni on Matti Seppälä. Henkilötunnukseni on 010101-123P ja puhelinnumeroni on 0501234567.',
'olen riikka toivola hetuni on 090909-0000 ja puhelinnumeroni 0441212312',
'Nimeni on Esa Kokkonen ja sähköpostiosoitteeni on esa.kokkonen@gmail.com.']
app = PersonalDataAnonymizer()
actual = app.anonymize_everything(text)
expected = ['Nimeni on [redacted] [redacted]. Henkilötunnukseni on [redacted] ja puhelinnumeroni on [redacted].',
'[redacted] [redacted] [redacted] hetuni on [redacted] ja puhelinnumeroni [redacted]',
'Nimeni on [redacted] [redacted] ja sähköpostiosoitteeni on [redacted]']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_anonymize_everything_case_sensitive(self):
text = ['Nimeni on Matti Seppälä. Henkilötunnukseni on 010101-123P ja puhelinnumeroni on 0501234567.',
'olen riikka toivola hetuni on 090909-0000 ja puhelinnumeroni 0441212312',
'Nimeni on Esa Kokkonen ja sähköpostiosoitteeni on esa.kokkonen@gmail.com.']
app = PersonalDataAnonymizer()
actual = app.anonymize_everything(text, case_sensitive=True)
expected = ['Nimeni on [redacted] [redacted]. Henkilötunnukseni on [redacted] ja puhelinnumeroni on [redacted].',
'olen riikka toivola hetuni on [redacted] ja puhelinnumeroni [redacted]',
'Nimeni on [redacted] [redacted] ja sähköpostiosoitteeni on [redacted]']
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
class TestUtilsFinnish(unittest.TestCase):
def test_conjugate_1(self):
word = 'pallo'
case = 'genetiivi'
actual = conjugate(word, case)
expected = 'pallon'
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_conjugate_2(self):
word = 'ässä'
case = 'essiivi'
actual = conjugate(word, case)
expected = 'ässänä'
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_conjugate_3(self):
word = 'Jarmo'
case = 'elatiivi'
actual = conjugate(word, case)
expected = 'Jarmosta'
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
def test_conjugate_4(self):
word = 'Essi'
case = 'komitatiivi'
actual = conjugate(word, case)
expected = 'Essinsä'
self.assertEqual(actual, expected, f'Expected \"{expected}\", got \"{actual}\".')
| 52.096552 | 121 | 0.671432 | 793 | 7,554 | 6.257251 | 0.139975 | 0.026804 | 0.071947 | 0.099355 | 0.825675 | 0.764208 | 0.736598 | 0.725514 | 0.712213 | 0.701129 | 0 | 0.035296 | 0.208631 | 7,554 | 144 | 122 | 52.458333 | 0.794747 | 0 | 0 | 0.479675 | 0 | 0 | 0.393434 | 0.009134 | 0 | 0 | 0 | 0 | 0.154472 | 1 | 0.154472 | false | 0 | 0.02439 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
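The `PersonalDataAnonymizer` API exercised above is the real library under test; purely as an illustration of the kind of substitution such a redaction step performs, here is a stand-alone sketch for Finnish personal identity codes (the regex and helper name are assumptions, not the library's internals):

```python
import re

# Finnish personal identity code: ddmmyy, a century sign (+, - or A),
# a three-digit individual number, and a check character.
HETU = re.compile(r'\b\d{6}[A+-]\d{3}[0-9A-Z]\b')

def redact_hetu(text):
    # Replace every match with the placeholder the tests expect.
    return HETU.sub('[redacted]', text)

print(redact_hetu('Henkilötunnukseni on 010101-123P.'))
# Henkilötunnukseni on [redacted].
```

The same pattern covers the 21st-century (`311214A123P`) and 19th-century (`311299+0010`) variants tested above.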
d0070b166fc4bddb69bb99b9c207b1b94fe8ea46 | 171 | py | Python | soup.py | farooq-teqniqly/wine-review-parser | c127c79dc4b9a2ee756480ae10db3cc3f3b88425 | [
"MIT"
] | null | null | null | soup.py | farooq-teqniqly/wine-review-parser | c127c79dc4b9a2ee756480ae10db3cc3f3b88425 | [
"MIT"
] | null | null | null | soup.py | farooq-teqniqly/wine-review-parser | c127c79dc4b9a2ee756480ae10db3cc3f3b88425 | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
class SoupFactory:
def __init__(self):
pass
def create_soup(self, html):
return BeautifulSoup(html, "html.parser")
| 19 | 49 | 0.678363 | 20 | 171 | 5.55 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 0.239766 | 171 | 8 | 50 | 21.375 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.064327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
d03d08e9bbddc59ff6d6bbb2f448b08bbb46a754 | 121 | py | Python | contract/scripts/alper.py | ebloc/EBlocBroker | 8e0a8be0fb4e998b0b7214c3eb3eff20b90a8253 | [
"MIT"
] | 7 | 2018-02-10T22:57:28.000Z | 2020-11-20T14:46:18.000Z | contract/scripts/alper.py | ebloc/EBlocBroker | 8e0a8be0fb4e998b0b7214c3eb3eff20b90a8253 | [
"MIT"
] | 5 | 2020-10-30T18:43:27.000Z | 2021-02-04T12:39:30.000Z | contract/scripts/alper.py | ebloc/EBlocBroker | 8e0a8be0fb4e998b0b7214c3eb3eff20b90a8253 | [
"MIT"
] | 5 | 2017-07-06T14:14:13.000Z | 2019-02-22T14:40:16.000Z | import importlib
from contract.scripts import project
def main():
importlib.reload(project)
project.project()
| 13.444444 | 36 | 0.743802 | 14 | 121 | 6.428571 | 0.642857 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173554 | 121 | 8 | 37 | 15.125 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0537fc9b6272873c77387f08506b3e6d559a60a | 166 | py | Python | python/test.py | MisakaMikoto128/SimpleDPP | ecf1000d90bf73f33a1aed9c5638520aa9587c8f | [
"MIT"
] | 1 | 2022-01-25T06:18:13.000Z | 2022-01-25T06:18:13.000Z | python/test.py | MisakaMikoto128/SimpleDPP | ecf1000d90bf73f33a1aed9c5638520aa9587c8f | [
"MIT"
] | null | null | null | python/test.py | MisakaMikoto128/SimpleDPP | ecf1000d90bf73f33a1aed9c5638520aa9587c8f | [
"MIT"
] | null | null | null | class Person:
def say_hi(self,name):
print("hello I am " ,self.name)
def say_hi(self,name,age):
print("hello I am{0} age{1}",format(name,age)) | 33.2 | 54 | 0.60241 | 29 | 166 | 3.37931 | 0.517241 | 0.244898 | 0.163265 | 0.244898 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.228916 | 166 | 5 | 54 | 33.2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.185629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |

d09122189b20507f45f1ba9e76c8af196e6c21e3 | 806 | py | Python | preprocessing/create_train_subset.py | ajevnisek/SupContrast | 5ae2c18f20dab570177445508314538b4340e7ff | [
"BSD-2-Clause"
] | null | null | null | preprocessing/create_train_subset.py | ajevnisek/SupContrast | 5ae2c18f20dab570177445508314538b4340e7ff | [
"BSD-2-Clause"
] | null | null | null | preprocessing/create_train_subset.py | ajevnisek/SupContrast | 5ae2c18f20dab570177445508314538b4340e7ff | [
"BSD-2-Clause"
] | null | null | null | import os
import random
src_root = '/home/uriel/dev/SupContrast/dataset/Deepfakes/train/manipulated/'
dst_root = '/home/uriel/dev/SupContrast/dataset/Deepfakes/train_subset/manipulated/'
l = os.listdir(src_root)
to_copy = random.sample(list(set(l)), k=int(0.1 * len(l)))
for x in to_copy:
if not os.path.exists(os.path.join(dst_root, x)):
os.symlink(os.path.join(src_root, x), os.path.join(dst_root, x))
src_root = '/home/uriel/dev/SupContrast/dataset/Deepfakes/train/original/'
dst_root = '/home/uriel/dev/SupContrast/dataset/Deepfakes/train_subset/original/'
l = os.listdir(src_root)
to_copy = random.sample(list(set(l)), k=int(0.1 * len(l)))
for x in to_copy:
if not os.path.exists(os.path.join(dst_root, x)):
os.symlink(os.path.join(src_root, x), os.path.join(dst_root, x))
| 38.380952 | 84 | 0.722084 | 142 | 806 | 3.971831 | 0.260563 | 0.085106 | 0.106383 | 0.113475 | 0.897163 | 0.897163 | 0.897163 | 0.897163 | 0.897163 | 0.716312 | 0 | 0.005556 | 0.1067 | 806 | 20 | 85 | 40.3 | 0.777778 | 0 | 0 | 0.625 | 0 | 0 | 0.327543 | 0.327543 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d0aae0589810630a310b1ec76f4b67e843e8c82f | 20,693 | py | Python | pybind/slxos/v16r_1_00b/sub_interface_statistics_state/vlan_statistics/port_statistics/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/sub_interface_statistics_state/vlan_statistics/port_statistics/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/sub_interface_statistics_state/vlan_statistics/port_statistics/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class port_statistics(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-nsm-operational - based on the path /sub-interface-statistics-state/vlan-statistics/port-statistics. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: port statistics
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__rx_packets','__tx_packets','__rx_bytes','__tx_bytes','__port_id','__port_name',)
_yang_name = 'port-statistics'
_rest_name = 'port-statistics'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__rx_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-packets", rest_name="rx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
self.__rx_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-bytes", rest_name="rx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
self.__tx_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-packets", rest_name="tx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
self.__port_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="port-name", rest_name="port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='string', is_config=False)
self.__tx_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-bytes", rest_name="tx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
self.__port_id = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="port-id", rest_name="port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint32', is_config=False)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'sub-interface-statistics-state', u'vlan-statistics', u'port-statistics']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'sub-interface-statistics-state', u'vlan-statistics', u'port-statistics']
def _get_rx_packets(self):
"""
Getter method for rx_packets, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/rx_packets (uint64)
YANG Description: Received Packets count in Lif
"""
return self.__rx_packets
def _set_rx_packets(self, v, load=False):
"""
Setter method for rx_packets, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/rx_packets (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_rx_packets is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_rx_packets() directly.
YANG Description: Received Packets count in Lif
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-packets", rest_name="rx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """rx_packets must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-packets", rest_name="rx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)""",
})
self.__rx_packets = t
if hasattr(self, '_set'):
self._set()
def _unset_rx_packets(self):
self.__rx_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-packets", rest_name="rx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
def _get_tx_packets(self):
"""
Getter method for tx_packets, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/tx_packets (uint64)
YANG Description: Transmitted Packets count in Lif
"""
return self.__tx_packets
def _set_tx_packets(self, v, load=False):
"""
Setter method for tx_packets, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/tx_packets (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_tx_packets is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_tx_packets() directly.
YANG Description: Transmitted Packets count in Lif
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-packets", rest_name="tx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """tx_packets must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-packets", rest_name="tx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)""",
})
self.__tx_packets = t
if hasattr(self, '_set'):
self._set()
def _unset_tx_packets(self):
self.__tx_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-packets", rest_name="tx-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
def _get_rx_bytes(self):
"""
Getter method for rx_bytes, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/rx_bytes (uint64)
YANG Description: Received Bytes count in Lif
"""
return self.__rx_bytes
def _set_rx_bytes(self, v, load=False):
"""
Setter method for rx_bytes, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/rx_bytes (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_rx_bytes is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_rx_bytes() directly.
YANG Description: Received Bytes count in Lif
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-bytes", rest_name="rx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """rx_bytes must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-bytes", rest_name="rx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)""",
})
self.__rx_bytes = t
if hasattr(self, '_set'):
self._set()
def _unset_rx_bytes(self):
self.__rx_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="rx-bytes", rest_name="rx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
def _get_tx_bytes(self):
"""
Getter method for tx_bytes, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/tx_bytes (uint64)
YANG Description: Transmitted Bytes count in Lif
"""
return self.__tx_bytes
def _set_tx_bytes(self, v, load=False):
"""
Setter method for tx_bytes, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/tx_bytes (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_tx_bytes is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_tx_bytes() directly.
YANG Description: Transmitted Bytes count in Lif
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-bytes", rest_name="tx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """tx_bytes must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-bytes", rest_name="tx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)""",
})
self.__tx_bytes = t
if hasattr(self, '_set'):
self._set()
def _unset_tx_bytes(self):
self.__tx_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="tx-bytes", rest_name="tx-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint64', is_config=False)
def _get_port_id(self):
"""
Getter method for port_id, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/port_id (uint32)
YANG Description: port index
"""
return self.__port_id
def _set_port_id(self, v, load=False):
"""
Setter method for port_id, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/port_id (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_port_id is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_port_id() directly.
YANG Description: port index
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="port-id", rest_name="port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """port_id must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="port-id", rest_name="port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint32', is_config=False)""",
})
self.__port_id = t
if hasattr(self, '_set'):
self._set()
def _unset_port_id(self):
self.__port_id = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="port-id", rest_name="port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='uint32', is_config=False)
def _get_port_name(self):
"""
Getter method for port_name, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/port_name (string)
YANG Description: port name
"""
return self.__port_name
def _set_port_name(self, v, load=False):
"""
Setter method for port_name, mapped from YANG variable /sub_interface_statistics_state/vlan_statistics/port_statistics/port_name (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_port_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_port_name() directly.
YANG Description: port name
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="port-name", rest_name="port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='string', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """port_name must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="port-name", rest_name="port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='string', is_config=False)""",
})
self.__port_name = t
if hasattr(self, '_set'):
self._set()
def _unset_port_name(self):
self.__port_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="port-name", rest_name="port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-nsm-operational', defining_module='brocade-nsm-operational', yang_type='string', is_config=False)
rx_packets = __builtin__.property(_get_rx_packets)
tx_packets = __builtin__.property(_get_tx_packets)
rx_bytes = __builtin__.property(_get_rx_bytes)
tx_bytes = __builtin__.property(_get_tx_bytes)
port_id = __builtin__.property(_get_port_id)
port_name = __builtin__.property(_get_port_name)
_pyangbind_elements = {'rx_packets': rx_packets, 'tx_packets': tx_packets, 'rx_bytes': rx_bytes, 'tx_bytes': tx_bytes, 'port_id': port_id, 'port_name': port_name, }
| 62.896657 | 454 | 0.740975 | 2,794 | 20,693 | 5.209377 | 0.068003 | 0.042597 | 0.050017 | 0.046376 | 0.842116 | 0.816627 | 0.800687 | 0.779114 | 0.765441 | 0.765441 | 0 | 0.028271 | 0.13507 | 20,693 | 328 | 455 | 63.088415 | 0.784948 | 0.186488 | 0 | 0.447917 | 0 | 0.03125 | 0.341178 | 0.196579 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109375 | false | 0 | 0.041667 | 0 | 0.270833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d0b0ed31596ad2fe0567956ad0518fe606a873ff | 321 | py | Python | src/webapp/model/load.py | Ciprian15/DeepDoc | 4ae22030db2a7081cb179e52ac6c1c896f528fb1 | [
"MIT"
] | null | null | null | src/webapp/model/load.py | Ciprian15/DeepDoc | 4ae22030db2a7081cb179e52ac6c1c896f528fb1 | [
"MIT"
] | null | null | null | src/webapp/model/load.py | Ciprian15/DeepDoc | 4ae22030db2a7081cb179e52ac6c1c896f528fb1 | [
"MIT"
] | null | null | null | import os
import json
import joblib  # sklearn.externals.joblib was deprecated and later removed; the standalone joblib package is the drop-in replacement
def get_model(dataset_name,version="latest"):
return joblib.load("models/{}/model.{}".format(dataset_name,version))
def get_label_encoder(dataset_name,version="latest"):
return joblib.load("models/{}/label_encoder.{}".format(dataset_name,version))
| 22.928571 | 81 | 0.760125 | 43 | 321 | 5.488372 | 0.465116 | 0.186441 | 0.305085 | 0.20339 | 0.389831 | 0.389831 | 0.389831 | 0.389831 | 0 | 0 | 0 | 0 | 0.093458 | 321 | 13 | 82 | 24.692308 | 0.810997 | 0 | 0 | 0 | 0 | 0 | 0.174455 | 0.080997 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.428571 | 0.285714 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d0cadcd56a7c1063c591a70f3ffcc6ebee10fbcc | 43 | py | Python | python_code.py | duhita123/Coursera_Capstone | 746cddd5bf00a6d88aef2dc9a8c11dbb445aa17b | [
"MIT"
] | null | null | null | python_code.py | duhita123/Coursera_Capstone | 746cddd5bf00a6d88aef2dc9a8c11dbb445aa17b | [
"MIT"
] | null | null | null | python_code.py | duhita123/Coursera_Capstone | 746cddd5bf00a6d88aef2dc9a8c11dbb445aa17b | [
"MIT"
] | null | null | null | print("Hello github, I am Deep Learning")
| 14.333333 | 41 | 0.744186 | 7 | 43 | 4.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 2 | 42 | 21.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ef89133aaf03c313a774f7002ecb0965cc2fe939 | 25 | py | Python | time_execution/__init__.py | eigenein/py-timeexecution | 497e74d55833c13cca949826600d7eb64a7a96c1 | [
"Apache-2.0"
] | null | null | null | time_execution/__init__.py | eigenein/py-timeexecution | 497e74d55833c13cca949826600d7eb64a7a96c1 | [
"Apache-2.0"
] | null | null | null | time_execution/__init__.py | eigenein/py-timeexecution | 497e74d55833c13cca949826600d7eb64a7a96c1 | [
"Apache-2.0"
] | null | null | null | from .decorator import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
efb808f8138788c7f2af85e2a00edfacf6c2c71e | 49 | py | Python | gettoken.py | taygetea/ankifriends | 1f87077cfe3a3e5cb8d178fb4b6970f404775195 | [
"MIT"
] | null | null | null | gettoken.py | taygetea/ankifriends | 1f87077cfe3a3e5cb8d178fb4b6970f404775195 | [
"MIT"
] | null | null | null | gettoken.py | taygetea/ankifriends | 1f87077cfe3a3e5cb8d178fb4b6970f404775195 | [
"MIT"
] | null | null | null | import localserv
print(localserv.serverrunner())
| 12.25 | 30 | 0.836735 | 5 | 49 | 8.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 3 | 31 | 16.333333 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
4bdef0f78f1db7ec659ab6f20afaad849f17a2d3 | 151 | py | Python | Funny_Js_Crack/54-geetest2/__init__.py | qqizai/Func_Js_Crack | 8cc8586107fecace4b71d0519cfbc760584171b1 | [
"MIT"
] | 18 | 2020-12-09T06:49:46.000Z | 2022-01-27T03:20:36.000Z | Funny_Js_Crack/54-geetest2/__init__.py | useafter/Func_Js_Crack | 8cc8586107fecace4b71d0519cfbc760584171b1 | [
"MIT"
] | null | null | null | Funny_Js_Crack/54-geetest2/__init__.py | useafter/Func_Js_Crack | 8cc8586107fecace4b71d0519cfbc760584171b1 | [
"MIT"
] | 9 | 2020-12-20T08:52:09.000Z | 2021-12-19T09:13:09.000Z | # -*- coding: utf-8 -*-
# @Time : 2019/9/7 15:22
# @Author : Esbiya
# @Email : 18829040039@163.com
# @File : __init__.py
# @Software: PyCharm
| 21.571429 | 32 | 0.576159 | 20 | 151 | 4.15 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213675 | 0.225166 | 151 | 6 | 33 | 25.166667 | 0.495727 | 0.913907 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
089ce76de28112bdc8db9d75c923772bb4f31709 | 182 | py | Python | bark_ml/evaluators/__init__.py | bark-simulator/rl | 84f9c74b60becbc4bc758e19b201d85a21880717 | [
"MIT"
] | 58 | 2019-10-07T12:10:27.000Z | 2022-03-01T08:08:47.000Z | bark_ml/evaluators/__init__.py | bark-simulator/rl | 84f9c74b60becbc4bc758e19b201d85a21880717 | [
"MIT"
] | 31 | 2019-09-10T15:33:20.000Z | 2022-03-30T08:52:08.000Z | bark_ml/evaluators/__init__.py | bark-simulator/rl | 84f9c74b60becbc4bc758e19b201d85a21880717 | [
"MIT"
] | 14 | 2019-10-01T08:23:37.000Z | 2021-12-16T15:55:38.000Z | from bark_ml.evaluators.general_evaluator import GeneralEvaluator # pylint: disable=unused-import
from bark_ml.evaluators.evaluator_configs import * # pylint: disable=unused-import | 91 | 98 | 0.846154 | 23 | 182 | 6.521739 | 0.521739 | 0.106667 | 0.133333 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082418 | 182 | 2 | 99 | 91 | 0.898204 | 0.324176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08bbbbc7e4e9bef63d2c09cabfe9f93cf814bf2e | 33 | py | Python | django2_alexa/interfaces/response/__init__.py | timwoocker/django-alexa | 59f7e814de315992f3f703b6c6a742d20c726bdb | [
"Apache-2.0"
] | 2 | 2018-11-17T22:20:15.000Z | 2018-11-17T22:20:17.000Z | django2_alexa/interfaces/response/__init__.py | timwoocker/django-alexa | 59f7e814de315992f3f703b6c6a742d20c726bdb | [
"Apache-2.0"
] | null | null | null | django2_alexa/interfaces/response/__init__.py | timwoocker/django-alexa | 59f7e814de315992f3f703b6c6a742d20c726bdb | [
"Apache-2.0"
] | 2 | 2018-11-19T15:21:30.000Z | 2020-02-21T15:22:09.000Z | from .alexa import AlexaResponse
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ef81911c56629b8820920dd6059246cf9f056cf | 231 | py | Python | bot/modulo/sys.py | mjhunior/bot-telegram | 80cd0a96b94256ea5c9b56be625989f343f643e2 | [
"MIT"
] | null | null | null | bot/modulo/sys.py | mjhunior/bot-telegram | 80cd0a96b94256ea5c9b56be625989f343f643e2 | [
"MIT"
] | null | null | null | bot/modulo/sys.py | mjhunior/bot-telegram | 80cd0a96b94256ea5c9b56be625989f343f643e2 | [
"MIT"
] | null | null | null | import os
import subprocess
def temperatura():
    return [
        subprocess.check_output(["vcgencmd", "measure_temp"])
    ]
def reiniciar():
    return [
        subprocess.check_output(["sudo", "shutdown", "-r", "now"])
    ]
| 17.769231 | 66 | 0.614719 | 23 | 231 | 6.043478 | 0.695652 | 0.230216 | 0.302158 | 0.388489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225108 | 231 | 12 | 67 | 19.25 | 0.776536 | 0 | 0 | 0.2 | 0 | 0 | 0.164502 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0.2 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
4128e8cecf4d25349ef46896bae75a592a03b3e7 | 188 | py | Python | Server/Python/src/dbs/dao/MySQL/Site/GetID.py | vkuznet/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 8 | 2015-08-14T04:01:32.000Z | 2021-06-03T00:56:42.000Z | Server/Python/src/dbs/dao/MySQL/Site/GetID.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 162 | 2015-01-07T21:34:47.000Z | 2021-10-13T09:42:41.000Z | Server/Python/src/dbs/dao/MySQL/Site/GetID.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 16 | 2015-01-22T15:27:29.000Z | 2021-04-28T09:23:28.000Z | #!/usr/bin/env python
"""
This module provides Site.GetID data access object.
"""
from dbs.dao.Oracle.Site.GetID import GetID as OraSiteGetID
class GetID(OraSiteGetID):
pass
| 18.8 | 59 | 0.712766 | 26 | 188 | 5.153846 | 0.807692 | 0.134328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180851 | 188 | 9 | 60 | 20.888889 | 0.87013 | 0.382979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f5ba6374375c0cdd0331d098293e373adf92a941 | 33 | py | Python | src/Locators/__init__.py | Artem0791/Hackathon18_09 | 15f7e6c14264a574dc3efc42c5edd03e39b8dab8 | [
"MIT"
] | 1 | 2021-09-17T18:26:33.000Z | 2021-09-17T18:26:33.000Z | src/Locators/__init__.py | Artem0791/Hackathon18_09 | 15f7e6c14264a574dc3efc42c5edd03e39b8dab8 | [
"MIT"
] | null | null | null | src/Locators/__init__.py | Artem0791/Hackathon18_09 | 15f7e6c14264a574dc3efc42c5edd03e39b8dab8 | [
"MIT"
] | 3 | 2021-09-18T10:06:32.000Z | 2021-09-18T20:50:29.000Z | from .LoginPage import LoginPage
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f5c028f77eea3fa45b3a26bce7420c42280d3f60 | 1,802 | py | Python | wagtail_draftail_snippet/views.py | TMFRook/wagtail-draftail-snippet | 1d8d76655b8d544e75884e168f1193b1bfaf02e2 | [
"BSD-3-Clause"
] | 15 | 2020-01-17T20:38:53.000Z | 2022-03-08T10:02:09.000Z | wagtail_draftail_snippet/views.py | TMFRook/wagtail-draftail-snippet | 1d8d76655b8d544e75884e168f1193b1bfaf02e2 | [
"BSD-3-Clause"
] | 11 | 2020-02-13T14:02:19.000Z | 2022-03-08T10:51:09.000Z | wagtail_draftail_snippet/views.py | TMFRook/wagtail-draftail-snippet | 1d8d76655b8d544e75884e168f1193b1bfaf02e2 | [
"BSD-3-Clause"
] | 5 | 2020-03-03T14:09:15.000Z | 2021-12-13T22:56:53.000Z | from django.template.loader import TemplateDoesNotExist, get_template
from wagtail.admin.modal_workflow import render_modal_workflow
from wagtail.snippets.models import get_snippet_models
from .utils import get_snippet_link_frontend_template, get_snippet_embed_frontend_template
def choose_snippet_link_model(request):
snippet_model_opts = []
# Only display those snippet models which have snippet link frontend template
for snippet_model in get_snippet_models():
snippet_frontend_template = get_snippet_link_frontend_template(
snippet_model._meta.app_label, snippet_model._meta.model_name
)
try:
get_template(snippet_frontend_template)
snippet_model_opts.append(snippet_model._meta)
except TemplateDoesNotExist:
pass
return render_modal_workflow(
request,
"wagtail_draftail_snippet/choose_snippet_model.html",
None,
{"snippet_model_opts": snippet_model_opts},
json_data={"step": "choose"},
)
def choose_snippet_embed_model(request):
snippet_model_opts = []
# Only display those snippet models which have snippet embed frontend template
for snippet_model in get_snippet_models():
snippet_frontend_template = get_snippet_embed_frontend_template(
snippet_model._meta.app_label, snippet_model._meta.model_name
)
try:
get_template(snippet_frontend_template)
snippet_model_opts.append(snippet_model._meta)
except TemplateDoesNotExist:
pass
return render_modal_workflow(
request,
"wagtail_draftail_snippet/choose_snippet_model.html",
None,
{"snippet_model_opts": snippet_model_opts},
json_data={"step": "choose"},
)
| 32.763636 | 90 | 0.719756 | 206 | 1,802 | 5.859223 | 0.223301 | 0.178956 | 0.106048 | 0.092792 | 0.81193 | 0.797017 | 0.797017 | 0.753935 | 0.753935 | 0.753935 | 0 | 0 | 0.219756 | 1,802 | 54 | 91 | 33.37037 | 0.858464 | 0.084351 | 0 | 0.7 | 0 | 0 | 0.094718 | 0.060716 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.05 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f5c8f91cb4fed314b305520b2b9b2335e20c6b81 | 105 | py | Python | pytypist/lessons/__init__.py | simongarisch/pytypist | 22c94f27480654eb0fa344d2b472691c9d6bf48d | [
"MIT"
] | null | null | null | pytypist/lessons/__init__.py | simongarisch/pytypist | 22c94f27480654eb0fa344d2b472691c9d6bf48d | [
"MIT"
] | 5 | 2020-06-30T00:13:18.000Z | 2020-11-03T21:01:46.000Z | pytypist/lessons/__init__.py | simongarisch/pytypist | 22c94f27480654eb0fa344d2b472691c9d6bf48d | [
"MIT"
] | null | null | null | from .lessons import Lesson, Lessons # noqa: F401
from .sections import Section, Sections # noqa: F401
| 35 | 53 | 0.752381 | 14 | 105 | 5.642857 | 0.571429 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.171429 | 105 | 2 | 54 | 52.5 | 0.83908 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
de2275149bf3a07ad35242efaa61964f647327ea | 180 | py | Python | kollect/schema/__init__.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | [
"Apache-2.0"
] | null | null | null | kollect/schema/__init__.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | [
"Apache-2.0"
] | null | null | null | kollect/schema/__init__.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | [
"Apache-2.0"
] | null | null | null | from .collection import * # NOQA
from .comment import * # NOQA
from .decimal import * # NOQA
from .item import * # NOQA
from .like import * # NOQA
from .user import * # NOQA
| 25.714286 | 33 | 0.666667 | 24 | 180 | 5 | 0.375 | 0.5 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 180 | 6 | 34 | 30 | 0.869565 | 0.161111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dea5f652c07901f35c432654bfe65be601fa0b5b | 31 | py | Python | api/__init__.py | haolinnie/MaskTextSpotter | 928df1863c88f373b2a40e575986d12076393225 | [
"CNRI-Python"
] | null | null | null | api/__init__.py | haolinnie/MaskTextSpotter | 928df1863c88f373b2a40e575986d12076393225 | [
"CNRI-Python"
] | null | null | null | api/__init__.py | haolinnie/MaskTextSpotter | 928df1863c88f373b2a40e575986d12076393225 | [
"CNRI-Python"
] | null | null | null | from api.app import create_app
| 15.5 | 30 | 0.83871 | 6 | 31 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dedc54d63de7370924ec41f92e895ed4a1b679c1 | 32 | py | Python | lisa/regpotential/__init__.py | qinqian/lisa | 9fef7cb682264bdef5cca6847d09acf6c92a08f2 | [
"MIT"
] | 23 | 2017-05-10T12:55:25.000Z | 2022-03-07T16:35:47.000Z | lisa/regpotential/__init__.py | liulab-dfci/lisa | c60207a9e58efe943374aa7e002f61ee29042532 | [
"MIT"
] | 7 | 2019-10-15T15:18:44.000Z | 2021-09-15T17:44:46.000Z | lisa/regpotential/__init__.py | qinqian/lisa | 9fef7cb682264bdef5cca6847d09acf6c92a08f2 | [
"MIT"
] | 3 | 2017-12-13T00:32:03.000Z | 2020-04-15T14:25:42.000Z | import lisa._bw as regpotential
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
def6858c3b0a65840ee796ebec4715ab299058d1 | 21,368 | py | Python | Summary Results.py | hemanshudas/htnmodel | 72718ec63993aaffecea7ed598411080dc128bfc | [
"MIT"
] | null | null | null | Summary Results.py | hemanshudas/htnmodel | 72718ec63993aaffecea7ed598411080dc128bfc | [
"MIT"
] | null | null | null | Summary Results.py | hemanshudas/htnmodel | 72718ec63993aaffecea7ed598411080dc128bfc | [
"MIT"
] | null | null | null | import os
import numpy
import pandas as pd
import scipy.stats as st
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs')
def summary_cost(int_details, ctrl_m, ctrl_f, trt_m, trt_f, text):
    """Summarise incremental outcomes of an intervention versus control.

    Compares treated (trt_*) against control (ctrl_*) cohorts for men and
    women, weights results by population share, and computes DALYs averted,
    hypertension/CVD cost changes, ICER, budget impact and 95% CIs across
    PSA samples. Relies on module-level discount_rate, time_horizon and
    prog_cost being defined elsewhere in this script.
    """
    int_dwc = 1 / (1 + discount_rate) ** numpy.arange(time_horizon)
int_c = numpy.array([[prog_cost] * time_horizon for i in range(1)])
int_cost = numpy.sum(numpy.dot(int_c, int_dwc))
female_pop = 188340000
male_pop = 196604000
pop = female_pop + male_pop
f_prop = female_pop / pop
m_prop = male_pop / pop
samples = ctrl_m.shape[0]
cs = 0
nq = 0
ic = [0.00 for i in range(samples)]
q_gained = [0.00 for i in range(samples)]
q_inc_percent = [0.00 for i in range(samples)]
htn_cost = [0.00 for i in range(samples)]
cvd_cost = [0.00 for i in range(samples)]
net_cost = [0.00 for i in range(samples)]
exp_inc_per = [0.00 for i in range(samples)]
    # Per-sample incremental outcomes: DALYs averted and cost changes,
    # population-weighted across male and female cohorts.
    for i in range(samples):
q_gained[i] = (((ctrl_m.loc[i, "Average DALYs"] - trt_m.loc[i, "Average DALYs"])* m_prop) + ((ctrl_f.loc[i, "Average DALYs"] - trt_f.loc[i, "Average DALYs"])* f_prop))
q_inc_percent[i] = q_gained[i] * 100/((ctrl_m.loc[i, "Average DALYs"] * m_prop) + (ctrl_f.loc[i, "Average DALYs"] *f_prop))
htn_cost[i] = int_cost + ((trt_m.loc[i, "Average HTN Cost"] - ctrl_m.loc[i, "Average HTN Cost"]) * m_prop) + ((trt_f.loc[i, "Average HTN Cost"] - ctrl_f.loc[i, "Average HTN Cost"]) * f_prop)
cvd_cost[i] = ((trt_m.loc[i, "Average CVD Cost"] - ctrl_m.loc[i, "Average CVD Cost"] + trt_m.loc[i, "Average Chronic Cost"] - ctrl_m.loc[i, "Average Chronic Cost"]) * m_prop) + ((trt_f.loc[i, "Average CVD Cost"] - ctrl_f.loc[i, "Average CVD Cost"] + trt_f.loc[i, "Average Chronic Cost"] - ctrl_f.loc[i, "Average Chronic Cost"]) * f_prop)
exp_inc_per[i] = (((trt_m.loc[i, "Average Cost"] - ctrl_m.loc[i, "Average Cost"] + int_cost) * m_prop) + ((trt_f.loc[i, "Average Cost"] - ctrl_f.loc[i, "Average Cost"] + int_cost) * f_prop)) * 100 / ((ctrl_m.loc[i, "Average Cost"] * m_prop ) + (ctrl_f.loc[i, "Average Cost"] * f_prop))
net_cost[i] = htn_cost[i] + cvd_cost[i]
ic[i] = net_cost[i] / q_gained[i]
if net_cost[i] < 0:
cs = cs + 1
if q_gained[i] < 0:
nq = nq + 1
budget_impact = numpy.mean(net_cost) * pop / time_horizon
htn_percap = numpy.mean(htn_cost) / time_horizon
cvd_percap = numpy.mean(cvd_cost) / time_horizon
htn_annual = numpy.mean(htn_cost) * pop / time_horizon
cvd_annual = numpy.mean(cvd_cost) * pop / time_horizon
cost_inc = numpy.mean(exp_inc_per)
ICER = numpy.mean(ic)
QALY = numpy.mean(q_inc_percent)
HTN = numpy.mean(htn_cost)
CVD = numpy.mean(cvd_cost)
icer_95 = st.t.interval(0.95, samples - 1, loc=numpy.mean(ic), scale=st.sem(ic))
qaly_95 = st.t.interval(0.95, samples - 1, loc=numpy.mean(q_inc_percent), scale=st.sem(q_inc_percent))
htn = st.t.interval(0.95, samples - 1, loc=numpy.mean(htn_cost), scale=st.sem(htn_cost))
cvd = st.t.interval(0.95, samples - 1, loc=numpy.mean(cvd_cost), scale=st.sem(cvd_cost))
cost_inc_95 = st.t.interval(0.95, samples - 1, loc=numpy.mean(exp_inc_per), scale=st.sem(exp_inc_per))
if budget_impact < 0:
m_icer = 'Cost Saving'
s_icer = 'CS'
else:
m_icer = numpy.mean(net_cost) / numpy.mean(q_gained)
s_icer = str(numpy.round(m_icer,1))
m_daly = str(numpy.round(QALY,3)) + "\n(" + str(numpy.round(qaly_95[0],3)) + " to " + str(numpy.round(qaly_95[1],3)) + ")"
m_htn = str(numpy.round(HTN,2)) + "\n(" + str(numpy.round(htn[0],2)) + " to " + str(numpy.round(htn[1],2)) + ")"
m_cvd = str(numpy.round(CVD,2)) + "\n(" + str(numpy.round(cvd[0],2)) + " to " + str(numpy.round(cvd[1],2)) + ")"
m_costinc = str(numpy.round(cost_inc, 2)) + "\n(" + str(numpy.round(cost_inc_95[0], 2)) + " to " + str(numpy.round(cost_inc_95[1], 2)) + ")"
m_budget = str(numpy.round(budget_impact,0)/1000)
err_cost = 1.96 * st.sem(exp_inc_per)
err_daly = 1.96 * st.sem(q_inc_percent)
str_icer = text + " (" + s_icer + ")"
detailed = [int_details[2], int_details[0], int_details[1], int_details[3], int_details[4], ICER, icer_95[0],icer_95[1], QALY, qaly_95[0], qaly_95[1], htn[0], htn[1], cvd[0], cvd[1], budget_impact, htn_annual, cvd_annual, htn_percap, cvd_percap, cs, nq]
manuscript = [int_details[2], int_details[0], int_details[1], int_details[3], int_details[4], m_icer, m_daly, m_costinc, m_htn, m_cvd, m_budget, cs]
plot = [text, str_icer, cost_inc, QALY, err_cost, err_daly]
return detailed, manuscript, plot
summary_output = []
appendix_output = []
plot_output = []
'''Analysis 0: Baseline'''
time_horizon = 20
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/15Aug_AWS3')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Base Case')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 1: Doubled Medication Cost'''
time_horizon = 20
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PSAFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6, 2, 0, 20]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8, 2, 0, 20]
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'2X Medication Cost')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 2: Increased Programmatic Cost'''
time_horizon = 20
prog_cost = 0.13*4
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/15Aug_AWS3')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'4X Programmatic Cost')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 3: 20% reduction in baseline CVD risk'''
time_horizon = 20
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PSAFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6, 1, 0.2, 20]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8, 1, 0.2, 20]
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Reduced Baseline Risk')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 4: NPCDCS Medication Protocol'''
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/15Aug_AWS3')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 0, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_m = pd.read_csv(file_name_m)
treatment_f = pd.read_csv(file_name_f)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'NPCDCS Treatment Guideline')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 5: Private Sector Cost'''
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PvtFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_m = pd.read_csv(file_name_m)
treatment_f = pd.read_csv(file_name_f)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Private Sector')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 6: PubPvt Mix Cost'''
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PubPvtFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_m = pd.read_csv(file_name_m)
treatment_f = pd.read_csv(file_name_f)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Public-Private Mix')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
#Analysis 7: 10-year Time Horizon#
time_horizon = 10
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/10yFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6, 1, 0, 10]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8, 1, 0, 10]
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'10 year Horizon')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 8: 40-year Time Horizon'''
time_horizon = 40
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/40yFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6, 1, 0, 40]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8, 1, 0, 40]
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) +"_CF_"+ str(fname[5]) + "_RR_"+ str(fname[6]) + "_TH_"+ str(fname[7]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'40 year Horizon')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 9: Inclusion of Pill Disutility'''
time_horizon = 20
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PillDisFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 1, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Pill Disutility')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
'''Analysis 10: Pill Disutility with NPCDCS Protocol'''
time_horizon = 20
prog_cost = 0.13
discount_rate = 0.03
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/PillDisFinal')
fname = [0.4, 0.3, 0, 0.8, 0.6]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
control_m = pd.read_csv(file_name_m)
control_f = pd.read_csv(file_name_f)
fname = [0.7, 0.7, 0, 0.8, 0.8]
file_name_m = ("Aspire_Male_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
file_name_f = ("Aspire_Female_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) + "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) + "_Per_" + str(
fname[4]) + ".csv")
treatment_f = pd.read_csv(file_name_f)
treatment_m = pd.read_csv(file_name_m)
res = summary_cost(fname, control_m, control_f, treatment_m, treatment_f,'Pill Disutility with NPCDCS')
summary_output.append(res[0])
appendix_output.append(res[1])
plot_output.append(res[2])
os.chdir('/Users/jarvis/Dropbox/Apps/HypertensionOutputs/')
manuscript_results = pd.DataFrame(summary_output,
                                  columns=['Protocol', 'Coverage', 'Adherence', 'Initiation', 'Persistence', 'ICER', 'ICER_lower', 'ICER_upper', '% DALYs', '% DALYs_lower', '% DALYs_upper', 'HTN_lower', 'HTN_upper',
                                           'CVD_lower', 'CVD_upper', 'budget', 'annual htn', 'annual cvd', 'HTN percapita', 'CVD percapita', 'p_costsaving', 'p_negativeDALY'])
manuscript_results.to_csv('ConsolidatedSensitivityResult_30Aug.csv')
manuscript_text = pd.DataFrame(appendix_output, columns = ['Protocol', 'Coverage', 'Adherence', 'Initiation', 'Persistence','ICER', 'DALYs Averted', 'Increase in Cost','HTN Cost', 'CVD Cost', 'Budget 000s', 'Cost Saving Scenarios'])
manuscript_text.to_csv("ConsolidatedSensitivityText_30Aug.csv")
manuscript_plot = pd.DataFrame(plot_output, columns = ['Scenario', 'ICER', 'Cost', 'DALY', 'Error_Cost', 'Error_DALY'])
manuscript_plot.to_csv("ConsolidatedPlot_30Aug.csv")
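The same file-name expression is rebuilt inline for every analysis above; it could be factored into one helper. A minimal sketch, assuming the naming pattern seen in this script (the `build_file_name` helper and its parameter order are illustrative, not part of the original pipeline):

```python
def build_file_name(sex, fname):
    """Build an Aspire output file name from a scenario parameter list."""
    # Base pattern shared by every analysis: coverage, compliance,
    # protocol, initiation, persistence.
    name = ("Aspire_" + sex + "_Cov_" + str(fname[0]) + "_Comp_" + str(fname[1]) +
            "_Pro_" + str(round(fname[2])) + "_Ini_" + str(fname[3]) +
            "_Per_" + str(fname[4]))
    # Sensitivity-analysis runs append cost factor, risk reduction and horizon.
    if len(fname) > 5:
        name += "_CF_" + str(fname[5]) + "_RR_" + str(fname[6]) + "_TH_" + str(fname[7])
    return name + ".csv"
```

With this helper, each analysis block would reduce to `control_m = pd.read_csv(build_file_name("Male", fname))` and so on, removing the copy-paste risk of swapped male/female file names.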
| 54.233503 | 346 | 0.628042 | 3,486 | 21,368 | 3.513769 | 0.053643 | 0.146298 | 0.05927 | 0.043106 | 0.817944 | 0.795249 | 0.757205 | 0.739489 | 0.702506 | 0.702506 | 0 | 0.044337 | 0.16403 | 21,368 | 393 | 347 | 54.371501 | 0.641382 | 0.001498 | 0 | 0.659236 | 0 | 0 | 0.18726 | 0.03979 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003185 | false | 0 | 0.012739 | 0 | 0.019108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9d0c128888ae3cc740885001e18e94bc80daa71a | 27 | py | Python | devel/lib/python2.7/dist-packages/whycon/msg/__init__.py | ankit131199/Rebellious-Cowards | 56ec395147f2fc59a26669a74a04fe02227bc7b7 | [
"BSD-2-Clause"
] | null | null | null | devel/lib/python2.7/dist-packages/whycon/msg/__init__.py | ankit131199/Rebellious-Cowards | 56ec395147f2fc59a26669a74a04fe02227bc7b7 | [
"BSD-2-Clause"
] | null | null | null | devel/lib/python2.7/dist-packages/whycon/msg/__init__.py | ankit131199/Rebellious-Cowards | 56ec395147f2fc59a26669a74a04fe02227bc7b7 | [
"BSD-2-Clause"
] | 3 | 2020-10-01T15:22:00.000Z | 2020-10-01T17:06:55.000Z | from ._Projection import *
| 13.5 | 26 | 0.777778 | 3 | 27 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d1bbc994391c9c1092fa6b4fa8fc7e46d74f6b9 | 3,113 | py | Python | tests/signal/test_Encoder4To2.py | jamesjiang52/Bitwise | c71f151d23034b3f9e2a939f637be0eaa16c45c3 | [
"MIT"
] | null | null | null | tests/signal/test_Encoder4To2.py | jamesjiang52/Bitwise | c71f151d23034b3f9e2a939f637be0eaa16c45c3 | [
"MIT"
] | null | null | null | tests/signal/test_Encoder4To2.py | jamesjiang52/Bitwise | c71f151d23034b3f9e2a939f637be0eaa16c45c3 | [
"MIT"
] | null | null | null | import bitwise as bw
class TestEncoder4To2:
    def test_Encoder4To2(self):
        enable = bw.wire.Wire()
        input_1 = bw.wire.Wire()
        input_2 = bw.wire.Wire()
        input_3 = bw.wire.Wire()
        input_4 = bw.wire.Wire()
        valid = bw.wire.Wire()
        output_1 = bw.wire.Wire()
        output_2 = bw.wire.Wire()
        input_bus = bw.wire.Bus4(input_1, input_2, input_3, input_4)

        a = bw.signal.Encoder4To2(enable, input_bus, valid, output_1, output_2)

        enable.value = 0
        input_1.value = 0
        input_2.value = 0
        input_3.value = 0
        input_4.value = 0
        assert valid.value == 0
        assert output_1.value == 0
        assert output_2.value == 0

        enable.value = 0
        input_1.value = 1
        input_2.value = 1
        input_3.value = 1
        input_4.value = 1
        assert valid.value == 0
        assert output_1.value == 0
        assert output_2.value == 0

        enable.value = 1
        input_1.value = 0
        input_2.value = 0
        input_3.value = 0
        input_4.value = 0
        assert valid.value == 0
        assert output_1.value == 0
        assert output_2.value == 0

        enable.value = 1
        input_1.value = 0
        input_2.value = 0
        input_3.value = 0
        input_4.value = 1
        assert valid.value == 1
        assert output_1.value == 0
        assert output_2.value == 0

        enable.value = 1
        input_1.value = 0
        input_2.value = 0
        input_3.value = 1
        input_4.value = 0
        assert valid.value == 1
        assert output_1.value == 0
        assert output_2.value == 1

        enable.value = 1
        input_1.value = 0
        input_2.value = 0
        input_3.value = 1
        input_4.value = 1
        assert valid.value == 1
        assert output_1.value == 0
        assert output_2.value == 1

        enable.value = 1
        input_1.value = 0
        input_2.value = 1
        input_3.value = 0
        input_4.value = 0
        assert valid.value == 1
        assert output_1.value == 1
        assert output_2.value == 0

        enable.value = 1
        input_1.value = 0
        input_2.value = 1
        input_3.value = 1
        input_4.value = 1
        assert valid.value == 1
        assert output_1.value == 1
        assert output_2.value == 0

        enable.value = 1
        input_1.value = 1
        input_2.value = 0
        input_3.value = 0
        input_4.value = 0
        assert valid.value == 1
        assert output_1.value == 1
        assert output_2.value == 1

        enable.value = 1
        input_1.value = 1
        input_2.value = 1
        input_3.value = 1
        input_4.value = 1
        assert valid.value == 1
        assert output_1.value == 1
        assert output_2.value == 1

        print(a.__doc__)
        print(a)

        a(
            enable=1,
            input_bus=(0, 0, 0, 1),
            valid=None,
            output_1=None,
            output_2=None
        )
        assert valid.value == 1
        assert output_1.value == 0
        assert output_2.value == 0
| 25.727273 | 79 | 0.526181 | 439 | 3,113 | 3.539863 | 0.066059 | 0.162162 | 0.14157 | 0.138996 | 0.79601 | 0.775418 | 0.767053 | 0.767053 | 0.767053 | 0.767053 | 0 | 0.089109 | 0.383553 | 3,113 | 120 | 80 | 25.941667 | 0.720688 | 0 | 0 | 0.790476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.314286 | 1 | 0.009524 | false | 0 | 0.009524 | 0 | 0.028571 | 0.019048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c234397e52cba4b81a69181a95b9246d645b226b | 20 | py | Python | SmartMedApp/GUI/__init__.py | vovochkab/SmartMed | 123540263db8ec2225e7bb0ea949ba96a8c5d4f3 | [
"Apache-2.0"
] | 2 | 2020-10-05T18:03:46.000Z | 2020-11-15T18:54:53.000Z | GUI/__init__.py | IlyaKomutkov/SmartMed-1 | b6389cbf9240fa3ba909c0fe7eed6772d8b6b0df | [
"Apache-2.0"
] | 5 | 2021-11-14T17:18:32.000Z | 2022-02-12T17:06:33.000Z | GUI/__init__.py | IlyaKomutkov/SmartMed-1 | b6389cbf9240fa3ba909c0fe7eed6772d8b6b0df | [
"Apache-2.0"
] | 7 | 2020-09-07T17:28:54.000Z | 2021-10-01T14:32:18.000Z | from .GUI import GUI | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dfc3a41a3e617dbe2b8ed33776258765933c48a6 | 39 | py | Python | iktomi/cli/__init__.py | boltnev/iktomi | bc92006c026f9b42e56f1af5ced2fe577673a486 | [
"MIT"
] | 14 | 2015-02-15T05:24:22.000Z | 2020-03-19T10:07:28.000Z | iktomi/cli/__init__.py | boltnev/iktomi | bc92006c026f9b42e56f1af5ced2fe577673a486 | [
"MIT"
] | 10 | 2015-04-04T10:10:41.000Z | 2016-06-01T13:17:58.000Z | iktomi/cli/__init__.py | boltnev/iktomi | bc92006c026f9b42e56f1af5ced2fe577673a486 | [
"MIT"
] | 5 | 2015-02-20T11:18:58.000Z | 2016-10-18T15:30:13.000Z | from .base import *
from .app import *
| 13 | 19 | 0.692308 | 6 | 39 | 4.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 39 | 2 | 20 | 19.5 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a02dc0d81c4c7ba18ce6120c9b506e940e06194d | 13,031 | py | Python | app/test/test_model.py | ShivaniMehendarge/medical-transcription-analysis | 6eda0dd1614a4466aab035998bf0e745a2c200a0 | [
"MIT-0"
] | 53 | 2020-05-07T19:00:17.000Z | 2022-02-01T07:09:01.000Z | app/test/test_model.py | ShivaniMehendarge/medical-transcription-analysis | 6eda0dd1614a4466aab035998bf0e745a2c200a0 | [
"MIT-0"
] | 21 | 2020-05-08T02:52:06.000Z | 2022-02-15T00:49:23.000Z | app/test/test_model.py | ShivaniMehendarge/medical-transcription-analysis | 6eda0dd1614a4466aab035998bf0e745a2c200a0 | [
"MIT-0"
] | 22 | 2020-05-19T02:48:12.000Z | 2022-03-28T19:00:10.000Z | import sys
sys.path.append("../lambda/")
import unittest
import boto3
from moto import mock_dynamodb2
from datastore import DataStore
from models import *
from constant_variables import *
from data.test_constants import *
class TestModel(unittest.TestCase):
@mock_dynamodb2
def create_dyanamodb(self):
dynamodb = boto3.resource('dynamodb', 'us-west-2')
return dynamodb
@mock_dynamodb2
def create_patient_table(self):
dynamodb = self.create_dyanamodb()
table = dynamodb.create_table(
TableName='Patients',
KeySchema=[{'AttributeName': 'PatientId', 'KeyType': 'HASH'},],
AttributeDefinitions=[{'AttributeName': 'PatientId', 'AttributeType': 'S'},],
ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)
return table
@mock_dynamodb2
def create_health_care_professional_table(self):
dynamodb = self.create_dyanamodb()
table = dynamodb.create_table(
TableName='HealthCareProfessionals',
KeySchema=[{'AttributeName': 'HealthCareProfessionalId', 'KeyType': 'HASH'},],
AttributeDefinitions=[{'AttributeName': 'HealthCareProfessionalId', 'AttributeType': 'S'},],
ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)
return table
@mock_dynamodb2
def create_session_table(self):
dynamodb = self.create_dyanamodb()
table = dynamodb.create_table(
TableName='Sessions',
KeySchema=[{'AttributeName': 'PatientId', 'KeyType': 'HASH'},{'AttributeName': 'SessionId', 'KeyType': 'RANGE'}],
AttributeDefinitions=[{'AttributeName': 'PatientId', 'AttributeType': 'S'},{'AttributeName': 'SessionId', 'AttributeType': 'S'}],
ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)
return table
@mock_dynamodb2
def test_patient_create_patient(self):
table = self.create_patient_table()
p = Patient()
response = p.createPatient(PATIENT_INFO_1)
self.assertEqual(response['status'], 'OK')
response = table.scan()
self.assertEqual(response['Items'][0][DATASTORE_COLUMN_PATIENT_NAME],'p1')
@mock_dynamodb2
def test_patient_create_patient_multiple(self):
table = self.create_patient_table()
p = Patient()
response = p.createPatient(PATIENT_INFO_1)
self.assertEqual(response['status'], 'OK')
response = p.createPatient(PATIENT_INFO_2)
self.assertEqual(response['status'], 'OK')
response = p.createPatient(PATIENT_INFO_3)
self.assertEqual(response['status'], 'OK')
response = table.scan()
self.assertEqual(len(response['Items']),3)
@mock_dynamodb2
def test_patient_create_patient_empty(self):
table = self.create_patient_table()
p = Patient()
response = p.createPatient(PATIENT_INFO_EMPTY)
self.assertEqual(response['status'], 'BAD')
@mock_dynamodb2
def test_patient_create_patient_conflict(self):
table = self.create_patient_table()
p = Patient()
response = p.createPatient(PATIENT_INFO_1)
response = table.scan()
self.assertEqual(response['Items'][0][DATASTORE_COLUMN_PATIENT_NAME],'p1')
response = p.createPatient(PATIENT_INFO_1_SAMEKEY)
response = table.scan()
self.assertEqual(response['Items'][0][DATASTORE_COLUMN_PATIENT_NAME],'pConflict')
@mock_dynamodb2
def test_patient_request_patient(self):
table = self.create_patient_table()
p = Patient()
p.createPatient(PATIENT_INFO_1)
response = p.requestPatients('p-1')
self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_NAME],'p1')
@mock_dynamodb2
def test_patient_request_patient_multiple(self):
table = self.create_patient_table()
p = Patient()
p.createPatient(PATIENT_INFO_1)
p.createPatient(PATIENT_INFO_2)
p.createPatient(PATIENT_INFO_3)
response = p.requestPatients('p-1')
self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_NAME],'p1')
response = p.requestPatients('p-2')
self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_NAME],'p2')
response = p.requestPatients('p-3')
self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_NAME],'p3')
@mock_dynamodb2
def test_patient_request_patient_empty(self):
table = self.create_patient_table()
p = Patient()
response = p.requestPatients('p-1')
self.assertEqual(len(response), 0)
p.createPatient(PATIENT_INFO_1)
response = p.requestPatients('p-2')
self.assertEqual(len(response), 0)
response = p.requestPatients('p-3')
self.assertEqual(len(response), 0)
@mock_dynamodb2
def test_hcp_create_hcp(self):
table = self.create_health_care_professional_table()
hp = HealthCareProfessional()
response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
self.assertEqual(response['status'], 'OK')
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h1')

    @mock_dynamodb2
    def test_hcp_create_hcp_multiple(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
        self.assertEqual(response['status'], 'OK')
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_2)
        self.assertEqual(response['status'], 'OK')
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_3)
        self.assertEqual(response['status'], 'OK')
        response = table.scan()
        self.assertEqual(len(response['Items']), 3)

    @mock_dynamodb2
    def test_hcp_create_hcp_empty(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_EMPTY)
        self.assertEqual(response['status'], 'BAD')

    @mock_dynamodb2
    def test_hcp_create_hcp_conflict(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h1')
        response = hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1_SAMEKEY)
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'hConflict')

    @mock_dynamodb2
    def test_hcp_request_hcp(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
        response = hp.requestHealthCareProfessionals('h-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h1')

    @mock_dynamodb2
    def test_hcp_request_hcp_multiple(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
        hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_2)
        hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_3)
        response = hp.requestHealthCareProfessionals('h-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h1')
        response = hp.requestHealthCareProfessionals('h-2')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h2')
        response = hp.requestHealthCareProfessionals('h-3')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_NAME], 'h3')
    @mock_dynamodb2
    def test_hcp_request_hcp_empty(self):
        table = self.create_health_care_professional_table()
        hp = HealthCareProfessional()
        response = hp.requestHealthCareProfessionals('h-1')
        self.assertEqual(len(response), 0)
        hp.createHealthCareProfessional(HEALTH_CARE_PROFESSIONAL_INFO_1)
        response = hp.requestHealthCareProfessionals('h-2')
        self.assertEqual(len(response), 0)
        response = hp.requestHealthCareProfessionals('h-3')
        self.assertEqual(len(response), 0)
    @mock_dynamodb2
    def test_session_create_session(self):
        table = self.create_session_table()
        s = Session()
        response = s.createSession(SESSION_INFO_1)
        self.assertEqual(response['status'], 'OK')
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_SESSION_NAME], 'SessionName')

    @mock_dynamodb2
    def test_session_create_session_multiple(self):
        table = self.create_session_table()
        s = Session()
        response = s.createSession(SESSION_INFO_1)
        self.assertEqual(response['status'], 'OK')
        response = s.createSession(SESSION_INFO_2)
        self.assertEqual(response['status'], 'OK')
        response = s.createSession(SESSION_INFO_3)
        self.assertEqual(response['status'], 'OK')
        response = table.scan()
        self.assertEqual(len(response['Items']), 3)

    @mock_dynamodb2
    def test_session_create_session_conflict(self):
        table = self.create_session_table()
        s = Session()
        response = s.createSession(SESSION_INFO_1)
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_SESSION_NAME], 'SessionName')
        response = s.createSession(SESSION_INFO_1_SAMEKEY)
        response = table.scan()
        self.assertEqual(response['Items'][0][DATASTORE_COLUMN_SESSION_NAME], 'NO')

    @mock_dynamodb2
    def test_session_create_session_empty(self):
        table = self.create_session_table()
        s = Session()
        response = s.createSession(SESSION_INFO_EMPTY)
        self.assertEqual(response['status'], 'BAD')

    @mock_dynamodb2
    def test_session_request_session(self):
        table = self.create_session_table()
        s = Session()
        response = s.createSession(SESSION_INFO_1)
        response = s.requestSession('p-1', None, None)
        self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_ID], 'p-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_SESSION_ID], 's-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_ID], 'h-1')

    @mock_dynamodb2
    def test_session_request_session_multiple(self):
        table = self.create_session_table()
        s = Session()
        s.createSession(SESSION_INFO_1)
        s.createSession(SESSION_INFO_2)
        s.createSession(SESSION_INFO_3)
        response = s.requestSession('p-1', None, None)
        self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_ID], 'p-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_SESSION_ID], 's-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_ID], 'h-1')
        response = s.requestSession('p-1', 's-1', None)
        self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_ID], 'p-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_SESSION_ID], 's-1')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_ID], 'h-1')
        response = s.requestSession('p-2', None, None)
        self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_ID], 'p-2')
        self.assertEqual(response[0][DATASTORE_COLUMN_SESSION_ID], 's-2')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_ID], 'h-2')
        response = s.requestSession('p-3', None, None)
        self.assertEqual(response[0][DATASTORE_COLUMN_PATIENT_ID], 'p-3')
        self.assertEqual(response[0][DATASTORE_COLUMN_SESSION_ID], 's-3')
        self.assertEqual(response[0][DATASTORE_COLUMN_HEALTH_CARE_PROFESSSIONAL_ID], 'h-3')

    @mock_dynamodb2
    def test_session_request_session_empty(self):
        table = self.create_session_table()
        s = Session()
        response = s.requestSession('p-1', None, None)
        self.assertEqual(len(response), 0)
        response = s.requestSession('p-1', 's-1', None)
        self.assertEqual(len(response), 0)
        s.createSession(SESSION_INFO_1)
        response = s.requestSession('p-1', 's-2', None)
        self.assertEqual(len(response), 0)
        response = s.requestSession('p-2', 's-1', None)
        self.assertEqual(len(response), 0)
        response = s.requestSession('p-2', None, None)
        self.assertEqual(len(response), 0)
        response = s.requestSession(None, 's-2', None)
        self.assertIn('status', response)
        self.assertEqual(response['status'], 'BAD')

if __name__ == '__main__':
    unittest.main() | 44.626712 | 141 | 0.688819 | 1,421 | 13,031 | 6.032372 | 0.073188 | 0.108493 | 0.128791 | 0.064396 | 0.922072 | 0.875992 | 0.867359 | 0.768432 | 0.74615 | 0.725152 | 0 | 0.016223 | 0.195841 | 13,031 | 292 | 142 | 44.626712 | 0.801794 | 0 | 0 | 0.669173 | 0 | 0 | 0.06768 | 0.005448 | 0 | 0 | 0 | 0 | 0.236842 | 1 | 0.093985 | false | 0 | 0.030075 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
a0595696ff372989b7828a1b64f294e109fd69e2 | 39 | py | Python | simple_sqlite/__init__.py | Svinokur/simple_sqlite | 190d6dca5853c637208a7cd9dfe14e5bd0e2a4f1 | [
"MIT"
] | 2 | 2021-04-27T15:04:07.000Z | 2021-10-19T07:59:22.000Z | simple_sqlite/__init__.py | Svinokur/simple_sqlite | 190d6dca5853c637208a7cd9dfe14e5bd0e2a4f1 | [
"MIT"
] | null | null | null | simple_sqlite/__init__.py | Svinokur/simple_sqlite | 190d6dca5853c637208a7cd9dfe14e5bd0e2a4f1 | [
"MIT"
] | null | null | null | from .simple_sqlite import SimpleSqlite | 39 | 39 | 0.897436 | 5 | 39 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a08658d1477bc86992450ceaaad2beb8493d379f | 146 | py | Python | HEC/__init__.py | LANINCE3/LAN_INF | 6c02093e5d48fe60f672717d86a1c29571fa4b56 | [
"MIT"
] | null | null | null | HEC/__init__.py | LANINCE3/LAN_INF | 6c02093e5d48fe60f672717d86a1c29571fa4b56 | [
"MIT"
] | null | null | null | HEC/__init__.py | LANINCE3/LAN_INF | 6c02093e5d48fe60f672717d86a1c29571fa4b56 | [
"MIT"
] | null | null | null | from ArcGeom import *
from Geom import *
from HMS_Writer import *
from LANGisReport import *
from lanHydro import *
from RAS_Writer import * | 24.333333 | 27 | 0.760274 | 20 | 146 | 5.45 | 0.45 | 0.458716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19863 | 146 | 6 | 28 | 24.333333 | 0.931624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2606c4e7081333559b27e3b4b9bc3c7b928648fe | 233 | py | Python | test/try.py | TxuanYu/packaging_practice | 4f62953bdac18f542e216b429e4e70e892ee8f6b | [
"MIT"
] | null | null | null | test/try.py | TxuanYu/packaging_practice | 4f62953bdac18f542e216b429e4e70e892ee8f6b | [
"MIT"
] | null | null | null | test/try.py | TxuanYu/packaging_practice | 4f62953bdac18f542e216b429e4e70e892ee8f6b | [
"MIT"
] | null | null | null | from myproject import feature
from myproject.pkgA import A_feature
from myproject.pkgA.utils import utils
from myproject.pkgA.utils.again import again
from myproject.pkgB import B_feature
feature.nice_test()
A_feature.nice_test()
| 21.181818 | 44 | 0.83691 | 36 | 233 | 5.277778 | 0.333333 | 0.342105 | 0.268421 | 0.252632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107296 | 233 | 10 | 45 | 23.3 | 0.913462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |