hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e09e76c97b54a2a98b2a224ce595759b170d249e | 30,372 | py | Python | pybind/slxos/v16r_1_00b/ssh_sa/ssh/server/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/slxos/v16r_1_00b/ssh_sa/ssh/server/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/slxos/v16r_1_00b/ssh_sa/ssh/server/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import standby
import key
import ssh_vrf_cont
class server(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-sec-services - based on the path /ssh-sa/ssh/server. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__shutdown','__key_exchange','__rekey_interval','__ssh_server_port','__cipher','__mac','__standby','__key','__ssh_vrf_cont',)
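# Illustrative usage sketch (assumed for clarity, not part of the generated
# file): a pyangbind container like this is normally instantiated directly
# and its leaves set via the generated properties, e.g.
#   srv = server()
#   srv.ssh_server_port = 22
#   srv.rekey_interval = 900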
_yang_name = 'server'
_rest_name = 'server'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__ssh_server_port = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'22', u'1024..49151']}), is_leaf=True, yang_name="ssh-server-port", rest_name="port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The designated SSH server port', u'cli-full-command': None, u'alt-name': u'port', u'callpoint': u'ssh_server_port_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
self.__key_exchange = YANGDynClass(base=unicode, is_leaf=True, yang_name="key-exchange", rest_name="key-exchange", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Key Exchange algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_list_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
self.__key = YANGDynClass(base=key.key, is_container='container', presence=False, yang_name="key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure SSH host keys', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
self.__standby = YANGDynClass(base=standby.standby, is_container='container', presence=False, yang_name="standby", rest_name="standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Standby SSH'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
self.__ssh_vrf_cont = YANGDynClass(base=ssh_vrf_cont.ssh_vrf_cont, is_container='container', presence=False, yang_name="ssh-vrf-cont", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
self.__mac = YANGDynClass(base=unicode, is_leaf=True, yang_name="mac", rest_name="mac", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure MAC algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_mac_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
self.__cipher = YANGDynClass(base=unicode, is_leaf=True, yang_name="cipher", rest_name="cipher", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Cipher(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_cipher_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
self.__shutdown = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="shutdown", rest_name="shutdown", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Shutdown SSH Server', u'cli-full-command': None, u'callpoint': u'ssh_server_disable_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='empty', is_config=True)
self.__rekey_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'900..3600']}), is_leaf=True, yang_name="rekey-interval", rest_name="rekey-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Time interval for session rekeying', u'cli-full-command': None, u'callpoint': u'ssh_server_rekey_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'ssh-sa', u'ssh', u'server']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'ssh', u'server']
def _get_shutdown(self):
"""
Getter method for shutdown, mapped from YANG variable /ssh_sa/ssh/server/shutdown (empty)
"""
return self.__shutdown
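# Illustrative backend call (an assumed example): obj._set_shutdown(True)
# populates this leaf directly, as the docstring below advises for leaves
# that are read-only (config: false) in the source YANG.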
def _set_shutdown(self, v, load=False):
"""
Setter method for shutdown, mapped from YANG variable /ssh_sa/ssh/server/shutdown (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_shutdown is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_shutdown() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="shutdown", rest_name="shutdown", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Shutdown SSH Server', u'cli-full-command': None, u'callpoint': u'ssh_server_disable_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """shutdown must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="shutdown", rest_name="shutdown", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Shutdown SSH Server', u'cli-full-command': None, u'callpoint': u'ssh_server_disable_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='empty', is_config=True)""",
})
self.__shutdown = t
if hasattr(self, '_set'):
self._set()
def _unset_shutdown(self):
self.__shutdown = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="shutdown", rest_name="shutdown", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Shutdown SSH Server', u'cli-full-command': None, u'callpoint': u'ssh_server_disable_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='empty', is_config=True)
def _get_key_exchange(self):
"""
Getter method for key_exchange, mapped from YANG variable /ssh_sa/ssh/server/key_exchange (string)
"""
return self.__key_exchange
def _set_key_exchange(self, v, load=False):
"""
Setter method for key_exchange, mapped from YANG variable /ssh_sa/ssh/server/key_exchange (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_key_exchange is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_key_exchange() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="key-exchange", rest_name="key-exchange", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Key Exchange algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_list_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """key_exchange must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="key-exchange", rest_name="key-exchange", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Key Exchange algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_list_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)""",
})
self.__key_exchange = t
if hasattr(self, '_set'):
self._set()
def _unset_key_exchange(self):
self.__key_exchange = YANGDynClass(base=unicode, is_leaf=True, yang_name="key-exchange", rest_name="key-exchange", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Key Exchange algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_list_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
def _get_rekey_interval(self):
"""
Getter method for rekey_interval, mapped from YANG variable /ssh_sa/ssh/server/rekey_interval (uint32)
"""
return self.__rekey_interval
def _set_rekey_interval(self, v, load=False):
"""
Setter method for rekey_interval, mapped from YANG variable /ssh_sa/ssh/server/rekey_interval (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_rekey_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_rekey_interval() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'900..3600']}), is_leaf=True, yang_name="rekey-interval", rest_name="rekey-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Time interval for session rekeying', u'cli-full-command': None, u'callpoint': u'ssh_server_rekey_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """rekey_interval must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'900..3600']}), is_leaf=True, yang_name="rekey-interval", rest_name="rekey-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Time interval for session rekeying', u'cli-full-command': None, u'callpoint': u'ssh_server_rekey_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)""",
})
self.__rekey_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_rekey_interval(self):
self.__rekey_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'900..3600']}), is_leaf=True, yang_name="rekey-interval", rest_name="rekey-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Time interval for session rekeying', u'cli-full-command': None, u'callpoint': u'ssh_server_rekey_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
def _get_ssh_server_port(self):
"""
Getter method for ssh_server_port, mapped from YANG variable /ssh_sa/ssh/server/ssh_server_port (uint32)
"""
return self.__ssh_server_port
def _set_ssh_server_port(self, v, load=False):
"""
Setter method for ssh_server_port, mapped from YANG variable /ssh_sa/ssh/server/ssh_server_port (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_ssh_server_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ssh_server_port() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'22', u'1024..49151']}), is_leaf=True, yang_name="ssh-server-port", rest_name="port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The designated SSH server port', u'cli-full-command': None, u'alt-name': u'port', u'callpoint': u'ssh_server_port_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ssh_server_port must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'22', u'1024..49151']}), is_leaf=True, yang_name="ssh-server-port", rest_name="port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The designated SSH server port', u'cli-full-command': None, u'alt-name': u'port', u'callpoint': u'ssh_server_port_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)""",
})
self.__ssh_server_port = t
if hasattr(self, '_set'):
self._set()
def _unset_ssh_server_port(self):
self.__ssh_server_port = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'22', u'1024..49151']}), is_leaf=True, yang_name="ssh-server-port", rest_name="port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'The designated SSH server port', u'cli-full-command': None, u'alt-name': u'port', u'callpoint': u'ssh_server_port_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='uint32', is_config=True)
def _get_cipher(self):
"""
Getter method for cipher, mapped from YANG variable /ssh_sa/ssh/server/cipher (string)
"""
return self.__cipher
def _set_cipher(self, v, load=False):
"""
Setter method for cipher, mapped from YANG variable /ssh_sa/ssh/server/cipher (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_cipher is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_cipher() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="cipher", rest_name="cipher", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Cipher(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_cipher_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """cipher must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="cipher", rest_name="cipher", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Cipher(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_cipher_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)""",
})
self.__cipher = t
if hasattr(self, '_set'):
self._set()
def _unset_cipher(self):
self.__cipher = YANGDynClass(base=unicode, is_leaf=True, yang_name="cipher", rest_name="cipher", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Cipher(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_cipher_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
def _get_mac(self):
"""
Getter method for mac, mapped from YANG variable /ssh_sa/ssh/server/mac (string)
"""
return self.__mac
def _set_mac(self, v, load=False):
"""
Setter method for mac, mapped from YANG variable /ssh_sa/ssh/server/mac (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_mac is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mac() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="mac", rest_name="mac", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure MAC algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_mac_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """mac must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="mac", rest_name="mac", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure MAC algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_mac_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)""",
})
self.__mac = t
if hasattr(self, '_set'):
self._set()
def _unset_mac(self):
self.__mac = YANGDynClass(base=unicode, is_leaf=True, yang_name="mac", rest_name="mac", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure MAC algorithm(s)', u'cli-full-command': None, u'callpoint': u'ssh_server_mac_cp'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='string', is_config=True)
def _get_standby(self):
"""
Getter method for standby, mapped from YANG variable /ssh_sa/ssh/server/standby (container)
"""
return self.__standby
def _set_standby(self, v, load=False):
"""
Setter method for standby, mapped from YANG variable /ssh_sa/ssh/server/standby (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_standby is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_standby() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=standby.standby, is_container='container', presence=False, yang_name="standby", rest_name="standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Standby SSH'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """standby must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=standby.standby, is_container='container', presence=False, yang_name="standby", rest_name="standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Standby SSH'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)""",
})
self.__standby = t
if hasattr(self, '_set'):
self._set()
def _unset_standby(self):
self.__standby = YANGDynClass(base=standby.standby, is_container='container', presence=False, yang_name="standby", rest_name="standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Standby SSH'}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
def _get_key(self):
"""
Getter method for key, mapped from YANG variable /ssh_sa/ssh/server/key (container)
"""
return self.__key
def _set_key(self, v, load=False):
"""
Setter method for key, mapped from YANG variable /ssh_sa/ssh/server/key (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_key is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_key() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=key.key, is_container='container', presence=False, yang_name="key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure SSH host keys', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """key must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=key.key, is_container='container', presence=False, yang_name="key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure SSH host keys', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)""",
})
self.__key = t
if hasattr(self, '_set'):
self._set()
def _unset_key(self):
self.__key = YANGDynClass(base=key.key, is_container='container', presence=False, yang_name="key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure SSH host keys', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
def _get_ssh_vrf_cont(self):
"""
Getter method for ssh_vrf_cont, mapped from YANG variable /ssh_sa/ssh/server/ssh_vrf_cont (container)
"""
return self.__ssh_vrf_cont
def _set_ssh_vrf_cont(self, v, load=False):
"""
Setter method for ssh_vrf_cont, mapped from YANG variable /ssh_sa/ssh/server/ssh_vrf_cont (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_ssh_vrf_cont is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ssh_vrf_cont() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=ssh_vrf_cont.ssh_vrf_cont, is_container='container', presence=False, yang_name="ssh-vrf-cont", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ssh_vrf_cont must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=ssh_vrf_cont.ssh_vrf_cont, is_container='container', presence=False, yang_name="ssh-vrf-cont", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)""",
})
self.__ssh_vrf_cont = t
if hasattr(self, '_set'):
self._set()
def _unset_ssh_vrf_cont(self):
self.__ssh_vrf_cont = YANGDynClass(base=ssh_vrf_cont.ssh_vrf_cont, is_container='container', presence=False, yang_name="ssh-vrf-cont", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-sec-services', defining_module='brocade-sec-services', yang_type='container', is_config=True)
shutdown = __builtin__.property(_get_shutdown, _set_shutdown)
key_exchange = __builtin__.property(_get_key_exchange, _set_key_exchange)
rekey_interval = __builtin__.property(_get_rekey_interval, _set_rekey_interval)
ssh_server_port = __builtin__.property(_get_ssh_server_port, _set_ssh_server_port)
cipher = __builtin__.property(_get_cipher, _set_cipher)
mac = __builtin__.property(_get_mac, _set_mac)
standby = __builtin__.property(_get_standby, _set_standby)
key = __builtin__.property(_get_key, _set_key)
ssh_vrf_cont = __builtin__.property(_get_ssh_vrf_cont, _set_ssh_vrf_cont)
_pyangbind_elements = {'shutdown': shutdown, 'key_exchange': key_exchange, 'rekey_interval': rekey_interval, 'ssh_server_port': ssh_server_port, 'cipher': cipher, 'mac': mac, 'standby': standby, 'key': key, 'ssh_vrf_cont': ssh_vrf_cont, }
| 74.807882 | 681 | 0.732286 | 4,283 | 30,372 | 4.948634 | 0.048331 | 0.040576 | 0.050201 | 0.041897 | 0.868601 | 0.849304 | 0.841425 | 0.835433 | 0.831658 | 0.824109 | 0 | 0.008058 | 0.125642 | 30,372 | 405 | 682 | 74.992593 | 0.790066 | 0.131338 | 0 | 0.47012 | 0 | 0.035857 | 0.394057 | 0.146177 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119522 | false | 0 | 0.043825 | 0 | 0.278884 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e09f785d8ab465f143eebb7d57745075ee8eb436 | 4,700 | py | Python | src/oscarcch/tests/test_cch_real.py | thelabnyc/django-oscar-cch | d98832c9cf642c6d241e3aaf4b1dc631c3d5ce0e | ["0BSD"] | null | null | null | src/oscarcch/tests/test_cch_real.py | thelabnyc/django-oscar-cch | d98832c9cf642c6d241e3aaf4b1dc631c3d5ce0e | ["0BSD"] | 14 | 2020-02-11T21:53:07.000Z | 2022-01-13T00:40:33.000Z | src/oscarcch/tests/test_cch_real.py | thelabnyc/django-oscar-cch | d98832c9cf642c6d241e3aaf4b1dc631c3d5ce0e | ["0BSD"] | 1 | 2016-05-31T10:02:38.000Z | 2016-05-31T10:02:38.000Z |
from decimal import Decimal as D
from oscar.core.loading import get_model, get_class
from ..calculator import CCHTaxCalculator
from .base import BaseTest
import unittest
Basket = get_model("basket", "Basket")
ShippingAddress = get_model("order", "ShippingAddress")
Country = get_model("address", "Country")
PartnerAddress = get_model("partner", "PartnerAddress")
Range = get_model("offer", "Range")
Benefit = get_model("offer", "Benefit")
Condition = get_model("offer", "Condition")
ConditionalOffer = get_model("offer", "ConditionalOffer")
USStrategy = get_class("partner.strategy", "US")
Applicator = get_class("offer.applicator", "Applicator")
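# The tests below hit the real CCH tax-calculation backend (hence the skip
# decorator); the asserted totals and authority names mirror live service
# responses and can drift as tax tables change.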
@unittest.skip("Disabled because it uses real cch")
class CCHTaxCalculatorRealTest(BaseTest):
def test_apply_taxes_five_digits_postal_code(self):
basket = self.prepare_basket_full_zip()
to_address = self.get_to_address_ohio_short_zip()
shipping_charge = self.get_shipping_charge()
CCHTaxCalculator().apply_taxes(to_address, basket, shipping_charge)
self.assertTrue(basket.is_tax_known)
self.assertEqual(basket.total_excl_tax, D("10.00"))
self.assertEqual(basket.total_incl_tax, D("10.68"))
self.assertEqual(basket.total_tax, D("0.68"))
purchase_info = basket.all_lines()[0].purchase_info
self.assertEqual(purchase_info.price.excl_tax, D("10.00"))
self.assertEqual(purchase_info.price.incl_tax, D("10.68"))
self.assertEqual(purchase_info.price.tax, D("0.68"))
details = purchase_info.price.taxation_details
self.assertEqual(len(details), 2)
self.assertEqual(details[0].authority_name, "OHIO, STATE OF")
self.assertEqual(details[0].tax_name, "STATE SALES TAX-GENERAL MERCHANDISE")
self.assertEqual(details[0].tax_applied, D("0.58"))
self.assertEqual(details[0].fee_applied, D("0.00"))
self.assertTrue(shipping_charge.is_tax_known)
self.assertEqual(shipping_charge.excl_tax, D("14.99"))
self.assertEqual(shipping_charge.incl_tax, D("16.3203625"))
self.assertEqual(len(shipping_charge.components[0].taxation_details), 3)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].authority_name,
"NEW YORK, STATE OF",
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].tax_name,
"STATE SALES TAX-GENERAL MERCHANDISE",
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].tax_applied, D("0.5996")
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].fee_applied, D("0.00")
)
def test_apply_taxes_nine_digits_postal_code(self):
basket = self.prepare_basket_full_zip()
to_address = self.get_to_address_ohio_full_zip()
shipping_charge = self.get_shipping_charge()
CCHTaxCalculator().apply_taxes(to_address, basket, shipping_charge)
self.assertTrue(basket.is_tax_known)
print("basket: %s" % basket)
self.assertEqual(basket.total_excl_tax, D("10.00"))
self.assertEqual(basket.total_incl_tax, D("10.73"))
self.assertEqual(basket.total_tax, D("0.73"))
purchase_info = basket.all_lines()[0].purchase_info
self.assertEqual(purchase_info.price.excl_tax, D("10.00"))
self.assertEqual(purchase_info.price.incl_tax, D("10.73"))
self.assertEqual(purchase_info.price.tax, D("0.73"))
details = purchase_info.price.taxation_details
self.assertEqual(len(details), 2)
self.assertEqual(details[0].authority_name, "OHIO, STATE OF")
self.assertEqual(details[0].tax_name, "STATE SALES TAX-GENERAL MERCHANDISE")
self.assertEqual(details[0].tax_applied, D("0.58"))
self.assertEqual(details[0].fee_applied, D("0.00"))
self.assertTrue(shipping_charge.is_tax_known)
self.assertEqual(shipping_charge.excl_tax, D("14.99"))
self.assertEqual(shipping_charge.incl_tax, D("16.3203625"))
self.assertEqual(len(shipping_charge.components[0].taxation_details), 3)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].authority_name,
"NEW YORK, STATE OF",
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].tax_name,
"STATE SALES TAX-GENERAL MERCHANDISE",
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].tax_applied, D("0.5996")
)
self.assertEqual(
shipping_charge.components[0].taxation_details[0].fee_applied, D("0.00")
)
| 43.518519 | 86 | 0.685957 | 592 | 4,700 | 5.212838 | 0.170608 | 0.174984 | 0.089436 | 0.112767 | 0.792612 | 0.792612 | 0.792612 | 0.769929 | 0.745949 | 0.745949 | 0 | 0.034049 | 0.18766 | 4,700 | 107 | 87 | 43.925234 | 0.774227 | 0 | 0 | 0.586957 | 0 | 0 | 0.114255 | 0 | 0 | 0 | 0 | 0 | 0.434783 | 1 | 0.021739 | false | 0 | 0.054348 | 0 | 0.086957 | 0.01087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1ca932efae1100c1d8781ae57b18476dccde0c65 | 13,306 | py | Python | tests/service_test.py | Taller-de-programacion-2-Grupo-14/ubademy-exams | f84e8b4823f6443ba4124e4e8682ccc4c92620dd | ["MIT"] | null | null | null | tests/service_test.py | Taller-de-programacion-2-Grupo-14/ubademy-exams | f84e8b4823f6443ba4124e4e8682ccc4c92620dd | ["MIT"] | 1 | 2021-12-23T06:00:01.000Z | 2021-12-23T06:00:01.000Z | tests/service_test.py | Taller-de-programacion-2-Grupo-14/ubademy-exams | f84e8b4823f6443ba4124e4e8682ccc4c92620dd | ["MIT"] | null | null | null |
import unittest
from unittest.mock import Mock, patch
from exceptions.ExamException import (
IsNotTheCourseCreator,
ExamsLimitReached,
ExamDoesNotExist,
InvalidUserAction,
ResolutionDoesNotExists,
ExamAlreadyResolvedException,
)
from persistence.mongo import MongoDB
from validator.ExamValidator import ExamValidator
from service.Exam import ExamService
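# Note on the shared test pattern: collaborators are replaced with
# Mock(spec=...) so attribute access stays spec-checked,
# configure_mock(**{"method.return_value": ...}) stubs the validator/DB
# answers, and @patch short-circuits ExamService's private existence checks,
# letting each test isolate a single permission or error path.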
class TestCreateExam(unittest.TestCase):
def test_create_exam_when_not_creator(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_course_creator.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(MongoDB, mock_validator)
with self.assertRaises(IsNotTheCourseCreator):
service.create_exam({})
def test_create_exam_when_creator(self):
mock_validator = Mock(spec=ExamValidator)
mock_db = Mock(spec=MongoDB)
attrs_validator = {
"is_course_creator.return_value": True,
"exams_limit_reached.return_value": False,
}
attrs_db = {"get_exams.return_value": []}
mock_validator.configure_mock(**attrs_validator)
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
result = service.create_exam({})
self.assertIsNone(result)
class TestEditExam(unittest.TestCase):
def test_edit_exam_that_does_not_exist(self):
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_course_status.return_value": []}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, Mock())
with self.assertRaises(ExamDoesNotExist):
service.edit_exam({})
@patch("service.Exam.ExamService._check_draft_exam_existance")
def test_edit_exam_when_not_creator(self, mock_check_existance):
mock_check_existance.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {"is_course_creator.return_value": False}
mock_validator.configure_mock(**attrs_validator)
service = ExamService(MongoDB, mock_validator)
with self.assertRaises(IsNotTheCourseCreator):
service.edit_exam({})
@patch("service.Exam.ExamService._check_draft_exam_existance")
def test_edit_exam_being_creator(self, mock_check_existance):
mock_check_existance.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {"is_course_creator.return_value": True}
mock_validator.configure_mock(**attrs_validator)
mock_db = Mock(spec=MongoDB)
attrs_db = {"edit_exam.return_value": None}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
result = service.edit_exam({})
self.assertIsNone(result)
class TestPublishExam(unittest.TestCase):
def test_publish_exam_not_being_creator(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_course_creator.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(MongoDB, mock_validator)
with self.assertRaises(IsNotTheCourseCreator):
service.publish_exam({})
@patch("service.Exam.ExamService._check_draft_exam_existance")
def test_publish_exam_when_limit_reached(self, mock_check_existance):
mock_check_existance.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs = {
"is_course_creator.return_value": True,
"exams_limit_reached.return_value": True,
}
mock_validator.configure_mock(**attrs)
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_exams.return_value": []}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
with self.assertRaises(ExamsLimitReached):
service.publish_exam({})
def test_publish_exam_that_does_not_exist(self):
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_course_status.return_value": []}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, Mock())
with self.assertRaises(ExamDoesNotExist):
service.publish_exam({})
@patch("service.Exam.ExamService._check_draft_exam_existance")
def test_publish_exam_successfully(self, check_existance_mock):
check_existance_mock.return_value = None
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_exams.return_value": [], "publish_exam.return_value": None}
mock_db.configure_mock(**attrs_db)
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {
"is_course_creator.return_value": True,
"exams_limit_reached.return_value": False,
}
mock_validator.configure_mock(**attrs_validator)
service = ExamService(mock_db, mock_validator)
result = service.publish_exam({})
self.assertIsNone(result)
class TestGetExams(unittest.TestCase):
def test_get_exams_not_being_on_the_course(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {
"is_course_creator.return_value": False,
"is_student.return_value": False,
"is_course_collaborator.return_value": False,
}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.get_exams(0, 0, {})
class TestGetResolutions(unittest.TestCase):
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_get_resolutions_not_being_on_the_course(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": False, "is_student.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.get_resolutions(0, 0, None)
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_get_resolution_of_nonexistent_exam(self, check_existance_mock):
check_existance_mock.side_effect = ExamDoesNotExist
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": False, "is_student.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(ExamDoesNotExist):
service.get_resolution(0, "", 0, 0)
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_get_resolution_of_classmate(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": False, "is_student.return_value": True}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.get_resolution(0, "", 0, 2)
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_get_resolution_not_being_on_the_course(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": False, "is_student.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.get_resolution(0, "0", 0, 0)
class TestGradeResolution(unittest.TestCase):
def test_grade_resolution_not_being_grader(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.grade_resolution(0, {})
@patch("service.Exam.ExamService._check_resolution_exam")
def test_grade_nonexistent_resolution(self, check_existance_mock):
check_existance_mock.side_effect = ResolutionDoesNotExists
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_grader.return_value": True}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(ResolutionDoesNotExists):
service.grade_resolution(0, {})
@patch("service.Exam.ExamService.get_resolutions")
@patch("service.Exam.ExamService._check_resolution_exam")
def test_grade_resolution_successfully(
self, check_existance_mock, get_resolutions_mock
):
check_existance_mock.return_value = None
get_resolutions_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {
"is_grader.return_value": True,
"notify_course.return_value": None,
}
mock_validator.configure_mock(**attrs_validator)
mock_db = Mock(spec=MongoDB)
attrs_db = {"grade_exam.return_value": None}
mock_db.configure_mock(**attrs_db)
service = ExamService(Mock(), mock_validator)
result = service.grade_resolution(0, {})
self.assertIsNone(result)
class TestGetExam(unittest.TestCase):
def test_get_nonexistent_exam(self):
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {
"is_student.return_value": True,
"is_course_creator.return_value": True,
}
mock_validator.configure_mock(**attrs_validator)
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_exam.return_value": None}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
with self.assertRaises(ExamDoesNotExist):
service.get_exam(0, "", 8)
def test_get_exam_not_being_on_the_course(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {
"is_student.return_value": False,
"is_course_creator.return_value": False,
}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.get_exam(0, "", 8)
def test_get_exam_being_student(self):
mock_validator = Mock(spec=ExamValidator)
attrs = {
"is_student.return_value": True,
"is_course_creator.return_value": False,
}
mock_validator.configure_mock(**attrs)
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_exam.return_value": ["hola"]}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
result = service.get_exam(5, "", 4)
self.assertEqual(["hola"], result)
class TestCompleteExam(unittest.TestCase):
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_complete_nonexistent_exam(self, check_existance_mock):
check_existance_mock.side_effect = ExamDoesNotExist
service = ExamService(Mock(), Mock())
with self.assertRaises(ExamDoesNotExist):
service.complete_exam(0, {})
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_complete_exam_not_being_student(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs = {"is_student.return_value": False}
mock_validator.configure_mock(**attrs)
service = ExamService(Mock(), mock_validator)
with self.assertRaises(InvalidUserAction):
service.complete_exam(0, {})
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_complete_exam_already_done(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {"is_student.return_value": True}
mock_validator.configure_mock(**attrs_validator)
mock_db = Mock(spec=MongoDB)
attrs_db = {"get_resolution.return_value": ["hola"]}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
with self.assertRaises(ExamAlreadyResolvedException):
service.complete_exam(0, {})
@patch("service.Exam.ExamService._check_published_exam_existance")
def test_complete_exam_successfully(self, check_existance_mock):
check_existance_mock.return_value = None
mock_validator = Mock(spec=ExamValidator)
attrs_validator = {"is_student.return_value": True}
mock_validator.configure_mock(**attrs_validator)
mock_db = Mock(spec=MongoDB)
attrs_db = {
"get_resolution.return_value": None,
"add_resolution.return_value": None,
}
mock_db.configure_mock(**attrs_db)
service = ExamService(mock_db, mock_validator)
result = service.complete_exam(5, {})
self.assertIsNone(result)
| 43.201299 | 84 | 0.697129 | 1,474 | 13,306 | 5.920624 | 0.063772 | 0.096826 | 0.070127 | 0.068523 | 0.869829 | 0.839807 | 0.829266 | 0.815515 | 0.791108 | 0.779649 | 0 | 0.002559 | 0.206974 | 13,306 | 307 | 85 | 43.34202 | 0.824488 | 0 | 0 | 0.654545 | 0 | 0 | 0.151511 | 0.150534 | 0 | 0 | 0 | 0 | 0.087273 | 1 | 0.087273 | false | 0 | 0.021818 | 0 | 0.138182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cadf3b1bd70aaf608846d5dbea18b9b1b93220e | 29,640 | py | Python | Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/parse/node_transformer.py | dendisuhubdy/Vitis-AI | 524f65224c52314155dafc011d488ed30e458fcb | ["Apache-2.0"] | 1 | 2021-08-30T13:42:30.000Z | 2021-08-30T13:42:30.000Z | Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/parse/node_transformer.py | dendisuhubdy/Vitis-AI | 524f65224c52314155dafc011d488ed30e458fcb | ["Apache-2.0"] | null | null | null | Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/parse/node_transformer.py | dendisuhubdy/Vitis-AI | 524f65224c52314155dafc011d488ed30e458fcb | ["Apache-2.0"] | null | null | null |
import torch
from nndct_shared.base import NNDCT_OP
from nndct_shared.nndct_graph import Graph, Node, Tensor
from .torch_op_def import *
from .utils import _GRAPH_SCOPE_SYM, get_full_name
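# This module expands fused torch ops that NNDCT has no direct primitive for
# (here, the multi-layer and optionally bidirectional LSTM) into explicit
# per-layer, per-direction cell subgraphs built from INPUT/Linear/Add nodes.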
class _NodeCreator(object):
def __init__(self):
self._idx = 0
def __call__(self,
graph,
node_name,
op,
num_out_tensors,
shape=None,
in_tensors=None):
node_name = get_full_name(graph.name, node_name)
node = Node(node_name, op=op, dtype="float32", idx=self._idx)
for i in range(num_out_tensors):
tensor = Tensor(name=f"{node_name}_{i}", node=node, shape=shape)
node.out_tensors.append(tensor)
if in_tensors:
for tensor in in_tensors:
node.in_tensors.append(tensor)
graph.add_node(node)
self._idx += 1
class NodeTransformer(object):
r""" tansform node to graph"""
def __call__(self, node):
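# Dispatches to the handler named after the op type (e.g. basic_lstm). Note
# the getattr fallback is the string "default", which is not callable, so a
# handler is expected to exist for every dispatched op type.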
return getattr(self, node.op.type, "default")(node)
@staticmethod
def _connect_nodes(graph):
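# Rebuild producer/consumer edges by matching each node's input tensors
# against every other node's output tensors (quadratic in the node count).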
for nodeA in graph.nodes:
for input_tensor in nodeA.in_tensors:
for nodeB in graph.nodes:
if nodeB is not nodeA and input_tensor in nodeB.out_tensors:
#nodeB.outputs.add(input_tensor.node.name)
nodeB.add_out_node(nodeA.name)
nodeA.add_in_node(input_tensor.node.name)
def basic_lstm(self, node):
graph_scope_name = node.name.split(_GRAPH_SCOPE_SYM)[0]
node_creator = _NodeCreator()
graphs = []
bidirectional = node.node_attr(node.op.AttrName.BIDIRECTIONAL)
lstm_direction = ["forward"]
if bidirectional:
lstm_direction = ["forward", "backward"]
for i in range(node.node_attr(node.op.AttrName.NUM_LAYERS)):
lstm_cell_pair = {}
if i == 0:
input_size = node.node_attr(node.op.AttrName.INPUT_SIZE)
else:
input_size = len(lstm_direction) * node.node_attr(
node.op.AttrName.HIDDEN_SIZE)
hidden_size = node.node_attr(node.op.AttrName.HIDDEN_SIZE)
bias = True
for direction in lstm_direction:
if direction == "forward":
w_ih = node.op.params[node.op.ParamName.WEIGHT_IH][i]
w_hh = node.op.params[node.op.ParamName.WEIGHT_HH][i]
if node.op.ParamName.BIAS in node.op.params:
bias_hi = node.op.params[node.op.ParamName.BIAS][i]
else:
bias = False
else:
w_ih = node.op.params[node.op.ParamName.WEIGHT_IH_REVERSE][i]
w_hh = node.op.params[node.op.ParamName.WEIGHT_HH_REVERSE][i]
if node.op.ParamName.BIAS_REVERSE in node.op.params:
bias_hi = node.op.params[node.op.ParamName.BIAS_REVERSE][i]
else:
bias = False
# lstm_node_name = node.name.replace("/", "_")
graph_name = f"{graph_scope_name}_StandardLstmCell_layer_{i}_{direction}"
graph = Graph(graph_name=graph_name)
lstm_cell_pair[direction] = graph
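# PyTorch stacks the four LSTM gate weights along dim 0 in the order
# (input, forget, cell, output), so w_ih/w_hh have 4*hidden_size rows;
# the slices below peel off one hidden_size-row block per gate.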
w_ii = Tensor(get_full_name(graph.name, "weight_ii"))
w_if = Tensor(get_full_name(graph.name, "weight_if"))
w_ig = Tensor(get_full_name(graph.name, "weight_ig"))
w_io = Tensor(get_full_name(graph.name, "weight_io"))
w_ii.from_ndarray(w_ih.data[:hidden_size])
w_if.from_ndarray(w_ih.data[hidden_size:2 * hidden_size])
w_ig.from_ndarray(w_ih.data[2 * hidden_size:3 * hidden_size])
w_io.from_ndarray(w_ih.data[3 * hidden_size:4 * hidden_size])
w_hi = Tensor(get_full_name(graph.name, "weight_hi"))
w_hf = Tensor(get_full_name(graph.name, "weight_hf"))
w_hg = Tensor(get_full_name(graph.name, "weight_hg"))
w_ho = Tensor(get_full_name(graph.name, "weight_ho"))
w_hi.from_ndarray(w_hh.data[:hidden_size])
w_hf.from_ndarray(w_hh.data[hidden_size:2 * hidden_size])
w_hg.from_ndarray(w_hh.data[2 * hidden_size:3 * hidden_size])
w_ho.from_ndarray(w_hh.data[3 * hidden_size:4 * hidden_size])
bias_i = Tensor(get_full_name(graph.name, "bias_i"))
bias_f = Tensor(get_full_name(graph.name, "bias_f"))
bias_g = Tensor(get_full_name(graph.name, "bias_g"))
bias_o = Tensor(get_full_name(graph.name, "bias_o"))
if bias is True:
bias_i.from_ndarray(bias_hi.data[:hidden_size])
bias_f.from_ndarray(bias_hi.data[hidden_size:2 * hidden_size])
bias_g.from_ndarray(bias_hi.data[2 * hidden_size:3 * hidden_size])
bias_o.from_ndarray(bias_hi.data[3 * hidden_size:4 * hidden_size])
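# Each direction of each layer becomes a standalone cell subgraph with three
# INPUT nodes: the step input x (args[0]), the previous hidden state h_prev
# (args[1]) and the previous cell state c_prev (args[2]).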
op = TorchBaseOperation(NNDCT_OP.INPUT, NNDCT_OP.INPUT)
op.set_config("input", "args[0]")
shape = [1, input_size]
node_creator(
graph=graph,
node_name="input_0",
op=op,
num_out_tensors=1,
shape=shape)
op = TorchBaseOperation(NNDCT_OP.INPUT, NNDCT_OP.INPUT)
op.set_config("input", "args[1]")
shape = [1, hidden_size]
node_creator(
graph=graph,
node_name="h_prev_1",
op=op,
num_out_tensors=1,
shape=shape)
op = TorchBaseOperation(NNDCT_OP.INPUT, NNDCT_OP.INPUT)
op.set_config("input", "args[2]")
shape = [1, hidden_size]
node_creator(
graph=graph,
node_name="c_prev_2",
op=op,
num_out_tensors=1,
shape=shape)
# y_i = w_ii * input_0 + w_hi * h_prev_1 + bias_i
op = TorchLinear()
op.set_config("bias", False)
op.set_config("out_features", hidden_size)
op.set_config("in_features", input_size)
op.set_param(op.ParamName.WEIGHTS, w_ii)
node_creator(
graph=graph,
node_name="w_ii * input_0",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("out_features", hidden_size)
op.set_config("in_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hi)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_i)
node_creator(
graph=graph,
node_name="w_hi * h_prev_1 + bias_i",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_ii * input_0")).out_tensors[0])
op.set_config("other",
graph.node(get_full_name(graph.name, "w_hi * h_prev_1 + bias_i")).out_tensors[0])
node_creator(
graph=graph,
node_name="y_i",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "w_ii * input_0")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hi * h_prev_1 + bias_i")).out_tensors[0]
])
# y_f = w_if * input_0 + w_hf * h_prev_1 + bias_f
op = TorchLinear()
op.set_config("bias", False)
op.set_config("in_features", input_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_if)
node_creator(
graph=graph,
node_name="w_if * input_0",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", hidden_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hf)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_f)
node_creator(
graph=graph,
node_name="w_hf * h_prev_1 + bias_f",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_if * input_0")).out_tensors[0])
op.set_config("other",
graph.node(get_full_name(graph.name, "w_hf * h_prev_1 + bias_f")).out_tensors[0])
node_creator(
graph=graph,
node_name="y_f",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "w_if * input_0")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hf * h_prev_1 + bias_f")).out_tensors[0]
])
# y_g = w_ig * input_0 + w_hg * h_prev_1 + bias_g
op = TorchLinear()
op.set_config("bias", False)
op.set_config("in_features", input_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_ig)
node_creator(
graph=graph,
node_name="w_ig * input_0",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", hidden_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hg)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_g)
node_creator(
graph=graph,
node_name="w_hg * h_prev_1 + bias_g",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_ig * input_0")).out_tensors[0])
op.set_config("other",
graph.node(get_full_name(graph.name, "w_hg * h_prev_1 + bias_g")).out_tensors[0])
node_creator(
graph=graph,
node_name="y_g",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "w_ig * input_0")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hg * h_prev_1 + bias_g")).out_tensors[0]
])
# y_o = w_io * input_0 + w_ho * h_prev_1 + bias_o
op = TorchLinear()
op.set_config("bias", False)
op.set_config("in_features", input_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_io)
node_creator(
graph=graph,
node_name="w_io * input_0",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", hidden_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_ho)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_o)
node_creator(
graph=graph,
node_name="w_ho * h_prev_1 + bias_o",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_io * input_0")).out_tensors[0])
op.set_config("other",
graph.node(get_full_name(graph.name, "w_ho * h_prev_1 + bias_o")).out_tensors[0])
node_creator(
graph=graph,
node_name="y_o",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "w_io * input_0")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_ho * h_prev_1 + bias_o")).out_tensors[0]
])
# op = Split(optype=NNDCT_OP.SPLIT)
# op.set_attr(op.AttrName.INPUT, graph.node("combine_2_linearity").out_tensors[0])
# op.set_attr(op.AttrName.SPLIT_SIZE_OR_SECTIONS, hidden_size)
# op.set_attr(op.AttrName.AXIS, 1)
# node_creator(graph=graph,
# node_name="split_ifgo",
# op=op,
# num_out_tensors=4,
# in_tensors=graph.node("combine_2_linearity").out_tensors)
op = TorchSigmoid()
node_creator(
graph=graph,
node_name="it",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_i")).out_tensors[0]])
op = TorchSigmoid()
node_creator(
graph=graph,
node_name="ft",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_f")).out_tensors[0]])
op = TorchTanh()
node_creator(
graph=graph,
node_name="cct",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_g")).out_tensors[0]])
op = TorchSigmoid()
node_creator(
graph=graph,
node_name="ot",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_o")).out_tensors[0]])
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "it")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "cct")).out_tensors[0])
node_creator(
graph=graph,
node_name="it*cct",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "it")).out_tensors[0],
graph.node(get_full_name(graph.name, "cct")).out_tensors[0]
])
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "ft")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "c_prev_2")).out_tensors[0])
node_creator(
graph=graph,
node_name="ft*c_prev_2",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "ft")).out_tensors[0],
graph.node(get_full_name(graph.name, "c_prev_2")).out_tensors[0]
])
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "it*cct")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "ft*c_prev_2")).out_tensors[0])
node_creator(
graph=graph,
node_name="c_next",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "it*cct")).out_tensors[0],
graph.node(get_full_name(graph.name, "ft*c_prev_2")).out_tensors[0]
])
op = TorchTanh()
node_creator(
graph=graph,
node_name="c_temp",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "c_next")).out_tensors)
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "ot")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "c_temp")).out_tensors[0])
node_creator(
graph=graph,
node_name="h_next",
op=op,
num_out_tensors=1,
in_tensors=[
graph.node(get_full_name(graph.name, "ot")).out_tensors[0],
graph.node(get_full_name(graph.name, "c_temp")).out_tensors[0]
])
self._connect_nodes(graph)
graph.add_end_tensor(graph.node(get_full_name(graph.name, "h_next")).out_tensors[0])
graph.add_end_tensor(graph.node(get_full_name(graph.name, "c_next")).out_tensors[0])
graphs.append(lstm_cell_pair)
return graphs
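  # The cell graph built by basic_lstm above implements the standard LSTM
  # equations, matching the node names used in the code:
  #   it = sigmoid(w_ii*x + w_hi*h_prev + bias_i)       ("it")
  #   ft = sigmoid(w_if*x + w_hf*h_prev + bias_f)       ("ft")
  #   cct = tanh(w_ig*x + w_hg*h_prev + bias_g)         ("cct")
  #   ot = sigmoid(w_io*x + w_ho*h_prev + bias_o)       ("ot")
  #   c_next = it*cct + ft*c_prev                       ("c_next")
  #   h_next = ot*tanh(c_next)                          ("h_next")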
def basic_gru(self, node):
node_creator = _NodeCreator()
graphs = []
bidirectional = node.node_attr(node.op.AttrName.BIDIRECTIONAL)
lstm_direction = ["forward"]
if bidirectional:
lstm_direction = ["forward", "backward"]
for i in range(node.node_attr(node.op.AttrName.NUM_LAYERS)):
lstm_cell_pair = {}
if i == 0:
input_size = node.node_attr(node.op.AttrName.INPUT_SIZE)
else:
input_size = len(lstm_direction) * node.node_attr(node.op.AttrName.HIDDEN_SIZE)
hidden_size = node.node_attr(node.op.AttrName.HIDDEN_SIZE)
bias = True
for direction in lstm_direction:
if direction == "forward":
w_ih = node.op.params[node.op.ParamName.WEIGHT_IH][i]
w_hh = node.op.params[node.op.ParamName.WEIGHT_HH][i]
          if (node.op.ParamName.BIAS_IH in node.op.params and
              node.op.ParamName.BIAS_HH in node.op.params):
bias_ih = node.op.params[node.op.ParamName.BIAS_IH][i]
bias_hh = node.op.params[node.op.ParamName.BIAS_HH][i]
else:
bias = False
else:
w_ih = node.op.params[node.op.ParamName.WEIGHT_IH_REVERSE][i]
w_hh = node.op.params[node.op.ParamName.WEIGHT_HH_REVERSE][i]
          if (node.op.ParamName.BIAS_IH_REVERSE in node.op.params and
              node.op.ParamName.BIAS_HH_REVERSE in node.op.params):
bias_ih = node.op.params[node.op.ParamName.BIAS_IH_REVERSE][i]
bias_hh = node.op.params[node.op.ParamName.BIAS_HH_REVERSE][i]
else:
bias = False
# lstm_node_name = node.name.replace("/", "_")
graph_name = f"StandardGruCell_layer_{i}_{direction}"
        graph = Graph(graph_name=graph_name)
lstm_cell_pair[direction] = graph
w_ii = Tensor(f"weight_ii")
w_if = Tensor(f"weight_if")
w_ig = Tensor(f"weight_ig")
w_ii.from_ndarray(w_ih.data[:hidden_size])
w_if.from_ndarray(w_ih.data[hidden_size:2 * hidden_size])
w_ig.from_ndarray(w_ih.data[2 * hidden_size:3 * hidden_size])
w_hi = Tensor(f"weight_hi")
w_hf = Tensor(f"weight_hf")
w_hg = Tensor(f"weight_hg")
w_hi.from_ndarray(w_hh.data[:hidden_size])
w_hf.from_ndarray(w_hh.data[hidden_size:2 * hidden_size])
w_hg.from_ndarray(w_hh.data[2 * hidden_size:3 * hidden_size])
bias_ii = Tensor(f"bias_ii")
bias_if = Tensor(f"bias_if")
bias_ig = Tensor(f"bias_ig")
bias_hi = Tensor(f"bias_hi")
bias_hf = Tensor(f"bias_hf")
bias_hg = Tensor(f"bias_hg")
if bias is True:
bias_ii.from_ndarray(bias_ih.data[:hidden_size])
bias_if.from_ndarray(bias_ih.data[hidden_size:2 * hidden_size])
bias_ig.from_ndarray(bias_ih.data[2 * hidden_size:3 * hidden_size])
          bias_hi.from_ndarray(bias_hh.data[:hidden_size])
bias_hf.from_ndarray(bias_hh.data[hidden_size:2 * hidden_size])
bias_hg.from_ndarray(bias_hh.data[2 * hidden_size:3 * hidden_size])
op = TorchBaseOperation(NNDCT_OP.INPUT, NNDCT_OP.INPUT)
op.set_config("input", "args[0]")
shape = [1, input_size]
node_creator(graph=graph,
node_name="input_0",
op=op,
num_out_tensors=1,
shape=shape)
op = TorchBaseOperation(NNDCT_OP.INPUT, NNDCT_OP.INPUT)
op.set_config("input", "args[1]")
shape = [1, hidden_size]
node_creator(graph=graph,
node_name="h_prev_1",
op=op,
num_out_tensors=1,
shape=shape)
# y_i = w_ii * input_0 +bias_ii + w_hi * h_prev_1 + bias_hi
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("out_features", hidden_size)
op.set_config("in_features", input_size)
op.set_param(op.ParamName.WEIGHTS, w_ii)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_ii)
node_creator(graph=graph,
node_name="w_ii * input_0 + bias_ii",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("out_features", hidden_size)
op.set_config("in_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hi)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_hi)
node_creator(graph=graph,
node_name="w_hi * h_prev_1 + bias_hi",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_ii * input_0 + bias_ii")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "w_hi * h_prev_1 + bias_hi")).out_tensors[0])
node_creator(graph=graph,
node_name="y_i",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "w_ii * input_0 + bias_ii")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hi * h_prev_1 + bias_hi")).out_tensors[0]])
        # y_f = w_if * input_0 + bias_if + w_hf * h_prev_1 + bias_hf
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", input_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_if)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_if)
node_creator(graph=graph,
node_name="w_if * input_0 + bias_if",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", hidden_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hf)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_hf)
node_creator(graph=graph,
node_name="w_hf * h_prev_1 + bias_hf",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_if * input_0 + bias_if")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "w_hf * h_prev_1 + bias_hf")).out_tensors[0])
node_creator(graph=graph,
node_name="y_f",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "w_if * input_0 + bias_if")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hf * h_prev_1 + bias_hf")).out_tensors[0]])
op = TorchSigmoid()
node_creator(graph=graph,
node_name="it",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_i")).out_tensors[0]])
op = TorchSigmoid()
node_creator(graph=graph,
node_name="ft",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_f")).out_tensors[0]])
# y_g = w_ig * input_0 + bias_ig + it*(w_hg * h_prev_1 + bias_hg)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", input_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_ig)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_ig)
node_creator(graph=graph,
node_name="w_ig * input_0 + bias_ig",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "input_0")).out_tensors)
op = TorchLinear()
op.set_config("bias", bias)
op.set_config("in_features", hidden_size)
op.set_config("out_features", hidden_size)
op.set_param(op.ParamName.WEIGHTS, w_hg)
if bias is True:
op.set_param(op.ParamName.BIAS, bias_hg)
node_creator(graph=graph,
node_name="w_hg * h_prev_1 + bias_hg",
op=op,
num_out_tensors=1,
in_tensors=graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors)
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "it")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "w_hg * h_prev_1 + bias_hg")).out_tensors[0])
node_creator(graph=graph,
node_name="it*(w_hg * h_prev_1 + bias_hg)",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "it")).out_tensors[0],
graph.node(get_full_name(graph.name, "w_hg * h_prev_1 + bias_hg")).out_tensors[0]])
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "w_ig * input_0 + bias_ig")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "it*(w_hg * h_prev_1 + bias_hg)")).out_tensors[0])
node_creator(graph=graph,
node_name="y_g",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "w_ig * input_0 + bias_ig")).out_tensors[0],
graph.node(get_full_name(graph.name, "it*(w_hg * h_prev_1 + bias_hg)")).out_tensors[0]])
# op = Split(optype=NNDCT_OP.SPLIT)
# op.set_attr(op.AttrName.INPUT, graph.node("combine_2_linearity").out_tensors[0])
# op.set_attr(op.AttrName.SPLIT_SIZE_OR_SECTIONS, hidden_size)
# op.set_attr(op.AttrName.AXIS, 1)
# node_creator(graph=graph,
# node_name="split_ifgo",
# op=op,
# num_out_tensors=4,
# in_tensors=graph.node("combine_2_linearity").out_tensors)
op = TorchTanh()
node_creator(graph=graph,
node_name="cct",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "y_g")).out_tensors[0]])
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "ft")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "cct")).out_tensors[0])
node_creator(graph=graph,
node_name="ft*cct",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "ft")).out_tensors[0],
graph.node(get_full_name(graph.name, "cct")).out_tensors[0]])
op = TorchSub()
op.set_config("input", graph.node(get_full_name(graph.name, "cct")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "ft*cct")).out_tensors[0])
node_creator(graph=graph,
node_name="cct-ft*cct",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "cct")).out_tensors[0],
graph.node(get_full_name(graph.name, "ft*cct")).out_tensors[0]])
op = TorchMul()
op.set_config("input", graph.node(get_full_name(graph.name, "ft")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors[0])
node_creator(graph=graph,
node_name="ft*h_prev_1",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "ft")).out_tensors[0],
graph.node(get_full_name(graph.name, "h_prev_1")).out_tensors[0]])
op = TorchAdd()
op.set_config("input", graph.node(get_full_name(graph.name, "cct-ft*cct")).out_tensors[0])
op.set_config("other", graph.node(get_full_name(graph.name, "ft*h_prev_1")).out_tensors[0])
node_creator(graph=graph,
node_name="h_next",
op=op,
num_out_tensors=1,
in_tensors=[graph.node(get_full_name(graph.name, "cct-ft*cct")).out_tensors[0],
graph.node(get_full_name(graph.name, "ft*h_prev_1")).out_tensors[0]])
self._connect_nodes(graph)
graph.add_end_tensor(graph.node(get_full_name(graph.name, "h_next")).out_tensors[0])
graphs.append(lstm_cell_pair)
return graphs
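# In basic_gru above the node names mirror the LSTM code, but semantically
# (apparently following PyTorch's r/z/n gate layout in w_ih/w_hh) "it" is the
# reset gate r_t, "ft" is the update gate z_t and "cct" is the candidate
# state n_t, so the final combination
#   h_next = (cct - ft*cct) + ft*h_prev_1 = (1 - z_t)*n_t + z_t*h_{t-1}
# is the standard GRU state update.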
| 42.464183 | 122 | 0.576316 | 4,106 | 29,640 | 3.830492 | 0.033609 | 0.090285 | 0.085961 | 0.103764 | 0.92453 | 0.908761 | 0.903166 | 0.855417 | 0.845053 | 0.842828 | 0 | 0.012492 | 0.300472 | 29,640 | 697 | 123 | 42.525108 | 0.746069 | 0.044467 | 0 | 0.687708 | 0 | 0 | 0.085725 | 0.003323 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009967 | false | 0 | 0.008306 | 0.001661 | 0.026578 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e809cd3ad6378d41459ef0a7cc770d396ed60895 | 1,234 | py | Python | server/map_generator.py | anuragpeshne/voyager | 95b5ef7a9e7a7dde36708ac8bff25ef96c6a72e0 | [
"MIT"
] | null | null | null | server/map_generator.py | anuragpeshne/voyager | 95b5ef7a9e7a7dde36708ac8bff25ef96c6a72e0 | [
"MIT"
] | null | null | null | server/map_generator.py | anuragpeshne/voyager | 95b5ef7a9e7a7dde36708ac8bff25ef96c6a72e0 | [
"MIT"
] | null | null | null | def generate(map_name):
name, size = map_name.split('_')
if name == 'nile':
return __generate_nile(size)
elif name == 'himalaya':
return __generate_himalaya(size)
else:
return __generate_default(size)
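# Illustrative call (the size suffix is parsed but currently unused by the
# generators below): generate("nile_8") returns
# {'start': [6, 0], 'dest': [0, 7], 'map': <8x8 grid>}.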
def __generate_nile(size):
map_ = [
[1, 1, 1, 500, 1, 1, 1, 1],
[1, 1, 1, 500, 1, 1, 1, 1],
[1, 1, 500, 500, 1, 1, 1, 1],
[1, 500, 1, 500, 1, 1, 1, 1],
[500, 1, 1, 500, 1, 1, 1, 1],
[1, 1, 500, 1, 1, 1, 1, 1],
[1, 500, 1, 1, 1, 1, 1, 1],
[500, 1, 1, 1, 1, 1, 1, 1]
]
return {
'start': [len(map_) - 2, 0],
'dest': [0, len(map_[0]) - 1],
'map': map_
}
def __generate_himalaya(size):
map_ = [
[ 1, 1, 1, 1, 1, 1, 1, 19],
[ 1, 1, 1, 1, 1, 1, 19, 299],
[299, 1, 1, 1, 1, 19, 299, 499],
[499, 299, 1, 1, 19, 299, 499, 899],
[999, 499, 299, 1, 1, 19, 299, 499],
[499, 299, 1, 1, 1, 1, 19, 299],
[299, 1, 1, 1, 1, 1, 1, 19]
]
return {
'start': [len(map_) - 1, 0],
'dest': [0, len(map_[0]) - 2],
'map': map_
}
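# `__generate_default` is referenced by `generate` above but is not defined
# in this module, so unknown map names raise NameError. A minimal placeholder
# sketch (flat 8x8 map, with start/dest chosen to mirror the maps above):
def __generate_default(size):
    map_ = [[1] * 8 for _ in range(8)]
    return {
        'start': [len(map_) - 1, 0],
        'dest': [0, len(map_[0]) - 1],
        'map': map_
    }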
| 28.045455 | 46 | 0.385737 | 186 | 1,234 | 2.419355 | 0.134409 | 0.293333 | 0.326667 | 0.311111 | 0.486667 | 0.455556 | 0.384444 | 0.284444 | 0.284444 | 0.213333 | 0 | 0.271967 | 0.418963 | 1,234 | 43 | 47 | 28.697674 | 0.355649 | 0 | 0 | 0.205128 | 1 | 0 | 0.029984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.205128 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e80bc6b130a15fb674ed9d8a5405b6412f446f94 | 95 | py | Python | src/hub/dataload/sources/wellderly/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 39 | 2017-07-01T22:34:39.000Z | 2022-03-15T22:25:59.000Z | src/hub/dataload/sources/wellderly/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 105 | 2017-06-28T17:26:06.000Z | 2022-03-17T17:49:53.000Z | src/hub/dataload/sources/wellderly/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 14 | 2017-06-12T18:29:36.000Z | 2021-03-18T15:51:27.000Z | from .wellderly_dumper import WellderlyDumper
from .wellderly_upload import WellderlyUploader
| 23.75 | 47 | 0.884211 | 10 | 95 | 8.2 | 0.7 | 0.317073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 95 | 3 | 48 | 31.666667 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e82c8c34c56128a930322e74c75ef45fc98ff7b3 | 35 | py | Python | yUMItools/__init__.py | adamn102/yUMItools | c2a4a40ffc672dd7168f3262b4b27ada1c609de9 | [
"MIT"
] | null | null | null | yUMItools/__init__.py | adamn102/yUMItools | c2a4a40ffc672dd7168f3262b4b27ada1c609de9 | [
"MIT"
] | null | null | null | yUMItools/__init__.py | adamn102/yUMItools | c2a4a40ffc672dd7168f3262b4b27ada1c609de9 | [
"MIT"
] | null | null | null | from yUMItools.utils import YUMISet | 35 | 35 | 0.885714 | 5 | 35 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08fc0a6157001aa817e8b654705fcc065cd1046a | 16,565 | py | Python | src/geocat/comp/eofunc.py | dimaclimate/geocat-comp | dcb55e22d69d96762b683652cf83f6b9ef4fcc38 | [
"Apache-2.0"
] | null | null | null | src/geocat/comp/eofunc.py | dimaclimate/geocat-comp | dcb55e22d69d96762b683652cf83f6b9ef4fcc38 | [
"Apache-2.0"
] | null | null | null | src/geocat/comp/eofunc.py | dimaclimate/geocat-comp | dcb55e22d69d96762b683652cf83f6b9ef4fcc38 | [
"Apache-2.0"
] | null | null | null | import warnings
from typing import Iterable
import numpy as np
import xarray as xr
from eofs.xarray import Eof
def _generate_eofs_solver(data, time_dim=0, weights=None, center=True, ddof=1):
"""Convenience function to be used in both `eofunc_eofs` and `eofunc_pcs`
functions."""
    # Normalize the input into an xarray.DataArray whose leftmost dimension
    # is "time".
    if not isinstance(data, xr.DataArray):
        data = np.asarray(data)
    if (time_dim >= data.ndim) or (time_dim < -data.ndim):
        raise ValueError(
            "ERROR _generate_eofs_solver: `time_dim` out of bounds.")
# Transpose data if time_dim is not 0 (i.e. the first/left-most dimension)
dims_to_transpose = np.arange(data.ndim).tolist()
dims_to_transpose.insert(
0, dims_to_transpose.pop(dims_to_transpose.index(time_dim)))
data = np.transpose(data, axes=dims_to_transpose)
dims = [f"dim_{i}" for i in range(data.ndim)]
dims[0] = 'time'
data = xr.DataArray(
data,
dims=dims,
)
solver = Eof(data, weights=weights, center=center, ddof=ddof)
return data, solver
def eofunc_eofs(data,
neofs=1,
time_dim=0,
eofscaling=0,
weights=None,
center=True,
ddof=1,
vfscaled=False,
meta=False):
"""Computes empirical orthogonal functions (EOFs, aka: Principal Component
Analysis).
Note: `eofunc_eofs` allows to perform the EOF analysis that was previously done via the NCL function `eofunc`.
However, there are a few changes to the NCL flow such as : (1) Only `np.nan` is supported as missing value,
(2) EOFs are computed only from covariance matrix and there is no support for computation from correlation matrix,
(3) percentage of non-missing points that must exist at any single point is no longer an input.
This implementation uses `eofs` package (https://anaconda.org/conda-forge/eofs), which is built upon the
following study: Dawson, Andrew, "eofs: A library for EOF analysis of meteorological, oceanographic, and
climate data," Journal of Open Research Software, vol. 4, no. 1, 2016. Further information about this
package can be found at: https://ajdawson.github.io/eofs/latest/index.html#
This implementation provides a few conveniences to the user on top of `eofs` package that are described below
in the Parameters section.
Parameters
----------
data : (:class:`xarray.DataArray` or :class:`numpy.ndarray` or :class:`list`)
Should contain numbers or `np.nan` for missing value representation. It must be at least a 2-dimensional array.
When input data is of type `xarray.DataArray`, `eofs.xarray` interface assumes the left-most dimension
(i.e. `dim_0`) is the `time` dimension. In this case, that dimension should have the name "time".
When input data is of type `numpy.ndarray` or `list`, this function still assumes the leftmost dimension
to be the number of observations or `time` dimension: however, in this case, user is allowed to input otherwise.
        If the input does not have its leftmost dimension as the `time` or number of observations, then the user
        should specify `time_dim=x` to define which dimension must be treated as time or number of observations.
neofs : (:class:`int`, Optional)
A scalar integer that specifies the number of empirical orthogonal functions (i.e. eigenvalues and
eigenvectors) to be returned. This is usually less than or equal to the minimum number of observations or
number of variables. Defaults to 1.
time_dim : (:class:`int`, Optional)
An integer defining the time dimension if it is not the leftmost dimension. When input data is of type
`xarray.DataArray`, this is ignored (assuming `xarray.DataArray` has its leftmost dimension with the exact
name 'time'). It must be between ``0`` and ``data.ndim - 1`` or it could be ``-1`` indicating the last
dimension. Defaults to 0.
Note: The `time_dim` argument allows to perform the EOF analysis that was previously done via the NCL
function `eofunc_n`.
eofscaling : (:class:`int`, Optional)
(From `eofs` package): Sets the scaling of the EOFs. The following values are accepted:
- 0 : Un-scaled EOFs (default).
- 1 : EOFs are divided by the square-root of their eigenvalues.
- 2 : EOFs are multiplied by the square-root of their eigenvalues.
weights : (:class:`array_like`, Optional)
(From `eofs` package): An array of weights whose shape is compatible with those of the input array dataset.
The weights can have the same shape as dataset or a shape compatible with an array broadcast (i.e., the shape
of the weights can can match the rightmost parts of the shape of the input array dataset). If the input array
dataset does not require weighting then the value None may be used. Defaults to None (no weighting).
center : (:class:`bool`, Optional)
(From `eofs` package): If True, the mean along the first axis of dataset (the time-mean) will be removed prior
to analysis. If False, the mean along the first axis will not be removed. Defaults to True (mean is removed).
The covariance interpretation relies on the input data being anomaly data with a time-mean of 0. Therefore this
option should usually be set to True. Setting this option to True has the useful side effect of propagating
missing values along the time dimension, ensuring that a solution can be found even if missing values occur
in different locations at different times.
ddof : (:class:`int`, Optional)
(From `eofs` package): ‘Delta degrees of freedom’. The divisor used to normalize the covariance matrix is
N - ddof where N is the number of samples. Defaults to 1.
vfscaled : (:class:`bool`, Optional)
(From `eofs` package): If True, scale the errors by the sum of the eigenvalues. This yields typical errors
with the same scale as the values returned by Eof.varianceFraction. If False then no scaling is done.
Defaults to False.
meta : (:class:`bool`, Optional)
If set to True and the input array is an Xarray, the metadata from the input array will be copied to the
output array. Defaults to False.
Returns
-------
A multi-dimensional array containing EOFs. The returned array will be of the same size as data with the
leftmost dimension removed and an additional dimension of the size `neofs` added.
The return variable will have associated with it the following attributes:
eigenvalues:
A one-dimensional array of size `neofs` that contains the eigenvalues associated with each EOF.
northTest:
(From `eofs` package): Typical errors for eigenvalues.
The method of North et al. (1982) is used to compute the typical error for each eigenvalue. It is
assumed that the number of times in the input data set is the same as the number of independent
realizations. If this assumption is not valid then the result may be inappropriate.
Note: The `northTest` attribute allows to perform the error analysis that was previously done via the NCL
function `eofunc_north`.
totalAnomalyVariance:
(From `eofs` package): Total variance associated with the field of anomalies (the sum of the eigenvalues).
varianceFraction:
(From `eofs` package): Fractional EOF mode variances.
        The fraction of the total variance explained by each EOF mode, with values between 0 and 1 inclusive.
"""
data, solver = _generate_eofs_solver(data,
time_dim=time_dim,
weights=weights,
center=center,
ddof=ddof)
# Checking number of EOFs
if neofs <= 0:
raise ValueError(
"ERROR eofunc_eofs: num_eofs must be a positive non-zero integer value."
)
eofs = solver.eofs(neofs=neofs, eofscaling=eofscaling)
# Populate attributes for output
attrs = {}
if meta:
attrs = data.attrs
attrs['eigenvalues'] = solver.eigenvalues(neigs=neofs)
attrs['northTest'] = solver.northTest(neigs=neofs, vfscaled=vfscaled)
attrs['totalAnomalyVariance'] = solver.totalAnomalyVariance()
attrs['varianceFraction'] = solver.varianceFraction(neigs=neofs)
if meta:
dims = ["eof"
] + [data.dims[i] for i in range(data.ndim) if i != time_dim]
coords = {
k: v for (k, v) in data.coords.items() if k != data.dims[time_dim]
}
else:
dims = ["eof"] + [f"dim_{i}" for i in range(data.ndim) if i != time_dim]
coords = {}
return xr.DataArray(eofs, attrs=attrs, dims=dims, coords=coords)
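# A minimal usage sketch for eofunc_eofs (the random data is purely
# illustrative):
#
#   sst = xr.DataArray(np.random.randn(120, 30, 40),
#                      dims=("time", "lat", "lon"))
#   eofs = eofunc_eofs(sst, neofs=3)
#   eofs.shape                          # (3, 30, 40): one field per mode
#   eofs.attrs["varianceFraction"]      # explained-variance fraction per mode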
def eofunc_pcs(data,
npcs=1,
time_dim=0,
pcscaling=0,
weights=None,
center=True,
ddof=1,
meta=False):
"""Computes the principal components (time projection) in the empirical
orthogonal function analysis.
Note: `eofunc_pcs` allows to perform the analysis that was previously done via the NCL function `eofunc_ts`.
However, there are a few changes to the NCL flow such as : (1) Only `np.nan` is supported as missing value,
(2) EOFs are computed only from covariance matrix and there is no support for computation from correlation matrix,
(3) percentage of non-missing points that must exist at any single point is no longer an input.
This implementation uses `eofs` package (https://anaconda.org/conda-forge/eofs), which is built upon the
following study: Dawson, Andrew, "eofs: A library for EOF analysis of meteorological, oceanographic, and
climate data," Journal of Open Research Software, vol. 4, no. 1, 2016. Further information about this
package can be found at: https://ajdawson.github.io/eofs/latest/index.html#
This implementation provides a few conveniences to the user on top of `eofs` package that are described below
in the Parameters section.
Parameters
----------
data : :class:`xarray.DataArray` or :class:`numpy.ndarray` or :class:`list`
Should contain numbers or `np.nan` for missing value representation. It must be at least a 2-dimensional array.
When input data is of type `xarray.DataArray`, `eofs.xarray` interface assumes the left-most dimension
(i.e. `dim_0`) is the `time` dimension. In this case, that dimension should have the name "time".
When input data is of type `numpy.ndarray` or `list`, this function still assumes the leftmost dimension
to be the number of observations or `time` dimension: however, in this case, user is allowed to input otherwise.
        If the input does not have its leftmost dimension as the `time` or number of observations, then the user
        should specify `time_dim=x` to define which dimension must be treated as time or number of observations.
npcs : (:class:`int`, Optional)
A scalar integer that specifies the number of principal components (i.e. eigenvalues and eigenvectors) to be
returned. This is usually less than or equal to the minimum number of observations or number of variables.
Defaults to 1.
time_dim : (:class:`int`, Optional)
An integer defining the time dimension if it is not the leftmost dimension. When input data is of type
`xarray.DataArray`, this is ignored (assuming `xarray.DataArray` has its leftmost dimension with the exact
name 'time'). It must be between ``0`` and ``data.ndim - 1`` or it could be ``-1`` indicating the last
dimension. Defaults to 0.
Note: The `time_dim` argument allows to perform the EOF analysis that was previously done via the NCL
function `eofunc_ts_n`.
pcscaling : (:class:`int`, Optional)
(From `eofs` package): Sets the scaling of the retrieved PCs. The following values are accepted:
- 0 : Un-scaled PCs (default).
- 1 : PCs are divided by the square-root of their eigenvalues.
- 2 : PCs are multiplied by the square-root of their eigenvalues.
weights : (:class:`array_like`, Optional)
(From `eofs` package): An array of weights whose shape is compatible with those of the input array dataset.
The weights can have the same shape as dataset or a shape compatible with an array broadcast (i.e., the shape
of the weights can can match the rightmost parts of the shape of the input array dataset). If the input array
dataset does not require weighting then the value None may be used. Defaults to None (no weighting).
center : (:class:`bool`, Optional)
(From `eofs` package): If True, the mean along the first axis of dataset (the time-mean) will be removed prior
to analysis. If False, the mean along the first axis will not be removed. Defaults to True (mean is removed).
The covariance interpretation relies on the input data being anomaly data with a time-mean of 0. Therefore this
option should usually be set to True. Setting this option to True has the useful side effect of propagating
missing values along the time dimension, ensuring that a solution can be found even if missing values occur
in different locations at different times.
ddof : (:class:`int`, Optional)
(From `eofs` package): ‘Delta degrees of freedom’. The divisor used to normalize the covariance matrix is
N - ddof where N is the number of samples. Defaults to 1.
meta : (:class:`bool`, Optional)
If set to True and the input array is an Xarray, the metadata from the input array will be copied to the
output array. Defaults to False.
    Returns
    -------
    A multi-dimensional array containing the principal components (PCs) with
    shape (`npcs`, time): one time series (projection) per retained EOF mode.
    """
data, solver = _generate_eofs_solver(data,
time_dim=time_dim,
weights=weights,
center=center,
ddof=ddof)
# Checking number of EOFs
if npcs <= 0:
raise ValueError(
"ERROR eofunc_pcs: num_pcs must be a positive non-zero integer value."
)
    # The solver was already constructed by `_generate_eofs_solver`; no need
    # to build a second Eof instance here.
    pcs = solver.pcs(npcs=npcs, pcscaling=pcscaling)
    pcs = pcs.transpose()
# Populate attributes for output
attrs = {}
if meta:
attrs = data.attrs
dims = ["pc", "time"]
if meta:
coords = {"time": data.coords[data.dims[time_dim]]}
else:
coords = {}
return xr.DataArray(pcs, attrs=attrs, dims=dims, coords=coords)
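# A complementary sketch for eofunc_pcs on the same illustrative data as the
# eofunc_eofs example above:
#
#   pcs = eofunc_pcs(sst, npcs=3)
#   pcs.shape    # (3, 120): one time series per retained mode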
# Transparent wrappers for geocat.comp backwards compatibility
def eofunc(data: Iterable, neval, **kwargs) -> xr.DataArray:
    warnings.warn(
        "eofunc will be deprecated in a future version and may not currently "
        "generate proper results for some of its arguments, including "
        "`pcrit`, `jopt='correlation'`, and `missing_value` values other "
        "than np.nan. The output and its attributes may thus not be as "
        "expected. Use `eofunc_eofs` instead.",
        PendingDeprecationWarning)
    if not isinstance(data, (xr.DataArray, np.ndarray)):
        data = np.asarray(data)
time_dim = int(kwargs.get("time_dim", data.ndim - 1))
meta = bool(kwargs.get("meta"))
return eofunc_eofs(data, neofs=neval, time_dim=time_dim, meta=meta)
def eofunc_ts(data: Iterable, evec, **kwargs) -> xr.DataArray:
    warnings.warn(
        "eofunc_ts will be deprecated in a future version and may not "
        "currently generate proper results for some of its arguments, "
        "including `evec`, `jopt='correlation'`, and `missing_value` values "
        "other than np.nan. The output and its attributes may thus not be "
        "as expected. Use `eofunc_pcs` instead.",
        PendingDeprecationWarning)
    if not isinstance(data, (xr.DataArray, np.ndarray)):
        data = np.asarray(data)
time_dim = int(kwargs.get("time_dim", data.ndim - 1))
meta = bool(kwargs.get("meta"))
return eofunc_pcs(data, npcs=evec.shape[0], time_dim=time_dim, meta=meta)
| 47.737752 | 120 | 0.665801 | 2,322 | 16,565 | 4.715762 | 0.167528 | 0.019178 | 0.016438 | 0.018904 | 0.76411 | 0.75169 | 0.730411 | 0.723014 | 0.706393 | 0.695251 | 0 | 0.005375 | 0.258738 | 16,565 | 346 | 121 | 47.875723 | 0.886391 | 0.668458 | 0 | 0.469027 | 1 | 0 | 0.178804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044248 | false | 0 | 0.044248 | 0 | 0.132743 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
08fc5b54b27a2297b81e7f1f32927f318c1558e5 | 385 | py | Python | kaggle_adcal_2021/__init__.py | sinchir0/kaggle_adcal_2021 | 15eec5c3b99e98afc07c11c278230191379a1e30 | [
"MIT"
] | null | null | null | kaggle_adcal_2021/__init__.py | sinchir0/kaggle_adcal_2021 | 15eec5c3b99e98afc07c11c278230191379a1e30 | [
"MIT"
] | null | null | null | kaggle_adcal_2021/__init__.py | sinchir0/kaggle_adcal_2021 | 15eec5c3b99e98afc07c11c278230191379a1e30 | [
"MIT"
] | null | null | null | from kaggle_adcal_2021.model.sample import Sample
from kaggle_adcal_2021.model.get_tweet import GetTweetTask
from kaggle_adcal_2021.model.extract_text_info import PreprocessTextTask
from kaggle_adcal_2021.model.merge_tweet import MergeTweetTask
from kaggle_adcal_2021.model.classify_lang import AddClassifyLangColTask
from kaggle_adcal_2021.model.make_graph import MakeRankingGraphTask | 64.166667 | 72 | 0.909091 | 54 | 385 | 6.148148 | 0.407407 | 0.180723 | 0.271084 | 0.343373 | 0.433735 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066298 | 0.05974 | 385 | 6 | 73 | 64.166667 | 0.850829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c072e6aad3e3e024f07c2255c46e88552e3c74b | 87 | py | Python | cvgear/framework/torch/__init__.py | ivanpp/cvgear | a09ab5119f6578d7960042180d66141cf66d4dae | [
"MIT"
] | 1 | 2020-05-20T03:33:15.000Z | 2020-05-20T03:33:15.000Z | cvgear/framework/torch/__init__.py | ivanpp/cvgear | a09ab5119f6578d7960042180d66141cf66d4dae | [
"MIT"
] | null | null | null | cvgear/framework/torch/__init__.py | ivanpp/cvgear | a09ab5119f6578d7960042180d66141cf66d4dae | [
"MIT"
] | null | null | null | from .loader import TorchNestedLoader
from .loader_advanced import TorchNestedLoaderAdv | 43.5 | 49 | 0.896552 | 9 | 87 | 8.555556 | 0.666667 | 0.25974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08046 | 87 | 2 | 49 | 43.5 | 0.9625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98fc11aaff77a342e873b3f48e57cd78d2c46606 | 4,964 | py | Python | swagger_marshmallow_codegen/tests/legacy_dst/03paths.py | Danil-Grigorev/swagger-marshmallow-codegen | 4c077f6e1ef535bcbdbf1f643f97bc4cbc62c0e8 | [
"MIT"
] | 49 | 2017-02-05T17:32:18.000Z | 2022-01-30T13:20:22.000Z | swagger_marshmallow_codegen/tests/legacy_dst/03paths.py | Danil-Grigorev/swagger-marshmallow-codegen | 4c077f6e1ef535bcbdbf1f643f97bc4cbc62c0e8 | [
"MIT"
] | 62 | 2016-12-27T15:38:28.000Z | 2021-09-30T02:47:00.000Z | swagger_marshmallow_codegen/tests/legacy_dst/03paths.py | Danil-Grigorev/swagger-marshmallow-codegen | 4c077f6e1ef535bcbdbf1f643f97bc4cbc62c0e8 | [
"MIT"
] | 10 | 2017-07-19T12:38:25.000Z | 2020-04-07T09:11:22.000Z | from marshmallow import (
Schema,
fields,
)
from marshmallow.validate import (
Length,
Regexp,
)
from swagger_marshmallow_codegen.schema import PrimitiveValueSchema
import re
class Label(Schema):
color = fields.String(validate=[Length(min=6, max=6, equal=None)])
name = fields.String()
url = fields.String()
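# Note: Length(min=6, max=6) constrains `color` to exactly six characters
# (e.g. a hex triplet such as "ff0000"); marshmallow's Length(equal=6) would
# express the same constraint more directly.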
class IssuedLabelsInput:
class Delete:
"""
Remove all labels from an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(data_key='X-GitHub-Media-Type', description='You can check the current version of media type in responses.\n')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Get:
"""
List labels on an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(data_key='X-GitHub-Media-Type', description='You can check the current version of media type in responses.\n')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Post:
"""
Add labels to an issue.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String(validate=[Regexp(regex=re.compile('.+@.+'))]))
class Header(Schema):
X_GitHub_Media_Type = fields.String(data_key='X-GitHub-Media-Type', description='You can check the current version of media type in responses.\n')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Put:
"""
Replace all labels for an issue.
Sending an empty array ([]) will remove all Labels from the Issue.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String(validate=[Regexp(regex=re.compile('.+@.+'))]))
class Header(Schema):
X_GitHub_Media_Type = fields.String(data_key='X-GitHub-Media-Type', description='You can check the current version of media type in responses.\n')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class IssuedLabelsOutput:
class Get200(Label):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Label):
"""Created"""
pass
class Put201(Label):
"""Created"""
pass
| 40.032258 | 158 | 0.64726 | 593 | 4,964 | 5.283305 | 0.165261 | 0.076604 | 0.051069 | 0.102139 | 0.827322 | 0.827322 | 0.827322 | 0.827322 | 0.827322 | 0.827322 | 0 | 0.002909 | 0.238114 | 4,964 | 123 | 159 | 40.357724 | 0.825489 | 0.04029 | 0 | 0.666667 | 0 | 0 | 0.21988 | 0.018072 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012821 | false | 0.025641 | 0.051282 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c731efdde3d041e5e7f0bea67300e41333b7d6ee | 51,402 | py | Python | logo.py | leigh123linux/streamtuner2 | 43ded3a68bcf3d968a99c849d779fc8c3fb3d8d8 | [
"MIT"
] | 1 | 2019-03-03T19:58:01.000Z | 2019-03-03T19:58:01.000Z | logo.py | leigh123linux/streamtuner2 | 43ded3a68bcf3d968a99c849d779fc8c3fb3d8d8 | [
"MIT"
] | null | null | null | logo.py | leigh123linux/streamtuner2 | 43ded3a68bcf3d968a99c849d779fc8c3fb3d8d8 | [
"MIT"
] | null | null | null | # type: binary
# api: gtk
# title: streamtuner2 logo
# description: encoded PNG
#
# Allows to be packaged within PYZ archive.
png = """
iVBORw0KGgoAAAANSUhEUgAAAUEAAABzCAYAAAAGy7AjAABq0UlEQVR4Aezda2xT5xnAcWi77rJKm9bJ9rHPCSG1WBLH5ziOnQtNRaR1+9BpFxB0ZQgGE0ViarnQDhQI5AIhxECAXEJCVkhKYlInjnMhhGZkhNwdiqpW+7BFmrRqFZXoNE1ojC4E3j6PzWsO5QhzROOm1vPhJ6PgONbR0V/v8x7nZB5jbFbV1tZm1tXVba+qqvpDZWXlVvD7Y8eOvQrWlJeXv5CXl/etefPmPYHPjTVCCIlFBHc0NDQwLfX19TdXr179K4fD8X0I4fy4OriEEIoghg1WgEU8elpgNXg1KysrLyMj4ztxd4AJIRRBGHtLHxZBtHnz5n6IYDKMxk/F1QEmhFAEYQ+wLFoEYc9wZunSpflpaWlGGovjDSE0Dh+MEkE+Fn/qdrt/TGNxvCGEIljBQxfNli1bBpxOZwqNxfGEEIrgsUeNIIzFt5ctW1YAF0poLI4bhFAEazBw
j8rj8fzL5XK9KMvyd+PhABNCKIJ1eiKItm3bNohjMewPfuPrfoAJIRTBBr0RxLF4+fLlhXfH4ifUr4doVCaEfG0iWF1dfRLDptehQ4f+nZub+5LNZnuGv57hHUOJ0CistL5s/QF8/ekVK1Y8OdeDSAihlWCT3gBy+/bt+1t2drbDarV+E1eExjbjFUO7gRl8hv+a3jadt9RaXnvujecS8P/xijI+Z65FkRBCH5Zuqa+vZydOnODUvzscNYQFBQWDiqIswhstGH3GDwwdBhbhB+2GW6YWU9DSYClKKk1ScnJyvo17iXMliIQQ+rW51uPHj+M+XwiGj0cRRt5bmzZt+k+0EG7durXZ7XYnmXymvxg7jcwYuMcQUEURGM8Yp8wnzVVShRT64DUGkcZmQr4yFMEjR4744E4yDGEM1UH0eDy3XC7XSHFx8UeqVeID4Ot31q9fXy74hL8auyF+XSqd3INhhJXjNaFJ8IrV4ivKWuXZmO8jEkIoghUVFX64OMJQ
TU1NCI9ieXn5tNPp9MMFkLUHDx78BFeI3BejCN83ndCa8Impx8Qiuk0Mo/hIYfQbbpiaYR+x3vK69U1rbPYRCSEUQRh5u2BfkMEFEoQxjCgrK5uGcdULnHADhV/CqvG61ujMpbWl3Tb1mljIWRW9YewwzghnhKD4llicVDaL+4iEEIogrPDOwr4g4zCIPIr79+/HCLaAZLyx6qpVqzbA/01rjc/4KPtlJvQJTDin0iuw6GF8eBxhr3HK0mSpko5KL9I+IiFfKoogjLx9sMJj6OjRoygSxNLS0lAEYV/wR/DcJ+HfP9ywYUMJBPKOenTm5ABE8LxwT5+Kdhh1rxpNftM182mzV6qVXlHWze19xOFTKfOHTqU8QycymcMoggcOHOg/fPgwg73BCB7FvXv3qiM4H2OTnp5uhivGJ7XGZ7lTZuZ+c9i7EbMXxoDxhuAV3hX/KL5u3a5vH3GizfkqOD3uc24c96Ur4Ev/Oyojb9tSh5tSZ8D7w42pNWDVcGPK
QjqxyRxCEYSRdwBGYvw4TAgGkUcRrgpHIsijgpGRZXkh3G26Qz0646PcBRG8AOH7k0o/iBJGjTjqH6e7TDOCT5gUG2Ef8UD0fcSg39UB2IQ/g020h1yfaMvohzAWgZ9CHL/3uMd3tNm+ZuR0GuMgitxVCKMfbIMwZoOn6WQnXxGKIIy8gx6PhyGMIYdBLCoqeiCCCEdQ/BrcX/AcH5/x0d5jZ5YByz0XwngYdcdRbxhVcRT8wpR4WqyGfcSfaO0jBjsyP54MZLJgwM2CHZyLYRjvxvE2hPFDUAdxXANhNOg9vmNeuXLUK7PRFmQPaw77Qhw/AyPgtzE+wQihCMLIOwwjMUOwPxjCo1hYWKgZQYSjJ95JBj4ofYGPz/azEMGLlrA/qwyEJTYm/l88L97hcZyFMGrGUQgI1ywtFq9UL610rHM866t1S5Nd2SzYmcUmu+7qzAwLILdWHH+u9/iOtzrGxlodLOSMEuZFMtOK40izvSDGJxghFMGSkpIxGInx
4zAIYxixe/du7QiqQqgoig1urTWEI7S9187ESyITB1UuhmEYlV8oH6UvTh9N3Zj6YVJl0qdirzgTddX4WOO0Rhi7TTfSfNKV4jY762vPYmMBiGFX2CS43I2PD8Zx6c9chXqOLawcnwI3YaxmsN/Ixt/hHGxcHUZVHNf9Ov2fcLzNMTzBCKEIwr7fBIzEjMMg8iju2rXroRHkIYTn2CGEE/Y+O5OGJCYOiWGX7icvk6fgwsp+eL3X4HuKXVmuU7a1tnGrx3o1IZAwzVeNusL4GKtG81mBLelIZG+22Zjfn8nGOxdDDHNYsCeHTXbnsMs94She8GbPwPtt03NsYeUoA9xrjIAohmAYteKY90LG3+HnxPIP3RNCEYSR9z0YiRnAu8KgSBB37twZLYLq0diZfC75M2lEYiHD9/Awula6rjkcjk3ABq+ZBvLg9X8D37oT1Nlftl9cVLLoHwtaF/yPj9JaYZytcTqjM4FtbE9hl9qfZ8Huxey93rDGI9k34X3q
GlVhrP5d8L6x2hWGYdSIY99bTgxte4xPMEIognv27HkfVoMMwWgcwqOYn58fNYIc3lcwcSDx44SxBCaNSvcbCVuyfsltt9t9KDMzMxHDibfoxxuz4utDGBfD43L4eW+ASuUlpTc5P3lqYePC6+KAeIfvM0aL4+OO0wt6LKypMocN+p9nl8/lsit9uaxke9Z1eE+69gQvd2cdn+zS3nNUx5GH8USZC0O7O8YnGCEUQbgV1gewGsQrwSHqIO7YseORI4gfnVkw9Dl75wEWxfUF+heLphgTu8D2sgtLEQSlINKRXiwUkN5ZCoigqBhL0GgSNSZGjSYk/ySmJKYkatDEgkHBQi8gaowFu7HEoIJw3znLzrI7jMImprz34ff9voV15+7M7M6Pc+499w73DKeUQzQ43I1jkiOBFalPQko8RSaTDcE2gQE4cqssaRkNEaEI3m8yPPrAoxyee9XM0exLWaasVrhZ+Bt7D7uD6msEMQJPTowI+zs9sijbiuz4yI4c
/2EqqdhtR6LDJt+A/eBoc26P77A+dgzSakCZVlMwiBHIz7RA0fr+w1+wfvrplyD0+9VBNEgQlCEFCjEnJ0crCfJKeL9wj3AJwinjaAJCdEhxIHK5vNPW1jYfIsfxVA0fJUNgIJbfYEmLgYHBKGNjYwHI0ByiRC8QZyL8vMLMyuxjWYKsXLROdIWzi9OOQkSYxah9Os3+nkXk8Zbk6/enkuO77Unlj/bE0cGiSZvzWr7LdgjQdnyXLabTKkCMCpjkGB5ocZ0SbT/9/IP0SxBS3iaIBgkCo8EKKCnOnTtXOwke5p3hHuMSBUe7ocQ4NXUqSpCEhIRUQio85UX4B+0+BwwDnlcyHHgBGPXcc8+Ne+aZZ1hjxowR8Xg8XMHaQV9ffwbIMQ1GpVeZmpl+ZBRqVCZeKb7I/Yr7gBqAYR6d7ls6zd7JIilxVuTrD6aS8h8dyb6v7LGvbps25xWiR/OKIjtSXoTptBogRYQux2MgxqlTzBv/hS9YP/30S3D+/PnN
MACCI8EqKClmZ2c/VoL4HAVGcPwy/q+84zyCcI9zuzjWjV26HUlOTiZZWdkPHLzm7fKNLiqcJa/6JDitbkdoRlNxePYvFVE555pj5128FJ937U7CghsPEhfcaEtaeKMteeH1tqS8K21J8y62Jcw91RabUdUemVLcHhb7VVtA0PpWD9/5v9uFhtw1ybe5L/xY8pASYm9ipMuRs4sNErQm33zoQCr2OpP335qKfXXztZmbDNFjYsWeqUTFbsROAZMciz6yaadE208//zD9EszNzT0NIsRBEAUoREqKUAitIUFa6joIGAI8Azw3crTuKP5h0UVeBY+oKFcDxDglawpJSkpSpNt+wctISGYNici7SqIW3CDRQAwQC8Qt/I3EAwlA4sKbCpKA5EW3SMqim0S+6DciX3iDyGHb5NyzJDG7gcSklZHw+J0kMLyQ+M5YSVyDUsmk+e5E+p4J4ezjMJftMMiRUwQSjLcm337sSCr3u5IlebZ3IGr1hWN8GhgMDFRL
4xnFWLXXcUvlXgdS+RNFV1qNMMlxy+s2rXCec/+FL1g//fRLEFLeX2EAhCAoQ3UhwvxghQShP06mFB6KYJhgUiLbNuQrH6f4nxdNk1d96p11sto/52yrtNia8Kv4XVTyCa+Sh6ikaJttS2AVGkXKPTs686Fb0NYr08K+bpwW9k2NS/DnFU4zPzrmEPBu2VSft0umeKwttnZ7Za+l09LdFvYLdpjZzvnGxDLlC8OJMdukJiEfCA38t3JFbhuFUu/NppOTttk6Ld7h4r3moOf0jdW+gVt+CQh+56p/4Pq7PtML2t1nZhOrLD9i+PZEwvuRqyrdQSnS5cjdwybyBBvy7TZnUnXQjUTOnvKHheXsEDhuMcACRgPDgWeBocAguhSr9ztVVu13Igr2OXaxF3EgTHJclG19G86z23/hC0f7IzcYuiCGsFisp6E7YihE+4OVzw/8/2Ztx376JQgp73kYACEIRIUKKCnCajG4svRnFq5Zs+zifp7vkla7w33OLxe8
cls6vXNbiE/uReIL+M0Dci8Q6c9WhF8DAqxWo6obmxwbAsvwK6LMmJiYuxwO5zPo85sL+xELRACzgTAgBAgCZgHTAX/AF/ACPIBpgAvgBDgocQW8Bg4cOAMIGzp0aOLzzz8/d9SoUQU6OnobJVKz7wL8pp72fMNEVbbDVNPI/ZFD5Ik25LtPXUnVz+7E3SOQ5C6+S7IX/tYqn3O6ISrh8OfeAe/msDg2NvB+QmAc8AIlxdeXmwyrOejysLrYhVQXO5PqA2qAFJnkODtoMg6KjPuX5TcAJQefyQtj9JeajrIoyx41uWbtKMvmbSOtfvl+lGXTp/D7xtFmu9LHSudasdnscfDvWZTi/9My7KdfgjDT4yKIEAdBEJShitS09E5r99T7zvIy4pZznkzLuUDcAQ8QnidI0AvwBjwyTtyzjyhqEf1s0iaoFRAKFCIFCtF6njWJjY1VRJmpqamtUCazGSTlDPthAIgBISAAeAAXYAMsQBfQUQpnLDAaGAmMAF5U
MlL5f7rKbUVDhgyRDR482AxkaDtixAjPjW/YFyV/baFRtkOvZ+TtBQkm2ZLvP3MlP37rQbz9FqAEe5CT/3tHavavZyITDn3tPf29eVyB4xTc9/ffmuRZW+JGan5GXLs4iLgooMkRhOhMbG0sqv/NyA/lN47nOG70lJqXRzlcbxzl0tk52oUQBa4MuBEyxrGlaaxV0dIx44WC0aNHD8PosJfIkk6/OPvpE3932jMIVoK5AiIkFChESoop8jRi7ZlFnECCrjnnVCJ0yWxuswz56oJk6vyyF3UnfgztrB4wYMBL/EP8m4J6AVFQ1w0lRcvFliQ6OhqjTcWgC6TZG+EClCmjqKeBoUqGqDFYySAlA9UYQKGewim3G4ptKtseBhJ8oeKw95G8A5ZU2Q5jPSNvH5ekJk0hO76cRrZucG6bNNmvdGbI9sPJmSeu5i7+nZIgA793ps09e3bLxuVVdYfdSe3haaT2EIUbQTEyyXH3l444KFL4bwkQo7nxDgdjxnrc
uTDWmxANfBjwpeFx+bzO5DfjuVzuOKz3xDbV5aqrqzsS0unxenp6OhTjx48fhWl2vwj76Qt/V3/P4GdGiYeZROzyladn3wcREgQGQhCVEJNTUokVSNAhubTTMnL3dX3XFTVjRG7fPTVg8HpoYxmQCcwGPAYNGmQnrBBeETYKibChG0qKCIrQZLcJ8driReRL5J0w4PIJYKjer0bnUcdysnGWHCg92TBzX3PDzJ3N9TO/bK6f8SHwzonaGW8ArzTVTl/SVDN9HpDeVB0Q31Dp35p/yFqjbIdzBIXYDW8/SDAZJLjdnSzKs/8d+sKSYD/sAb9x4yek2zsv/Sg4YlclCO82kwy/276K1Jd5kDqKUncFKEa6HFGKW96wx0GRjH9NgG5H88fPJJ0A0WBWT3QCGQjCx/aHug6frgERclB81PdsvNsBF53AB7/pBN67rRPUqkLXt6lIF/7ha3rbz376edLyG2oU8YPDxOTydyZlNl+xnHOapKTnYN8fDoKooKSY
lJRCJlh53xqtZ7Abtl0D5AExgC9gB0wA+MDY4cOHjxLWCi8Km4RERSMFsxjFu8V3pF9INxtsMZiAN2/X5u5yILsdID4CAiQnFcwiIMUuGhD8P6AemaEAZEgWl9pQJTuM9Yz8Yh5JTbEjO7/2JLu+9myrOOT9fUWp90dlB9y37tvluvHbzxw2vL/Z+q3XV5pvWLY4+Jtly9c2rnpt/71Vr9eSgtXNpHR/PGk46kXqFXiS+iNqqMTYLceFOVNuQ7+r4z8tQBzo0PE4kKQ7u7NDdzYhiF74I4hgIJJOJ9Fz/mAduE2PitpZbp8Hs6IJ6UHAsWaIBk3x8+5tX/vp50nIb+CQ4aynDWOLQ03T6svNs04TC5DfJGAykJQ6B/vnSFpaGoJCVEkxPj6+A4qTq6GmOR/a8QYmAxJAB1AUOlMjpFgnKKoXtYiaRUTBiS6EJygpPlaOncJyYZ14n7hAVigz6YsQQWhXTtROJ0hzHYKio2CW44m6meSlI7bqZTuIRj0j
/2ceSZNPJbu+9SIQOZITNQGkqdq/iyoKP9JU6UcaFfiSxgol5T4KGpDjiHcXxxCQIoMct77l1Fq8y3V5Vcm0jNrDHgm1h90jQI6zIGr0hqjRBZgCUaMFYPQkB0HGsCfosqPuXmXHEaIgnoHwS/dZs8pvsAIOXGKFNN5ix9xpZyfA84ndcCiSgPjWB3oTY8Ih/X0RI0K2xwcRnGRC6LADS87p6OhYYfrc2772089fkt9Io1nPG8QfkpukN5yckHWKmAJmwESgS4anSGJKBs7iQFCGCCVFHMl9CLMz9sCX2kMpvuHA00xlISgtUZPoougUyO+kGs2AdmLsFJWL6iT7JQWG7xua4JxiuhBBQmyUESUnFBUFJUYmOTbVziCLQYJU2Q5TPaOghK+Q4A/fe5PGakUb6m1S78MgRz9NOVY8So7efZIjUqcZObY8qS8V9sexZxXnclMJUZCmCSf40HU9s5AvIKrLBlmFwaM/9OUF6bIFqWyX1Tu4CS13eRmEqMjs
...[58 lines of base64-encoded PNG image data elided; the payload ends with the PNG IEND trailer]...
"""
svg = """
...[29 lines of base64-encoded, gzip-compressed SVG data elided]...
"""
| 446.973913 | 512 | 0.966577 | 1,607 | 51,402 | 30.917237 | 0.97822 | 0.000242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15074 | 0.002626 | 51,402 | 114 | 513 | 450.894737 | 0.818382 | 0.002198 | 0 | 0.019048 | 0 | 0.828571 | 0.999318 | 0.997309 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c752230204736fb051a7e7f6aee2d73d9d7959d0 | 273 | py | Python | site-packages/serpent/machine_learning/context_classification/context_classifiers/__init__.py | nanpuhaha/SerpentAI | 6af1105fc0a970227a0d7c11e6a0da1bd0bacec6 | [
"MIT"
] | 14 | 2021-10-31T10:02:30.000Z | 2022-03-31T06:16:57.000Z | site-packages/serpent/machine_learning/context_classification/context_classifiers/__init__.py | nanpuhaha/SerpentAI | 6af1105fc0a970227a0d7c11e6a0da1bd0bacec6 | [
"MIT"
] | 6 | 2021-09-26T21:18:30.000Z | 2022-02-01T01:26:18.000Z | serpent/machine_learning/context_classification/context_classifiers/__init__.py | PiterPentester/SerpentAI | 614bafd3c2df3ee6736309d46a7b92325f9a2d15 | [
"MIT"
] | 2 | 2021-11-14T00:21:27.000Z | 2022-02-19T00:26:21.000Z | from serpent.machine_learning.context_classification.context_classifiers.svm_context_classifier import SVMContextClassifier
from serpent.machine_learning.context_classification.context_classifiers.cnn_inception_v3_context_classifier import CNNInceptionV3ContextClassifier
| 68.25 | 147 | 0.937729 | 28 | 273 | 8.714286 | 0.535714 | 0.090164 | 0.147541 | 0.213115 | 0.532787 | 0.532787 | 0.532787 | 0.532787 | 0 | 0 | 0 | 0.007576 | 0.032967 | 273 | 3 | 148 | 91 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4053350b2cc870776918c441df73f7d514e34d89 | 139 | py | Python | test/conftest.py | jaimeHMol/airflow-kubernetes | f520216555c00dc87158bd7c169d3f36722acac3 | [
"MIT"
] | 6 | 2020-11-18T11:02:20.000Z | 2021-11-16T13:00:20.000Z | test/conftest.py | jaimeHMol/airflow-kubernetes | f520216555c00dc87158bd7c169d3f36722acac3 | [
"MIT"
] | null | null | null | test/conftest.py | jaimeHMol/airflow-kubernetes | f520216555c00dc87158bd7c169d3f36722acac3 | [
"MIT"
] | 2 | 2020-11-18T11:02:22.000Z | 2020-11-19T04:18:22.000Z | import pytest
from airflow.models import DagBag
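# Session-scoped fixture: parse the DAG folder once per test run, skipping Airflow's bundled examples.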
@pytest.fixture(scope="session")
def dagbag():
return DagBag(include_examples=False)
| 17.375 | 41 | 0.776978 | 18 | 139 | 5.944444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 139 | 7 | 42 | 19.857143 | 0.877049 | 0 | 0 | 0 | 0 | 0 | 0.05036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
40756b4079cdc5e77b816bbd3d494316b33e2713 | 193 | py | Python | datasets/utils/__init__.py | YorkSu/hat | b646b6689f3d81c985ed13f3d5c23b6c717fd07d | [
"Apache-2.0"
] | 1 | 2019-04-10T04:49:30.000Z | 2019-04-10T04:49:30.000Z | datasets/utils/__init__.py | Suger131/HAT-tf2.0 | b646b6689f3d81c985ed13f3d5c23b6c717fd07d | [
"Apache-2.0"
] | null | null | null | datasets/utils/__init__.py | Suger131/HAT-tf2.0 | b646b6689f3d81c985ed13f3d5c23b6c717fd07d | [
"Apache-2.0"
] | 1 | 2019-06-14T05:53:42.000Z | 2019-06-14T05:53:42.000Z | """
dataset tools
NOTE:
Better not change anything.
"""
# pylint: disable=wildcard-import
from hat.datasets.utils.dsbuilder import *
from hat.datasets.utils.datagenerator import DG
| 16.083333 | 47 | 0.735751 | 24 | 193 | 5.916667 | 0.75 | 0.140845 | 0.183099 | 0.295775 | 0.366197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165803 | 193 | 11 | 48 | 17.545455 | 0.881988 | 0.435233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
408d8e241ee39c59b205cdee5b121ec5daf67f68 | 166 | py | Python | eecs-online-server/homework/admin.py | luhc228/eecs-online-test | 72cbe3d238b26c3ee4d03fbde59c1e34f4fc29cd | [
"MIT"
] | 4 | 2020-01-09T05:10:30.000Z | 2020-10-18T07:14:51.000Z | eecs-online-server/homework/admin.py | luhc228/eecs-online | 72cbe3d238b26c3ee4d03fbde59c1e34f4fc29cd | [
"MIT"
] | 14 | 2019-12-02T10:53:06.000Z | 2022-03-12T00:09:05.000Z | eecs-online-server/homework/admin.py | luhc228/eecs-online-test | 72cbe3d238b26c3ee4d03fbde59c1e34f4fc29cd | [
"MIT"
] | 4 | 2019-11-02T08:34:45.000Z | 2019-12-10T13:10:05.000Z | from django.contrib import admin
# Register your models here.
from . import models
admin.site.register(models.Homework)
admin.site.register(models.HomeworkQuestion) | 23.714286 | 44 | 0.819277 | 22 | 166 | 6.181818 | 0.545455 | 0.132353 | 0.25 | 0.338235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 166 | 7 | 44 | 23.714286 | 0.906667 | 0.156627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
40b759bdb145ad81c41241e343c1354284523512 | 16 | py | Python | pckg2/user22.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | pckg2/user22.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | pckg2/user22.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | from b import B
| 8 | 15 | 0.75 | 4 | 16 | 3 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 16 | 1 | 16 | 16 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40bf97665eccddc34f0f04bfdfae47bd83193b1b | 28 | py | Python | Language/helloworldsanthosh.py | Guuh137/Hacktoberfest | 165cb4e66757764408830ed131ff704f48f8a620 | [
"Apache-2.0"
] | 2 | 2019-10-06T15:37:50.000Z | 2019-10-06T16:51:42.000Z | Language/helloworldsanthosh.py | Guuh137/Hacktoberfest | 165cb4e66757764408830ed131ff704f48f8a620 | [
"Apache-2.0"
] | null | null | null | Language/helloworldsanthosh.py | Guuh137/Hacktoberfest | 165cb4e66757764408830ed131ff704f48f8a620 | [
"Apache-2.0"
] | null | null | null | print("Hello World program") | 28 | 28 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 28 | 1 | 28 | 28 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
90f52c7d97da8a1c35ee417038917426882fe85b | 1,242 | py | Python | test/expensive_test_signature_2.py | salilab/rmf | 4895bff9d22381882ac38180bdd025e22bdc7c00 | [
"Apache-2.0"
] | 2 | 2017-12-22T18:09:47.000Z | 2019-12-18T05:00:50.000Z | test/expensive_test_signature_2.py | salilab/rmf | 4895bff9d22381882ac38180bdd025e22bdc7c00 | [
"Apache-2.0"
] | 5 | 2015-03-07T19:32:39.000Z | 2021-04-22T20:00:10.000Z | test/expensive_test_signature_2.py | salilab/rmf | 4895bff9d22381882ac38180bdd025e22bdc7c00 | [
"Apache-2.0"
] | 2 | 2015-03-12T18:34:23.000Z | 2015-06-19T20:15:14.000Z | import unittest
import RMF
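# Each test recomputes a file's signature and compares it with a stored
# reference, so format changes that alter signatures are caught early.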
class Tests(unittest.TestCase):
def test_0(self):
"""Test that signatures make sense and are stable"""
try:
import RMF_HDF5
        except ImportError:
            # The HDF5 backend is optional; nothing to check without it
            return
RMF.set_log_level("Off")
path = RMF._get_test_input_file_path("rep_and_geom.rmf")
f = RMF.open_rmf_file_read_only(path)
sig = RMF.get_signature_string(f)
# print sig
        with open(
                RMF._get_test_input_file_path("rep_and_geom.signature"),
                "r") as sig_file:
            old_sig = sig_file.read()
RMF._assert_signatures_equal(sig, old_sig)
def test_1(self):
"""Test that signatures make sense and are stable"""
try:
import RMF_HDF5
        except ImportError:
            # The HDF5 backend is optional; nothing to check without it
            return
RMF.set_log_level("Off")
path = RMF._get_test_input_file_path("final_NPC2007_new.rmf")
f = RMF.open_rmf_file_read_only(path)
sig = RMF.get_signature_string(f)
# print sig
        with open(
                RMF._get_test_input_file_path("final_NPC2007_new.signature"),
                "r") as sig_file:
            old_sig = sig_file.read()
RMF._assert_signatures_equal(sig, old_sig)
if __name__ == '__main__':
unittest.main()
| 25.875 | 69 | 0.578905 | 157 | 1,242 | 4.159236 | 0.318471 | 0.05513 | 0.061256 | 0.091884 | 0.866769 | 0.866769 | 0.866769 | 0.866769 | 0.866769 | 0.790199 | 0 | 0.014406 | 0.329308 | 1,242 | 47 | 70 | 26.425532 | 0.769508 | 0.091787 | 0 | 0.628571 | 0 | 0 | 0.091398 | 0.062724 | 0 | 0 | 0 | 0 | 0.057143 | 1 | 0.057143 | false | 0 | 0.114286 | 0 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
29040c7381b3c063c849b37afae2e433cdfe8e31 | 198 | py | Python | assignment4/experiments/__init__.py | prestononeal/CS-7641-assignments | c3a6815ba1be837084c60c3dd0dc8e8e702aa9b7 | [
"MIT"
] | 148 | 2018-12-18T21:14:04.000Z | 2022-03-04T09:13:21.000Z | assignment4/experiments/__init__.py | prestononeal/CS-7641-assignments | c3a6815ba1be837084c60c3dd0dc8e8e702aa9b7 | [
"MIT"
] | 22 | 2019-01-20T00:11:06.000Z | 2021-05-01T17:21:58.000Z | assignment4/experiments/__init__.py | prestononeal/CS-7641-assignments | c3a6815ba1be837084c60c3dd0dc8e8e702aa9b7 | [
"MIT"
] | 172 | 2019-01-09T06:01:54.000Z | 2022-03-25T22:53:19.000Z | from .base import *
from .policy_iteration import *
from .value_iteration import *
from .q_learner import *
# from .plotting import *
__all__ = ['policy_iteration', 'value_iteration', 'q_learner']
| 24.75 | 62 | 0.752525 | 25 | 198 | 5.56 | 0.4 | 0.28777 | 0.273381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 198 | 7 | 63 | 28.285714 | 0.812866 | 0.116162 | 0 | 0 | 0 | 0 | 0.231214 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
290ea779877f36acc53ea7c96d149b10971ba1f1 | 24 | py | Python | src/pipeline_docs/pipeline_atac_consensus_balanced_peaks/__init__.py | jaime11/pipeline_atac_consensus_balanced_peaks | 9d3d5095c4977e8f1883b49a33400a6ffac6da22 | [
"MIT"
] | null | null | null | src/pipeline_docs/pipeline_atac_consensus_balanced_peaks/__init__.py | jaime11/pipeline_atac_consensus_balanced_peaks | 9d3d5095c4977e8f1883b49a33400a6ffac6da22 | [
"MIT"
] | null | null | null | src/pipeline_docs/pipeline_atac_consensus_balanced_peaks/__init__.py | jaime11/pipeline_atac_consensus_balanced_peaks | 9d3d5095c4977e8f1883b49a33400a6ffac6da22 | [
"MIT"
] | null | null | null |
from trackers import *
| 8 | 22 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 2 | 23 | 12 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29275f3e604cc02cd345bb199800775cc9993b46 | 31 | py | Python | hiveplot/__init__.py | isrobson/hiveplot | 2c4340b1f9ab5d18c7f6aa9663e3cad0bd9ac151 | [
"MIT"
] | null | null | null | hiveplot/__init__.py | isrobson/hiveplot | 2c4340b1f9ab5d18c7f6aa9663e3cad0bd9ac151 | [
"MIT"
] | null | null | null | hiveplot/__init__.py | isrobson/hiveplot | 2c4340b1f9ab5d18c7f6aa9663e3cad0bd9ac151 | [
"MIT"
] | null | null | null | from .hiveplot import HivePlot
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
292f76a06cd7c18792c8e973956215fef8eb920a | 7,490 | py | Python | tests/test_views.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | tests/test_views.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | tests/test_views.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | from flask import url_for
from monkeygod import models
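# These tests drive each view through Flask's test client, asserting on status codes and database side effects.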
def test_index(app):
with app.test_client() as client:
res = client.get(url_for('monkey_views.index'))
assert 'Meet the Monkeys' in str(res.data)
def test_search_basic(app, testdata_with_friends):
with app.test_client() as client:
res = client.get(url_for('monkey_views.search'))
assert res.status_code == 200, 'Search without parameters works'
res = client.get(url_for('monkey_views.search', page=2))
assert res.status_code == 200, 'Correct page works'
res = client.get(url_for('monkey_views.search', page=10))
assert res.status_code == 404, 'Wrong page gives 404'
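# Every supported sort order should paginate exactly like the default search.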
def test_search_criteria(app, testdata_with_friends):
criterias = [
'name_asc', 'name_desc',
'number_asc', 'number_desc',
'bf_asc', 'bf_desc'
]
with app.test_client() as client:
for c in criterias:
res = client.get(url_for('monkey_views.search', sort=c))
assert res.status_code == 200, (
'ORDER BY {}: Search without parameters works'.format(c)
)
res = client.get(url_for('monkey_views.search', sort=c, page=2))
assert res.status_code == 200, (
'ORDER BY {}: Correct page works'.format(c)
)
res = client.get(url_for('monkey_views.search', sort=c, page=10))
assert res.status_code == 404, (
'ORDER BY {}: Wrong page gives 404'.format(c)
)
def test_profile(app, testdata_with_many_friends, session):
with app.test_client() as client:
monkey = models.Monkey.query.first()
res = client.get(url_for('monkey_views.profile',
id=monkey.id))
assert res.status_code == 200, 'Correct view shown'
res = client.get(url_for('monkey_views.profile',
id=monkey.id, page=2))
assert res.status_code == 200, 'Correct page works'
res = client.get(url_for('monkey_views.profile',
id=monkey.id, page=10))
assert res.status_code == 404, 'Wrong page gives 404'
def test_profile_add_friend(app, testdata, session):
with app.test_client() as client:
monkey = models.Monkey.query.first()
res = client.get(url_for('monkey_views.profile_add_friend',
id=monkey.id))
assert res.status_code == 200, 'Correct view shown'
res = client.get(url_for('monkey_views.profile_add_friend',
id=monkey.id, page=2))
assert res.status_code == 200, 'Correct page works'
res = client.get(url_for('monkey_views.profile_add_friend',
id=monkey.id, page=10))
assert res.status_code == 404, 'Wrong page gives 404'
def test_create_monkey(app, session):
with app.test_client() as client:
res = client.get(url_for('monkey_views.create_monkey'))
assert res.status_code == 200, 'GET renders form'
res = client.post(url_for('monkey_views.create_monkey'), data={
'name': 'monkey',
'age': '11',
'email': 'monkey@example.com'
})
assert res.status_code == 302, 'Redirects correctly'
assert models.Monkey.query.count() == 1, 'Monkey was created'
monkey = models.Monkey.query.first()
assert monkey.name == 'monkey'
assert monkey.age == 11
assert monkey.email == 'monkey@example.com'
res = client.post(url_for('monkey_views.create_monkey'), data={
'name': 'monkey',
'email': 'monkey_email'
})
assert res.status_code == 200, 'Invalid data is shown'
assert models.Monkey.query.count() == 1, (
'Invalid monkey was not created'
)
def test_edit_monkey(app, session):
with app.test_client() as client:
monkey = models.Monkey(
name='monkey',
age=11,
email='monkey@example.com'
)
session.add(monkey)
session.commit()
res = client.get(url_for('monkey_views.edit_monkey', id=monkey.id))
assert res.status_code == 200, 'GET renders form'
res = client.post(
url_for('monkey_views.edit_monkey', id=monkey.id),
data={
'name': 'monkey2',
'age': '10',
'email': 'monkey@example.fi'
}
)
assert res.status_code == 302, 'Redirects correctly'
assert monkey.name == 'monkey2'
assert monkey.age == 10
assert monkey.email == 'monkey@example.fi'
res = client.post(
url_for('monkey_views.edit_monkey', id=monkey.id),
data={
'name': 'monkey2',
'email': 'monkey.fi'
}
)
assert res.status_code == 200, 'Invalid data is shown'
assert monkey.name == 'monkey2', 'Name not changed'
assert monkey.age == 10, 'Age not changed'
assert monkey.email == 'monkey@example.fi', 'Email not changed'
def test_delete_monkey(app, session):
with app.test_client() as client:
monkey = models.Monkey(
name='monkey',
age=11,
email='monkey@example.com'
)
session.add(monkey)
session.commit()
res = client.get(url_for('monkey_views.delete_monkey', id=monkey.id))
assert res.status_code == 302, 'Redirects correctly'
assert models.Monkey.query.count() == 0, 'Monkey deleted successfully'
def test_add_friend(app, testdata, session):
with app.test_client() as client:
monkey1, monkey2 = models.Monkey.query[:2]
res = client.get(url_for('monkey_views.add_friend',
id=monkey1.id,
friend_id=monkey2.id))
assert res.status_code == 302, 'Redirects correctly'
assert monkey1.is_friend(monkey2) is True, 'Monkeys are friends'
def test_unfriend(app, testdata, session):
with app.test_client() as client:
monkey1, monkey2 = models.Monkey.query[:2]
monkey1.add_friend(monkey2)
session.add_all([monkey1, monkey2])
session.commit()
res = client.get(url_for('monkey_views.unfriend',
id=monkey1.id,
friend_id=monkey2.id))
assert res.status_code == 302, 'Redirects correctly'
assert monkey1.is_friend(monkey2) is False, 'Monkeys are not friends'
def test_add_bf(app, testdata, session):
with app.test_client() as client:
monkey1, monkey2 = models.Monkey.query[:2]
res = client.get(url_for('monkey_views.add_bf',
id=monkey1.id,
friend_id=monkey2.id))
assert res.status_code == 302, 'Redirects correctly'
assert monkey1.best_friend == monkey2, 'Monkeys are best friends'
def test_remove_bf(app, testdata, session):
with app.test_client() as client:
monkey1, monkey2 = models.Monkey.query[:2]
monkey1.add_best_friend(monkey2)
session.add_all([monkey1, monkey2])
session.commit()
res = client.get(url_for('monkey_views.remove_bf', id=monkey1.id))
assert res.status_code == 302, 'Redirects correctly'
assert monkey1.best_friend != monkey2, 'Monkeys are not best friends'
| 37.638191 | 78 | 0.586782 | 913 | 7,490 | 4.651698 | 0.109529 | 0.035319 | 0.067813 | 0.096068 | 0.80292 | 0.783141 | 0.753709 | 0.734165 | 0.712503 | 0.703555 | 0 | 0.028263 | 0.296128 | 7,490 | 198 | 79 | 37.828283 | 0.777314 | 0 | 0 | 0.539877 | 0 | 0 | 0.210147 | 0.044726 | 0 | 0 | 0 | 0 | 0.245399 | 1 | 0.07362 | false | 0 | 0.01227 | 0 | 0.08589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
295cebd462a47fb162e6d09499799ff820675545 | 412 | py | Python | ludwig/encoders/__init__.py | dantreiman/ludwig | daeffd21f9eef524afb2037763abd07a93228c2a | [
"Apache-2.0"
] | 7,739 | 2019-02-11T14:06:31.000Z | 2020-12-16T18:30:29.000Z | ludwig/encoders/__init__.py | dantreiman/ludwig | daeffd21f9eef524afb2037763abd07a93228c2a | [
"Apache-2.0"
] | 769 | 2019-02-11T16:13:20.000Z | 2020-12-16T17:26:11.000Z | ludwig/encoders/__init__.py | dantreiman/ludwig | daeffd21f9eef524afb2037763abd07a93228c2a | [
"Apache-2.0"
] | 975 | 2019-02-11T15:55:54.000Z | 2020-12-14T21:45:39.000Z | # register all encoders
import ludwig.encoders.bag_encoders
import ludwig.encoders.binary_encoders
import ludwig.encoders.category_encoders
import ludwig.encoders.date_encoders
import ludwig.encoders.generic_encoders
import ludwig.encoders.h3_encoders
import ludwig.encoders.image_encoders
import ludwig.encoders.sequence_encoders
import ludwig.encoders.set_encoders
import ludwig.encoders.text_encoders # noqa
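# Note: these imports are assumed to work purely by side effect -- importing
# each module presumably registers its encoders with a global registry (e.g.
# via a registration decorator); nothing from them is referenced directly.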
| 34.333333 | 44 | 0.881068 | 54 | 412 | 6.537037 | 0.296296 | 0.396601 | 0.566572 | 0.793201 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002597 | 0.065534 | 412 | 11 | 45 | 37.454545 | 0.914286 | 0.063107 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
464269a1ba8e9b5928956da894f1c3910a31e77e | 7,260 | py | Python | tests/filelinecomparedifflineswithtoleranceunittests.py | thomasms/filecompare | 393af84939689481da27460cccb52040e6171e01 | [
"MIT"
] | null | null | null | tests/filelinecomparedifflineswithtoleranceunittests.py | thomasms/filecompare | 393af84939689481da27460cccb52040e6171e01 | [
"MIT"
] | null | null | null | tests/filelinecomparedifflineswithtoleranceunittests.py | thomasms/filecompare | 393af84939689481da27460cccb52040e6171e01 | [
"MIT"
] | null | null | null | import unittest
import os.path
import filecompare as fc
from tests.filelinecompareunittests import FileLineCompareUnitTest
class FileLineCompareDiffLinesWithToleranceUnitTest(FileLineCompareUnitTest, unittest.TestCase):
def setUp(self):
FileLineCompareUnitTest.setUp(self)
self.filename_original_text_and_chars = os.path.join(self.base_dir, "original_text_and_chars.txt")
self.filename_compare_text_and_chars = os.path.join(self.base_dir, "compare_text_and_chars_with_tolerance.txt")
self.filename_compare_text_and_chars2 = os.path.join(self.base_dir, "compare_text_and_chars_with_tolerance_and_diff_lines.txt")
def get_test_files_tolerance(self):
return self.filename_original_text_and_chars, self.filename_compare_text_and_chars
def get_test_files_tolerance2(self):
return self.filename_original_text_and_chars, self.filename_compare_text_and_chars2
def test_linecompare_diff_with_no_tolerance(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=[])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_low_tolerance(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.01, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=[])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_acceptable_tolerance_no_ignore(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.1, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=[])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_acceptable_tolerance_with_ignore(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.1, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences")
self.assertEqual(result, True)
def test_linecompare_diff_with_low_tolerance_and_ignore(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.01, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_abs_tolerance_and_ignore(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=0.9)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_abs_tolerance_no_ignore(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=1.0)
result = self.operation(file1, file2, ignore=[])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_abs_tolerance_and_ignore_pass(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=1.0)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences")
self.assertEqual(result, True)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.01, absolute_tolerance=0.9)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences")
self.assertEqual(result, True)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance_fail(self):
file1, file2 = self.get_test_files_tolerance()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=1e-3, absolute_tolerance=0.9)
result = self.operation(file1, file2, ignore=['3fg'])
self.assertNotEqual(0, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance_and_ignore_in_one_file_only(self):
file1, file2 = self.get_test_files_tolerance2()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.1, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=['This line is to be ignored', '3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences")
self.assertEqual(result, True)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance_and_ignore_in_one_file_only2(self):
file1, file2 = self.get_test_files_tolerance2()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.1, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=['ignored', '3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences")
self.assertEqual(result, True)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance_and_ignore_in_one_file_only_fail(self):
file1, file2 = self.get_test_files_tolerance2()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=0.0)
result = self.operation(file1, file2, ignore=['This line is to be ignored', '3fg'])
self.assertEqual(3, len(self.operation.diffs), "Assert differences")
self.assertEqual(result, False)
def test_linecompare_diff_with_abs_tolerance_and_rel_tolerance_and_ignore_in_one_file_only_fail2(self):
file1, file2 = self.get_test_files_tolerance2()
self.operation = fc.FileLineCompareDiffLinesWithTolerance(relative_tolerance=0.0, absolute_tolerance=0.0, custom_splitter="f")
result = self.operation(file1, file2, ignore=['ignored not', '3fg'])
self.assertEqual(0, len(self.operation.diffs), "Assert no differences since number of lines are different")
self.assertEqual(result, False)
| 54.586466 | 135 | 0.734573 | 863 | 7,260 | 5.887601 | 0.103129 | 0.107459 | 0.037788 | 0.060618 | 0.925015 | 0.911435 | 0.894115 | 0.884668 | 0.883094 | 0.874237 | 0 | 0.024605 | 0.171488 | 7,260 | 132 | 136 | 55 | 0.820116 | 0 | 0 | 0.632653 | 0 | 0 | 0.073141 | 0.01708 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.173469 | false | 0.010204 | 0.040816 | 0.020408 | 0.244898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4687fbf3f04a651844a3b068784d37dbe87907a7 | 47 | py | Python | __init__.py | kivy-garden/garden.resizable_behavior | d9b9bf1fdb222b6d6f344e23e2dd7777c66e4d43 | [
"MIT"
] | 8 | 2016-12-29T01:58:18.000Z | 2021-01-12T02:45:13.000Z | __init__.py | kivy-garden/garden.resizable_behavior | d9b9bf1fdb222b6d6f344e23e2dd7777c66e4d43 | [
"MIT"
] | 3 | 2019-05-19T11:06:18.000Z | 2021-01-04T09:25:20.000Z | __init__.py | kivy-garden/garden.resizable_behavior | d9b9bf1fdb222b6d6f344e23e2dd7777c66e4d43 | [
"MIT"
] | 4 | 2017-04-08T09:57:25.000Z | 2021-03-24T19:10:17.000Z | from behaviors.resize import ResizableBehavior
| 23.5 | 46 | 0.893617 | 5 | 47 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
46b275d6715176c8da4522a81061dbf91f6dcbac | 44 | py | Python | src/abb_communication/clients/rfl_robot/communication/__init__.py | createchaos/abb_communication | e4f5eae90f9423d8a20be9f6e0738467d1a5f5aa | [
"MIT"
] | 3 | 2020-02-07T20:16:16.000Z | 2020-12-18T01:14:51.000Z | src/abb_communication/clients/rfl_robot/communication/__init__.py | createchaos/abb_communication | e4f5eae90f9423d8a20be9f6e0738467d1a5f5aa | [
"MIT"
] | 21 | 2021-01-14T14:55:03.000Z | 2021-01-28T00:44:15.000Z | src/abb_communication/clients/rfl_robot/communication/__init__.py | createchaos/abb_communication | e4f5eae90f9423d8a20be9f6e0738467d1a5f5aa | [
"MIT"
] | 1 | 2021-04-01T15:59:21.000Z | 2021-04-01T15:59:21.000Z | from communication import ABBCommunication
| 22 | 43 | 0.886364 | 4 | 44 | 9.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3be550d26654194ee488861bb3f3563d5323fea | 601 | py | Python | lesson8/Car.py | vinaymayar/python-game-workshop | e990f51815c2080a0d702c9d90dac8e8c2a35d45 | [
"MIT"
] | 1 | 2016-10-11T19:27:08.000Z | 2016-10-11T19:27:08.000Z | lesson8/Car.py | vinaymayar/python-game-workshop | e990f51815c2080a0d702c9d90dac8e8c2a35d45 | [
"MIT"
] | null | null | null | lesson8/Car.py | vinaymayar/python-game-workshop | e990f51815c2080a0d702c9d90dac8e8c2a35d45 | [
"MIT"
] | null | null | null | class Car:
    """A simple car model that tracks make, model and kilometers driven."""
def __init__(self, make, model, kilometers_driven):
self.make = make
self.model = model
self.kilometers_driven = kilometers_driven
def get_make(self):
return self.make
def get_model(self):
return self.model
def get_kilometers(self):
return self.kilometers_driven
def set_kilometers(self, kilometers):
self.kilometers_driven = kilometers
    def get_miles(self):
        # One mile is about 1.609 km, so dividing converts kilometers to miles.
        miles = self.kilometers_driven / 1.609
        return miles
def drive(self, kilometers):
self.kilometers_driven += kilometers
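# A minimal usage sketch (illustrative only; the names below are not part of
# the original class):
if __name__ == '__main__':
    my_car = Car('Toyota', 'Corolla', 1000)
    my_car.drive(609)          # 1000 km + 609 km = 1609 km
    print(my_car.get_miles())  # 1609 / 1.609 == 1000.0 miles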
| 24.04 | 55 | 0.65391 | 71 | 601 | 5.309859 | 0.225352 | 0.297082 | 0.265252 | 0.238727 | 0.233422 | 0.233422 | 0 | 0 | 0 | 0 | 0 | 0.009132 | 0.271215 | 601 | 24 | 56 | 25.041667 | 0.851598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
316e42054c89053292cf8c9bffbfa290a6efa55c | 46 | py | Python | test.py | unofficialdxnny/ToolX | b1708cdba31a3e648804d6711235d885e0980ff6 | [
"MIT"
] | null | null | null | test.py | unofficialdxnny/ToolX | b1708cdba31a3e648804d6711235d885e0980ff6 | [
"MIT"
] | null | null | null | test.py | unofficialdxnny/ToolX | b1708cdba31a3e648804d6711235d885e0980ff6 | [
"MIT"
] | 1 | 2022-02-20T12:32:18.000Z | 2022-02-20T12:32:18.000Z | import webbrowser
print(webbrowser._browsers) | 23 | 27 | 0.869565 | 5 | 46 | 7.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 2 | 27 | 23 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
317bf78abe09da12e3b776e15198a060538f48ee | 1,173 | py | Python | art.py | sarthakkarandikar03/Guess-The-Number | aa8d7f7e229b84c714e89e6de62057f3fe525cf1 | [
"CC0-1.0"
] | null | null | null | art.py | sarthakkarandikar03/Guess-The-Number | aa8d7f7e229b84c714e89e6de62057f3fe525cf1 | [
"CC0-1.0"
] | null | null | null | art.py | sarthakkarandikar03/Guess-The-Number | aa8d7f7e229b84c714e89e6de62057f3fe525cf1 | [
"CC0-1.0"
] | null | null | null | # includes all the art
logo = """
_______ __ __ _______ _______. _______. .___________. __ __ _______ .__ __. __ __ .___ ___. .______ _______ .______
/ _____|| | | | | ____| / | / | | || | | | | ____| | \ | | | | | | | \/ | | _ \ | ____|| _ \
| | __ | | | | | |__ | (----` | (----` `---| |----`| |__| | | |__ | \| | | | | | | \ / | | |_) | | |__ | |_) |
| | |_ | | | | | | __| \ \ \ \ | | | __ | | __| | . ` | | | | | | |\/| | | _ < | __| | /
| |__| | | `--' | | |____.----) | .----) | | | | | | | | |____ | |\ | | `--' | | | | | | |_) | | |____ | |\ \----.
\______| \______/ |_______|_______/ |_______/ |__| |__| |__| |_______| |__| \__| \______/ |__| |__| |______/ |_______|| _| `._____|
"""
| 97.75 | 161 | 0.207161 | 5 | 1,173 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.574595 | 1,173 | 11 | 162 | 106.636364 | 0.042084 | 0.01705 | 0 | 0 | 0 | 0.75 | 0.986099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3194b37743fded68e8e8fb264a6531da9e66cb21 | 37 | py | Python | tests/test_example.py | sergeiissaev/fibrosis-quantification-software | ac0ead806aca11ebf8c1ecf375d3b037988f3fa1 | [
"Apache-2.0"
] | null | null | null | tests/test_example.py | sergeiissaev/fibrosis-quantification-software | ac0ead806aca11ebf8c1ecf375d3b037988f3fa1 | [
"Apache-2.0"
] | null | null | null | tests/test_example.py | sergeiissaev/fibrosis-quantification-software | ac0ead806aca11ebf8c1ecf375d3b037988f3fa1 | [
"Apache-2.0"
] | null | null | null | def test_example():
    # Placeholder test that always fails.
    assert False
| 12.333333 | 19 | 0.702703 | 5 | 37 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216216 | 37 | 2 | 20 | 18.5 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31a6e370260ebb397f9ca1f35ba1a89028938762 | 22 | py | Python | altitude/teachers/__init__.py | StamKaly/altitude-mod | e77eb156c933aeae9b0a89841a64d5df6e99da76 | [
"MIT"
] | 2 | 2016-10-08T11:28:26.000Z | 2018-07-11T16:53:36.000Z | altitude/teachers/__init__.py | StamKaly/altitude-mod | e77eb156c933aeae9b0a89841a64d5df6e99da76 | [
"MIT"
] | 1 | 2016-09-06T12:34:19.000Z | 2016-09-06T16:14:29.000Z | altitude/teachers/__init__.py | StamKaly/altitude-mod | e77eb156c933aeae9b0a89841a64d5df6e99da76 | [
"MIT"
] | null | null | null | from . import teachers | 22 | 22 | 0.818182 | 3 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31b5650787afc20f7afca9881750b845bed1f4d2 | 2,084 | py | Python | pytest-virtualenv/tests/unit/test_package_entry.py | RaiVaibhav/pytest-plugins | b21eef7fb2d876b3910f4a476875f9f157275b49 | [
"MIT"
] | 282 | 2015-12-01T12:40:31.000Z | 2019-10-30T23:30:54.000Z | pytest-virtualenv/tests/unit/test_package_entry.py | RaiVaibhav/pytest-plugins | b21eef7fb2d876b3910f4a476875f9f157275b49 | [
"MIT"
] | 126 | 2015-09-02T14:31:02.000Z | 2019-10-21T20:32:18.000Z | pytest-virtualenv/tests/unit/test_package_entry.py | RaiVaibhav/pytest-plugins | b21eef7fb2d876b3910f4a476875f9f157275b49 | [
"MIT"
] | 55 | 2015-09-21T09:11:05.000Z | 2019-10-27T00:44:32.000Z | from pytest_virtualenv import PackageEntry
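# The tests below pin down PackageEntry's classification rules as exercised
# here: 'src' requires a dev version *and* a source path, 'dev' requires a
# dev version with an empty/None source path, and 'rel' requires a non-dev
# version regardless of path.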
def test_issrc_dev_in_version_plus_path_to_source_True():
p = PackageEntry('acme.x', '1.3.10dev1', 'path/to/source')
assert p.issrc
def test_issrc_no_dev_in_version_plus_path_to_source_False():
p = PackageEntry('acme.x', '1.3.10', 'path/to/source')
assert not p.issrc
def test_isdev_path_to_source_blank_string_True():
p = PackageEntry('acme.x', '1.3.10dev1', '')
assert p.isdev
def test_issrc_path_to_source_None_False():
p = PackageEntry('acme.x', '1.3.10dev1', None)
assert not p.issrc
def test_isdev_dev_in_version_plus_path_to_source_False(): # issrc case
p = PackageEntry('acme.x', '1.3.10dev1', 'anything')
assert not p.isdev
def test_isdev_dev_in_version_path_to_source_None_True():
p = PackageEntry('acme.x', '1.3.10dev1', None)
assert p.isdev
def test_isdev_no_dev_in_version_path_to_source_None_False():
p = PackageEntry('acme.x', '1.3.10', None)
assert not p.isdev
def test_isrel_no_dev_in_version_path_to_source_None_True():
p = PackageEntry('acme.x', '1.3.10', None)
assert p.isrel
def test_isrel_no_dev_in_version_plus_path_to_source_True():
p = PackageEntry('acme.x', '1.3.10', 'anything')
assert p.isrel
def test_isrel_dev_in_version_path_to_source_None_False():
p = PackageEntry('acme.x', '1.3.10dev1', None)
assert not p.isrel
def test_match_dev_ok():
pe = PackageEntry('acme.x', '1.3.10dev1', None)
assert pe.match(PackageEntry.ANY)
assert pe.match(PackageEntry.DEV)
assert not pe.match(PackageEntry.SRC)
assert not pe.match(PackageEntry.REL)
def test_match_source_ok():
pe = PackageEntry('acme.x', '1.3.10dev1', 'path/to/source')
assert pe.match(PackageEntry.ANY)
assert not pe.match(PackageEntry.DEV)
    assert pe.match(PackageEntry.SRC)
    assert not pe.match(PackageEntry.REL)
def test_match_rel_ok():
pe = PackageEntry('acme.x', '1.3.10', None)
assert pe.match(PackageEntry.ANY)
assert not pe.match(PackageEntry.DEV)
assert not pe.match(PackageEntry.SRC)
assert pe.match(PackageEntry.REL)
| 27.786667 | 72 | 0.721209 | 337 | 2,084 | 4.160237 | 0.118694 | 0.064907 | 0.11127 | 0.166904 | 0.833809 | 0.791726 | 0.766049 | 0.684736 | 0.537803 | 0.513552 | 0 | 0.033879 | 0.150192 | 2,084 | 74 | 73 | 28.162162 | 0.757764 | 0.004798 | 0 | 0.416667 | 0 | 0 | 0.118726 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 1 | 0.270833 | false | 0 | 0.020833 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31d512e84c38e001f5ea55f84c2e06fef329cc24 | 28,266 | py | Python | src/services/sre_data.py | anandmoghan/speaker-recognition | ff5557e89b686b1da3e3eb30e3dad9bf851e7bbe | [
"MIT"
] | 6 | 2018-11-13T08:11:23.000Z | 2021-08-29T22:52:57.000Z | src/services/sre_data.py | anandmoghan/speaker-recognition | ff5557e89b686b1da3e3eb30e3dad9bf851e7bbe | [
"MIT"
] | null | null | null | src/services/sre_data.py | anandmoghan/speaker-recognition | ff5557e89b686b1da3e3eb30e3dad9bf851e7bbe | [
"MIT"
] | 3 | 2019-04-08T03:38:20.000Z | 2020-03-12T05:02:53.000Z | from json import loads as load_json
from os.path import join as join_path
import numpy as np
import re
from services.common import get_file_list_as_dict, remove_duplicates, sort_by_index
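# Note: each 'read' entry built below is a shell command string (sph2pipe or
# sox) that, when executed, is assumed to stream the selected channel as WAV
# audio to stdout for downstream feature extraction.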
def get_train_data(data_config):
    """Builds the combined SRE/SWBD/Mixer 6 training list from a JSON config."""
with open(data_config, 'r') as f:
sre_data = load_json(f.read())
data_root = sre_data['ROOT']
data_loc = sre_data['LOCATION']
speaker_key = sre_data['SPEAKER_KEY']
sre04 = make_old_sre_data(data_root, data_loc['SRE04'], 2004, speaker_key)
sre05_train = make_old_sre_data(data_root, data_loc['SRE05_TRAIN'], 2005, speaker_key)
sre05_test = make_old_sre_data(data_root, data_loc['SRE05_TEST'], 2005, speaker_key)
sre06 = make_old_sre_data(data_root, data_loc['SRE06'], 2006, speaker_key)
sre08 = make_sre08_data(data_root, data_loc['SRE08_TRAIN'], data_loc['SRE08_TEST'])
sre10 = make_sre10_data(data_root, data_loc['SRE10'])
sre16 = make_sre16_data(data_root, data_loc['SRE16_EVAL'])
swbd_c1 = make_swbd_cellular(data_root, data_loc['SWBD_C1'], 1)
swbd_c2 = make_swbd_cellular(data_root, data_loc['SWBD_C2'], 2)
swbd_p1 = make_swbd_phase(data_root, data_loc['SWBD_P1'], 1)
swbd_p2 = make_swbd_phase(data_root, data_loc['SWBD_P2'], 2)
swbd_p3 = make_swbd_phase(data_root, data_loc['SWBD_P3'], 3)
mx6_calls = make_mixer6_calls(data_root, data_loc['MX6'])
mx6_mic = make_mixer6_mic(data_root, data_loc['MX6'])
train_data = np.hstack([sre04, sre05_train, sre05_test, sre06, sre08, sre10, sre16, swbd_c1, swbd_c2, swbd_p1,
swbd_p2, swbd_p3, mx6_calls, mx6_mic]).T
print('Removing Duplicates...')
train_data, n_dup = remove_duplicates(train_data)
print('Removed {} duplicates.'.format(n_dup))
print('Sorting train data by index...')
return sort_by_index(train_data)
def make_old_sre_data(data_root, data_loc, sre_year, speaker_key):
    """Builds [index, location, channel, speaker, read] lists for SRE04-06 data."""
print('Making sre{} lists...'.format(sre_year))
sre_loc = join_path(data_root, data_loc)
sre_year = 'sre' + str(sre_year)
bad_audio = ['jagi', 'jaly', 'jbrg', 'jcli', 'jfmx']
file_list = get_file_list_as_dict(sre_loc)
for ba in bad_audio:
try:
del file_list[ba]
except KeyError:
pass
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
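    # Each speaker-key line is assumed to be whitespace-separated; only fields
    # 0 (speaker id), 2 (corpus tag, e.g. 'sre2004'), 3 (file name) and
    # 4 (channel, 'A' or 'B') are used below. A hypothetical line:
    #   10001 f sre2004 xabc A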
with open(speaker_key, 'r') as f:
for line in f.readlines():
            tokens = re.split(r'\s+', line.strip())
speaker_id = tokens[0]
file_name = tokens[3]
channel = 1 if tokens[4] == 'A' else 2
if sre_year == tokens[2]:
try:
file_loc = file_list[file_name]
speaker_id = sre_year + '_' + speaker_id
index_list.append('{}-{}_{}_ch{}'.format(speaker_id, sre_year, file_name, channel))
location_list.append(file_loc)
speaker_list.append(speaker_id)
channel_list.append(channel)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
except KeyError:
pass
print('Made {:d} files from {}.'.format(len(index_list), sre_year))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_sre08_data(data_root, data_train_loc, data_test_loc):
    """Builds the training lists for SRE08 from its train and test releases."""
print('Making sre2008 lists...')
train_loc = join_path(data_root, data_train_loc)
test_loc = join_path(data_root, data_test_loc)
model_key = join_path(test_loc, 'data/keys/NIST_SRE08_KEYS.v0.1/model-keys/NIST_SRE08_short2.model.key')
trials_key = join_path(test_loc, 'data/keys/NIST_SRE08_KEYS.v0.1/trial-keys/NIST_SRE08_short2-short3.trial.key')
train_file_list = get_file_list_as_dict(train_loc)
test_file_list = get_file_list_as_dict(test_loc)
file_list = {**train_file_list, **test_file_list}
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
model_to_speaker = dict()
with open(model_key, 'r') as f:
for line in f.readlines()[1:]:
tokens = re.split('[,:]+', line.strip())
model_id = tokens[0]
file_name = tokens[2]
channel = 1 if tokens[3] == 'a' else 2
speaker_id = tokens[4]
model_to_speaker[model_id] = speaker_id
try:
file_loc = file_list[file_name]
speaker_id = 'sre2008_' + speaker_id
index_list.append('{}-sre2008_{}_ch{}'.format(speaker_id, file_name, channel))
location_list.append(file_loc)
channel_list.append(channel)
speaker_list.append(speaker_id)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
except KeyError:
pass
with open(trials_key, 'r') as f:
for line in f.readlines()[1:]:
tokens = re.split('[,]+', line.strip())
model_id = tokens[0]
file_name = tokens[1]
channel = 1 if tokens[2] == 'a' else 2
target_type = tokens[3]
try:
file_loc = file_list[file_name]
speaker_id = 'sre2008_' + model_to_speaker[model_id]
if target_type == 'target':
index_list.append('{}-sre2008_{}_ch{}'.format(speaker_id, file_name, channel))
location_list.append(file_loc)
channel_list.append(channel)
speaker_list.append(speaker_id)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
del file_list[file_name]
except KeyError:
pass
print('Made {:d} files from sre2008.'.format(len(index_list)))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_sre10_data(data_root, data_loc):
    """Builds the training lists for SRE10 from its model, train and trial keys."""
print('Making sre2010 lists...')
sre_loc = join_path(data_root, data_loc)
model_key = join_path(sre_loc, 'keys/coreext.modelkey.csv')
train_key = join_path(sre_loc, 'train/coreext.trn')
trials_key = join_path(sre_loc, 'keys/coreext-coreext.trialkey.csv')
file_list = get_file_list_as_dict(join_path(sre_loc, 'data'))
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
model_to_speaker = dict()
with open(model_key, 'r') as f:
for line in f.readlines()[1:]:
tokens = re.split('[,]+', line.strip())
model_id = tokens[0]
speaker_id = tokens[1]
if not speaker_id == 'NOT_SCORED':
model_to_speaker[model_id] = speaker_id
with open(train_key, 'r') as f:
for line in f.readlines():
            tokens = re.split(r'[\s:]+', line.strip())
model_id = tokens[0]
file_name = tokens[2].split('/')[2].split('.sph')[0]
channel = 1 if tokens[3] == 'A' else 2
try:
file_loc = file_list[file_name]
speaker_id = 'sre2010_' + model_to_speaker[model_id]
index_list.append('{}-sre2010_{}_ch{}'.format(speaker_id, file_name, channel))
location_list.append(file_loc)
speaker_list.append(speaker_id)
channel_list.append(channel)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
except KeyError:
pass
with open(trials_key, 'r') as f:
for line in f.readlines():
tokens = re.split('[,]+', line.strip())
model_id = tokens[0]
file_name = tokens[1]
channel = 1 if tokens[2] == 'A' else 2
target_type = tokens[3]
try:
speaker_id = 'sre2010_' + model_to_speaker[model_id]
file_loc = file_list[file_name]
if target_type == 'target':
index_list.append('{}-sre2010_{}_ch{}'.format(speaker_id, file_name, channel))
location_list.append(file_loc)
speaker_list.append(speaker_id)
channel_list.append(channel)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
del file_list[file_name]
except KeyError:
pass
print('Made {:d} files from sre2010.'.format(len(index_list)))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_sre16_data(data_root, data_loc):
    """Builds the combined SRE16 eval enrollment and test lists."""
print('Making sre2016 lists...')
sre_loc = join_path(data_root, data_loc)
file_list = get_file_list_as_dict(join_path(sre_loc, 'data/enrollment'))
meta_key = join_path(sre_loc, 'docs/sre16_eval_enrollment.tsv')
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
with open(meta_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
speaker_id = 'sre16_eval_enroll_' + tokens[0]
file_name = tokens[1]
try:
file_loc = file_list[file_name]
index_list.append('{}-sre16_eval_enroll_{}'.format(speaker_id, file_name))
location_list.append(file_loc)
speaker_list.append(speaker_id)
channel_list.append(1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
del file_list[file_name]
except KeyError:
pass
print('Made {:d} enrollment files.'.format(len(index_list)))
enrollment_data = np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
file_list = get_file_list_as_dict(join_path(sre_loc, 'data/test'))
segment_key = join_path(sre_loc, 'docs/sre16_eval_segment_key.tsv')
language_key = join_path(sre_loc, 'metadata/calls.tsv')
trial_key = join_path(sre_loc, 'docs/sre16_eval_trial_key.tsv')
utt_to_call = dict()
with open(segment_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
utt_to_call[tokens[0]] = tokens[1]
call_to_language = dict()
with open(language_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
call_to_language[tokens[0]] = tokens[1]
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
language_list = []
target_list = []
with open(trial_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
speaker_id = 'sre16_eval_enroll_' + tokens[0]
file_name = tokens[1]
target_type = tokens[3]
call_id = utt_to_call[file_name]
try:
file_loc = file_list[file_name]
index_list.append('{}-sre16_eval_test_{}'.format(speaker_id, file_name))
location_list.append(file_loc)
speaker_list.append(speaker_id)
channel_list.append(1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
language_list.append(call_to_language[call_id])
target_list.append(target_type)
del file_list[file_name]
except KeyError:
pass
print('Made {:d} test files.'.format(len(index_list)))
test_data = np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
return np.hstack([enrollment_data, test_data])
def make_swbd_cellular(data_root, data_loc, cellular=1):
    """Builds the training lists for Switchboard Cellular part 1 or 2."""
print('Making swbd cellular {} lists...'.format(cellular))
swbd_loc = join_path(data_root, data_loc)
bad_audio = [40019, 45024, 40022]
stats_key = join_path(swbd_loc, 'doc{}/swb_callstats.tbl'.format('' if cellular == 1 else 's'))
swbd_type = 'swbd_c{:d}_'.format(cellular)
file_list = get_file_list_as_dict(swbd_loc)
for ba in bad_audio:
try:
del file_list['sw_' + str(ba)]
except KeyError:
pass
index_list = []
location_list = []
channel_list = []
speaker_list = []
read_list = []
with open(stats_key, 'r') as f:
for line in f.readlines():
tokens = re.split('[,]+', line.strip())
file_name = tokens[0]
speaker_id1 = 'sw_' + tokens[1]
speaker_id2 = 'sw_' + tokens[2]
try:
file_loc = file_list['sw_' + str(file_name)]
index_list.append(speaker_id1 + '-' + swbd_type + file_name + '_ch1')
location_list.append(file_loc)
channel_list.append(1)
speaker_list.append(speaker_id1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
index_list.append(speaker_id2 + '-' + swbd_type + file_name + '_ch2')
location_list.append(file_loc)
channel_list.append(2)
speaker_list.append(speaker_id2)
read_list.append('sph2pipe -f wav -p -c 2 {}'.format(file_loc))
del file_list['sw_' + str(file_name)]
except KeyError:
pass
print('Made {:d} files swbd cellular {}.'.format(len(index_list), cellular))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_swbd_phase(data_root, data_loc, phase=1):
    """Builds the training lists for Switchboard phase 1, 2 or 3."""
print('Making swbd phase {} lists...'.format(phase))
swbd_loc = join_path(data_root, data_loc)
bad_audio = ['sw_22602']
stats_key = join_path(swbd_loc, 'docs/callinfo.tbl')
swbd_type = 'swbd_p{:d}_'.format(phase)
file_list = get_file_list_as_dict(swbd_loc)
for ba in bad_audio:
try:
del file_list[ba]
except KeyError:
pass
index_list = []
location_list = []
channel_list = []
speaker_list = []
read_list = []
with open(stats_key, 'r') as f:
for line in f.readlines():
tokens = re.split('[,]+', line.strip())
            file_name = ('sw_' + tokens[0]) if phase == 3 else tokens[0].split('.')[0]
speaker_id = 'sw_' + str(tokens[2])
channel = 1 if tokens[3] == 'A' else 2
try:
file_loc = file_list[file_name]
index_list.append(speaker_id + '-' + swbd_type + file_name + '_ch{:d}'.format(channel))
location_list.append(file_loc)
channel_list.append(channel)
speaker_list.append(speaker_id)
read_list.append('sph2pipe -f wav -p -c {} {}'.format(channel, file_loc))
except KeyError:
pass
print('Made {:d} files swbd phase {}.'.format(len(index_list), phase))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_mixer6_calls(data_root, data_loc):
    """Builds the training lists for Mixer 6 telephone calls (both channels)."""
print('Making mixer6 calls lists...')
mx6_loc = join_path(data_root, data_loc)
mx6_calls_loc = join_path(mx6_loc, 'data/ulaw_sphere')
stats_key = join_path(mx6_loc, 'docs/mx6_calls.csv')
file_list = get_file_list_as_dict(mx6_calls_loc)
call_to_file = dict()
for key in file_list.keys():
call_id = re.split('[_]+', key)[2]
call_to_file[call_id] = key
index_list = []
location_list = []
channel_list = []
speaker_list = []
read_list = []
with open(stats_key, 'r') as f:
for line in f.readlines()[1:]:
tokens = re.split('[,]+', line.strip())
call_id = tokens[0]
speaker_id1 = 'MX6_' + tokens[4]
speaker_id2 = 'MX6_' + tokens[12]
try:
file_name = call_to_file[call_id]
file_loc = file_list[file_name]
index_list.append('{}-MX6_CALLS_{}_ch1'.format(speaker_id1, file_name))
location_list.append(file_loc)
channel_list.append(1)
speaker_list.append(speaker_id1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
index_list.append('{}-MX6_CALLS_{}_ch2'.format(speaker_id2, file_name))
location_list.append(file_loc)
channel_list.append(2)
speaker_list.append(speaker_id2)
read_list.append('sph2pipe -f wav -p -c 2 {}'.format(file_loc))
except KeyError:
pass
print('Made {:d} files from mixer6 calls.'.format(len(index_list)))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_mixer6_mic(data_root, data_loc):
    """Builds the training lists for Mixer 6 microphone recordings."""
print('Making mixer6 mic lists...')
mx6_loc = join_path(data_root, data_loc)
mx6_mic_loc = join_path(mx6_loc, 'data/pcm_flac')
bad_audio = ['20091208_091618_HRM_120831']
stats_key = join_path(mx6_loc, 'docs/mx6_ivcomponents.csv')
file_list = dict()
mic_idx = ['02', '04', '05', '06', '07', '08', '09', '10', '11', '12', '13'] # Omitting 01, 03 and 14
for idx in mic_idx:
mic_loc = join_path(mx6_mic_loc, 'CH' + idx)
mic_file_list = get_file_list_as_dict(mic_loc, pattern='*.flac')
file_list = {**mic_file_list, **file_list}
index_list = []
location_list = []
channel_list = []
speaker_list = []
read_list = []
with open(stats_key, 'r') as f:
for line in f.readlines()[1:]:
tokens = re.split('[,]+', line.strip())
session_id = tokens[0]
speaker_id = 'MX6_' + re.split('[_]+', session_id)[3]
start_time = tokens[7]
end_time = tokens[8]
if session_id not in bad_audio:
for idx in mic_idx:
file_name = '{}_CH{}'.format(session_id, idx)
try:
file_loc = file_list[file_name]
index_list.append('{}-MX6_MIC_{}'.format(speaker_id, file_name))
location_list.append(file_loc)
channel_list.append(1)
speaker_list.append(speaker_id)
read_list.append('sox -t flac {} -r 8k -t wav -V0 - trim {} {}'
.format(file_loc, start_time, float(end_time) - float(start_time)))
except KeyError:
pass
print('Made {:d} files from mixer6 mic.'.format(len(index_list)))
return np.vstack([index_list, location_list, channel_list, speaker_list, read_list])
def make_sre18_dev_data(sre_config):
    """Builds the SRE18 dev enrollment, test and unlabeled lists."""
print('Making sre2018 dev lists...')
with open(sre_config, 'r') as f:
sre_data = load_json(f.read())
data_root = sre_data['ROOT']
data_loc = sre_data['LOCATION']['SRE18_DEV']
sre_loc = join_path(data_root, data_loc)
file_list = get_file_list_as_dict(join_path(sre_loc, 'data/unlabeled'))
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
for key in file_list.keys():
index_list.append('sre18_unlabelled_{}'.format(key))
location_list.append(file_list[key])
speaker_list.append('sre18_unlabelled_{}'.format(key))
channel_list.append(1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_list[key]))
sre_unlabeled = np.vstack([index_list, location_list, channel_list, speaker_list, read_list]).T
sph_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/enrollment'), pattern='*.sph', ext=True)
flac_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/enrollment'), pattern='*.flac', ext=True)
diarization_file = join_path(sre_loc, 'docs/sre18_dev_enrollment_diarization.tsv')
key_file = join_path(sre_loc, 'docs/sre18_dev_enrollment.tsv')
diarization_dict = dict()
with open(diarization_file) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
diarization_dict[tokens[0]] = (float(tokens[2]), float(tokens[3]))
utt_to_spk = dict()
with open(key_file) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
utt_to_spk[tokens[1]] = tokens[0]
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
for key in sph_file_list.keys():
file_loc = sph_file_list[key]
index_list.append('sre18_dev_enroll_{}'.format(key))
location_list.append(file_loc)
speaker_list.append(utt_to_spk[key])
channel_list.append(1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
for key in flac_file_list.keys():
file_loc = flac_file_list[key]
index_list.append('sre18_dev_enroll_{}'.format(key))
location_list.append(file_loc)
speaker_list.append(utt_to_spk[key])
channel_list.append(1)
try:
start_time, end_time = diarization_dict[key]
read_list.append('sox -t flac {} -r 8k -t wav -V0 - trim {} {}'
.format(file_loc, start_time, float(end_time) - float(start_time)))
except KeyError:
            read_list.append('sox -t flac {} -r 8k -t wav -V0 -'.format(file_loc))
sre_dev_enroll = np.vstack([index_list, location_list, channel_list, speaker_list, read_list]).T
sph_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/test'), pattern='*.sph', ext=True)
flac_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/test'), pattern='*.flac', ext=True)
file_list = {**sph_file_list, **flac_file_list}
trials_key = join_path(sre_loc, 'docs/sre18_dev_trial_key.tsv')
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
with open(trials_key) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
try:
file_loc = file_list[tokens[1]]
if tokens[3] == 'target':
index_list.append(tokens[1])
location_list.append(file_loc)
speaker_list.append(tokens[1])
channel_list.append(1)
if tokens[1][-3:] == 'sph':
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
else:
read_list.append('sox -t flac {} -r 8k -t wav -V0 -'.format(file_loc))
del file_list[tokens[1]]
except KeyError:
pass
sre_dev_test = np.vstack([index_list, location_list, channel_list, speaker_list, read_list]).T
return sre_dev_enroll, sre_dev_test, sre_unlabeled
def make_sre18_eval_data(sre_config):
    """Builds the SRE18 eval enrollment and test lists."""
print('Making sre2018 eval lists...')
with open(sre_config, 'r') as f:
sre_data = load_json(f.read())
data_root = sre_data['ROOT']
data_loc = sre_data['LOCATION']['SRE18_EVAL']
sre_loc = join_path(data_root, data_loc)
sph_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/enrollment'), pattern='*.sph', ext=True)
flac_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/enrollment'), pattern='*.flac', ext=True)
diarization_file = join_path(sre_loc, 'docs/sre18_eval_enrollment_diarization.tsv')
key_file = join_path(sre_loc, 'docs/sre18_eval_enrollment.tsv')
diarization_dict = dict()
with open(diarization_file) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
diarization_dict[tokens[0]] = (float(tokens[2]), float(tokens[3]))
utt_to_spk = dict()
with open(key_file) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
utt_to_spk[tokens[1]] = tokens[0]
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
for key in sph_file_list.keys():
file_loc = sph_file_list[key]
index_list.append('sre18_eval_enroll_{}'.format(key))
location_list.append(file_loc)
speaker_list.append(utt_to_spk[key])
channel_list.append(1)
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
for key in flac_file_list.keys():
file_loc = flac_file_list[key]
index_list.append('sre18_eval_enroll_{}'.format(key))
location_list.append(file_loc)
speaker_list.append(utt_to_spk[key])
channel_list.append(1)
try:
start_time, end_time = diarization_dict[key]
read_list.append('sox -t flac {} -r 8k -t wav -V0 - trim {} {}'
.format(file_loc, start_time, float(end_time) - float(start_time)))
except KeyError:
read_list.append('sox -t flac {} -r 8k -t wav -V0 -'.format(file_loc))
sre_eval_enroll = np.vstack([index_list, location_list, channel_list, speaker_list, read_list]).T
sph_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/test'), pattern='*.sph', ext=True)
flac_file_list = get_file_list_as_dict(join_path(sre_loc, 'data/test'), pattern='*.flac', ext=True)
file_list = {**sph_file_list, **flac_file_list}
trials_key = join_path(sre_loc, 'docs/sre18_eval_trial_key.tsv')
index_list = []
location_list = []
speaker_list = []
channel_list = []
read_list = []
with open(trials_key) as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
try:
file_loc = file_list[tokens[1]]
if tokens[3] == 'target':
index_list.append(tokens[1])
location_list.append(file_loc)
speaker_list.append(tokens[1])
channel_list.append(1)
if tokens[1][-3:] == 'sph':
read_list.append('sph2pipe -f wav -p -c 1 {}'.format(file_loc))
else:
read_list.append('sox -t flac {} -r 8k -t wav -V0 -'.format(file_loc))
del file_list[tokens[1]]
except KeyError:
pass
sre_eval_test = np.vstack([index_list, location_list, channel_list, speaker_list, read_list]).T
return sre_eval_enroll, sre_eval_test
def make_sre16_trials_file(sre_config, trials_file):
    """Writes the SRE16 eval trials to trials_file, one trial per line."""
with open(sre_config, 'r') as f:
sre_data = load_json(f.read())
data_root = sre_data['ROOT']
data_loc = sre_data['LOCATION']['SRE16_EVAL']
sre_loc = join_path(data_root, data_loc)
segment_key = join_path(sre_loc, 'docs/sre16_eval_segment_key.tsv')
language_key = join_path(sre_loc, 'metadata/calls.tsv')
trial_key = join_path(sre_loc, 'docs/sre16_eval_trial_key.tsv')
utt_to_call = dict()
with open(segment_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
utt_to_call[tokens[0]] = tokens[1]
call_to_language = dict()
with open(language_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
call_to_language[tokens[0]] = tokens[1]
trials_list = []
with open(trial_key, 'r') as f:
for line in f.readlines()[1:]:
            tokens = re.split(r'\s+', line.strip())
speaker_id = tokens[0]
file_name = tokens[1]
target_type = tokens[3]
trials_list.append('sre16_eval_enroll_{} sre16_eval_test_{} {}'.format(speaker_id, file_name, target_type))
with open(trials_file, 'w') as f:
for trial in trials_list:
f.write('{}\n'.format(trial))
def split_trials_file(trials_file):
    """Splits a trials file into test-index, enroll-label and target lists."""
index_list = []
label_list = []
target_list = []
with open(trials_file) as f:
for line in f.readlines():
            tokens = re.split(r'\s+', line.strip())
index_list.append(tokens[1])
label_list.append(tokens[0])
target_list.append(tokens[2])
return index_list, label_list, target_list
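# A minimal round-trip sketch (paths are hypothetical):
#   make_sre16_trials_file('sre_config.json', '/tmp/sre16_trials.txt')
#   indexes, labels, targets = split_trials_file('/tmp/sre16_trials.txt')
# where each trials line has the form '<enroll_label> <test_index> <target_type>'.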
| 40.553802 | 119 | 0.596547 | 3,815 | 28,266 | 4.092005 | 0.056619 | 0.070463 | 0.02921 | 0.03363 | 0.832746 | 0.80802 | 0.775287 | 0.735187 | 0.701877 | 0.68503 | 0 | 0.025046 | 0.273969 | 28,266 | 696 | 120 | 40.612069 | 0.73565 | 0.000778 | 0 | 0.695507 | 0 | 0.001664 | 0.112421 | 0.023724 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021631 | false | 0.026622 | 0.008319 | 0 | 0.049917 | 0.036606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31e127dffb58a3654a8b1c36306ac6638b8d8eef | 15,902 | py | Python | ldif/nets/pointnet.py | JRyanShue/ldif | fd2cbfbcb752d0e5e19e80b98760bf5bf9d1661f | [
"Apache-2.0"
] | null | null | null | ldif/nets/pointnet.py | JRyanShue/ldif | fd2cbfbcb752d0e5e19e80b98760bf5bf9d1661f | [
"Apache-2.0"
] | null | null | null | ldif/nets/pointnet.py | JRyanShue/ldif | fd2cbfbcb752d0e5e19e80b98760bf5bf9d1661f | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Implementation of PointNet network."""
from ldif.util.tf_util import log
import numpy as np
import tensorflow as tf
import tensorflow.contrib.layers as contrib_layers
# LDIF is an internal package, should be imported last.
# pylint: disable=g-bad-import-order
from ldif.util import math_util
# pylint: enable=g-bad-import-order
def point_set_to_transformation(points):
"""Maps a point set to an affine transformation and a translation."""
batch_size, point_count, _ = points.get_shape().as_list()
with tf.variable_scope('r3_transformation_net'):
net = tf.expand_dims(points, axis=2)
net = contrib_layers.conv2d(
inputs=net,
num_outputs=64,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv1')
net = contrib_layers.conv2d(
net,
num_outputs=128,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv2')
net = contrib_layers.conv2d(
net,
num_outputs=1024,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv3')
net = contrib_layers.max_pool2d(
net, kernel_size=[point_count, 1], padding='VALID', scope='maxpool1')
net = contrib_layers.flatten(net)
net = contrib_layers.fully_connected(
net, num_outputs=512, activation_fn=tf.nn.relu, scope='fc1')
net = contrib_layers.fully_connected(net, num_outputs=256, scope='fc2')
with tf.variable_scope('transformation'):
weights = tf.get_variable(
'weights', [256, 3 * 3],
initializer=tf.constant_initializer(0.0),
dtype=tf.float32)
biases = tf.get_variable(
'biases', [3 * 3],
initializer=tf.constant_initializer(0.0),
dtype=tf.float32)
biases += tf.constant([1, 0, 0, 0, 1, 0, 0, 0, 1], dtype=tf.float32)
transformation = tf.matmul(net, weights)
transformation = tf.nn.bias_add(transformation, biases)
transformation = tf.reshape(transformation, [batch_size, 3, 3])
with tf.variable_scope('translation'):
translation_weights = tf.get_variable(
'weights', [256, 3],
initializer=tf.constant_initializer(0.0),
dtype=tf.float32)
translation = tf.matmul(net, translation_weights)
translation = tf.reshape(translation, [batch_size, 1, 3])
return transformation, translation
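# Note: the bias initialization above adds a flattened 3x3 identity, so the
# learned affine transform starts as the identity map with zero translation --
# the same identity-initialized T-Net trick used in the original PointNet.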
def point_set_to_feature_transformation(points, output_dimensionality):
"""A block to learn an orthogonal feature transformation matrix."""
batch_size, point_count, _, input_feature_count = points.get_shape().as_list()
assert input_feature_count == 64
with tf.variable_scope('feature_transformation_net'):
net = contrib_layers.conv2d(
points,
num_outputs=64,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv1')
net = contrib_layers.conv2d(
net,
num_outputs=128,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv2')
net = contrib_layers.conv2d(
net,
num_outputs=1024,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv3')
net = contrib_layers.max_pool2d(
net, kernel_size=[point_count, 1], padding='VALID', scope='maxpool1')
net = contrib_layers.flatten(net)
net = contrib_layers.fully_connected(
net, num_outputs=512, activation_fn=tf.nn.relu, scope='fc1')
net = contrib_layers.fully_connected(net, num_outputs=256, scope='fc2')
with tf.variable_scope('feature_transformation'):
weights = tf.get_variable(
'weights', [256, output_dimensionality * output_dimensionality],
initializer=tf.constant_initializer(0.0),
dtype=tf.float32)
biases = tf.get_variable(
'biases', [output_dimensionality * output_dimensionality],
initializer=tf.constant_initializer(0.0),
dtype=tf.float32)
biases += tf.constant(
np.eye(output_dimensionality).flatten(), dtype=tf.float32)
transformation = tf.matmul(net, weights)
transformation = tf.nn.bias_add(transformation, biases)
transformation = tf.reshape(
transformation,
[batch_size, output_dimensionality, output_dimensionality])
return transformation
def pointnet_depr(points,
output_feature_count,
apply_learned_ortho_tx=False,
apply_learned_64d_tx=True,
use_bad_reduce=False,
nerfify=False,
maxpool_feature_count=1024):
"""Applies pointnet to an input set of point features.
Args:
points: Tensor with shape [batch_size, point_count, feature_count].
output_feature_count: The number of features in the final linear layer.
apply_learned_ortho_tx: Whether to apply the learned transformation to the
input points.
apply_learned_64d_tx: Whether to apply the 64x64 learned orthogonal
transform.
use_bad_reduce: Whether to use the original slow 'maxpool2d' global
max reduce. Only still an option for compatibility with existing trained
networks.
nerfify: Whether to apply the math_util.nerfify function to the features
(all of them, not just the points) after the initial transform step.
maxpool_feature_count: Integer. The number of features in the vector before
doing a global maxpool. This is the main computational bottleneck, so
reducing it is good for training time.
Returns:
embedding: Tensor with shape [batch_size, embedding_length].
"""
batch_size, point_count, feature_count = points.get_shape().as_list()
point_positions = points[..., 0:3]
point_features = points[..., 3:]
feature_count = points.get_shape().as_list()[-1] - 3
  # Debug output: the parsed input sizes and the split position/feature tensors.
  print('pointnet_depr')
  print(f'{batch_size} {point_count} {point_positions} {point_features} {feature_count}')
# print(f'points.shape:{points.shape}') # (32, 1024, 6)
# print(f'apply_learned_ortho_tx: {apply_learned_ortho_tx}') # False
with tf.variable_scope('pointnet', reuse=tf.AUTO_REUSE):
if apply_learned_ortho_tx: # False in LDIF
with tf.variable_scope('learned_transformation'):
transformation, translation = point_set_to_transformation(points)
transformed_points = tf.matmul(point_positions + translation,
transformation)
if feature_count > 0:
transformed_points = tf.concat([transformed_points, point_features],
axis=2)
net = tf.expand_dims(transformed_points, axis=2)
else:
net = tf.expand_dims(points, axis=2)
if nerfify:
net = math_util.nerfify(net, 10, flatten=True, interleave=False)
# Apply the 'mlp 64, 64' layers:
with tf.variable_scope('mlp_block_1'):
net = contrib_layers.conv2d(
net,
num_outputs=64,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv1')
net = contrib_layers.conv2d(
net,
num_outputs=64,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv2')
if apply_learned_64d_tx: # False in LDIF, as specified by paper
# log.info('apply_learned_64d_tx == True')
with tf.variable_scope('learned_feature_transformation'):
feature_transformation = point_set_to_feature_transformation(
net, output_dimensionality=64)
net = tf.matmul(
tf.reshape(net, [batch_size, point_count, 64]),
feature_transformation)
net = tf.expand_dims(net, axis=2)
# Second MLP block
with tf.variable_scope('mlp_block_2'):
net = contrib_layers.conv2d(
net,
num_outputs=64,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv1')
net = contrib_layers.conv2d(
net,
num_outputs=128,
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv2')
net = contrib_layers.conv2d(
net,
num_outputs=maxpool_feature_count, # TODO(kgenova) A bottleneck.
kernel_size=[1, 1],
padding='VALID',
stride=[1, 1],
scope='conv3')
# log.info(f'Hello in pointnet. The shape is {net.get_shape().as_list()}')
# raise ValueError('Stop')
assert len(net.get_shape().as_list()) == 4
if use_bad_reduce:
net = contrib_layers.max_pool2d(
net, [point_count, 1],
stride=[2, 2],
padding='VALID',
scope='global_maxpool')
else:
net = tf.reshape(net, [batch_size, point_count, maxpool_feature_count])
net = tf.reduce_max(net, axis=1)
net = contrib_layers.flatten(net)
# Final MLP
with tf.variable_scope('final_mlp'):
net = contrib_layers.fully_connected(
net, num_outputs=512, activation_fn=tf.nn.relu, scope='fc1')
net = contrib_layers.fully_connected(
net, num_outputs=256, activation_fn=tf.nn.relu, scope='fc2')
net = contrib_layers.fully_connected(
net,
num_outputs=output_feature_count,
activation_fn=None,
scope='final_fc')
return net
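# A minimal usage sketch for the encoder above (shapes are illustrative
# assumptions, not from the original code):
#   points = tf.random_uniform([32, 1024, 6])  # xyz plus 3 extra features
#   embedding = pointnet_depr(points, output_feature_count=512)
#   # embedding has shape [32, 512]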
def pointnet(points,
output_feature_count,
apply_learned_ortho_tx=False,
apply_learned_64d_tx=True,
use_bad_reduce=False,
nerfify=False,
maxpool_feature_count=1024,
use_gpu=True):
"""Applies pointnet to an input set of point features.
Args:
points: Tensor with shape [batch_size, point_count, feature_count].
output_feature_count: The number of features in the final linear layer.
apply_learned_ortho_tx: Whether to apply the learned transformation to the
input points.
apply_learned_64d_tx: Whether to apply the 64x64 learned orthogonal
transform.
use_bad_reduce: Whether to use the original slow 'maxpool2d' global
max reduce. Only still an option for compatibility with existing trained
networks.
nerfify: Whether to apply the math_util.nerfify function to the features
(all of them, not just the points) after the initial transform step.
maxpool_feature_count: Integer. The number of features in the vector before
doing a global maxpool. This is the main computational bottleneck, so
reducing it is good for training time.
use_gpu: Whether to assume a GPU is available.
Returns:
embedding: Tensor with shape [batch_size, embedding_length].
"""
batch_size, point_count, feature_count = points.get_shape().as_list()
point_positions = points[..., 0:3]
point_features = points[..., 3:]
feature_count = points.get_shape().as_list()[-1] - 3
with tf.variable_scope('pointnet', reuse=tf.AUTO_REUSE):
if apply_learned_ortho_tx:
with tf.variable_scope('learned_transformation'):
transformation, translation = point_set_to_transformation(points)
transformed_points = tf.matmul(point_positions + translation,
transformation)
if feature_count > 0:
transformed_points = tf.concat([transformed_points, point_features],
axis=2)
points = transformed_points
# Go from NWC to NCW so that the final reduce can be faster.
assert len(points.shape) == 3
net = points
if nerfify:
net = math_util.nerfify(net, 10, flatten=True, interleave=False)
# On the GPU, NCW is substantially faster, but there is no NCW CPU
# kernel, so in CPU mode we have to do NWC convolutions.
if use_gpu:
net = tf.transpose(net, perm=[0, 2, 1])
data_format = 'NCW'
reduce_dim = 2
else:
data_format = 'NWC'
reduce_dim = 1
# Apply the 'mlp 64, 64' layers:
with tf.variable_scope('mlp_block_1'):
net = contrib_layers.conv1d(
net,
num_outputs=64, kernel_size=1,
padding='VALID',
stride=1,
data_format=data_format,
scope='conv1')
net = contrib_layers.conv1d(
net,
num_outputs=64,
kernel_size=1,
padding='VALID',
stride=1,
data_format=data_format,
scope='conv2')
if apply_learned_64d_tx:
if use_gpu:
net = tf.transpose(net, perm=[0, 2, 1])
with tf.variable_scope('learned_feature_transformation'):
feature_transformation = point_set_to_feature_transformation(
net, output_dimensionality=64)
net = tf.matmul(
tf.reshape(net, [batch_size, point_count, 64]),
feature_transformation)
net = tf.expand_dims(net, axis=2)
if use_gpu:
net = tf.transpose(net, perm=[0, 2, 1])
# Second MLP block
with tf.variable_scope('mlp_block_2'):
net = contrib_layers.conv1d(
net,
num_outputs=64,
kernel_size=1,
padding='VALID',
stride=1,
data_format=data_format,
scope='conv1')
net = contrib_layers.conv1d(
net,
num_outputs=128,
kernel_size=1,
padding='VALID',
stride=1,
data_format=data_format,
scope='conv2')
net = contrib_layers.conv1d(
net,
num_outputs=maxpool_feature_count, # TODO(kgenova) A bottleneck.
kernel_size=1,
padding='VALID',
stride=1,
data_format=data_format,
scope='conv3')
assert len(net.get_shape().as_list()) == 3
if use_bad_reduce:
raise ValueError('Bad Reduce is not supported with pointnet1d.')
net = tf.reduce_max(net, axis=reduce_dim)
# Final MLP
with tf.variable_scope('final_mlp'):
net = contrib_layers.fully_connected(
net, num_outputs=512, activation_fn=tf.nn.relu, scope='fc1')
net = contrib_layers.fully_connected(
net, num_outputs=256, activation_fn=tf.nn.relu, scope='fc2')
net = contrib_layers.fully_connected(
net,
num_outputs=output_feature_count,
activation_fn=None,
scope='final_fc')
return net
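# Editorial usage sketch (not part of the original file): a minimal
# graph-mode smoke test of the 1D encoder. The batch size, point count,
# and output width below are illustrative assumptions.
if __name__ == '__main__':
    example_points = tf.placeholder(tf.float32, shape=[4, 256, 3])
    example_embedding = pointnet(
        example_points, output_feature_count=128, use_gpu=False)
    # example_embedding is a float32 tensor of shape [4, 128]; evaluate it
    # in a tf.Session after running the variable initializers.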
| 38.225962 | 134 | 0.616526 | 1,938 | 15,902 | 4.853973 | 0.146027 | 0.048368 | 0.056128 | 0.036356 | 0.780908 | 0.761667 | 0.746572 | 0.715106 | 0.715106 | 0.715106 | 0 | 0.029042 | 0.285436 | 15,902 | 415 | 135 | 38.318072 | 0.798821 | 0.248711 | 0 | 0.803333 | 0 | 0 | 0.05455 | 0.015197 | 0 | 0 | 0 | 0.004819 | 0.013333 | 1 | 0.013333 | false | 0 | 0.016667 | 0 | 0.043333 | 0.006667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31ea12e096bc2bfb23ffa3a3ed61d17f48b2ff5e | 8,035 | py | Python | src/keen/utilities/train_utils.py | ddomingof/KEEN | 2dcd548e904ae05b337a61548d7b082b6307175a | [
"MIT"
] | null | null | null | src/keen/utilities/train_utils.py | ddomingof/KEEN | 2dcd548e904ae05b337a61548d7b082b6307175a | [
"MIT"
] | null | null | null | src/keen/utilities/train_utils.py | ddomingof/KEEN | 2dcd548e904ae05b337a61548d7b082b6307175a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import logging
import timeit
import numpy as np
import torch
import torch.optim as optim
from sklearn.utils import shuffle
from keen.constants import *
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
def _split_list_in_batches(input_list, batch_size):
return [input_list[i:i + batch_size] for i in range(0, len(input_list), batch_size)]
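# Example (editorial addition):
#   _split_list_in_batches([0, 1, 2, 3, 4], batch_size=2)
#   -> [[0, 1], [2, 3], [4]]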
def train_model(kg_embedding_model, all_entities, learning_rate, num_epochs, batch_size, pos_triples, device, seed):
model_name = kg_embedding_model.model_name
if model_name in [TRANS_E, TRANS_H, TRANS_D, TRANS_R]:
return _train_translational_based_model(kg_embedding_model, all_entities, learning_rate, num_epochs, batch_size,
pos_triples,
device, seed)
if model_name == CONV_E:
return _train_conv_e_model(kg_embedding_model, all_entities, learning_rate, num_epochs, batch_size, pos_triples,
device, seed)
raise ValueError('Unknown model name: %s' % model_name)
def _train_translational_based_model(kg_embedding_model, all_entities, learning_rate, num_epochs, batch_size,
pos_triples, device,
seed):
kg_embedding_model = kg_embedding_model.to(device)
optimizer = optim.SGD(kg_embedding_model.parameters(), lr=learning_rate)
loss_per_epoch = []
log.info('****Run Model On %s****' % str(device).upper())
num_pos_triples = pos_triples.shape[0]
num_entities = all_entities.shape[0]
start_training = timeit.default_timer()
for epoch in range(num_epochs):
np.random.seed(seed=seed)
indices = np.arange(num_pos_triples)
np.random.shuffle(indices)
pos_triples = pos_triples[indices]
start = timeit.default_timer()
pos_batches = _split_list_in_batches(input_list=pos_triples, batch_size=batch_size)
current_epoch_loss = 0.
for i in range(len(pos_batches)):
# TODO: Remove original subject and object from entity set
pos_batch = pos_batches[i]
current_batch_size = len(pos_batch)
batch_subjs = pos_batch[:, 0:1]
batch_relations = pos_batch[:, 1:2]
batch_objs = pos_batch[:, 2:3]
num_subj_corrupt = len(pos_batch) // 2
num_obj_corrupt = len(pos_batch) - num_subj_corrupt
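# Negative sampling (editorial comment): corrupt the subject for the first
# half of the batch and the object for the second half, drawing replacement
# entities uniformly at random from all_entities.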
pos_batch = torch.tensor(pos_batch, dtype=torch.long, device=device)
corrupted_subj_indices = np.random.choice(np.arange(0, num_entities), size=num_subj_corrupt)
corrupted_subjects = np.reshape(all_entities[corrupted_subj_indices], newshape=(-1, 1))
subject_based_corrupted_triples = np.concatenate(
[corrupted_subjects, batch_relations[:num_subj_corrupt], batch_objs[:num_subj_corrupt]], axis=1)
corrupted_obj_indices = np.random.choice(np.arange(0, num_entities), size=num_obj_corrupt)
corrupted_objects = np.reshape(all_entities[corrupted_obj_indices], newshape=(-1, 1))
object_based_corrupted_triples = np.concatenate(
[batch_subjs[num_subj_corrupt:], batch_relations[num_subj_corrupt:], corrupted_objects], axis=1)
neg_batch = np.concatenate([subject_based_corrupted_triples, object_based_corrupted_triples], axis=0)
neg_batch = torch.tensor(neg_batch, dtype=torch.long, device=device)
# Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
optimizer.zero_grad()
loss = kg_embedding_model(pos_batch, neg_batch)
current_epoch_loss += (loss.item() * current_batch_size)
loss.backward()
optimizer.step()
stop = timeit.default_timer()
log.info("Epoch %s took %s seconds \n" % (str(epoch), str(round(stop - start))))
# Track epoch loss
loss_per_epoch.append(current_epoch_loss / len(pos_triples))
stop_training = timeit.default_timer()
log.info("Training took %s seconds \n" % (str(round(stop_training - start_training))))
return kg_embedding_model, loss_per_epoch
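# Editorial usage sketch (not in the original module); the model object,
# triple array, and entity array below are assumptions standing in for
# KEEN's own pipeline:
#
#   model, losses = train_model(
#       kg_embedding_model=model, all_entities=all_entities,
#       learning_rate=0.01, num_epochs=100, batch_size=128,
#       pos_triples=train_triples, device=torch.device('cpu'), seed=0)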
def _train_conv_e_model(conv_e_model, all_entities, learning_rate, num_epochs, batch_size, pos_triples, device,
seed):
conv_e_model = conv_e_model.to(device)
optimizer = optim.Adam(conv_e_model.parameters(), lr=learning_rate)
loss_per_epoch = []
log.info('****Run Model On %s****' % str(device).upper())
num_pos_triples = pos_triples.shape[0]
num_entities = all_entities.shape[0]
start_training = timeit.default_timer()
for epoch in range(num_epochs):
np.random.seed(seed=seed)
indices = np.arange(num_pos_triples)
np.random.shuffle(indices)
pos_triples = pos_triples[indices]
start = timeit.default_timer()
num_positives = batch_size // 2
# TODO: Make sure that batch = num_pos + num_negs
num_negatives = batch_size - num_positives
pos_batches = _split_list_in_batches(input_list=pos_triples, batch_size=num_positives)
current_epoch_loss = 0.
for i in range(len(pos_batches)):
# TODO: Remove original subject and object from entity set
pos_batch = pos_batches[i]
current_batch_size = len(pos_batch)
batch_subjs = pos_batch[:, 0:1]
batch_relations = pos_batch[:, 1:2]
batch_objs = pos_batch[:, 2:3]
num_subj_corrupt = len(pos_batch) // 2
num_obj_corrupt = len(pos_batch) - num_subj_corrupt
pos_batch = torch.tensor(pos_batch, dtype=torch.long, device=device)
corrupted_subj_indices = np.random.choice(np.arange(0, num_entities), size=num_subj_corrupt)
corrupted_subjects = np.reshape(all_entities[corrupted_subj_indices], newshape=(-1, 1))
subject_based_corrupted_triples = np.concatenate(
[corrupted_subjects, batch_relations[:num_subj_corrupt], batch_objs[:num_subj_corrupt]], axis=1)
corrupted_obj_indices = np.random.choice(np.arange(0, num_entities), size=num_obj_corrupt)
corrupted_objects = np.reshape(all_entities[corrupted_obj_indices], newshape=(-1, 1))
object_based_corrupted_triples = np.concatenate(
[batch_subjs[num_subj_corrupt:], batch_relations[num_subj_corrupt:], corrupted_objects], axis=1)
neg_batch = np.concatenate([subject_based_corrupted_triples, object_based_corrupted_triples], axis=0)
neg_batch = torch.tensor(neg_batch, dtype=torch.long, device=device)
# pos_batch and neg_batch live on `device` at this point; bring them back
# to host memory before concatenating with NumPy.
batch = np.concatenate([pos_batch.cpu().numpy(), neg_batch.cpu().numpy()], axis=0)
positive_labels = np.ones(shape=(current_batch_size))
negative_labels = np.zeros(shape=(current_batch_size))
labels = np.concatenate([positive_labels, negative_labels], axis=0)
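# ConvE is trained as a binary classifier over triples (editorial
# comment): label 1 for the observed positives, 0 for the corrupted ones.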
batch, labels = shuffle(batch, labels, random_state=seed)
batch = torch.tensor(batch, dtype=torch.long, device=device)
labels = torch.tensor(labels, dtype=torch.float, device=device)
# Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
optimizer.zero_grad()
loss = conv_e_model(batch, labels)
current_epoch_loss += (loss.item() * current_batch_size)
loss.backward()
optimizer.step()
stop = timeit.default_timer()
log.info("Epoch %s took %s seconds \n" % (str(epoch), str(round(stop - start))))
# Track epoch loss
loss_per_epoch.append(current_epoch_loss / len(pos_triples))
stop_training = timeit.default_timer()
log.info("Training took %s seconds \n" % (str(round(stop_training - start_training))))
return conv_e_model, loss_per_epoch
| 41.848958 | 120 | 0.661357 | 1,035 | 8,035 | 4.802899 | 0.144928 | 0.032187 | 0.039429 | 0.021123 | 0.808892 | 0.799034 | 0.786562 | 0.786562 | 0.786562 | 0.786562 | 0 | 0.007074 | 0.243435 | 8,035 | 191 | 121 | 42.068063 | 0.81066 | 0.061108 | 0 | 0.677419 | 0 | 0 | 0.020452 | 0 | 0 | 0 | 0 | 0.005236 | 0 | 1 | 0.032258 | false | 0 | 0.056452 | 0.008065 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31ec57dcf6ff0e15f81dc145058eacf24f7e23e5 | 7,748 | py | Python | a2c_ppo_acktr/wrappers/algorithms.py | AndrewPaulChester/pytorch-a2c-ppo-acktr-gail | efb2f508ee0d7bad8d3bf438cd231d66fbbd4df0 | [
"MIT"
] | null | null | null | a2c_ppo_acktr/wrappers/algorithms.py | AndrewPaulChester/pytorch-a2c-ppo-acktr-gail | efb2f508ee0d7bad8d3bf438cd231d66fbbd4df0 | [
"MIT"
] | null | null | null | a2c_ppo_acktr/wrappers/algorithms.py | AndrewPaulChester/pytorch-a2c-ppo-acktr-gail | efb2f508ee0d7bad8d3bf438cd231d66fbbd4df0 | [
"MIT"
] | null | null | null | import abc
import gtimer as gt
from a2c_ppo_acktr.wrappers.data_collectors import RolloutStepCollector
from rlkit.core.rl_algorithm import BaseRLAlgorithm
from rlkit.data_management.replay_buffer import ReplayBuffer
class IkostrikovRLAlgorithm(BaseRLAlgorithm, metaclass=abc.ABCMeta):
def __init__(
self,
trainer,
exploration_env,
evaluation_env,
exploration_data_collector: RolloutStepCollector,
evaluation_data_collector: RolloutStepCollector,
replay_buffer: ReplayBuffer,
batch_size,
max_path_length,
num_epochs,
num_eval_steps_per_epoch,
num_expl_steps_per_train_loop,
num_trains_per_train_loop,
use_linear_lr_decay,
num_train_loops_per_epoch=1,
min_num_steps_before_training=0,
):
super().__init__(
trainer,
exploration_env,
evaluation_env,
exploration_data_collector,
evaluation_data_collector,
replay_buffer,
)
self.batch_size = batch_size
self.max_path_length = max_path_length
self.num_epochs = num_epochs
self.num_eval_steps_per_epoch = num_eval_steps_per_epoch
self.num_trains_per_train_loop = num_trains_per_train_loop
self.num_train_loops_per_epoch = num_train_loops_per_epoch
self.num_expl_steps_per_train_loop = num_expl_steps_per_train_loop
self.min_num_steps_before_training = min_num_steps_before_training
self.use_linear_lr_decay = use_linear_lr_decay
assert (
self.num_trains_per_train_loop >= self.num_expl_steps_per_train_loop
), "Online training presumes num_trains_per_train_loop >= num_expl_steps_per_train_loop"
def _train(self):
self.training_mode(False)
for epoch in gt.timed_for(
range(self._start_epoch, self.num_epochs), save_itrs=True
):
print(f"in train, with eval to go: {self.num_eval_steps_per_epoch}")
for step in range(self.num_eval_steps_per_epoch):
self.eval_data_collector.collect_one_step(
step, self.num_eval_steps_per_epoch
)
gt.stamp("evaluation sampling")
print("done with eval")
for _ in range(self.num_train_loops_per_epoch):
# this if check could be moved inside the function
if self.use_linear_lr_decay:
# decrease learning rate linearly
self.trainer.decay_lr(epoch, self.num_epochs)
for step in range(self.num_expl_steps_per_train_loop):
self.expl_data_collector.collect_one_step(
step, self.num_expl_steps_per_train_loop
)
gt.stamp("exploration sampling", unique=False)
rollouts = self.expl_data_collector.get_rollouts()
gt.stamp("data storing", unique=False)
self.training_mode(True)
self.trainer.train(rollouts)
gt.stamp("training", unique=False)
self.training_mode(False)
self._end_epoch(epoch)
def evaluate(self):
self._start_epoch = 0
self.training_mode(False)
for epoch in gt.timed_for(
range(self._start_epoch, self.num_epochs), save_itrs=True
):
for step in range(self.num_eval_steps_per_epoch):
self.eval_data_collector.collect_one_step(
step, self.num_eval_steps_per_epoch
)
gt.stamp("evaluation sampling")
self._end_epoch(epoch)
class TorchIkostrikovRLAlgorithm(IkostrikovRLAlgorithm):
def to(self, device):
for net in self.trainer.networks:
net.to(device)
def training_mode(self, mode):
for net in self.trainer.networks:
net.train(mode)
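# Editorial usage sketch (not in the original module). The trainer,
# environments, collectors, and replay buffer below are assumptions
# standing in for project-specific factories:
#
#   algo = TorchIkostrikovRLAlgorithm(
#       trainer, expl_env, eval_env, expl_collector, eval_collector,
#       replay_buffer, batch_size=256, max_path_length=1000,
#       num_epochs=100, num_eval_steps_per_epoch=1000,
#       num_expl_steps_per_train_loop=1000,
#       num_trains_per_train_loop=1000, use_linear_lr_decay=True)
#   algo.to('cuda')
#   algo.train()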
| 36.895238 | 138 | 0.623774 | 926 | 7,748 | 4.777538 | 0.137149 | 0.056962 | 0.065099 | 0.05425 | 0.88585 | 0.88585 | 0.875226 | 0.84132 | 0.84132 | 0.818264 | 0 | 0.001488 | 0.306273 | 7,748 | 209 | 139 | 37.07177 | 0.821581 | 0.472122 | 0 | 0.307692 | 0 | 0 | 0.058221 | 0.021239 | 0 | 0 | 0 | 0.004785 | 0.010989 | 1 | 0.054945 | false | 0 | 0.065934 | 0 | 0.142857 | 0.021978 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ec88028f4c8cc0f2d5164b57876466f77ffd3a7 | 1,888 | py | Python | examples/written_numbers.py | HelloChatterbox/simple_NER | 678f0003030c9158583122e8b818a3b4e4ba3ea1 | [
"MIT"
] | 25 | 2019-12-26T14:10:47.000Z | 2022-03-16T02:17:16.000Z | examples/written_numbers.py | HelloChatterbox/simple_NER | 678f0003030c9158583122e8b818a3b4e4ba3ea1 | [
"MIT"
] | null | null | null | examples/written_numbers.py | HelloChatterbox/simple_NER | 678f0003030c9158583122e8b818a3b4e4ba3ea1 | [
"MIT"
] | 5 | 2020-08-16T16:38:09.000Z | 2022-03-21T16:59:16.000Z | from simple_NER.annotators.numbers_ner import NumberNER
ner = NumberNER()
for r in ner.extract_entities("three hundred trillion tons of spinning metal"):
"""
{'confidence': 1,
'data': {'number': '300000000000000.0'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'three hundred trillion tons of spinning metal',
'spans': [(0, 22)],
'value': 'three hundred trillion'}
"""
ner = NumberNER(short_scale=False)
for r in ner.extract_entities("three hundred trillion tons of spinning metal"):
"""
{'confidence': 1,
'data': {'number': '3e+20'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'three hundred trillion tons of spinning metal',
'spans': [(0, 22)],
'value': 'three hundred trillion'}
"""
ner = NumberNER()
for r in ner.extract_entities("the 5th number of the third thing"):
"""
{'confidence': 1,
'data': {'number': '5'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'the 5th number of the third thing',
'spans': [(4, 7)],
'value': '5th'}
{'confidence': 1,
'data': {'number': '3'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'the 5th number of the third thing',
'spans': [(22, 27)],
'value': 'third'}
"""
ner = NumberNER(ordinals=False)
for r in ner.extract_entities("the 5th number of the third thing"):
"""
{'confidence': 1,
'data': {'number': '5'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'the 5th number of the third thing',
'spans': [(4, 7)],
'value': '5th'}
{'confidence': 1,
'data': {'number': '0.3333333333333333'},
'entity_type': 'written_number',
'rules': [],
'source_text': 'the 5th number of the third thing',
'spans': [(22, 27)],
'value': 'third'}
""" | 29.968254 | 79 | 0.57786 | 224 | 1,888 | 4.758929 | 0.223214 | 0.067542 | 0.11257 | 0.118199 | 0.891182 | 0.891182 | 0.891182 | 0.881801 | 0.870544 | 0.870544 | 0 | 0.048898 | 0.230932 | 1,888 | 63 | 80 | 29.968254 | 0.685262 | 0 | 0 | 0.666667 | 0 | 0 | 0.325679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9edcc743cece33d41b3ef89de3f18a87d48a1144 | 253 | py | Python | cimcb_lite/utils/nested_getattr.py | RuibingS/cimcb | 382f7d8fff30d3d276f18ac8c7dc686e0e643fa9 | [
"MIT"
] | 5 | 2020-05-26T23:45:40.000Z | 2022-01-13T00:40:14.000Z | cimcb_lite/utils/nested_getattr.py | RuibingS/cimcb | 382f7d8fff30d3d276f18ac8c7dc686e0e643fa9 | [
"MIT"
] | 3 | 2020-10-20T09:03:18.000Z | 2021-11-01T14:22:05.000Z | cimcb_lite/utils/nested_getattr.py | RuibingS/cimcb | 382f7d8fff30d3d276f18ac8c7dc686e0e643fa9 | [
"MIT"
] | 4 | 2020-10-12T07:17:43.000Z | 2022-03-28T06:28:44.000Z | from functools import reduce
def nested_getattr(model, attributes):
    """getattr for nested, dot-separated attribute paths."""
    def _getattr(model, attributes):
        return getattr(model, attributes)
    return reduce(_getattr, [model] + attributes.split("."))
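# Example (editorial addition): nested_getattr(obj, "a.b.c") resolves to
# obj.a.b.c, one getattr per dot-separated name.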
| 23 | 60 | 0.699605 | 27 | 253 | 6.444444 | 0.444444 | 0.275862 | 0.505747 | 0.321839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 253 | 10 | 61 | 25.3 | 0.84058 | 0.118577 | 0 | 0 | 0 | 0 | 0.004608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
7308e8a7df0420ed7a6105c5802b416399b5f26d | 78 | py | Python | Text_Annotation/demo/__init__.py | renjunxiang/Text_Annotation | cd03ac5f49c4460b762c499d60bc0b2434364dc0 | [
"MIT"
] | 5 | 2018-08-09T03:30:16.000Z | 2019-10-21T00:36:18.000Z | Text_Annotation/demo/__init__.py | renjunxiang/Text_Annotation | cd03ac5f49c4460b762c499d60bc0b2434364dc0 | [
"MIT"
] | null | null | null | Text_Annotation/demo/__init__.py | renjunxiang/Text_Annotation | cd03ac5f49c4460b762c499d60bc0b2434364dc0 | [
"MIT"
] | 3 | 2018-07-26T20:51:06.000Z | 2019-11-21T01:44:08.000Z | from .annotate_cut import annotate_cut
from .annotate_pos import annotate_pos
| 26 | 38 | 0.871795 | 12 | 78 | 5.333333 | 0.416667 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 78 | 2 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
730965623ac94bf8f18427b5825634c2d079d9d7 | 175 | py | Python | rested/commands/__init__.py | mochi-ai/rested | 94a8961f2297fd4a941417f4be47047d91a4aa00 | [
"MIT"
] | 1 | 2020-09-07T22:04:05.000Z | 2020-09-07T22:04:05.000Z | rested/commands/__init__.py | mochi-ai/rested | 94a8961f2297fd4a941417f4be47047d91a4aa00 | [
"MIT"
] | null | null | null | rested/commands/__init__.py | mochi-ai/rested | 94a8961f2297fd4a941417f4be47047d91a4aa00 | [
"MIT"
] | null | null | null | from . import create
from . import manage
from . import serve
from . import shell
from . import test
from . import worker
from . import version
from .command import register
| 17.5 | 29 | 0.765714 | 25 | 175 | 5.36 | 0.44 | 0.522388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188571 | 175 | 9 | 30 | 19.444444 | 0.943662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7347fdf03639839a7c41a0aa1ae64a234241616b | 121 | py | Python | Aula1/Aula1.py | gabriel-correia0408/Sala_Green_GabrielCorreia | 1d22f466d372786c5f8c8eaba7202844b5f03445 | [
"Apache-2.0"
] | null | null | null | Aula1/Aula1.py | gabriel-correia0408/Sala_Green_GabrielCorreia | 1d22f466d372786c5f8c8eaba7202844b5f03445 | [
"Apache-2.0"
] | null | null | null | Aula1/Aula1.py | gabriel-correia0408/Sala_Green_GabrielCorreia | 1d22f466d372786c5f8c8eaba7202844b5f03445 | [
"Apache-2.0"
] | null | null | null | class Pessoa:
    id = 0

    def set_id(self, id):
        self.id = id

    def get_id(self):
        return self.id
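
# Example (editorial addition):
#   p = Pessoa()
#   p.set_id(42)
#   p.get_id()  # -> 42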
| 11 | 24 | 0.512397 | 19 | 121 | 3.157895 | 0.473684 | 0.3 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.38843 | 121 | 10 | 25 | 12.1 | 0.797297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b438fd9fc0303a38499b6b0633a082b294777ba4 | 180 | py | Python | plaso/hashers/__init__.py | Defense-Cyber-Crime-Center/plaso | 4f3a85fbea10637c1cdbf0cde9fc539fdcea9c47 | [
"Apache-2.0"
] | 2 | 2016-02-18T12:46:29.000Z | 2022-03-13T03:04:59.000Z | plaso/hashers/__init__.py | CNR-ITTIG/plasodfaxp | 923797fc00664fa9e3277781b0334d6eed5664fd | [
"Apache-2.0"
] | null | null | null | plaso/hashers/__init__.py | CNR-ITTIG/plasodfaxp | 923797fc00664fa9e3277781b0334d6eed5664fd | [
"Apache-2.0"
] | 6 | 2016-12-18T08:05:36.000Z | 2021-04-06T14:19:11.000Z | # -*- coding: utf-8 -*-
"""This file contains an import statement for each hasher."""
from plaso.hashers import md5
from plaso.hashers import sha1
from plaso.hashers import sha256
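# Editorial note (an assumption about the surrounding codebase): importing
# these modules is what makes the hashers usable -- each module registers
# its hasher class with plaso's hashers manager at import time.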
| 30 | 61 | 0.75 | 27 | 180 | 5 | 0.666667 | 0.2 | 0.355556 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.144444 | 180 | 5 | 62 | 36 | 0.837662 | 0.433333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b4526614da7aceee5b403a3769bfd1bca453e892 | 1,027 | py | Python | iMOOC/00_Flask_blog/home/views.py | MurphyWan/SDC | 5828bc9c7e818fffa7db951fc9f00adf261f9031 | [
"Unlicense"
] | 1 | 2017-03-20T15:46:13.000Z | 2017-03-20T15:46:13.000Z | iMOOC/00_Flask_blog/home/views.py | MurphyWan/SDC | 5828bc9c7e818fffa7db951fc9f00adf261f9031 | [
"Unlicense"
] | null | null | null | iMOOC/00_Flask_blog/home/views.py | MurphyWan/SDC | 5828bc9c7e818fffa7db951fc9f00adf261f9031 | [
"Unlicense"
] | null | null | null | # coding:utf8
# 3. Define the home views.
# Import the home blueprint from the current package.
from . import home
from flask import render_template, redirect, url_for
# After the data block {% block content %}{% endblock %} is defined inside
# home.html, we import render_template here to render those pages.
# Define the view functions.
@home.route("/")
def index():
return render_template("home/index.html")
@home.route("/login/")
def login():
return render_template("home/login.html")
@home.route("/logout/")
def logout():
return redirect(url_for('home.login'))  # After logging out we go back to the login page, hence the redirect import.
@home.route("/register/")
def register():
return render_template("home/register.html")
@home.route("/user/")
def user():
return render_template("home/user.html")
@home.route("/pwd/")
def pwd():
return render_template("home/pwd.html")
@home.route("/comments/")
def comments():
return render_template("home/comments.html")
@home.route("/loginlog/")
def loginlog():
return render_template("home/loginlog.html")
@home.route("/moviecol/")
def moviecol():
return render_template("home/moviecol.html")
| 20.137255 | 86 | 0.703992 | 127 | 1,027 | 5.598425 | 0.314961 | 0.177215 | 0.225035 | 0.270042 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00222 | 0.122687 | 1,027 | 50 | 87 | 20.54 | 0.786903 | 0.161636 | 0 | 0 | 0 | 0 | 0.241501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.310345 | true | 0 | 0.068966 | 0.310345 | 0.689655 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b46d9a40787c4c5dd58d723672554c05edf6a443 | 144 | py | Python | tests/test_zh2en.py | ffreemt/gpt3-api | 785f1bf09915047f6bfb5071929ebaa61b000d43 | [
"MIT"
] | 1 | 2021-11-23T11:01:03.000Z | 2021-11-23T11:01:03.000Z | tests/test_zh2en.py | ffreemt/gpt3-api | 785f1bf09915047f6bfb5071929ebaa61b000d43 | [
"MIT"
] | null | null | null | tests/test_zh2en.py | ffreemt/gpt3-api | 785f1bf09915047f6bfb5071929ebaa61b000d43 | [
"MIT"
] | null | null | null | """Test zh2en."""
from gpt3_api.zh2en import zh2en
def test_zh2en():
"""Test zh2en '这是测试'."""
assert "test" in zh2en("这是测试").lower()
| 16 | 42 | 0.618056 | 20 | 144 | 4.35 | 0.55 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059322 | 0.180556 | 144 | 8 | 43 | 18 | 0.677966 | 0.208333 | 0 | 0 | 0 | 0 | 0.07767 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c330ddfd8c55cce1897833e2ce443501096895e9 | 132 | py | Python | Patterns/rebs-becca15.py | sanchit781/HACKTOBERFEST2021_PATTERN | c457eb2a1c7b729bdaa26ade7d4c7eb4092291e2 | [
"MIT"
] | 229 | 2021-09-10T13:24:47.000Z | 2022-03-18T16:54:29.000Z | Patterns/rebs-becca15.py | sanchit781/HACKTOBERFEST2021_PATTERN | c457eb2a1c7b729bdaa26ade7d4c7eb4092291e2 | [
"MIT"
] | 164 | 2021-09-10T12:04:39.000Z | 2021-10-29T21:20:42.000Z | Patterns/rebs-becca15.py | sanchit781/HACKTOBERFEST2021_PATTERN | c457eb2a1c7b729bdaa26ade7d4c7eb4092291e2 | [
"MIT"
] | 567 | 2021-09-10T17:35:27.000Z | 2021-12-11T12:45:43.000Z | for i in range(1,6):
    for j in range(1, 6):
        if j < 6 - i:
            print(i * i, end=" ")
        else:
            print(i, end=" ")
print("\n") | 18.857143 | 25 | 0.454545 | 26 | 132 | 2.307692 | 0.461538 | 0.233333 | 0.266667 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054348 | 0.30303 | 132 | 7 | 26 | 18.857143 | 0.597826 | 0 | 0 | 0 | 0 | 0 | 0.030075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5ef0f201bf1b30c23c0001a611cf89e7dd5a8c82 | 156 | py | Python | example_project/core/admin.py | pnuckowski/django-guardian | 8e8dab207296ee37aa1d19eaeebfad7d0642f138 | [
"MIT"
] | 2,469 | 2015-07-27T14:21:38.000Z | 2022-03-29T23:37:37.000Z | example_project/core/admin.py | meetbill/django-guardian | d3611494c1e40a67b20e32ffe9e5198a53923aa2 | [
"MIT"
] | 458 | 2015-07-27T12:02:14.000Z | 2022-03-25T21:42:59.000Z | example_project/core/admin.py | meetbill/django-guardian | d3611494c1e40a67b20e32ffe9e5198a53923aa2 | [
"MIT"
] | 429 | 2015-07-31T08:04:17.000Z | 2022-03-11T09:28:55.000Z | from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import CustomUser
admin.site.register(CustomUser, UserAdmin)
| 26 | 47 | 0.839744 | 21 | 156 | 6.238095 | 0.52381 | 0.152672 | 0.259542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 156 | 5 | 48 | 31.2 | 0.929078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f05dd7f7abc23bb8dc84b3f83bcfa1a3bf0745c | 81 | py | Python | GenerIter/app/clep_inventory.py | homosaur/GenerIter | 5b9d123bfb483c4eddd01d871ba39e72cfb95c9c | [
"MIT"
] | null | null | null | GenerIter/app/clep_inventory.py | homosaur/GenerIter | 5b9d123bfb483c4eddd01d871ba39e72cfb95c9c | [
"MIT"
] | null | null | null | GenerIter/app/clep_inventory.py | homosaur/GenerIter | 5b9d123bfb483c4eddd01d871ba39e72cfb95c9c | [
"MIT"
] | null | null | null | from GenerIter.app.inventory import Inventory
def main():
    # Editorial note: the original body only instantiates the Inventory
    # app; whatever run/entry call Inventory exposes is not shown here, so
    # the constructed object is simply returned.
    app = Inventory()
    return app
| 16.2 | 45 | 0.728395 | 10 | 81 | 5.9 | 0.7 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17284 | 81 | 4 | 46 | 20.25 | 0.880597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f19d7eac6db5a4d0eda12d19c473c60774381c3 | 48 | py | Python | ARLO/hyperparameter/__init__.py | arlo-lib/ARLO | 159669884044686e36e07bd1cc0948884ed7cc8d | [
"MIT"
] | null | null | null | ARLO/hyperparameter/__init__.py | arlo-lib/ARLO | 159669884044686e36e07bd1cc0948884ed7cc8d | [
"MIT"
] | null | null | null | ARLO/hyperparameter/__init__.py | arlo-lib/ARLO | 159669884044686e36e07bd1cc0948884ed7cc8d | [
"MIT"
] | null | null | null | from ARLO.hyperparameter.hyperparameter import * | 48 | 48 | 0.875 | 5 | 48 | 8.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f3cfede0d2dee88f97fc52260ca542a2fff2434 | 94 | py | Python | lmtuners/utils/__init__.py | shoarora/polytune | 86f31ba3f41ea47edcfa0442a29a79a2a46deaeb | [
"MIT"
] | 2 | 2020-09-17T10:12:36.000Z | 2020-11-22T16:34:08.000Z | lmtuners/utils/__init__.py | shoarora/polytune | 86f31ba3f41ea47edcfa0442a29a79a2a46deaeb | [
"MIT"
] | null | null | null | lmtuners/utils/__init__.py | shoarora/polytune | 86f31ba3f41ea47edcfa0442a29a79a2a46deaeb | [
"MIT"
] | null | null | null | from .masked_lm import mask_tokens # noqa: F401
from .utils import tie_weights # noqa: F401
| 31.333333 | 48 | 0.765957 | 15 | 94 | 4.6 | 0.733333 | 0.231884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.170213 | 94 | 2 | 49 | 47 | 0.807692 | 0.223404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f477a6ea88b6764f0cbd439f8384909a38b648e | 136 | py | Python | BE/shop/products/admin.py | kosior/ngLearn-1 | 4cc52153876aca409d56bd9cabace9283946bd32 | [
"MIT"
] | 2 | 2022-03-24T14:00:33.000Z | 2022-03-26T19:50:32.000Z | BE/shop/products/admin.py | kosior/ngLearn-1 | 4cc52153876aca409d56bd9cabace9283946bd32 | [
"MIT"
] | null | null | null | BE/shop/products/admin.py | kosior/ngLearn-1 | 4cc52153876aca409d56bd9cabace9283946bd32 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Product
@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
pass
| 15.111111 | 37 | 0.786765 | 17 | 136 | 6.294118 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139706 | 136 | 8 | 38 | 17 | 0.91453 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6f763d1280e08ee4c16909985f1e307b6c504777 | 199 | py | Python | livy_submit/__init__.py | ericdill/livy-submit | 50bb43aed61527ebb2e7e93cd204f6f1c7e0cc17 | [
"BSD-3-Clause"
] | 1 | 2021-04-14T03:05:56.000Z | 2021-04-14T03:05:56.000Z | livy_submit/__init__.py | ericdill/livy-submit | 50bb43aed61527ebb2e7e93cd204f6f1c7e0cc17 | [
"BSD-3-Clause"
] | 3 | 2020-01-13T16:50:15.000Z | 2020-02-03T20:30:37.000Z | livy_submit/__init__.py | ericdill/livy-submit | 50bb43aed61527ebb2e7e93cd204f6f1c7e0cc17 | [
"BSD-3-Clause"
] | 3 | 2019-12-05T16:53:03.000Z | 2021-09-01T17:23:24.000Z | from .krb import kinit_keytab, kinit_username
from .livy_api import LivyAPI
from .hdfs_api import upload
from ._version import get_versions
__version__ = get_versions()["version"]
del get_versions
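# Editorial note: this is the standard versioneer idiom -- the helper is
# deleted after use so it does not linger in the package namespace.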
| 22.111111 | 45 | 0.819095 | 29 | 199 | 5.206897 | 0.517241 | 0.218543 | 0.238411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120603 | 199 | 8 | 46 | 24.875 | 0.862857 | 0 | 0 | 0 | 0 | 0 | 0.035176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
488eedfa9092703dcb3097097517405201b6a395 | 3,242 | py | Python | test/PR_test/integration_test/trace/adapt/test_reduce_lr_on_plateau.py | hanskrupakar/fastestimator | 1c3fe89ad8b012991b524a6c48f328b2a80dc9f6 | [
"Apache-2.0"
] | null | null | null | test/PR_test/integration_test/trace/adapt/test_reduce_lr_on_plateau.py | hanskrupakar/fastestimator | 1c3fe89ad8b012991b524a6c48f328b2a80dc9f6 | [
"Apache-2.0"
] | null | null | null | test/PR_test/integration_test/trace/adapt/test_reduce_lr_on_plateau.py | hanskrupakar/fastestimator | 1c3fe89ad8b012991b524a6c48f328b2a80dc9f6 | [
"Apache-2.0"
] | null | null | null | import math
import unittest
from io import StringIO
from unittest.mock import patch
import fastestimator as fe
from fastestimator.test.unittest_util import MultiLayerTorchModel, one_layer_tf_model, sample_system_object
from fastestimator.trace.adapt import ReduceLROnPlateau
from fastestimator.util.data import Data
class TestReduceLROnPlateau(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.data = Data({'loss': 10})
cls.tf_expected_msg = "FastEstimator-ReduceLROnPlateau: learning rate reduced to 0.00010000000474974513"
cls.torch_expected_msg = "FastEstimator-ReduceLROnPlateau: learning rate reduced to 9.999999747378752e-05"
def test_tf_model_on_epoch_end_reduce_lr(self):
model = fe.build(model_fn=one_layer_tf_model, optimizer_fn='adam')
model_name = model.model_name + '_lr'
lr_on_plateau = ReduceLROnPlateau(model=model, metric='loss')
lr_on_plateau.system = sample_system_object()
lr_on_plateau.best = 5
lr_on_plateau.wait = 11
with patch('sys.stdout', new=StringIO()) as fake_stdout:
lr_on_plateau.on_epoch_end(data=self.data)
log = fake_stdout.getvalue().strip()
self.assertEqual(log, self.tf_expected_msg)
with self.subTest('Check learning rate in data'):
self.assertTrue(math.isclose(self.data[model_name], 0.000100000005, rel_tol=1e-3))
def test_tf_model_on_epoch_end_lr_wait(self):
model = fe.build(model_fn=one_layer_tf_model, optimizer_fn='adam')
lr_on_plateau = ReduceLROnPlateau(model=model, metric='loss')
lr_on_plateau.system = sample_system_object()
lr_on_plateau.best = 12
lr_on_plateau.on_epoch_end(data=self.data)
with self.subTest('Check value of wait'):
self.assertEqual(lr_on_plateau.wait, 0)
with self.subTest('Check value of best'):
self.assertEqual(lr_on_plateau.best, 10)
def test_torch_model_on_epoch_end_reduce_lr(self):
model = fe.build(model_fn=MultiLayerTorchModel, optimizer_fn='adam')
model_name = model.model_name + '_lr'
lr_on_plateau = ReduceLROnPlateau(model=model, metric='loss')
lr_on_plateau.system = sample_system_object()
lr_on_plateau.best = 5
lr_on_plateau.wait = 11
with patch('sys.stdout', new=StringIO()) as fake_stdout:
lr_on_plateau.on_epoch_end(data=self.data)
log = fake_stdout.getvalue().strip()
self.assertEqual(log, self.torch_expected_msg)
with self.subTest('Check learning rate in data'):
self.assertTrue(math.isclose(self.data[model_name], 0.000100000005, rel_tol=1e-3))
def test_torch_model_on_epoch_end_lr_wait(self):
model = fe.build(model_fn=MultiLayerTorchModel, optimizer_fn='adam')
lr_on_plateau = ReduceLROnPlateau(model=model, metric='loss')
lr_on_plateau.system = sample_system_object()
lr_on_plateau.best = 12
lr_on_plateau.on_epoch_end(data=self.data)
with self.subTest('Check value of wait'):
self.assertEqual(lr_on_plateau.wait, 0)
with self.subTest('Check value of best'):
self.assertEqual(lr_on_plateau.best, 10)
| 47.676471 | 114 | 0.706663 | 443 | 3,242 | 4.878104 | 0.191874 | 0.040722 | 0.111985 | 0.041647 | 0.807034 | 0.807034 | 0.807034 | 0.794077 | 0.736696 | 0.736696 | 0 | 0.033487 | 0.198643 | 3,242 | 67 | 115 | 48.38806 | 0.798306 | 0 | 0 | 0.666667 | 0 | 0 | 0.108267 | 0.033004 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.083333 | false | 0 | 0.133333 | 0 | 0.233333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
489da8b28526198a06fe7a7454c9a9edfbfbca2a | 1,055 | py | Python | aula023 - TRATAMENTO DE ERRO/ex113-FuncoesAprofundadas.py | miradouro/CursoEmVideo-Python | cc7b05a9a4aad8e6ef3b29453d83370094d75e41 | [
"MIT"
] | null | null | null | aula023 - TRATAMENTO DE ERRO/ex113-FuncoesAprofundadas.py | miradouro/CursoEmVideo-Python | cc7b05a9a4aad8e6ef3b29453d83370094d75e41 | [
"MIT"
] | null | null | null | aula023 - TRATAMENTO DE ERRO/ex113-FuncoesAprofundadas.py | miradouro/CursoEmVideo-Python | cc7b05a9a4aad8e6ef3b29453d83370094d75e41 | [
"MIT"
] | null | null | null | def leiaInt(msg):
while True:
try:
n = int(input(msg))
except (ValueError, TypeError):
print('\033[1;31mERRO! Digite um número inteiro válido.\033[m')
continue
except KeyboardInterrupt:
print('\033[1;32mEntrada de dados interrompida pelo usuário.\033[m')
return 0
else:
return n
def leiaFloat(msg):
while True:
try:
n = str(input(msg)).replace(',', '.').strip()
n = float(n)
except (ValueError, TypeError):
print('\033[1;31mERRO! Digite um número inteiro válido.\033[m')
continue
except KeyboardInterrupt:
print('\033[1;32mEntrada de dados interrompida pelo usuário.\033[m')
return 0
else:
return n
# Main program
n = leiaInt('Digite um inteiro: ')
n2 = leiaFloat('Digite um real: ')
print(f'O valor inteiro digitado foi {n} e o valor real foi {n2}')
| 27.763158 | 81 | 0.552607 | 129 | 1,055 | 4.51938 | 0.395349 | 0.051458 | 0.06175 | 0.068611 | 0.703259 | 0.703259 | 0.703259 | 0.703259 | 0.703259 | 0.603774 | 0 | 0.059829 | 0.334597 | 1,055 | 37 | 82 | 28.513514 | 0.770655 | 0.017062 | 0 | 0.75 | 0 | 0 | 0.308213 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.1875 | 0.15625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
48b1ad64378048493f6923259e53fc7e98bab123 | 100 | py | Python | hs_model_program/admin.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 178 | 2015-01-08T23:03:36.000Z | 2022-03-03T13:56:45.000Z | hs_model_program/admin.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 4,125 | 2015-01-01T14:26:15.000Z | 2022-03-31T16:38:55.000Z | hs_model_program/admin.py | tommac7/hydroshare | 87c4543a55f98103d2614bf4c47f7904c3f9c029 | [
"BSD-3-Clause"
] | 53 | 2015-03-15T17:56:51.000Z | 2022-03-17T00:32:16.000Z | from django.contrib import admin
from .models import *
admin.site.unregister(ModelProgramResource)
| 20 | 43 | 0.83 | 12 | 100 | 6.916667 | 0.75 | 0.26506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 100 | 4 | 44 | 25 | 0.922222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48b9f47d32e4959c5e551aabcd7630496ff82351 | 116 | py | Python | ml_automation/metrics/r2.py | ChillBoss/ml_automation | 50d42b3cd5a3bb2f7a91e4c53bf3bbfe7a3b1741 | [
"MIT"
] | null | null | null | ml_automation/metrics/r2.py | ChillBoss/ml_automation | 50d42b3cd5a3bb2f7a91e4c53bf3bbfe7a3b1741 | [
"MIT"
] | null | null | null | ml_automation/metrics/r2.py | ChillBoss/ml_automation | 50d42b3cd5a3bb2f7a91e4c53bf3bbfe7a3b1741 | [
"MIT"
] | null | null | null | from sklearn.metrics import r2_score
def r2(preds, target):
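    # Editorial note: sklearn's r2_score expects (y_true, y_pred), so the
    # (preds, target) arguments are reordered below.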
    return r2_score(target, preds)
| 16.571429 | 36 | 0.758621 | 19 | 116 | 4.526316 | 0.684211 | 0.162791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.172414 | 116 | 6 | 37 | 19.333333 | 0.864583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
48dfacd9d981a07ac8a88462c0f731dc224f4a7b | 1,116 | py | Python | tests/TestParseArgs.py | lastcolour/Tacos | fe2b65250bfa74613151ae2dc6a91eb30f254844 | [
"MIT"
] | null | null | null | tests/TestParseArgs.py | lastcolour/Tacos | fe2b65250bfa74613151ae2dc6a91eb30f254844 | [
"MIT"
] | null | null | null | tests/TestParseArgs.py | lastcolour/Tacos | fe2b65250bfa74613151ae2dc6a91eb30f254844 | [
"MIT"
] | null | null | null | import unittest
import lib.ParseArgs as PA
class TestParseArgs(unittest.TestCase):
def test_only_project_in_args_0(self):
projectName, startIdx = PA._extractProjectName(["--project", "Test"], 0)
self.assertEqual(projectName, "Test")
self.assertEqual(startIdx, 2)
def test_only_project_in_args_1(self):
projectName, startIdx = PA._extractProjectName(["--project:Test"], 0)
self.assertEqual(projectName, "Test")
self.assertEqual(startIdx, 1)
def test_only_project_in_args_2(self):
projectName, startIdx = PA._extractProjectName(["--project=Test"], 0)
self.assertEqual(projectName, "Test")
self.assertEqual(startIdx, 1)
def test_no_project_name_in_args(self):
projectName, startIdx = PA._extractProjectName(["--project:"], 0)
self.assertEqual(projectName, None)
self.assertEqual(startIdx, 0)
def test_project_name_quoted(self):
projectName, startIdx = PA._extractProjectName(["--project", "\"Test\""], 0)
self.assertEqual(projectName, "Test")
self.assertEqual(startIdx, 2) | 37.2 | 84 | 0.684588 | 123 | 1,116 | 5.98374 | 0.219512 | 0.203804 | 0.15625 | 0.169837 | 0.767663 | 0.767663 | 0.611413 | 0.611413 | 0.611413 | 0.611413 | 0 | 0.014365 | 0.189068 | 1,116 | 30 | 85 | 37.2 | 0.798895 | 0 | 0 | 0.347826 | 0 | 0 | 0.068935 | 0 | 0 | 0 | 0 | 0 | 0.434783 | 1 | 0.217391 | false | 0 | 0.086957 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b150461003bb7e6236ddff33cbe9e42d69fe8c6 | 15,865 | py | Python | dexplo/_tests/test_setitem.py | dexplo/dexplo | 2a522437d3bf848260f9772e7a8f705f534c2e2c | [
"BSD-3-Clause"
] | 78 | 2018-01-25T21:07:17.000Z | 2020-11-07T00:19:13.000Z | dexplo/_tests/test_setitem.py | dexplo/dexplo | 2a522437d3bf848260f9772e7a8f705f534c2e2c | [
"BSD-3-Clause"
] | null | null | null | dexplo/_tests/test_setitem.py | dexplo/dexplo | 2a522437d3bf848260f9772e7a8f705f534c2e2c | [
"BSD-3-Clause"
] | 8 | 2018-04-15T15:28:51.000Z | 2022-03-22T10:37:54.000Z | import dexplo as dx
import numpy as np
from numpy import nan
import pytest
from dexplo.testing import assert_frame_equal
class TestSetItem:
df = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
df1 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
def test_setitem_scalar(self):
df1 = self.df.copy()
df1[0, 0] = -99
df2 = dx.DataFrame({'a': [-99, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1[0, 'b'] = 'pen'
df2 = dx.DataFrame({'a': [-99, 5], 'b': ['pen', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1[1, 'b'] = None
df2 = dx.DataFrame({'a': [-99, 5], 'b': ['pen', None], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
with pytest.raises(TypeError):
df1 = self.df.copy()
df1[0, 0] = 'sfa'
df1 = self.df.copy()
df1[0, 'c'] = 4.3
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [4.3, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[0, 'a'] = nan
df2 = dx.DataFrame({'a': [nan, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[1, 'a'] = -9.9
df2 = dx.DataFrame({'a': [1, -9.9], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
def test_setitem_entire_column_one_value(self):
df1 = self.df.copy()
df1[:, 'e'] = 5
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [5, 5]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = nan
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [nan, nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = 'grasshopper'
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': ['grasshopper', 'grasshopper']})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = True
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [True, True]})
assert_frame_equal(df1, df2)
def test_setitem_entire_new_colunm_from_array(self):
df1 = self.df.copy()
df1[:, 'e'] = np.array([9, 99])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [9, 99]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = [9, np.nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [9, np.nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = np.array([True, False])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = np.array(['poop', nan], dtype='O')
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': ['poop', nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = np.array(['poop', 'pants'])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': ['poop', 'pants']})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = np.array([nan, nan])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [nan, nan]})
assert_frame_equal(df1, df2)
def test_setitem_entire_new_colunm_from_list(self):
df1 = self.df.copy()
df1[:, 'e'] = [9, 99]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [9, 99]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = [9, np.nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [9, np.nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = [True, False]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = ['poop', nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': ['poop', nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = ['poop', 'pants']
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': ['poop', 'pants']})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'e'] = [nan, nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False], 'e': [nan, nan]})
assert_frame_equal(df1, df2)
def test_setitem_entire_old_column_from_array(self):
df1 = self.df.copy()
df1[:, 'd'] = np.array([9, 99])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [9, 99]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
d = np.array([9, np.nan])
df1[:, 'd'] = d
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': d})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'd'] = np.array([True, False])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'd'] = np.array(['poop', nan], dtype='O')
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': ['poop', nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'a'] = np.array(['poop', 'pants'], dtype='O')
df2 = dx.DataFrame({'a': ['poop', 'pants'], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'b'] = np.array([nan, nan])
df2 = dx.DataFrame({'a': [1, 5], 'b': [nan, nan], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'c'] = np.array([False, False])
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [False, False],
'd': [True, False]})
assert_frame_equal(df1, df2)
with pytest.raises(ValueError):
df1[:, 'b'] = np.array([1, 2, 3])
with pytest.raises(ValueError):
df1[:, 'b'] = np.array([1])
with pytest.raises(TypeError):
df1[:, 'a'] = np.array([5, {1, 2, 3}])
def test_setitem_entire_new_column_from_df(self):
df1 = self.df1.copy()
df1[:, 'a_bool'] = df1[:, 'a'] > 3
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True],
'a_bool': [False, True, True, True]},
columns=['a', 'b', 'c', 'd', 'a_bool'])
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[:, 'a2'] = df1[:, 'a'] + 5
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True],
'a2': [6, 10, 12, 16]},
columns=['a', 'b', 'c', 'd', 'a2'])
assert_frame_equal(df1, df2)
def test_setitem_entire_old_column_from_list(self):
df1 = self.df.copy()
df1[:, 'd'] = [9, 99]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [9, 99]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'd'] = [9, np.nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [9, np.nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'd'] = [True, False]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'd'] = ['poop', nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': ['poop', nan]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'a'] = ['poop', 'pants']
df2 = dx.DataFrame({'a': ['poop', 'pants'], 'b': ['eleni', 'teddy'], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'b'] = [nan, nan]
df2 = dx.DataFrame({'a': [1, 5], 'b': [nan, nan], 'c': [nan, 5.4],
'd': [True, False]})
assert_frame_equal(df1, df2)
df1 = self.df.copy()
df1[:, 'c'] = [False, False]
df2 = dx.DataFrame({'a': [1, 5], 'b': ['eleni', 'teddy'], 'c': [False, False],
'd': [True, False]})
assert_frame_equal(df1, df2)
with pytest.raises(ValueError):
self.df[:, 'b'] = [1, 2, 3]
with pytest.raises(ValueError):
self.df[:, 'b'] = [1]
with pytest.raises(TypeError):
self.df[:, 'a'] = [5, {1, 2, 3}]
def test_setitem_simultaneous_row_and_column(self):
df1 = self.df1.copy()
df1[[0, 1], 'a'] = [9, 10]
df2 = dx.DataFrame({'a': [9, 10, 7, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[[0, -1], 'a'] = np.array([9, 10.5])
df2 = dx.DataFrame({'a': [9, 5, 7, 10.5], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[2:, 'b'] = np.array(['NIKO', 'PENNY'])
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'NIKO', 'PENNY'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[2, ['b', 'c']] = ['NIKO', 9.3]
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'NIKO', 'penny'],
'c': [nan, 5.4, 9.3, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[2, ['c', 'b']] = [9.3, None]
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', None, 'penny'],
'c': [nan, 5.4, 9.3, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[[1, -1], 'b':'d'] = [['TEDDY', nan, True], [nan, 5.5, False]]
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'TEDDY', 'niko', nan],
'c': [nan, nan, -1.1, 5.5], 'd': [True, True, False, False]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[1:-1, 'a':'d':2] = [[nan, 4], [3, 99]]
df2 = dx.DataFrame({'a': [1, nan, 3, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 4, 99, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
    def test_setitem_boolean(self):
df1 = self.df1.copy()
criteria = df1[:, 'a'] > 4
df1[criteria, 'b'] = 'TEDDY'
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'TEDDY', 'TEDDY', 'TEDDY'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
criteria = df1[:, 'a'] > 4
df1[criteria, 'b'] = ['A', 'B', 'C']
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'A', 'B', 'C'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
criteria = df1[:, 'a'] == 5
df1[criteria, :] = [nan, 'poop', 2.2, True]
df2 = dx.DataFrame({'a': [1, nan, 7, 11], 'b': ['eleni', 'poop', 'niko', 'penny'],
'c': [nan, 2.2, -1.1, .045], 'd': [True, True, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
with pytest.raises(ValueError):
df1[df1[:, 'a'] > 2, 'b'] = np.array(['aa', 'bb', 'cc', 'dd'])
df1 = self.df1.copy()
criteria = df1[:, 'a'] > 6
df1[criteria, 'b'] = np.array(['food', nan], dtype='O')
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'teddy', 'food', nan],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[df1[:, 'a'] < 6, ['d', 'c', 'a']] = [[False, nan, 5.3], [False, 44, 4]]
df2 = dx.DataFrame({'a': [5.3, 4, 7, 11], 'b': ['eleni', 'teddy', 'niko', 'penny'],
'c': [nan, 44, -1.1, .045], 'd': [False, False, False, True]})
assert_frame_equal(df1, df2)
def test_setitem_other_df(self):
df_other = dx.DataFrame({'z': [1, 10, 9, 50], 'y': ['dont', 'be a', 'silly', 'sausage']})
df1 = self.df1.copy()
df1[:, ['a', 'b']] = df_other
df2 = dx.DataFrame({'a': [1, 10, 9, 50], 'b': ['dont', 'be a', 'silly', 'sausage'],
'c': [nan, 5.4, -1.1, .045], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
df1 = self.df1.copy()
df1[[1, 3], ['c', 'b']] = df_other[[0, 2], :]
df2 = dx.DataFrame({'a': [1, 5, 7, 11], 'b': ['eleni', 'dont', 'niko', 'silly'],
'c': [nan, 1, -1.1, 9], 'd': [True, False, False, True]})
assert_frame_equal(df1, df2)
with pytest.raises(ValueError):
df1 = self.df1.copy()
df1[[1, 3], ['c', 'b']] = df_other[[0], :]


# --- packages/jet_bridge_base/jet_bridge_base/fields/raw.py
# --- repo: F2210/jet-bridge (license: MIT)
from jet_bridge_base.fields.field import Field


class RawField(Field):
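    """Field that passes values through unchanged, both when converting to
    the internal value and when serializing to the representation."""
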
def to_internal_value_item(self, value):
return value
def to_representation_item(self, value):
return value


# --- dataworkspace/dataworkspace/tests/applications/test_views.py
# --- repo: uktrade/jupyterhub-data-auth-admin (license: MIT)
import json
from contextlib import contextmanager
from unittest import mock
import botocore
import pytest
from django.contrib.admin.models import LogEntry
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.test import Client, override_settings
from django.urls import reverse
from dataworkspace.apps.applications.models import (
VisualisationApproval,
ApplicationInstance,
UserToolConfiguration,
)
from dataworkspace.apps.datasets.constants import UserAccessType
from dataworkspace.tests import factories
from dataworkspace.tests.common import get_http_sso_data
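

# Patch the GitLab integration points used by the visualisation UI views so
# tests can exercise them without talking to a real GitLab instance.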
@contextmanager
def _visualisation_ui_gitlab_mocks():
with mock.patch(
"dataworkspace.apps.applications.views._visualisation_gitlab_project"
) as projects_mock, mock.patch(
"dataworkspace.apps.applications.views._visualisation_branches"
) as branches_mock, mock.patch(
"dataworkspace.apps.applications.views.gitlab_has_developer_access"
) as access_mock:
access_mock.return_value = True
projects_mock.return_value = {
"id": 1,
"default_branch": "master",
"name": "test-gitlab-project",
}
branches_mock.return_value = [
{
"name": "master",
"commit": {"committed_date": "2020-04-14T21:25:22.000+00:00"},
}
]
yield projects_mock, branches_mock, access_mock
class TestDataVisualisationUICataloguePage:
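    """Tests for editing a visualisation's catalogue item (summary and user
    access type) through the visualisations UI."""
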
def test_successful_post_data(self, staff_client):
visualisation = factories.VisualisationCatalogueItemFactory.create(
short_description="old",
published=False,
visualisation_template__gitlab_project_id=1,
)
# Login to admin site
staff_client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = staff_client.post(
reverse(
"visualisations:catalogue-item",
args=(visualisation.visualisation_template.gitlab_project_id,),
),
{
"short_description": "summary",
"user_access_type": UserAccessType.OPEN,
},
follow=True,
)
visualisation.refresh_from_db()
assert response.status_code == 200
assert visualisation.short_description == "summary"
@pytest.mark.parametrize(
"start_type, expected_type",
(
(
UserAccessType.REQUIRES_AUTHORIZATION,
UserAccessType.REQUIRES_AUTHENTICATION,
),
(
UserAccessType.REQUIRES_AUTHENTICATION,
UserAccessType.REQUIRES_AUTHORIZATION,
),
(UserAccessType.REQUIRES_AUTHORIZATION, UserAccessType.OPEN),
),
)
def test_can_set_user_access_type(self, staff_client, start_type, expected_type):
log_count = LogEntry.objects.count()
visualisation = factories.VisualisationCatalogueItemFactory.create(
short_description="summary",
published=False,
user_access_type=start_type,
visualisation_template__gitlab_project_id=1,
)
# Login to admin site
staff_client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = staff_client.post(
reverse(
"visualisations:catalogue-item",
args=(visualisation.visualisation_template.gitlab_project_id,),
),
{"short_description": "summary", "user_access_type": expected_type},
follow=True,
)
visualisation.refresh_from_db()
assert response.status_code == 200
assert visualisation.user_access_type == expected_type
assert LogEntry.objects.count() == log_count + 1
def test_bad_post_data_no_short_description(self, staff_client):
visualisation = factories.VisualisationCatalogueItemFactory.create(
short_description="old",
published=False,
visualisation_template__gitlab_project_id=1,
)
# Login to admin site
staff_client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = staff_client.post(
reverse(
"visualisations:catalogue-item",
args=(visualisation.visualisation_template.gitlab_project_id,),
),
{"summary": ""},
follow=True,
)
visualisation.refresh_from_db()
assert response.status_code == 400
assert visualisation.short_description == "old"
assert "The visualisation must have a summary" in response.content.decode(response.charset)
class TestDataVisualisationUIApprovalPage:
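    """Tests for approving and unapproving a visualisation via the approvals
    view."""
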
@pytest.mark.django_db
def test_approve_visualisation_successfully(self):
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
visualisation = factories.VisualisationCatalogueItemFactory.create(
published=False, visualisation_template__gitlab_project_id=1
)
# Login to admin site
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.post(
reverse(
"visualisations:approvals",
args=(visualisation.visualisation_template.gitlab_project_id,),
),
{
"action": "approve",
"approved": "on",
"approver": user.id,
"visualisation": str(visualisation.visualisation_template.id),
},
follow=True,
)
assert response.status_code == 200
assert len(VisualisationApproval.objects.all()) == 1
@pytest.mark.django_db
def test_bad_post_data_approved_box_not_checked(self):
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
visualisation = factories.VisualisationCatalogueItemFactory.create(
published=False, visualisation_template__gitlab_project_id=1
)
# Login to admin site
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.post(
reverse(
"visualisations:approvals",
args=(visualisation.visualisation_template.gitlab_project_id,),
),
{
"action": "approve",
"approver": user.id,
"visualisation": str(visualisation.visualisation_template.id),
},
follow=True,
)
visualisation.refresh_from_db()
assert response.status_code == 400
assert (
"You must confirm that you have reviewed this visualisation"
in response.content.decode(response.charset)
)
assert len(VisualisationApproval.objects.all()) == 0
@pytest.mark.django_db
def test_unapprove_visualisation_successfully(self):
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
vis_cat_item = factories.VisualisationCatalogueItemFactory.create(
published=False, visualisation_template__gitlab_project_id=1
)
approval = factories.VisualisationApprovalFactory.create(
approved=True,
approver=user,
visualisation=vis_cat_item.visualisation_template,
)
# Login to admin site
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.post(
reverse(
"visualisations:approvals",
args=(vis_cat_item.visualisation_template.gitlab_project_id,),
),
{
"action": "unapprove",
"approver": user.id,
"visualisation": str(vis_cat_item.visualisation_template.id),
},
follow=True,
)
approval.refresh_from_db()
assert response.status_code == 200
assert len(VisualisationApproval.objects.all()) == 1
assert approval.approved is False
class TestQuickSightPollAndRedirect:
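    """Tests that the QuickSight view redirects to the SSO URL and kicks off
    the permissions-sync Celery task."""
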
@pytest.mark.django_db
@override_settings(QUICKSIGHT_SSO_URL="https://sso.quicksight")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
def test_view_redirects_to_quicksight_sso_url(self, mock_boto_client):
user = get_user_model().objects.create(is_staff=True, is_superuser=True)
# Login to admin site
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with mock.patch("dataworkspace.apps.applications.views.sync_quicksight_permissions"):
resp = client.get(reverse("applications:quicksight_redirect"), follow=False)
assert resp["Location"] == "https://sso.quicksight"
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
def test_view_starts_celery_polling_job(self, mock_boto_client):
user = get_user_model().objects.create(is_staff=True, is_superuser=True)
# Login to admin site
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with mock.patch(
"dataworkspace.apps.applications.views.sync_quicksight_permissions"
) as sync_mock:
client.get(reverse("applications:quicksight_redirect"), follow=False)
assert sync_mock.delay.call_args_list == [
mock.call(
user_sso_ids_to_update=(user.profile.sso_id,),
)
]
class TestToolsPage:
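    """Tests that the tools page shows the default or the user-configured
    tool size."""
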
@pytest.mark.django_db
def test_user_with_no_size_config_shows_default_config(self):
group_name = "Visualisation Tools"
template = factories.ApplicationTemplateFactory()
template.group_name = group_name
template.save()
user = get_user_model().objects.create()
client = Client(**get_http_sso_data(user))
response = client.get(reverse("applications:tools"), follow=True)
assert len(response.context["tools"][group_name]["tools"]) == 3
tool = None
for item in response.context["tools"][group_name]["tools"]:
if item.name == template.nice_name:
tool = item
break
assert tool is not None
assert tool.tool_configuration.size_config.name == "Medium"
assert tool.tool_configuration.size_config.cpu == 1024
assert tool.tool_configuration.size_config.memory == 8192
@pytest.mark.django_db
def test_user_with_size_config_shows_correct_config(self):
group_name = "Visualisation Tools"
template = factories.ApplicationTemplateFactory()
template.group_name = group_name
template.save()
user = get_user_model().objects.create()
UserToolConfiguration.objects.create(
user=user,
tool_template=template,
size=UserToolConfiguration.SIZE_EXTRA_LARGE,
)
client = Client(**get_http_sso_data(user))
response = client.get(reverse("applications:tools"), follow=True)
assert len(response.context["tools"][group_name]["tools"]) == 3
tool = None
for item in response.context["tools"][group_name]["tools"]:
if item.name == template.nice_name:
tool = item
break
assert tool is not None
assert tool.tool_configuration.size_config.name == "Extra Large"
assert tool.tool_configuration.size_config.cpu == 4096
assert tool.tool_configuration.size_config.memory == 30720
class TestUserToolSizeConfigurationView:
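    """Tests for viewing and saving a user's tool size configuration."""
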
@pytest.mark.django_db
def test_get_shows_all_size_choices(self):
tool = factories.ApplicationTemplateFactory()
user = get_user_model().objects.create()
client = Client(**get_http_sso_data(user))
response = client.get(
reverse(
"applications:configure_tool_size",
kwargs={"tool_host_basename": tool.host_basename},
),
)
assert response.status_code == 200
assert b"Small" in response.content
assert b"Medium (default)" in response.content
assert b"Large" in response.content
assert b"Extra Large" in response.content
@pytest.mark.django_db
def test_post_creates_new_tool_configuration(self):
tool = factories.ApplicationTemplateFactory(nice_name="RStudio")
user = get_user_model().objects.create()
assert not tool.user_tool_configuration.filter(user=user).first()
client = Client(**get_http_sso_data(user))
response = client.post(
reverse(
"applications:configure_tool_size",
kwargs={"tool_host_basename": tool.host_basename},
),
{"size": UserToolConfiguration.SIZE_EXTRA_LARGE},
follow=True,
)
assert response.status_code == 200
assert str(list(response.context["messages"])[0]) == "Saved RStudio size"
assert (
tool.user_tool_configuration.filter(user=user).first().size
== UserToolConfiguration.SIZE_EXTRA_LARGE
)
@pytest.mark.django_db
def test_post_updates_existing_tool_configuration(self):
tool = factories.ApplicationTemplateFactory(nice_name="RStudio")
user = get_user_model().objects.create()
UserToolConfiguration.objects.create(
user=user, tool_template=tool, size=UserToolConfiguration.SIZE_EXTRA_LARGE
)
client = Client(**get_http_sso_data(user))
response = client.post(
reverse(
"applications:configure_tool_size",
kwargs={"tool_host_basename": tool.host_basename},
),
{"size": UserToolConfiguration.SIZE_SMALL},
follow=True,
)
assert response.status_code == 200
assert str(list(response.context["messages"])[0]) == "Saved RStudio size"
assert (
tool.user_tool_configuration.filter(user=user).first().size
== UserToolConfiguration.SIZE_SMALL
)
class TestVisualisationLogs:
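    """Tests for the visualisation logs view, which reads container logs via
    the boto3 ``get_log_events`` API."""
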
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.applications.views._visualisation_gitlab_project")
def test_not_developer(self, mock_get_gitlab_project):
mock_get_gitlab_project.return_value = {
"id": 1,
"default_branch": "master",
"name": "test-gitlab-project",
}
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with mock.patch(
"dataworkspace.apps.applications.views.gitlab_has_developer_access"
) as access_mock:
access_mock.return_value = False
response = client.get(reverse("visualisations:logs", args=(1, "xxx")))
assert response.status_code == 403
@pytest.mark.django_db
def test_commit_does_not_exist(self, mocker):
application_template = factories.ApplicationTemplateFactory()
factories.ApplicationInstanceFactory(
application_template=application_template,
commit_id="",
spawner_application_template_options=json.dumps(
{"CONTAINER_NAME": "user-defined-container"}
),
spawner_application_instance_id=json.dumps({"task_arn": "arn:test:vis/task-id/999"}),
)
mock_get_application_template = mocker.patch(
"dataworkspace.apps.applications.views._application_template"
)
mock_get_application_template.return_value = application_template
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.get(reverse("visualisations:logs", args=(1, "xxx")))
assert response.status_code == 200
assert response.content == b"No logs were found for this visualisation."
@pytest.mark.django_db
def test_no_events(self, mocker):
application_template = factories.ApplicationTemplateFactory()
factories.ApplicationInstanceFactory(
application_template=application_template,
commit_id="xxx",
spawner_application_template_options=json.dumps(
{"CONTAINER_NAME": "user-defined-container"}
),
spawner_application_instance_id=json.dumps({"task_arn": "arn:test:vis/task-id/999"}),
)
mock_get_application_template = mocker.patch(
"dataworkspace.apps.applications.views._application_template"
)
mock_get_application_template.return_value = application_template
mock_boto = mocker.patch("dataworkspace.apps.core.boto3_client.boto3.client")
mock_boto.return_value.get_log_events.side_effect = botocore.exceptions.ClientError(
error_response={"Error": {"Code": "ResourceNotFoundException"}},
operation_name="get_log_events",
)
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.get(reverse("visualisations:logs", args=(1, "xxx")))
assert response.status_code == 200
assert response.content == b"No logs were found for this visualisation."
@pytest.mark.django_db
def test_with_events(self, mocker):
application_template = factories.ApplicationTemplateFactory()
factories.ApplicationInstanceFactory(
application_template=application_template,
commit_id="xxx",
spawner_application_template_options=json.dumps(
{"CONTAINER_NAME": "user-defined-container"}
),
spawner_application_instance_id=json.dumps({"task_arn": "arn:test:vis/task-id/999"}),
)
mock_get_application_template = mocker.patch(
"dataworkspace.apps.applications.views._application_template"
)
mock_get_application_template.return_value = application_template
mock_boto = mocker.patch("dataworkspace.apps.core.boto3_client.boto3.client")
mock_boto.return_value.get_log_events.side_effect = [
{
"nextForwardToken": "12345",
"events": [{"timestamp": 1605891793796, "message": "log message 1"}],
},
{"events": [{"timestamp": 1605891793797, "message": "log message 2"}]},
]
develop_visualisations_permission = Permission.objects.get(
codename="develop_visualisations",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user = factories.UserFactory.create(
username="visualisation.creator@test.com",
is_staff=False,
is_superuser=False,
)
user.user_permissions.add(develop_visualisations_permission)
client = Client(**get_http_sso_data(user))
client.post(reverse("admin:index"), follow=True)
with _visualisation_ui_gitlab_mocks():
response = client.get(reverse("visualisations:logs", args=(1, "xxx")))
assert response.status_code == 200
assert response.content == (
b"2020-11-20 17:03:13.796000 - log message 1\n"
b"2020-11-20 17:03:13.797000 - log message 2\n"
)


# --- tests/unit/test_copr_build.py
# --- repo: jpopelka/packit-service (license: MIT)
# Copyright Contributors to the Packit project.
# SPDX-License-Identifier: MIT
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Type
import pytest
from celery import Celery
from copr.v3 import Client
from flexmock import flexmock
import gitlab
import packit
import packit_service
from ogr.abstract import GitProject, CommitStatus
from ogr.services.github import GithubProject
from ogr.services.github.check_run import (
GithubCheckRunStatus,
GithubCheckRunResult,
create_github_check_run_output,
)
from ogr.services.gitlab import GitlabProject
from packit.actions import ActionName
from packit.api import PackitAPI
from packit.config import PackageConfig, JobConfig, JobType, JobConfigTriggerType
from packit.config.job_config import JobMetadataConfig
from packit.copr_helper import CoprHelper
from packit.exceptions import FailedCreateSRPM, PackitCoprSettingsException
from packit_service import sentry_integration
from packit_service.config import ServiceConfig
from packit_service.models import (
CoprBuildModel,
SRPMBuildModel,
JobTriggerModel,
JobTriggerModelType,
)
from packit_service.service.db_triggers import (
AddPullRequestDbTrigger,
AddBranchPushDbTrigger,
AddReleaseDbTrigger,
)
from packit_service.worker.events import (
PullRequestGithubEvent,
PushGitHubEvent,
ReleaseEvent,
PushGitlabEvent,
MergeRequestGitlabEvent,
)
from packit_service.worker.build import copr_build
from packit_service.worker.build.copr_build import (
CoprBuildJobHelper,
BaseBuildJobHelper,
)
from packit_service.worker.monitoring import Pushgateway
from packit_service.worker.parser import Parser
from packit_service.worker.reporting import (
BaseCommitStatus,
StatusReporterGitlab,
StatusReporterGithubChecks,
)
from tests.spellbook import DATA_DIR
DEFAULT_TARGETS = [
"fedora-29-x86_64",
"fedora-30-x86_64",
"fedora-31-x86_64",
"fedora-rawhide-x86_64",
]
CACHE_CLEAR = [
packit.copr_helper.CoprHelper.get_available_chroots,
]
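
# Applied module-wide: clear the chroot cache and mock alias resolution for
# every test in this module.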
pytestmark = pytest.mark.usefixtures("cache_clear", "mock_get_aliases")
create_table_content = StatusReporterGithubChecks._create_table
@pytest.fixture(scope="module")
def branch_push_event() -> PushGitHubEvent:
file_content = (DATA_DIR / "webhooks" / "github" / "push_branch.json").read_text()
return Parser.parse_github_push_event(json.loads(file_content))
@pytest.fixture(scope="module")
def branch_push_event_gitlab() -> PushGitlabEvent:
file_content = (DATA_DIR / "webhooks" / "gitlab" / "push_branch.json").read_text()
return Parser.parse_gitlab_push_event(json.loads(file_content))
def build_helper(
event,
metadata=None,
trigger=None,
jobs=None,
db_trigger=None,
selected_job=None,
project_type: Type[GitProject] = GithubProject,
targets_override=None,
):
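    """Construct a CoprBuildJobHelper for the given event.

    Only one of ``jobs`` and ``metadata`` may be passed; when ``jobs`` is
    omitted, ``metadata`` is wrapped in a single pull-request copr_build job.
    """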
if jobs and metadata:
raise Exception("Only one of jobs and metadata can be used.")
if not metadata:
metadata = JobMetadataConfig(
_targets=DEFAULT_TARGETS,
owner="nobody",
)
jobs = jobs or [
JobConfig(
type=JobType.copr_build,
trigger=trigger or JobConfigTriggerType.pull_request,
metadata=metadata,
)
]
pkg_conf = PackageConfig(jobs=jobs, downstream_package_name="dummy")
handler = CoprBuildJobHelper(
service_config=ServiceConfig(),
package_config=pkg_conf,
job_config=selected_job or jobs[0],
project=project_type(
repo="the-example-repo",
service=flexmock(instance_url="git.instance.io"),
namespace="the/example/namespace",
),
metadata=flexmock(
pr_id=event.pr_id,
git_ref=event.git_ref,
commit_sha=event.commit_sha,
identifier=event.identifier,
tag_name=None,
task_accepted_time=datetime.now(timezone.utc),
),
db_trigger=db_trigger,
targets_override=targets_override,
pushgateway=Pushgateway(),
)
handler._api = PackitAPI(ServiceConfig(), pkg_conf)
return handler
def test_copr_build_check_names(github_pr_event):
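    # Check names must follow the "rpm-build:<chroot>" pattern and the status
    # must move from "Building SRPM ..." to "Starting RPM build...".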
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
event=github_pr_event,
metadata=JobMetadataConfig(_targets=["bright-future-x86_64"], owner="packit"),
db_trigger=trigger,
)
# we need to make sure that pr_id is set
# so we can check it out and add it to spec's release field
assert helper.metadata.pr_id
flexmock(copr_build).should_receive("get_copr_build_info_url").and_return(
"https://test.url"
)
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Building SRPM ...",
check_name="rpm-build:bright-future-x86_64",
url="",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Starting RPM build...",
check_name="rpm-build:bright-future-x86_64",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().never()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").with_args(
project="the-example-namespace-the-example-repo-342-stg",
chroots=["bright-future-x86_64"],
owner="packit",
description=None,
instructions=None,
preserve_project=False,
list_on_homepage=False,
additional_repos=[],
request_admin_if_needed=True,
).and_return(None)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="packit",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": "", "__proxy__": "something"})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_check_names_invalid_chroots(github_pr_event):
build_targets = [
"bright-future-x86_64",
"even-brighter-one-aarch64",
"fedora-32-x86_64",
]
# packit.config.aliases.get_aliases.cache_clear()
# packit.copr_helper.CoprHelper.get_available_chroots.cache_clear()
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
event=github_pr_event,
metadata=JobMetadataConfig(_targets=build_targets, owner="packit"),
db_trigger=trigger,
)
# we need to make sure that pr_id is set
# so we can check it out and add it to spec's release field
assert helper.metadata.pr_id
flexmock(copr_build).should_receive("get_copr_build_info_url").and_return(
"https://test.url"
)
flexmock(copr_build).should_receive("get_srpm_build_info_url").and_return(
"https://test.url"
)
for target in build_targets:
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Building SRPM ...",
check_name=f"rpm-build:{target}",
url="",
links_to_external_services=None,
markdown_content=None,
).and_return()
for not_supported_target in ("bright-future-x86_64", "fedora-32-x86_64"):
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.error,
description=f"Not supported target: {not_supported_target}",
check_name=f"rpm-build:{not_supported_target}",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Starting RPM build...",
check_name="rpm-build:even-brighter-one-aarch64",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
.should_receive("comment")
.with_args(
pr_id=342,
body="There are build targets that are not supported by COPR.\n"
"<details>\n<summary>Unprocessed build targets</summary>\n\n"
"```\n"
"bright-future-x86_64\n"
"fedora-32-x86_64\n"
"```\n</details>\n<details>\n"
"<summary>Available build targets</summary>\n\n"
"```\n"
"even-brighter-one-aarch64\n"
"not-so-bright-future-x86_64\n"
"```\n</details>",
)
.and_return()
)
flexmock(GithubProject).should_receive("create_check_run").and_return().never()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="packit",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return(
{
"__response__": 200,
"not-so-bright-future-x86_64": "",
"even-brighter-one-aarch64": "",
"__proxy__": "something",
}
)
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_check_names_multiple_jobs(github_pr_event):
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
event=github_pr_event,
jobs=[
            # We run only the job whose config is passed to the handler.
            # The other one(s) have to be run by a different handler instance.
JobConfig(
type=JobType.copr_build,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(
_targets=["fedora-rawhide-x86_64"], owner="nobody"
),
actions={ActionName.post_upstream_clone: "ls /*"},
),
JobConfig(
type=JobType.copr_build,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(
_targets=["fedora-32-x86_64"], owner="nobody"
),
actions={ActionName.post_upstream_clone: 'bash -c "ls /*"'},
),
],
db_trigger=trigger,
selected_job=JobConfig(
type=JobType.copr_build,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(_targets=["fedora-32-x86_64"], owner="nobody"),
actions={ActionName.post_upstream_clone: 'bash -c "ls /*"'},
),
)
# we need to make sure that pr_id is set
# so we can check it out and add it to spec's release field
assert helper.metadata.pr_id
flexmock(copr_build).should_receive("get_copr_build_info_url").and_return(
"https://test.url"
)
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Building SRPM ...",
check_name="rpm-build:fedora-32-x86_64",
url="",
links_to_external_services=None,
markdown_content=None,
).and_return().once()
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Starting RPM build...",
check_name="rpm-build:fedora-32-x86_64",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return().once()
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().never()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").with_args(
project="the-example-namespace-the-example-repo-342-stg",
chroots=["fedora-32-x86_64"],
owner="nobody",
description=None,
instructions=None,
preserve_project=None,
list_on_homepage=None,
additional_repos=[],
request_admin_if_needed=True,
).and_return(None)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="packit",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"fedora-32-x86_64": "supported", "__to_be_ignored__": None})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_check_names_custom_owner(github_pr_event):
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
event=github_pr_event,
metadata=JobMetadataConfig(_targets=["bright-future-x86_64"], owner="nobody"),
db_trigger=trigger,
)
# we need to make sure that pr_id is set
# so we can check it out and add it to spec's release field
assert helper.metadata.pr_id
flexmock(copr_build).should_receive("get_copr_build_info_url").and_return(
"https://test.url"
)
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Building SRPM ...",
check_name="rpm-build:bright-future-x86_64",
url="",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(StatusReporterGithubChecks).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Starting RPM build...",
check_name="rpm-build:bright-future-x86_64",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().never()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").with_args(
project="the-example-namespace-the-example-repo-342-stg",
chroots=["bright-future-x86_64"],
owner="nobody",
description=None,
instructions=None,
preserve_project=None,
list_on_homepage=None,
additional_repos=[],
request_admin_if_needed=True,
).and_return(None)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="nobody",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": "", "bright-future-aarch64": ""})
.mock,
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_success_set_test_check(github_pr_event):
# status is set for each test-target (2x):
# - Building SRPM ...
# - Starting RPM build...
test_job = JobConfig(
type=JobType.tests,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(
owner="nobody", _targets=["bright-future-x86_64", "brightest-future-x86_64"]
),
)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[test_job],
event=github_pr_event,
db_trigger=trigger,
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(4)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": "", "brightest-future-x86_64": ""})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_for_branch(branch_push_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
branch_build_job = JobConfig(
type=JobType.build,
trigger=JobConfigTriggerType.commit,
metadata=JobMetadataConfig(
_targets=DEFAULT_TARGETS,
owner="nobody",
dist_git_branches=["build-branch"],
),
)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.commit,
id=123,
job_trigger_model_type=JobTriggerModelType.branch_push,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.branch_push, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.branch_push))
flexmock(AddBranchPushDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[branch_build_job],
event=branch_push_event,
db_trigger=trigger,
)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(8)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PushGitHubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_for_branch_failed(branch_push_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
branch_build_job = JobConfig(
type=JobType.build,
trigger=JobConfigTriggerType.commit,
metadata=JobMetadataConfig(
_targets=DEFAULT_TARGETS,
owner="nobody",
dist_git_branches=["build-branch"],
),
)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.commit,
id=123,
job_trigger_model_type=JobTriggerModelType.branch_push,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.branch_push, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.branch_push))
flexmock(AddBranchPushDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[branch_build_job],
event=branch_push_event,
db_trigger=trigger,
)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(8)
flexmock(GithubProject).should_receive("commit_comment").and_return(flexmock())
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(flexmock(success=False, id=2), flexmock())
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PushGitHubEvent).should_receive("db_trigger").and_raise(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_raise(
FailedCreateSRPM, "some error"
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(id=2, projectname="the-project-name", ownername="the-owner")
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(sentry_integration).should_receive("send_to_sentry").and_return().once()
flexmock(CoprBuildJobHelper).should_receive("run_build").never()
assert not helper.run_copr_build()["success"]
def test_copr_build_for_release(release_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
branch_build_job = JobConfig(
type=JobType.build,
trigger=JobConfigTriggerType.release,
metadata=JobMetadataConfig(
_targets=DEFAULT_TARGETS,
owner="nobody",
dist_git_branches=["build-branch"],
),
)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.release,
id=123,
job_trigger_model_type=JobTriggerModelType.release,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.release, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.release))
flexmock(AddReleaseDbTrigger).should_receive("db_trigger").and_return(trigger)
flexmock(release_event.project).should_receive("get_sha_from_tag").and_return(
"123456"
)
helper = build_helper(
jobs=[branch_build_job],
event=release_event,
db_trigger=trigger,
)
flexmock(ReleaseEvent).should_receive("get_project").and_return(helper.project)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(8)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_success(github_pr_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
helper = build_helper(
event=github_pr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(GithubProject).should_receive("create_check_run").and_return().times(8)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
def test_copr_build_fails_in_packit(github_pr_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Build failed, check latest comment for details.
helper = build_helper(
event=github_pr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return({"fedora-31-x86_64", "fedora-rawhide-x86_64"})
templ = "rpm-build:fedora-{ver}-x86_64"
flexmock(copr_build).should_receive("get_srpm_build_info_url").and_return(
"https://test.url"
)
for v in ["31", "rawhide"]:
flexmock(GithubProject).should_receive("create_check_run").with_args(
name=templ.format(ver=v),
commit_sha="528b803be6f93e19ca4130bf4976f2800a3004c4",
url=None,
external_id="2",
status=GithubCheckRunStatus.in_progress,
conclusion=None,
output=create_github_check_run_output("Building SRPM ...", ""),
).and_return().once()
for v in ["31", "rawhide"]:
flexmock(GithubProject).should_receive("create_check_run").with_args(
name=templ.format(ver=v),
commit_sha="528b803be6f93e19ca4130bf4976f2800a3004c4",
url="https://test.url",
external_id="2",
status=GithubCheckRunStatus.completed,
conclusion=GithubCheckRunResult.failure,
output=create_github_check_run_output(
"SRPM build failed, check the logs for details.",
create_table_content(
url="https://test.url", links_to_external_services=None
),
),
).and_return().once()
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(flexmock(success=False, id=2), flexmock())
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(sentry_integration).should_receive("send_to_sentry").and_return().once()
flexmock(PackitAPI).should_receive("create_srpm").and_raise(
FailedCreateSRPM, "some error"
)
flexmock(CoprBuildJobHelper).should_receive("run_build").never()
assert not helper.run_copr_build()["success"]
def test_copr_build_fails_to_update_copr_project(github_pr_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Build failed, check latest comment for details.
helper = build_helper(
event=github_pr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
templ = "rpm-build:fedora-{ver}-x86_64"
flexmock(copr_build).should_receive("get_srpm_build_info_url").and_return(
"https://test.url"
)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return({"fedora-31-x86_64", "fedora-rawhide-x86_64"})
for v in ["31", "rawhide"]:
flexmock(GithubProject).should_receive("create_check_run").with_args(
name=templ.format(ver=v),
commit_sha="528b803be6f93e19ca4130bf4976f2800a3004c4",
url=None,
external_id="2",
status=GithubCheckRunStatus.in_progress,
conclusion=None,
output=create_github_check_run_output("Building SRPM ...", ""),
).and_return().once()
for v in ["31", "rawhide"]:
flexmock(GithubProject).should_receive("create_check_run").with_args(
name=templ.format(ver=v),
commit_sha="528b803be6f93e19ca4130bf4976f2800a3004c4",
url=None,
external_id="2",
status=GithubCheckRunStatus.completed,
conclusion=GithubCheckRunResult.failure,
output=create_github_check_run_output(
"Submit of the build failed: Copr project update failed.", ""
),
).and_return().once()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(flexmock(success=True, id=2), flexmock())
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
flexmock(GithubProject).should_receive("get_pr").with_args(342).and_return(
flexmock()
)
flexmock(GithubProject).should_receive("get_pr").with_args(pr_id=342).and_return(
flexmock(source_project=flexmock())
.should_receive("comment")
.with_args(
body="Based on your Packit configuration the settings of the "
"nobody/the-example-namespace-the-example-repo-342-stg "
"Copr project would need to be updated as follows:\n"
"\n"
"| field | old value | new value |\n"
"| ----- | --------- | --------- |\n"
"| chroots | ['f30', 'f31'] | ['f31', 'f32'] |\n"
"| description | old | new |\n"
"\n"
"\n"
"Packit was unable to update the settings above "
"as it is missing `admin` permissions on the "
"nobody/the-example-namespace-the-example-repo-342-stg Copr project.\n"
"\n"
"To fix this you can do one of the following:\n"
"\n"
"- Grant Packit `admin` permissions on the "
"nobody/the-example-namespace-the-example-repo-342-stg "
"Copr project on the [permissions page](https://copr.fedorainfracloud.org/coprs/nobody/"
"the-example-namespace-the-example-repo-342-stg/permissions/).\n"
"- Change the above Copr project settings manually on the "
"[settings page](https://copr.fedorainfracloud.org/"
"coprs/nobody/the-example-namespace-the-example-repo-342-stg/edit/) "
"to match the Packit configuration.\n"
"- Update the Packit configuration to match the Copr project settings.\n"
"\n"
"Please retrigger the build, once the issue above is fixed.\n",
)
.and_return()
.mock()
)
flexmock(sentry_integration).should_receive("send_to_sentry").and_return().once()
# copr build
flexmock(CoprHelper).should_receive("get_copr_settings_url").with_args(
"nobody",
"the-example-namespace-the-example-repo-342-stg",
section="permissions",
).and_return(
"https://copr.fedorainfracloud.org/"
"coprs/nobody/the-example-namespace-the-example-repo-342-stg/permissions/"
).once()
flexmock(CoprHelper).should_receive("get_copr_settings_url").with_args(
"nobody",
"the-example-namespace-the-example-repo-342-stg",
).and_return(
"https://copr.fedorainfracloud.org/"
"coprs/nobody/the-example-namespace-the-example-repo-342-stg/edit/"
).once()
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_raise(
PackitCoprSettingsException,
"Copr project update failed.",
fields_to_change={
"chroots": (["f30", "f31"], ["f31", "f32"]),
"description": ("old", "new"),
},
)
assert not helper.run_copr_build()["success"]
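

# No explicit targets in the job config: the helper falls back to the valid
# build targets, so four check runs are expected (two targets x two statuses).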
def test_copr_build_no_targets(github_pr_event):
# status is set for each build-target (fedora-stable => 2x):
# - Building SRPM ...
# - Starting RPM build...
helper = build_helper(
event=github_pr_event,
metadata=JobMetadataConfig(owner="nobody"),
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(copr_build).should_receive("get_srpm_build_info_url").and_return(
"https://test.url"
)
flexmock(copr_build).should_receive("get_valid_build_targets").and_return(
{"fedora-32-x86_64", "fedora-31-x86_64"}
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(4)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PullRequestGithubEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return(
{target: "" for target in {"fedora-32-x86_64", "fedora-31-x86_64"}}
)
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
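

# GitLab MR variant of the check-name test: both statuses are reported via
# StatusReporterGitlab under "rpm-build:<target>" check names (never through
# set_commit_status directly), and the expected Copr project settings are
# asserted explicitly on create_copr_project_if_not_exists.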
def test_copr_build_check_names_gitlab(gitlab_mr_event):
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
event=gitlab_mr_event,
metadata=JobMetadataConfig(_targets=["bright-future-x86_64"], owner="nobody"),
db_trigger=trigger,
project_type=GitlabProject,
)
flexmock(copr_build).should_receive("get_copr_build_info_url").and_return(
"https://test.url"
)
flexmock(StatusReporterGitlab).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Building SRPM ...",
check_name="rpm-build:bright-future-x86_64",
url="",
links_to_external_services=None,
markdown_content=None,
).and_return()
flexmock(StatusReporterGitlab).should_receive("set_status").with_args(
state=BaseCommitStatus.running,
description="Starting RPM build...",
check_name="rpm-build:bright-future-x86_64",
url="https://test.url",
links_to_external_services=None,
markdown_content=None,
).and_return()
mr = flexmock(source_project=flexmock())
flexmock(GitlabProject).should_receive("get_pr").and_return(mr)
flexmock(mr.source_project).should_receive("set_commit_status").and_return().never()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(MergeRequestGitlabEvent).should_receive("db_trigger").and_return(
flexmock()
)
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return(["bright-future-x86_64"])
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").with_args(
project="git.instance.io-the-example-namespace-the-example-repo-1-stg",
chroots=["bright-future-x86_64"],
owner="nobody",
description=None,
instructions=None,
preserve_project=None,
list_on_homepage=None,
additional_repos=[],
request_admin_if_needed=True,
).and_return(None)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="nobody",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": ""})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
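

# A tests job on a GitLab MR: commit statuses are set for both test targets
# (two statuses each, four calls in total) before the build task is queued.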
def test_copr_build_success_set_test_check_gitlab(gitlab_mr_event):
# status is set for each test-target (2x):
# - Building SRPM ...
# - Starting RPM build...
test_job = JobConfig(
type=JobType.tests,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(
owner="nobody", _targets=["bright-future-x86_64", "brightest-future-x86_64"]
),
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return(["bright-future-x86_64", "brightest-future-x86_64"])
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[test_job],
event=gitlab_mr_event,
db_trigger=trigger,
project_type=GitlabProject,
)
mr = flexmock(source_project=flexmock())
flexmock(GitlabProject).should_receive("get_pr").and_return(mr)
flexmock(mr.source_project).should_receive("set_commit_status").and_return().times(
4
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": "", "brightest-future-x86_64": ""})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
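

# Branch-push trigger on GitLab: commit statuses are set through
# GitlabProject directly, twice per default target (eight calls).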
def test_copr_build_for_branch_gitlab(branch_push_event_gitlab):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
branch_build_job = JobConfig(
type=JobType.build,
trigger=JobConfigTriggerType.commit,
metadata=JobMetadataConfig(
_targets=DEFAULT_TARGETS,
owner="nobody",
dist_git_branches=["build-branch"],
),
)
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.commit,
id=123,
job_trigger_model_type=JobTriggerModelType.branch_push,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.branch_push, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.branch_push))
flexmock(AddBranchPushDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[branch_build_job],
event=branch_push_event_gitlab,
db_trigger=trigger,
project_type=GitlabProject,
)
flexmock(GitlabProject).should_receive("set_commit_status").and_return().times(8)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
    flexmock(PushGitlabEvent).should_receive("db_trigger").and_return(flexmock())
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return(DEFAULT_TARGETS)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
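

# Happy path for a GitLab MR build with the default targets: eight commit
# statuses (two per target) and a single Celery task.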
def test_copr_build_success_gitlab(gitlab_mr_event):
# status is set for each build-target (4x):
# - Building SRPM ...
# - Starting RPM build...
helper = build_helper(
event=gitlab_mr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
project_type=GitlabProject,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
mr = flexmock(source_project=flexmock())
flexmock(GitlabProject).should_receive("get_pr").and_return(mr)
flexmock(mr.source_project).should_receive("set_commit_status").and_return().times(
8
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(MergeRequestGitlabEvent).should_receive("db_trigger").and_return(
flexmock()
)
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return(DEFAULT_TARGETS)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
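

# SRPM creation raises FailedCreateSRPM here: a failure status is reported
# per target and run_build is never reached.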
def test_copr_build_fails_in_packit_gitlab(gitlab_mr_event):
    # status is set for each build-target (2x):
    # - Building SRPM ...
    # - SRPM build failed, check the logs for details.
helper = build_helper(
event=gitlab_mr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
project_type=GitlabProject,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
templ = "rpm-build:fedora-{ver}-x86_64"
flexmock(copr_build).should_receive("get_srpm_build_info_url").and_return(
"https://test.url"
)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return({"fedora-31-x86_64", "fedora-rawhide-x86_64"})
mr = flexmock(source_project=flexmock())
flexmock(GitlabProject).should_receive("get_pr").and_return(mr)
for v in ["31", "rawhide"]:
flexmock(mr.source_project).should_receive("set_commit_status").with_args(
"1f6a716aa7a618a9ffe56970d77177d99d100022",
CommitStatus.running,
"",
"Building SRPM ...",
templ.format(ver=v),
trim=True,
).and_return().once()
for v in ["31", "rawhide"]:
flexmock(mr.source_project).should_receive("set_commit_status").with_args(
"1f6a716aa7a618a9ffe56970d77177d99d100022",
CommitStatus.failure,
"https://test.url",
"SRPM build failed, check the logs for details.",
templ.format(ver=v),
trim=True,
).and_return().once()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(flexmock(success=False, id=2), flexmock())
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(sentry_integration).should_receive("send_to_sentry").and_return().once()
flexmock(PackitAPI).should_receive("create_srpm").and_raise(
FailedCreateSRPM, "some error"
)
flexmock(CoprBuildJobHelper).should_receive("run_build").never()
assert not helper.run_copr_build()["success"]
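

# Setting commit statuses returns 403 on this GitLab instance, so the helper
# requests access and falls back to commenting; the build itself still
# succeeds.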
def test_copr_build_success_gitlab_comment(gitlab_mr_event):
helper = build_helper(
event=gitlab_mr_event,
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
project_type=GitlabProject,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(BaseBuildJobHelper).should_receive("is_gitlab_instance").and_return(True)
flexmock(BaseBuildJobHelper).should_receive("base_project").and_return(
GitlabProject(
repo="the-example-repo",
service=flexmock(),
namespace="the-example-namespace",
)
)
flexmock(GitlabProject).should_receive("request_access").and_return()
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(
False
)
pr = flexmock(
comment=flexmock().should_receive("comment").and_return().mock(),
source_project=flexmock(),
)
flexmock(GitlabProject).should_receive("get_pr").and_return(pr)
flexmock(pr.source_project).should_receive("set_commit_status").and_raise(
gitlab.GitlabCreateError(response_code=403)
)
flexmock(GitlabProject).should_receive("commit_comment").and_return()
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True, id=42)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(MergeRequestGitlabEvent).should_receive("db_trigger").and_return(
flexmock()
)
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({target: "" for target in DEFAULT_TARGETS})
.mock(),
)
)
flexmock(packit_service.worker.build.copr_build).should_receive(
"get_valid_build_targets"
).and_return(
{
"fedora-33-x86_64",
"fedora-32-x86_64",
"fedora-31-x86_64",
"fedora-rawhide-x86_64",
}
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
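

# GitLab MR without explicit targets: same fallback to the valid build
# targets as the GitHub variant above, with commit statuses instead of
# check runs.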
def test_copr_build_no_targets_gitlab(gitlab_mr_event):
# status is set for each build-target (fedora-stable => 2x):
# - Building SRPM ...
# - Starting RPM build...
helper = build_helper(
event=gitlab_mr_event,
metadata=JobMetadataConfig(owner="nobody"),
db_trigger=flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
),
project_type=GitlabProject,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(CoprBuildJobHelper).should_receive("is_reporting_allowed").and_return(True)
flexmock(copr_build).should_receive("get_valid_build_targets").and_return(
{"fedora-32-x86_64", "fedora-31-x86_64"}
)
mr = flexmock(source_project=flexmock())
flexmock(GitlabProject).should_receive("get_pr").and_return(mr)
flexmock(mr.source_project).should_receive("set_commit_status").and_return().times(
4
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(MergeRequestGitlabEvent).should_receive("db_trigger").and_return(
flexmock()
)
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(CoprHelper).should_receive("get_copr_client").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.and_return(
flexmock(
id=2,
projectname="the-project-name",
ownername="the-owner",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return(
{target: "" for target in {"fedora-32-x86_64", "fedora-31-x86_64"}}
)
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
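

# targets_override narrows the two configured test targets down to one:
# only that chroot may be passed to the Copr build proxy (asserted via
# with_args), and only two check runs are created.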
def test_copr_build_targets_override(github_pr_event):
# status is set for only one test-target defined in targets_override (2x):
# - Building SRPM ...
# - Starting RPM build...
test_job = JobConfig(
type=JobType.tests,
trigger=JobConfigTriggerType.pull_request,
metadata=JobMetadataConfig(
owner="nobody", _targets=["bright-future-x86_64", "brightest-future-x86_64"]
),
)
trigger = flexmock(
job_config_trigger_type=JobConfigTriggerType.pull_request,
id=123,
job_trigger_model_type=JobTriggerModelType.pull_request,
)
flexmock(JobTriggerModel).should_receive("get_or_create").with_args(
type=JobTriggerModelType.pull_request, trigger_id=123
).and_return(flexmock(id=2, type=JobTriggerModelType.pull_request))
flexmock(AddPullRequestDbTrigger).should_receive("db_trigger").and_return(trigger)
helper = build_helper(
jobs=[test_job],
event=github_pr_event,
db_trigger=trigger,
targets_override={"bright-future-x86_64"},
)
flexmock(GithubProject).should_receive("create_check_run").and_return().times(2)
flexmock(GithubProject).should_receive("get_pr").and_return(
flexmock(source_project=flexmock())
)
flexmock(SRPMBuildModel).should_receive("create_with_new_run").and_return(
(
flexmock(success=True)
.should_receive("set_url")
.with_args("https://some.host/my.srpm")
.mock(),
flexmock(),
)
)
flexmock(CoprBuildModel).should_receive("create").and_return(flexmock(id=1))
flexmock(PackitAPI).should_receive("create_srpm").and_return("my.srpm")
# copr build
flexmock(CoprHelper).should_receive("create_copr_project_if_not_exists").and_return(
None
)
flexmock(Client).should_receive("create_from_config_file").and_return(
flexmock(
config={"copr_url": "https://copr.fedorainfracloud.org/"},
build_proxy=flexmock()
.should_receive("create_from_file")
.with_args(
ownername="nobody",
projectname="the-example-namespace-the-example-repo-342-stg",
path=Path("my.srpm"),
buildopts={
"chroots": ["bright-future-x86_64"],
},
)
.and_return(
flexmock(
id=2,
projectname="the-example-namespace-the-example-repo-342-stg",
ownername="nobody",
)
)
.mock(),
mock_chroot_proxy=flexmock()
.should_receive("get_list")
.and_return({"bright-future-x86_64": ""})
.mock(),
)
)
flexmock(Celery).should_receive("send_task").once()
assert helper.run_copr_build()["success"]
| 37.207412 | 100 | 0.644483 | 7,233 | 67,271 | 5.691553 | 0.049495 | 0.096315 | 0.054097 | 0.026307 | 0.890519 | 0.883086 | 0.871183 | 0.863556 | 0.851216 | 0.840017 | 0 | 0.01598 | 0.237205 | 67,271 | 1,807 | 101 | 37.228002 | 0.786277 | 0.033506 | 0 | 0.694199 | 0 | 0.003153 | 0.173398 | 0.051291 | 0 | 0 | 0 | 0 | 0.015132 | 1 | 0.014502 | false | 0 | 0.020177 | 0 | 0.03657 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2d721cc1f55bedac724436b0753bc6639072efc | 37,735 | py | Python | instances/passenger_demand/pas-20210421-2109-int12e/8.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null | instances/passenger_demand/pas-20210421-2109-int12e/8.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null | instances/passenger_demand/pas-20210421-2109-int12e/8.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 2831
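
# passenger_arriving: one row of raw arrival counts per time step (60 rows,
# the last one all zeros) and one column per stop (12). Columns 5 and 11
# stay at zero, which is consistent with line termini where nobody boards.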
passenger_arriving = (
(4, 8, 5, 3, 2, 0, 3, 9, 8, 6, 0, 0), # 0
(3, 14, 8, 4, 0, 0, 6, 5, 3, 2, 3, 0), # 1
(2, 6, 4, 0, 0, 0, 6, 3, 5, 8, 1, 0), # 2
(4, 6, 6, 3, 4, 0, 5, 9, 3, 7, 2, 0), # 3
(3, 5, 12, 0, 3, 0, 5, 8, 6, 8, 2, 0), # 4
(2, 7, 9, 4, 1, 0, 10, 9, 4, 6, 2, 0), # 5
(0, 5, 7, 3, 2, 0, 5, 4, 2, 4, 1, 0), # 6
(5, 3, 10, 1, 3, 0, 4, 10, 5, 3, 1, 0), # 7
(1, 6, 8, 2, 4, 0, 5, 4, 8, 10, 3, 0), # 8
(2, 10, 7, 2, 3, 0, 2, 5, 3, 6, 2, 0), # 9
(4, 5, 4, 5, 3, 0, 5, 6, 5, 2, 1, 0), # 10
(3, 12, 6, 0, 3, 0, 2, 6, 10, 6, 0, 0), # 11
(2, 5, 9, 2, 0, 0, 4, 10, 9, 11, 2, 0), # 12
(6, 6, 3, 3, 2, 0, 3, 10, 1, 7, 1, 0), # 13
(5, 3, 8, 2, 1, 0, 1, 7, 8, 7, 2, 0), # 14
(3, 8, 3, 4, 1, 0, 6, 6, 2, 8, 3, 0), # 15
(6, 13, 6, 1, 0, 0, 5, 9, 5, 2, 0, 0), # 16
(5, 9, 5, 1, 1, 0, 7, 9, 7, 1, 1, 0), # 17
(6, 8, 10, 3, 4, 0, 5, 11, 9, 1, 3, 0), # 18
(4, 13, 5, 4, 3, 0, 4, 2, 9, 10, 2, 0), # 19
(6, 9, 7, 4, 3, 0, 5, 5, 5, 1, 2, 0), # 20
(4, 4, 7, 0, 2, 0, 3, 5, 5, 3, 1, 0), # 21
(5, 8, 4, 2, 3, 0, 2, 5, 4, 9, 2, 0), # 22
(2, 9, 6, 3, 1, 0, 2, 14, 7, 4, 2, 0), # 23
(5, 6, 7, 1, 2, 0, 6, 6, 4, 5, 2, 0), # 24
(1, 6, 5, 3, 1, 0, 4, 6, 6, 5, 4, 0), # 25
(2, 7, 8, 3, 1, 0, 7, 3, 4, 2, 2, 0), # 26
(3, 8, 5, 5, 2, 0, 7, 5, 1, 3, 0, 0), # 27
(6, 8, 9, 2, 0, 0, 5, 6, 5, 4, 2, 0), # 28
(2, 11, 7, 3, 4, 0, 4, 7, 3, 4, 3, 0), # 29
(2, 7, 9, 4, 4, 0, 6, 6, 5, 5, 0, 0), # 30
(6, 5, 2, 6, 2, 0, 4, 7, 8, 6, 1, 0), # 31
(3, 14, 8, 3, 5, 0, 8, 9, 8, 3, 4, 0), # 32
(3, 10, 0, 1, 2, 0, 6, 13, 4, 5, 0, 0), # 33
(4, 6, 10, 4, 1, 0, 5, 5, 5, 4, 1, 0), # 34
(5, 11, 9, 3, 2, 0, 4, 7, 7, 7, 3, 0), # 35
(8, 8, 5, 4, 2, 0, 8, 12, 3, 4, 5, 0), # 36
(6, 0, 9, 3, 3, 0, 14, 6, 6, 0, 2, 0), # 37
(4, 6, 9, 5, 3, 0, 7, 6, 7, 1, 1, 0), # 38
(5, 5, 6, 0, 3, 0, 9, 5, 5, 6, 4, 0), # 39
(4, 4, 9, 3, 0, 0, 6, 7, 4, 5, 3, 0), # 40
(2, 9, 4, 4, 2, 0, 7, 7, 3, 5, 0, 0), # 41
(5, 10, 8, 2, 1, 0, 4, 7, 3, 5, 3, 0), # 42
(6, 11, 6, 2, 0, 0, 4, 7, 9, 7, 1, 0), # 43
(2, 7, 9, 4, 0, 0, 4, 4, 5, 3, 2, 0), # 44
(6, 6, 3, 5, 0, 0, 8, 11, 6, 6, 2, 0), # 45
(6, 10, 10, 1, 1, 0, 5, 7, 7, 3, 1, 0), # 46
(7, 8, 9, 3, 2, 0, 5, 10, 5, 4, 3, 0), # 47
(4, 11, 8, 7, 6, 0, 3, 9, 3, 2, 1, 0), # 48
(2, 18, 6, 3, 2, 0, 6, 5, 3, 4, 4, 0), # 49
(3, 7, 6, 4, 2, 0, 7, 7, 9, 5, 1, 0), # 50
(7, 9, 6, 3, 0, 0, 5, 6, 8, 5, 2, 0), # 51
(2, 8, 9, 1, 3, 0, 7, 9, 6, 3, 1, 0), # 52
(2, 9, 10, 5, 1, 0, 4, 9, 6, 1, 2, 0), # 53
(2, 8, 5, 4, 2, 0, 2, 12, 5, 4, 7, 0), # 54
(4, 14, 8, 2, 3, 0, 5, 7, 3, 5, 1, 0), # 55
(2, 12, 6, 4, 1, 0, 0, 8, 0, 7, 3, 0), # 56
(2, 9, 6, 4, 1, 0, 6, 5, 8, 5, 10, 0), # 57
(1, 8, 5, 3, 2, 0, 6, 6, 10, 7, 2, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
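
# station_arriving_intensity: smoothed arrival intensities (6 columns) over
# the same 60 time steps, presumably the fitted demand curves behind the
# per-stop counts above.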
station_arriving_intensity = (
(3.1795818700614573, 8.15575284090909, 9.59308322622108, 7.603532608695652, 8.571634615384614, 5.708152173913044), # 0
(3.20942641205736, 8.246449918455387, 9.644898645029993, 7.6458772644927535, 8.635879807692307, 5.706206567028985), # 1
(3.238930172666081, 8.335801683501682, 9.695484147386459, 7.687289855072463, 8.69876923076923, 5.704201449275362), # 2
(3.268068107989464, 8.42371171875, 9.744802779562981, 7.727735054347824, 8.760245192307693, 5.702137092391305), # 3
(3.296815174129353, 8.510083606902358, 9.792817587832047, 7.767177536231884, 8.82025, 5.700013768115941), # 4
(3.3251463271875914, 8.594820930660775, 9.839491618466152, 7.805581974637681, 8.87872596153846, 5.697831748188405), # 5
(3.353036523266023, 8.677827272727273, 9.88478791773779, 7.842913043478261, 8.935615384615383, 5.695591304347826), # 6
(3.380460718466491, 8.75900621580387, 9.92866953191945, 7.879135416666666, 8.990860576923078, 5.693292708333334), # 7
(3.40739386889084, 8.83826134259259, 9.971099507283634, 7.914213768115941, 9.044403846153847, 5.6909362318840575), # 8
(3.4338109306409126, 8.915496235795453, 10.012040890102828, 7.9481127717391304, 9.0961875, 5.68852214673913), # 9
(3.459686859818554, 8.990614478114479, 10.051456726649528, 7.980797101449276, 9.146153846153846, 5.68605072463768), # 10
(3.4849966125256073, 9.063519652251683, 10.089310063196228, 8.012231431159421, 9.194245192307692, 5.683522237318841), # 11
(3.509715144863916, 9.134115340909089, 10.125563946015424, 8.042380434782608, 9.240403846153844, 5.680936956521738), # 12
(3.5338174129353224, 9.20230512678872, 10.160181421379605, 8.071208786231884, 9.284572115384616, 5.678295153985506), # 13
(3.5572783728416737, 9.267992592592593, 10.193125535561265, 8.098681159420288, 9.326692307692307, 5.6755971014492745), # 14
(3.5800729806848106, 9.331081321022726, 10.224359334832902, 8.124762228260868, 9.36670673076923, 5.672843070652174), # 15
(3.6021761925665783, 9.391474894781144, 10.25384586546701, 8.149416666666665, 9.404557692307693, 5.6700333333333335), # 16
(3.6235629645888205, 9.449076896569863, 10.281548173736075, 8.172609148550725, 9.4401875, 5.667168161231884), # 17
(3.64420825285338, 9.503790909090908, 10.307429305912597, 8.194304347826087, 9.473538461538464, 5.664247826086956), # 18
(3.664087013462101, 9.555520515046295, 10.331452308269066, 8.214466938405796, 9.504552884615384, 5.661272599637681), # 19
(3.683174202516827, 9.604169297138045, 10.353580227077975, 8.2330615942029, 9.533173076923077, 5.658242753623187), # 20
(3.7014447761194034, 9.649640838068178, 10.373776108611827, 8.250052989130435, 9.559341346153845, 5.655158559782609), # 21
(3.7188736903716704, 9.69183872053872, 10.3920029991431, 8.26540579710145, 9.582999999999998, 5.652020289855073), # 22
(3.7354359013754754, 9.730666527251683, 10.408223944944302, 8.279084692028986, 9.604091346153846, 5.6488282155797105), # 23
(3.75110636523266, 9.76602784090909, 10.422401992287917, 8.291054347826087, 9.62255769230769, 5.645582608695652), # 24
(3.7658600380450684, 9.797826244212962, 10.434500187446444, 8.301279438405798, 9.638341346153844, 5.642283740942029), # 25
(3.779671875914545, 9.825965319865318, 10.444481576692374, 8.309724637681159, 9.651384615384615, 5.63893188405797), # 26
(3.792516834942932, 9.85034865056818, 10.452309206298198, 8.316354619565217, 9.661629807692309, 5.635527309782609), # 27
(3.804369871232075, 9.870879819023568, 10.457946122536418, 8.321134057971014, 9.66901923076923, 5.632070289855072), # 28
(3.815205940883816, 9.887462407933501, 10.461355371679518, 8.324027626811594, 9.673495192307692, 5.628561096014493), # 29
(3.8249999999999997, 9.9, 10.4625, 8.325, 9.674999999999999, 5.625), # 30
(3.834164434143222, 9.910414559659088, 10.461641938405796, 8.324824387254901, 9.674452393617022, 5.620051511744128), # 31
(3.843131010230179, 9.920691477272728, 10.459092028985506, 8.324300980392156, 9.672821276595744, 5.612429710144928), # 32
(3.8519037563938614, 9.930829474431818, 10.45488668478261, 8.323434926470588, 9.670124202127658, 5.6022092203898035), # 33
(3.860486700767263, 9.940827272727272, 10.449062318840578, 8.32223137254902, 9.666378723404256, 5.589464667666167), # 34
(3.8688838714833755, 9.950683593749998, 10.441655344202898, 8.320695465686274, 9.661602393617022, 5.574270677161419), # 35
(3.8770992966751923, 9.96039715909091, 10.432702173913043, 8.318832352941177, 9.655812765957448, 5.556701874062968), # 36
(3.885137004475703, 9.96996669034091, 10.422239221014491, 8.316647181372549, 9.64902739361702, 5.536832883558221), # 37
(3.893001023017902, 9.979390909090908, 10.410302898550723, 8.314145098039214, 9.641263829787233, 5.514738330834581), # 38
(3.900695380434782, 9.988668536931817, 10.396929619565215, 8.31133125, 9.632539627659574, 5.490492841079459), # 39
(3.908224104859335, 9.997798295454546, 10.382155797101449, 8.308210784313726, 9.62287234042553, 5.464171039480259), # 40
(3.915591224424552, 10.006778906249998, 10.366017844202899, 8.304788848039216, 9.612279521276594, 5.435847551224389), # 41
(3.9228007672634266, 10.015609090909093, 10.348552173913044, 8.301070588235293, 9.600778723404256, 5.40559700149925), # 42
(3.929856761508952, 10.024287571022725, 10.329795199275361, 8.297061151960785, 9.5883875, 5.373494015492254), # 43
(3.936763235294117, 10.032813068181818, 10.309783333333334, 8.292765686274508, 9.575123404255319, 5.339613218390804), # 44
(3.9435242167519178, 10.041184303977271, 10.288552989130435, 8.288189338235293, 9.561003989361701, 5.304029235382309), # 45
(3.9501437340153456, 10.0494, 10.266140579710147, 8.28333725490196, 9.546046808510638, 5.266816691654173), # 46
(3.956625815217391, 10.05745887784091, 10.24258251811594, 8.278214583333332, 9.530269414893617, 5.228050212393803), # 47
(3.962974488491049, 10.065359659090909, 10.217915217391303, 8.272826470588234, 9.513689361702127, 5.187804422788607), # 48
(3.9691937819693086, 10.073101065340907, 10.19217509057971, 8.26717806372549, 9.49632420212766, 5.146153948025987), # 49
(3.9752877237851663, 10.080681818181816, 10.165398550724637, 8.261274509803922, 9.478191489361702, 5.103173413293353), # 50
(3.9812603420716113, 10.088100639204544, 10.137622010869565, 8.255120955882353, 9.459308776595744, 5.0589374437781105), # 51
(3.987115664961637, 10.09535625, 10.10888188405797, 8.248722549019607, 9.439693617021277, 5.013520664667666), # 52
(3.992857720588235, 10.10244737215909, 10.079214583333332, 8.24208443627451, 9.419363563829787, 4.966997701149425), # 53
(3.9984905370843995, 10.109372727272726, 10.04865652173913, 8.235211764705882, 9.398336170212765, 4.919443178410794), # 54
(4.00401814258312, 10.116131036931817, 10.017244112318838, 8.22810968137255, 9.376628989361702, 4.87093172163918), # 55
(4.0094445652173905, 10.122721022727271, 9.985013768115941, 8.220783333333333, 9.354259574468085, 4.821537956021989), # 56
(4.014773833120205, 10.129141406250001, 9.952001902173912, 8.213237867647058, 9.331245478723403, 4.771336506746626), # 57
(4.0200099744245525, 10.135390909090907, 9.91824492753623, 8.20547843137255, 9.307604255319148, 4.7204019990005), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
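
# passenger_arriving_acc: cumulative arrivals; each row is the column-wise
# running sum of passenger_arriving up to and including that time step.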
passenger_arriving_acc = (
(4, 8, 5, 3, 2, 0, 3, 9, 8, 6, 0, 0), # 0
(7, 22, 13, 7, 2, 0, 9, 14, 11, 8, 3, 0), # 1
(9, 28, 17, 7, 2, 0, 15, 17, 16, 16, 4, 0), # 2
(13, 34, 23, 10, 6, 0, 20, 26, 19, 23, 6, 0), # 3
(16, 39, 35, 10, 9, 0, 25, 34, 25, 31, 8, 0), # 4
(18, 46, 44, 14, 10, 0, 35, 43, 29, 37, 10, 0), # 5
(18, 51, 51, 17, 12, 0, 40, 47, 31, 41, 11, 0), # 6
(23, 54, 61, 18, 15, 0, 44, 57, 36, 44, 12, 0), # 7
(24, 60, 69, 20, 19, 0, 49, 61, 44, 54, 15, 0), # 8
(26, 70, 76, 22, 22, 0, 51, 66, 47, 60, 17, 0), # 9
(30, 75, 80, 27, 25, 0, 56, 72, 52, 62, 18, 0), # 10
(33, 87, 86, 27, 28, 0, 58, 78, 62, 68, 18, 0), # 11
(35, 92, 95, 29, 28, 0, 62, 88, 71, 79, 20, 0), # 12
(41, 98, 98, 32, 30, 0, 65, 98, 72, 86, 21, 0), # 13
(46, 101, 106, 34, 31, 0, 66, 105, 80, 93, 23, 0), # 14
(49, 109, 109, 38, 32, 0, 72, 111, 82, 101, 26, 0), # 15
(55, 122, 115, 39, 32, 0, 77, 120, 87, 103, 26, 0), # 16
(60, 131, 120, 40, 33, 0, 84, 129, 94, 104, 27, 0), # 17
(66, 139, 130, 43, 37, 0, 89, 140, 103, 105, 30, 0), # 18
(70, 152, 135, 47, 40, 0, 93, 142, 112, 115, 32, 0), # 19
(76, 161, 142, 51, 43, 0, 98, 147, 117, 116, 34, 0), # 20
(80, 165, 149, 51, 45, 0, 101, 152, 122, 119, 35, 0), # 21
(85, 173, 153, 53, 48, 0, 103, 157, 126, 128, 37, 0), # 22
(87, 182, 159, 56, 49, 0, 105, 171, 133, 132, 39, 0), # 23
(92, 188, 166, 57, 51, 0, 111, 177, 137, 137, 41, 0), # 24
(93, 194, 171, 60, 52, 0, 115, 183, 143, 142, 45, 0), # 25
(95, 201, 179, 63, 53, 0, 122, 186, 147, 144, 47, 0), # 26
(98, 209, 184, 68, 55, 0, 129, 191, 148, 147, 47, 0), # 27
(104, 217, 193, 70, 55, 0, 134, 197, 153, 151, 49, 0), # 28
(106, 228, 200, 73, 59, 0, 138, 204, 156, 155, 52, 0), # 29
(108, 235, 209, 77, 63, 0, 144, 210, 161, 160, 52, 0), # 30
(114, 240, 211, 83, 65, 0, 148, 217, 169, 166, 53, 0), # 31
(117, 254, 219, 86, 70, 0, 156, 226, 177, 169, 57, 0), # 32
(120, 264, 219, 87, 72, 0, 162, 239, 181, 174, 57, 0), # 33
(124, 270, 229, 91, 73, 0, 167, 244, 186, 178, 58, 0), # 34
(129, 281, 238, 94, 75, 0, 171, 251, 193, 185, 61, 0), # 35
(137, 289, 243, 98, 77, 0, 179, 263, 196, 189, 66, 0), # 36
(143, 289, 252, 101, 80, 0, 193, 269, 202, 189, 68, 0), # 37
(147, 295, 261, 106, 83, 0, 200, 275, 209, 190, 69, 0), # 38
(152, 300, 267, 106, 86, 0, 209, 280, 214, 196, 73, 0), # 39
(156, 304, 276, 109, 86, 0, 215, 287, 218, 201, 76, 0), # 40
(158, 313, 280, 113, 88, 0, 222, 294, 221, 206, 76, 0), # 41
(163, 323, 288, 115, 89, 0, 226, 301, 224, 211, 79, 0), # 42
(169, 334, 294, 117, 89, 0, 230, 308, 233, 218, 80, 0), # 43
(171, 341, 303, 121, 89, 0, 234, 312, 238, 221, 82, 0), # 44
(177, 347, 306, 126, 89, 0, 242, 323, 244, 227, 84, 0), # 45
(183, 357, 316, 127, 90, 0, 247, 330, 251, 230, 85, 0), # 46
(190, 365, 325, 130, 92, 0, 252, 340, 256, 234, 88, 0), # 47
(194, 376, 333, 137, 98, 0, 255, 349, 259, 236, 89, 0), # 48
(196, 394, 339, 140, 100, 0, 261, 354, 262, 240, 93, 0), # 49
(199, 401, 345, 144, 102, 0, 268, 361, 271, 245, 94, 0), # 50
(206, 410, 351, 147, 102, 0, 273, 367, 279, 250, 96, 0), # 51
(208, 418, 360, 148, 105, 0, 280, 376, 285, 253, 97, 0), # 52
(210, 427, 370, 153, 106, 0, 284, 385, 291, 254, 99, 0), # 53
(212, 435, 375, 157, 108, 0, 286, 397, 296, 258, 106, 0), # 54
(216, 449, 383, 159, 111, 0, 291, 404, 299, 263, 107, 0), # 55
(218, 461, 389, 163, 112, 0, 291, 412, 299, 270, 110, 0), # 56
(220, 470, 395, 167, 113, 0, 297, 417, 307, 275, 120, 0), # 57
(221, 478, 400, 170, 115, 0, 303, 423, 317, 282, 122, 0), # 58
(221, 478, 400, 170, 115, 0, 303, 423, 317, 282, 122, 0), # 59
)
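
# passenger_arriving_rate: per-stop arrival rates over the same 60 time
# steps (12 columns, matching passenger_arriving); the zero columns again
# correspond to the termini.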
passenger_arriving_rate = (
(3.1795818700614573, 6.524602272727271, 5.755849935732647, 3.0414130434782605, 1.7143269230769227, 0.0, 5.708152173913044, 6.857307692307691, 4.562119565217391, 3.8372332904884314, 1.6311505681818177, 0.0), # 0
(3.20942641205736, 6.597159934764309, 5.786939187017996, 3.0583509057971012, 1.7271759615384612, 0.0, 5.706206567028985, 6.908703846153845, 4.587526358695652, 3.857959458011997, 1.6492899836910773, 0.0), # 1
(3.238930172666081, 6.668641346801345, 5.817290488431875, 3.074915942028985, 1.7397538461538458, 0.0, 5.704201449275362, 6.959015384615383, 4.612373913043478, 3.8781936589545833, 1.6671603367003363, 0.0), # 2
(3.268068107989464, 6.738969375, 5.846881667737788, 3.091094021739129, 1.7520490384615384, 0.0, 5.702137092391305, 7.0081961538461535, 4.636641032608694, 3.897921111825192, 1.68474234375, 0.0), # 3
(3.296815174129353, 6.808066885521885, 5.875690552699228, 3.106871014492753, 1.76405, 0.0, 5.700013768115941, 7.0562, 4.66030652173913, 3.9171270351328187, 1.7020167213804713, 0.0), # 4
(3.3251463271875914, 6.87585674452862, 5.903694971079691, 3.122232789855072, 1.775745192307692, 0.0, 5.697831748188405, 7.102980769230768, 4.6833491847826085, 3.9357966473864603, 1.718964186132155, 0.0), # 5
(3.353036523266023, 6.942261818181818, 5.930872750642674, 3.137165217391304, 1.7871230769230766, 0.0, 5.695591304347826, 7.148492307692306, 4.705747826086957, 3.953915167095116, 1.7355654545454544, 0.0), # 6
(3.380460718466491, 7.007204972643096, 5.95720171915167, 3.1516541666666664, 1.7981721153846155, 0.0, 5.693292708333334, 7.192688461538462, 4.727481249999999, 3.97146781276778, 1.751801243160774, 0.0), # 7
(3.40739386889084, 7.0706090740740715, 5.982659704370181, 3.165685507246376, 1.8088807692307691, 0.0, 5.6909362318840575, 7.2355230769230765, 4.7485282608695645, 3.9884398029134536, 1.7676522685185179, 0.0), # 8
(3.4338109306409126, 7.132396988636362, 6.007224534061696, 3.179245108695652, 1.8192374999999996, 0.0, 5.68852214673913, 7.2769499999999985, 4.768867663043478, 4.004816356041131, 1.7830992471590905, 0.0), # 9
(3.459686859818554, 7.1924915824915825, 6.030874035989717, 3.19231884057971, 1.829230769230769, 0.0, 5.68605072463768, 7.316923076923076, 4.7884782608695655, 4.020582690659811, 1.7981228956228956, 0.0), # 10
(3.4849966125256073, 7.250815721801346, 6.053586037917737, 3.204892572463768, 1.8388490384615384, 0.0, 5.683522237318841, 7.355396153846153, 4.807338858695652, 4.0357240252784905, 1.8127039304503365, 0.0), # 11
(3.509715144863916, 7.30729227272727, 6.0753383676092545, 3.2169521739130427, 1.8480807692307688, 0.0, 5.680936956521738, 7.392323076923075, 4.825428260869565, 4.050225578406169, 1.8268230681818176, 0.0), # 12
(3.5338174129353224, 7.361844101430976, 6.096108852827762, 3.228483514492753, 1.8569144230769232, 0.0, 5.678295153985506, 7.427657692307693, 4.84272527173913, 4.0640725685518415, 1.840461025357744, 0.0), # 13
(3.5572783728416737, 7.414394074074074, 6.115875321336759, 3.2394724637681147, 1.8653384615384612, 0.0, 5.6755971014492745, 7.461353846153845, 4.859208695652172, 4.077250214224506, 1.8535985185185184, 0.0), # 14
(3.5800729806848106, 7.46486505681818, 6.134615600899742, 3.249904891304347, 1.873341346153846, 0.0, 5.672843070652174, 7.493365384615384, 4.874857336956521, 4.089743733933161, 1.866216264204545, 0.0), # 15
(3.6021761925665783, 7.513179915824915, 6.152307519280206, 3.259766666666666, 1.8809115384615382, 0.0, 5.6700333333333335, 7.523646153846153, 4.889649999999999, 4.101538346186803, 1.8782949789562287, 0.0), # 16
(3.6235629645888205, 7.55926151725589, 6.168928904241645, 3.26904365942029, 1.8880374999999998, 0.0, 5.667168161231884, 7.552149999999999, 4.903565489130435, 4.11261926949443, 1.8898153793139725, 0.0), # 17
(3.64420825285338, 7.603032727272725, 6.184457583547558, 3.2777217391304343, 1.8947076923076926, 0.0, 5.664247826086956, 7.578830769230771, 4.916582608695652, 4.122971722365039, 1.9007581818181813, 0.0), # 18
(3.664087013462101, 7.644416412037035, 6.198871384961439, 3.285786775362318, 1.9009105769230765, 0.0, 5.661272599637681, 7.603642307692306, 4.928680163043477, 4.132580923307626, 1.9111041030092588, 0.0), # 19
(3.683174202516827, 7.683335437710435, 6.2121481362467845, 3.2932246376811594, 1.9066346153846152, 0.0, 5.658242753623187, 7.626538461538461, 4.93983695652174, 4.14143209083119, 1.9208338594276086, 0.0), # 20
(3.7014447761194034, 7.719712670454542, 6.224265665167096, 3.3000211956521737, 1.911868269230769, 0.0, 5.655158559782609, 7.647473076923076, 4.950031793478261, 4.14951044344473, 1.9299281676136355, 0.0), # 21
(3.7188736903716704, 7.753470976430976, 6.23520179948586, 3.3061623188405793, 1.9165999999999994, 0.0, 5.652020289855073, 7.666399999999998, 4.959243478260869, 4.15680119965724, 1.938367744107744, 0.0), # 22
(3.7354359013754754, 7.784533221801346, 6.244934366966581, 3.311633876811594, 1.920818269230769, 0.0, 5.6488282155797105, 7.683273076923076, 4.967450815217392, 4.163289577977721, 1.9461333054503365, 0.0), # 23
(3.75110636523266, 7.812822272727271, 6.25344119537275, 3.3164217391304347, 1.9245115384615379, 0.0, 5.645582608695652, 7.6980461538461515, 4.974632608695652, 4.168960796915166, 1.9532055681818177, 0.0), # 24
(3.7658600380450684, 7.838260995370368, 6.260700112467866, 3.320511775362319, 1.9276682692307685, 0.0, 5.642283740942029, 7.710673076923074, 4.980767663043479, 4.173800074978577, 1.959565248842592, 0.0), # 25
(3.779671875914545, 7.860772255892254, 6.266688946015424, 3.3238898550724634, 1.9302769230769228, 0.0, 5.63893188405797, 7.721107692307691, 4.985834782608695, 4.177792630676949, 1.9651930639730635, 0.0), # 26
(3.792516834942932, 7.8802789204545425, 6.2713855237789184, 3.326541847826087, 1.9323259615384616, 0.0, 5.635527309782609, 7.729303846153846, 4.98981277173913, 4.180923682519278, 1.9700697301136356, 0.0), # 27
(3.804369871232075, 7.8967038552188535, 6.2747676735218505, 3.328453623188405, 1.9338038461538458, 0.0, 5.632070289855072, 7.735215384615383, 4.992680434782608, 4.183178449014567, 1.9741759638047134, 0.0), # 28
(3.815205940883816, 7.9099699263468, 6.276813223007711, 3.3296110507246373, 1.9346990384615383, 0.0, 5.628561096014493, 7.738796153846153, 4.994416576086956, 4.184542148671807, 1.9774924815867, 0.0), # 29
(3.8249999999999997, 7.92, 6.2775, 3.3299999999999996, 1.9349999999999996, 0.0, 5.625, 7.739999999999998, 4.994999999999999, 4.185, 1.98, 0.0), # 30
(3.834164434143222, 7.92833164772727, 6.276985163043477, 3.3299297549019604, 1.9348904787234043, 0.0, 5.620051511744128, 7.739561914893617, 4.994894632352941, 4.184656775362318, 1.9820829119318175, 0.0), # 31
(3.843131010230179, 7.936553181818182, 6.275455217391303, 3.329720392156862, 1.9345642553191487, 0.0, 5.612429710144928, 7.738257021276595, 4.994580588235293, 4.1836368115942015, 1.9841382954545455, 0.0), # 32
(3.8519037563938614, 7.944663579545454, 6.272932010869566, 3.329373970588235, 1.9340248404255314, 0.0, 5.6022092203898035, 7.736099361702125, 4.994060955882353, 4.181954673913044, 1.9861658948863634, 0.0), # 33
(3.860486700767263, 7.952661818181817, 6.269437391304347, 3.3288925490196077, 1.9332757446808508, 0.0, 5.589464667666167, 7.733102978723403, 4.993338823529411, 4.179624927536231, 1.9881654545454543, 0.0), # 34
(3.8688838714833755, 7.960546874999998, 6.264993206521739, 3.328278186274509, 1.9323204787234043, 0.0, 5.574270677161419, 7.729281914893617, 4.9924172794117645, 4.176662137681159, 1.9901367187499994, 0.0), # 35
(3.8770992966751923, 7.968317727272727, 6.259621304347825, 3.3275329411764707, 1.9311625531914893, 0.0, 5.556701874062968, 7.724650212765957, 4.9912994117647065, 4.173080869565217, 1.9920794318181818, 0.0), # 36
(3.885137004475703, 7.975973352272726, 6.253343532608695, 3.3266588725490194, 1.9298054787234038, 0.0, 5.536832883558221, 7.719221914893615, 4.989988308823529, 4.168895688405796, 1.9939933380681816, 0.0), # 37
(3.893001023017902, 7.983512727272726, 6.246181739130434, 3.325658039215685, 1.9282527659574464, 0.0, 5.514738330834581, 7.713011063829786, 4.988487058823528, 4.164121159420289, 1.9958781818181814, 0.0), # 38
(3.900695380434782, 7.990934829545453, 6.238157771739129, 3.3245324999999997, 1.9265079255319146, 0.0, 5.490492841079459, 7.7060317021276585, 4.98679875, 4.1587718478260856, 1.9977337073863632, 0.0), # 39
(3.908224104859335, 7.998238636363636, 6.229293478260869, 3.32328431372549, 1.924574468085106, 0.0, 5.464171039480259, 7.698297872340424, 4.984926470588236, 4.1528623188405795, 1.999559659090909, 0.0), # 40
(3.915591224424552, 8.005423124999998, 6.219610706521739, 3.321915539215686, 1.9224559042553186, 0.0, 5.435847551224389, 7.689823617021275, 4.982873308823529, 4.146407137681159, 2.0013557812499996, 0.0), # 41
(3.9228007672634266, 8.012487272727274, 6.209131304347826, 3.320428235294117, 1.920155744680851, 0.0, 5.40559700149925, 7.680622978723404, 4.980642352941175, 4.1394208695652175, 2.0031218181818184, 0.0), # 42
(3.929856761508952, 8.01943005681818, 6.1978771195652165, 3.3188244607843136, 1.9176774999999997, 0.0, 5.373494015492254, 7.670709999999999, 4.978236691176471, 4.131918079710144, 2.004857514204545, 0.0), # 43
(3.936763235294117, 8.026250454545455, 6.18587, 3.317106274509803, 1.9150246808510636, 0.0, 5.339613218390804, 7.660098723404254, 4.975659411764705, 4.123913333333333, 2.0065626136363637, 0.0), # 44
(3.9435242167519178, 8.032947443181817, 6.1731317934782615, 3.315275735294117, 1.91220079787234, 0.0, 5.304029235382309, 7.64880319148936, 4.972913602941175, 4.115421195652174, 2.008236860795454, 0.0), # 45
(3.9501437340153456, 8.03952, 6.159684347826087, 3.313334901960784, 1.9092093617021275, 0.0, 5.266816691654173, 7.63683744680851, 4.970002352941176, 4.106456231884058, 2.00988, 0.0), # 46
(3.956625815217391, 8.045967102272726, 6.1455495108695635, 3.3112858333333324, 1.9060538829787232, 0.0, 5.228050212393803, 7.624215531914893, 4.966928749999999, 4.097033007246376, 2.0114917755681816, 0.0), # 47
(3.962974488491049, 8.052287727272727, 6.130749130434782, 3.309130588235293, 1.9027378723404254, 0.0, 5.187804422788607, 7.610951489361701, 4.96369588235294, 4.087166086956521, 2.013071931818182, 0.0), # 48
(3.9691937819693086, 8.058480852272725, 6.115305054347826, 3.306871225490196, 1.899264840425532, 0.0, 5.146153948025987, 7.597059361702128, 4.960306838235294, 4.076870036231884, 2.014620213068181, 0.0), # 49
(3.9752877237851663, 8.064545454545453, 6.099239130434782, 3.3045098039215683, 1.8956382978723403, 0.0, 5.103173413293353, 7.582553191489361, 4.956764705882353, 4.066159420289854, 2.016136363636363, 0.0), # 50
(3.9812603420716113, 8.070480511363634, 6.082573206521739, 3.302048382352941, 1.8918617553191486, 0.0, 5.0589374437781105, 7.567447021276594, 4.953072573529411, 4.055048804347826, 2.0176201278409085, 0.0), # 51
(3.987115664961637, 8.076284999999999, 6.065329130434782, 3.299489019607843, 1.8879387234042553, 0.0, 5.013520664667666, 7.551754893617021, 4.949233529411765, 4.043552753623188, 2.0190712499999997, 0.0), # 52
(3.992857720588235, 8.081957897727271, 6.047528749999999, 3.2968337745098037, 1.8838727127659571, 0.0, 4.966997701149425, 7.5354908510638285, 4.945250661764706, 4.0316858333333325, 2.020489474431818, 0.0), # 53
(3.9984905370843995, 8.08749818181818, 6.0291939130434775, 3.294084705882353, 1.8796672340425529, 0.0, 4.919443178410794, 7.5186689361702115, 4.941127058823529, 4.019462608695651, 2.021874545454545, 0.0), # 54
(4.00401814258312, 8.092904829545454, 6.010346467391303, 3.2912438725490194, 1.8753257978723403, 0.0, 4.87093172163918, 7.501303191489361, 4.936865808823529, 4.006897644927535, 2.0232262073863634, 0.0), # 55
(4.0094445652173905, 8.098176818181816, 5.991008260869564, 3.288313333333333, 1.8708519148936167, 0.0, 4.821537956021989, 7.483407659574467, 4.9324699999999995, 3.994005507246376, 2.024544204545454, 0.0), # 56
(4.014773833120205, 8.103313125, 5.971201141304347, 3.285295147058823, 1.8662490957446805, 0.0, 4.771336506746626, 7.464996382978722, 4.927942720588234, 3.980800760869564, 2.02582828125, 0.0), # 57
(4.0200099744245525, 8.108312727272725, 5.950946956521738, 3.2821913725490197, 1.8615208510638295, 0.0, 4.7204019990005, 7.446083404255318, 4.923287058823529, 3.9672979710144918, 2.0270781818181813, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
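
# passenger_allighting_rate: alighting shares per stop; 0 in the first
# column of each direction, one sixth at the intermediate stops, and 1 in
# columns 5 and 11 (presumably the termini, where everyone leaves).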
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
7, # 1
)
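# --- Usage sketch (added; an assumption, not part of the original file) ---
# Following the numpy docs linked above, `entropy` seeds a root
# SeedSequence and each `child_seed_index` entry picks one spawned child,
# giving independent but fully reproducible random streams.
import numpy as np

_root = np.random.SeedSequence(entropy)
_children = _root.spawn(max(child_seed_index) + 1)
reproducible_rngs = [np.random.default_rng(_children[i]) for i in child_seed_index]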
| 112.641791 | 213 | 0.727998 | 5,147 | 37,735 | 5.335147 | 0.224597 | 0.314639 | 0.24909 | 0.471959 | 0.331646 | 0.329643 | 0.329643 | 0.329643 | 0.329643 | 0.329643 | 0 | 0.818187 | 0.119624 | 37,735 | 334 | 214 | 112.979042 | 0.008398 | 0.032092 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2faa0b97e5024a8d2d56c463fff15b25c8767f7 | 32 | py | Python | calibratesdr/gsm/__init__.py | arnaudlb/CalibrateSDR | 3493da7808bdec23aa89ad88b1149a811ace143a | [
"MIT"
] | 24 | 2020-12-30T02:11:28.000Z | 2022-02-21T20:18:44.000Z | calibratesdr/gsm/__init__.py | rahulv1999/CalibrateSDR | e3952bda2e731196798a44b405e254d428746750 | [
"MIT"
] | 5 | 2020-12-29T09:47:08.000Z | 2021-08-30T10:33:58.000Z | calibratesdr/gsm/__init__.py | rahulv1999/CalibrateSDR | e3952bda2e731196798a44b405e254d428746750 | [
"MIT"
] | 15 | 2020-12-31T13:34:29.000Z | 2021-11-19T13:57:55.000Z | from calibratesdr.gsm import gsm | 32 | 32 | 0.875 | 5 | 32 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
825d76f4209d2ae9a3d4b36081aa1001af829071 | 155 | py | Python | indexer/src/annotators/__init__.py | alliance-genome/agr_archive_initial_prototype | 8559303de20e55886cc5bc7c2153f9357fc0ca2f | [
"MIT"
] | 9 | 2016-10-03T16:10:39.000Z | 2016-10-10T16:22:52.000Z | indexer/src/annotators/__init__.py | alliance-genome/agr | 8559303de20e55886cc5bc7c2153f9357fc0ca2f | [
"MIT"
] | 168 | 2017-02-06T17:07:20.000Z | 2017-08-23T21:23:55.000Z | indexer/src/annotators/__init__.py | alliance-genome/agr_prototype | 8559303de20e55886cc5bc7c2153f9357fc0ca2f | [
"MIT"
] | 12 | 2016-10-04T22:01:48.000Z | 2017-02-01T21:17:33.000Z | from .so_annotator import SoAnnotator
from .go_annotator import GoAnnotator
from .do_annotator import DoAnnotator
from .ortho_annotator import OrthoAnnotator
| 25.833333 | 42 | 0.890323 | 20 | 155 | 6.7 | 0.55 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109677 | 155 | 5 | 43 | 31 | 0.971014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
829e19a50fb715c59cbde09ffb935b34fa635b4e | 162 | py | Python | parlai/chat_service/services/telegram/websocket_manager.py | hhschu/ParlAI | 1c2732b2a6c6b50154caae42e1f6ba6737d23feb | [
"MIT"
] | 1 | 2020-05-04T06:08:45.000Z | 2020-05-04T06:08:45.000Z | parlai/chat_service/services/telegram/websocket_manager.py | hhschu/ParlAI | 1c2732b2a6c6b50154caae42e1f6ba6737d23feb | [
"MIT"
] | null | null | null | parlai/chat_service/services/telegram/websocket_manager.py | hhschu/ParlAI | 1c2732b2a6c6b50154caae42e1f6ba6737d23feb | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from parlai.chat_service.services.websocket.websocket_manager import WebsocketManager
class TelegramManager(WebsocketManager):
pass
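# Note (added): the empty body is deliberate; TelegramManager reuses all
# WebsocketManager behaviour unchanged, presumably as a hook for future
# telegram-specific overrides.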
| 20.25 | 85 | 0.82716 | 18 | 162 | 7.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.098765 | 162 | 7 | 86 | 23.142857 | 0.89726 | 0.12963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
82cb699f53cd3d64d433d9ce74f4a60ff42a1d17 | 8,409 | py | Python | plots/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/plots/plots.py | Abhishek-Aditya-bs/Streaming-Spark-For-Machine-Learning | 76f9c97e66d6171bc83d1183fadc30bd492422a7 | [
"MIT"
] | 1 | 2021-12-10T13:14:53.000Z | 2021-12-10T13:14:53.000Z | plots/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/plots/plots.py | iVishalr/SSML-spark-streaming-for-machine-learning | ba95a7d2d6bb15bacfbbf5b3c95317310b36d54f | [
"MIT"
] | null | null | null | plots/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/plots/plots.py | iVishalr/SSML-spark-streaming-for-machine-learning | ba95a7d2d6bb15bacfbbf5b3c95317310b36d54f | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
# from torchvision.utils import make_grid,save_image
import seaborn as sns
import numpy as np
# raw and smoothed accuracy curves, loaded once and reused below
acc = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/accuracy-epoch-7.npy')
accsmooth = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/smooth_accuracy-epoch-7.npy')
loss = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/loss-epoch-7.npy')
loss_smooth = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/smooth_loss-epoch-7.npy')
recall = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/recall-epoch-7.npy')
avg_recall = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/smooth_recall-epoch-7.npy')
precision = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/precision-epoch-7.npy')
avg_precision = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/smooth_precision-epoch-7.npy')
f1 = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/f1-epoch-7.npy')
avg_f1 = np.load('../checkpoints/DIMSVM-Batch:64-LR:2e-4-alpha:6e-4/smooth_f1-epoch-7.npy')
def plot_train_loss(loss, loss_smooth):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(loss,label="Train Loss",alpha=0.3,color="orange",marker="o")
plt.plot(loss_smooth,label="Avg Train Loss",alpha=0.9,color="red")
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(loss)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(loss)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train Loss)",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train Loss",fontsize=15,labelpad=15)
plt.savefig('./loss-pic.png')
def plot_train_accuracies(accuracy, avg_accuracy):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(accuracy,label="Train Accuracy",alpha=0.2,color="orange",marker="o")
plt.plot(avg_accuracy,label="Avg Train Accuracy",alpha=0.9,color="red")
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(accuracy)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(accuracy)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train Accuracy)",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train Accuracy",fontsize=15,labelpad=15)
plt.savefig('./acc-pic.png')
def plot_train_recall(recall, avg_recall):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(recall,label="Train Recall",alpha=0.2,color="orange",marker="o")
plt.plot(avg_recall,label="Avg Train Recall",alpha=0.9,color="red")
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(recall)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(recall)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train Recall)",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train Recall",fontsize=15,labelpad=15)
plt.savefig('./recall-pic.png')
def plot_train_precision(precision, avg_precision):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(precision,label="Train Precision",alpha=0.2,color="orange",marker="o")
plt.plot(avg_precision,label="Avg Train Precision",alpha=0.9,color="red")
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(precision)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(precision)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train Precision)",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train Recall",fontsize=15,labelpad=15)
plt.savefig('./precision-pic.png')
def plot_train_f1(f1, avg_f1):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(f1,label="Train f1",alpha=0.2,color="orange",marker="o")
plt.plot(avg_f1,label="Avg Train f1",alpha=0.9,color="red")
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(f1)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(f1)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train f1)",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train f1",fontsize=15,labelpad=15)
plt.savefig('./f1-pic.png')
def all_plots(accuracy, loss, recall, precision, f1):
sns.set()
plt.style.use('seaborn-ticks')
plt.figure(figsize=(15,20))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.plot(accuracy,label="Train Accuracy",alpha=1,color="orange",linestyle='-')
# plt.plot(loss,label="Train Loss",alpha=0.3,color="cyan",marker="_")
plt.plot(recall,label="Train Recall",alpha=1,color="red",linestyle='-')
plt.plot(precision,label="Train Precision",alpha=1,color="blue",linestyle='-')
plt.plot(f1,label="Train f1",alpha=1,color="green",linestyle='-')
yticks = plt.yticks()
for y_locs in yticks[0][1:]:
plt.axhline(y=y_locs,color='lightgrey',linestyle='--',lw=1,alpha=1)
labels = np.arange(1,len(f1)+1,1e3)
xlabels = ['{:,.0f}'.format(x) + 'k' for x in labels/1000]
xlabels[0] = '0'
locs = np.arange(1,len(f1)+1,1e3).astype(int)
plt.xticks(ticks=locs,labels=xlabels)
plt.legend(loc=0,prop={'size':10})
plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4",pad=20,fontsize=15)
plt.xlabel("Iterations",fontsize=15,labelpad=15)
plt.ylabel("Train metrics",fontsize=15,labelpad=15)
plt.savefig('./all-pic.png')
plot_train_accuracies(acc, accsmooth)
plot_train_loss(loss, loss_smooth)
plot_train_recall(recall, avg_recall)
plot_train_precision(precision, avg_precision)
plot_train_f1(f1, avg_f1)
all_plots(accsmooth, loss_smooth, avg_recall, avg_precision, avg_f1)
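# --- Refactor sketch (added; an assumption, not part of the original file) ---
# The five plot_train_* functions above differ only in labels and output
# paths, so a single parameterised helper such as the one below (reusing
# this file's plt/np/sns imports; gridline styling omitted for brevity)
# could replace them:
def plot_metric(raw, smooth, metric_name, out_path):
    sns.set()
    plt.style.use('seaborn-ticks')
    plt.figure(figsize=(15, 20))
    for side in ('top', 'bottom', 'left', 'right'):
        plt.gca().spines[side].set_visible(False)
    plt.plot(raw, label="Train " + metric_name, alpha=0.2, color="orange", marker="o")
    plt.plot(smooth, label="Avg Train " + metric_name, alpha=0.9, color="red")
    locs = np.arange(1, len(raw) + 1, 1e3).astype(int)
    xlabels = ['{:,.0f}k'.format(x / 1000) for x in locs]
    xlabels[0] = '0'
    plt.xticks(ticks=locs, labels=xlabels)
    plt.legend(loc=0, prop={'size': 10})
    plt.title("Deep Image SVM Batch=64-LR=2e-4 alpha=6e-4 (Train %s)" % metric_name,
              pad=20, fontsize=15)
    plt.xlabel("Iterations", fontsize=15, labelpad=15)
    plt.ylabel("Train " + metric_name, fontsize=15, labelpad=15)
    plt.savefig(out_path)

# e.g. plot_metric(recall, avg_recall, "Recall", './recall-pic.png')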
| 44.257895 | 105 | 0.67618 | 1,379 | 8,409 | 4.058013 | 0.084844 | 0.025733 | 0.051465 | 0.077198 | 0.867584 | 0.859543 | 0.794139 | 0.755182 | 0.755182 | 0.755182 | 0 | 0.053853 | 0.11892 | 8,409 | 189 | 106 | 44.492063 | 0.701444 | 0.014033 | 0 | 0.588957 | 0 | 0.104294 | 0.240256 | 0.103777 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03681 | false | 0 | 0.030675 | 0 | 0.067485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82de5a5767a1b0b6d7af8ec67c7f80ed865b08b8 | 107 | py | Python | myapi/admin.py | Planthive/PlantHive_WebApp | 3ae9623406348981b4873b6d857ee9124188a4ee | [
"Apache-2.0"
] | null | null | null | myapi/admin.py | Planthive/PlantHive_WebApp | 3ae9623406348981b4873b6d857ee9124188a4ee | [
"Apache-2.0"
] | null | null | null | myapi/admin.py | Planthive/PlantHive_WebApp | 3ae9623406348981b4873b6d857ee9124188a4ee | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import growth_schedule
admin.site.register(growth_schedule)
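# Optional sketch (added; an assumption, not part of the original file):
# should list columns or filters ever be needed, the plain registration
# above can be swapped for a ModelAdmin subclass, e.g.
#
#   @admin.register(growth_schedule)
#   class GrowthScheduleAdmin(admin.ModelAdmin):
#       list_display = ("id",)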
| 21.4 | 36 | 0.850467 | 15 | 107 | 5.933333 | 0.666667 | 0.314607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093458 | 107 | 4 | 37 | 26.75 | 0.917526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82e48976869087f61fa9c140d7e61a4f1c52994d | 2,775 | py | Python | core/tests/test_views.py | callmb/directory-sso-profile | 9d5a5038c1665dbf6804e03ad95a02b5c2412b64 | [
"MIT"
] | null | null | null | core/tests/test_views.py | callmb/directory-sso-profile | 9d5a5038c1665dbf6804e03ad95a02b5c2412b64 | [
"MIT"
] | null | null | null | core/tests/test_views.py | callmb/directory-sso-profile | 9d5a5038c1665dbf6804e03ad95a02b5c2412b64 | [
"MIT"
] | null | null | null | from unittest import mock
import pytest
import requests
from django.urls import reverse
from core.tests.helpers import create_response
def test_companies_house_search_validation_error(client, settings):
url = reverse('api:companies-house-search')
    response = client.get(url)  # notice absence of `term`
assert response.status_code == 400
@mock.patch('core.views.ch_search_api_client.company.search_companies')
def test_companies_house_search_api_error(mock_search, client, settings):
mock_search.return_value = create_response(400)
url = reverse('api:companies-house-search')
with pytest.raises(requests.HTTPError):
client.get(url, data={'term': 'thing'})
@mock.patch('core.views.ch_search_api_client.company.search_companies')
def test_companies_house_search_api_success(mock_search, client, settings):
mock_search.return_value = create_response(
200, {'items': [{'name': 'Smashing corp'}]}
)
url = reverse('api:companies-house-search')
response = client.get(url, data={'term': 'thing'})
assert response.status_code == 200
assert response.content == b'[{"name":"Smashing corp"}]'
@mock.patch('core.views.ch_search_api_client.company.search_companies')
def test_companies_house_search(mock_search, client, settings):
mock_search.return_value = create_response(
200, {'items': [{'name': 'Smashing corp'}]}
)
url = reverse('api:companies-house-search')
response = client.get(url, data={'term': 'thing'})
assert response.status_code == 200
assert response.content == b'[{"name":"Smashing corp"}]'
@mock.patch('core.views.requests.get')
def test_address_lookup_bad_postcode(mock_get, client):
mock_get.return_value = create_response(400)
url = reverse('api:postcode-search')
response = client.get(url, data={'postcode': '21313'})
assert response.status_code == 200
assert response.content == b'[]'
@mock.patch('core.views.requests.get')
def test_address_lookup_not_ok(mock_get, client):
mock_get.return_value = create_response(500)
url = reverse('api:postcode-search')
with pytest.raises(requests.HTTPError):
client.get(url, data={'postcode': '21313'})
@mock.patch('core.views.requests.get')
def test_address_lookup_ok(mock_get, client):
    mock_get.return_value = create_response(200, {'addresses': [
'1 A road, , , , Ashire',
'2 B road, , , , Bshire',
]})
url = reverse('api:postcode-search')
response = client.get(url, data={'postcode': '123123'})
assert response.status_code == 200
assert response.content == (
b'[{"text":"1 A road, Ashire","value":"1 A road, Ashire, 123123"},'
b'{"text":"2 B road, Bshire","value":"2 B road, Bshire, 123123"}]'
)
| 30.494505 | 75 | 0.695135 | 363 | 2,775 | 5.118457 | 0.192837 | 0.067815 | 0.086114 | 0.058127 | 0.818622 | 0.791173 | 0.769107 | 0.769107 | 0.758881 | 0.681916 | 0 | 0.028682 | 0.158198 | 2,775 | 90 | 76 | 30.833333 | 0.766695 | 0.008649 | 0 | 0.457627 | 0 | 0 | 0.27028 | 0.124045 | 0 | 0 | 0 | 0 | 0.152542 | 1 | 0.118644 | false | 0 | 0.084746 | 0 | 0.20339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d87415c62395b74254297b1189e0d7c9f19d7d4 | 93 | py | Python | wavesim/tests/python/test_Mesh.py | TheComet/wavesim | 90125d5273b96633e5f74666ddb707cedfa2fbf1 | [
"Apache-2.0"
] | 7 | 2018-01-25T10:58:39.000Z | 2021-05-08T08:08:37.000Z | wavesim/tests/python/test_Mesh.py | TheComet/wavesim | 90125d5273b96633e5f74666ddb707cedfa2fbf1 | [
"Apache-2.0"
] | 4 | 2018-03-06T15:47:13.000Z | 2018-03-07T19:07:45.000Z | wavesim/tests/python/test_Mesh.py | TheComet/wavesim | 90125d5273b96633e5f74666ddb707cedfa2fbf1 | [
"Apache-2.0"
] | 2 | 2018-02-18T02:02:31.000Z | 2020-02-16T09:49:12.000Z | import wavesim
import unittest
class TestMesh(unittest.TestCase):
pass
if __name__ == '__main__':
    unittest.main()
| 11.625 | 34 | 0.774194 | 11 | 93 | 6.545455 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 93 | 7 | 35 | 13.285714 | 0.911392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7da301f964bfa2e7f71274c54598d16ede203e81 | 5,765 | py | Python | constellation_forms/tests/views/testPermissions.py | ConstellationApps/Forms | 5d2bacf589c1a473cf619f34d569d33191b11285 | [
"ISC"
] | 2 | 2017-04-18T02:41:00.000Z | 2017-04-18T02:51:39.000Z | constellation_forms/tests/views/testPermissions.py | ConstellationApps/Forms | 5d2bacf589c1a473cf619f34d569d33191b11285 | [
"ISC"
] | 33 | 2017-03-03T06:16:44.000Z | 2019-08-20T23:06:21.000Z | constellation_forms/tests/views/testPermissions.py | ConstellationApps/Forms | 5d2bacf589c1a473cf619f34d569d33191b11285 | [
"ISC"
] | 1 | 2017-02-22T18:48:04.000Z | 2017-02-22T18:48:04.000Z | from django.contrib.auth.models import User, Group, Permission
from django.test import TestCase, RequestFactory
from guardian.shortcuts import assign_perm
from ... import views
from ...models import Form
class PermissionsTest(TestCase):
def setUp(self):
self.factory = RequestFactory()
self.group = Group.objects.create(name="Test Group")
self.group.save()
self.user = User.objects.create_user(
username="user",
email="user@example.com",
password="pass")
self.user.save()
self.permission = Permission.objects.get(
codename="add_form")
self.form = Form.objects.create(form_id=1,
version=1,
name="TestForm",
description="",
elements={})
def tearDown(self):
self.user.delete()
self.group.delete()
self.form.delete()
# Test editing new forms
def test_form_create_get_unauthorized(self):
request = self.factory.get("/forms/manage/create-form")
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 302)
def test_form_create_post_unauthorized(self):
request = self.factory.post("/forms/manage/create-form")
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 403)
def test_form_create_get_authorized(self):
request = self.factory.get("/forms/manage/create-form")
self.user.user_permissions.add(self.permission)
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 200)
def test_form_create_post_authorized(self):
request = self.factory.post("/forms/manage/create-form")
self.user.user_permissions.add(self.permission)
request.user = self.user
# We just want to pass the permission check
self.assertRaises(KeyError,
views.manage_create_form.as_view(),
request)
# Test editing existing forms
def test_edit_get_unauthorized(self):
request = self.factory.get("/forms/manage/create-form/1")
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 302)
def test_edit_post_unauthorized(self):
request = self.factory.post("/forms/manage/create-form/1")
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 403)
def test_edit_get_authorized(self):
request = self.factory.get("/forms/manage/create-form/1")
self.user.user_permissions.add(self.permission)
request.user = self.user
response = views.manage_create_form.as_view()(request)
self.assertEqual(response.status_code, 200)
def test_edit_post_authorized(self):
request = self.factory.post("/forms/manage/create-form/1")
self.user.user_permissions.add(self.permission)
request.user = self.user
# We just want to pass the permission check
self.assertRaises(KeyError,
views.manage_create_form.as_view(),
request)
def test_edit_get_in_group(self):
request = self.factory.get("/forms/manage/create-form/1")
self.user.groups.add(self.group)
assign_perm("constellation_forms.form_owned_by", self.group, self.form)
assign_perm("constellation_forms.form_visible", self.group, self.form)
request.user = self.user
response = views.manage_create_form.as_view()(request, form_id=1)
self.assertEqual(response.status_code, 200)
def test_edit_post_in_group(self):
request = self.factory.post("/forms/manage/create-form/1")
self.user.groups.add(self.group)
assign_perm("constellation_forms.form_owned_by", self.group, self.form)
assign_perm("constellation_forms.form_visible", self.group, self.form)
request.user = self.user
# We just want to pass the permission check
self.assertRaises(
KeyError,
views.manage_create_form.as_view(),
request,
form_id=1)
def test_view_form_unauthorized(self):
request = self.factory.get("/forms/view/form/1")
request.user = self.user
response = views.view_form.as_view()(request, form_id=1)
self.assertEqual(response.status_code, 302)
def test_view_post_unauthorized(self):
request = self.factory.post("/forms/view/form/1")
request.user = self.user
response = views.view_form.as_view()(request, form_id=1)
self.assertEqual(response.status_code, 403)
def test_view_form_in_group(self):
request = self.factory.get("/forms/view/form/1")
self.user.groups.add(self.group)
request.user = self.user
assign_perm("constellation_forms.form_visible", self.group, self.form)
response = views.view_form.as_view()(request, form_id=1)
self.assertEqual(response.status_code, 200)
def test_view_post_in_group(self):
request = self.factory.post("/forms/view/form/1")
self.user.groups.add(self.group)
request.user = self.user
assign_perm("constellation_forms.form_visible", self.group, self.form)
self.assertRaises(
KeyError,
views.view_form.as_view(),
request,
form_id=1)
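# Note (added): these tests deliberately mix Django's model-level
# permissions (user_permissions plus the built-in Permission model) with
# django-guardian's object-level permissions (assign_perm on a single Form
# instance), which is why both grant paths appear above.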
| 40.886525 | 79 | 0.643365 | 705 | 5,765 | 5.073759 | 0.112057 | 0.055913 | 0.08946 | 0.086106 | 0.810735 | 0.797596 | 0.797596 | 0.794241 | 0.780263 | 0.73749 | 0 | 0.011078 | 0.248395 | 5,765 | 140 | 80 | 41.178571 | 0.814447 | 0.030529 | 0 | 0.644068 | 0 | 0 | 0.103529 | 0.081677 | 0 | 0 | 0 | 0 | 0.118644 | 1 | 0.135593 | false | 0.008475 | 0.042373 | 0 | 0.186441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
816eb49b48651101c65472ec9033d23744d9e6ba | 9,752 | py | Python | mayan/apps/permissions/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 1 | 2021-06-17T18:24:25.000Z | 2021-06-17T18:24:25.000Z | mayan/apps/permissions/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 7 | 2020-06-06T00:01:04.000Z | 2022-01-13T01:47:17.000Z | mayan/apps/permissions/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from rest_framework import status
from mayan.apps.rest_api.tests import BaseAPITestCase
from mayan.apps.user_management.tests.mixins import GroupTestMixin
from ..classes import Permission
from ..models import Role
from ..permissions import (
permission_role_create, permission_role_delete,
permission_role_edit, permission_role_view
)
from .mixins import (
PermissionAPIViewTestMixin, PermissionTestMixin, RoleAPIViewTestMixin,
RoleTestMixin
)
class PermissionAPIViewTestCase(PermissionAPIViewTestMixin, BaseAPITestCase):
def setUp(self):
super(PermissionAPIViewTestCase, self).setUp()
Permission.invalidate_cache()
def test_permissions_list_api_view(self):
response = self._request_permissions_list_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
class RoleAPIViewTestCase(GroupTestMixin, PermissionTestMixin, RoleAPIViewTestMixin, RoleTestMixin, BaseAPITestCase):
def test_role_create_api_view_no_permission(self):
role_count = Role.objects.count()
response = self._request_test_role_create_api_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(Role.objects.count(), role_count)
def test_role_create_api_view_with_permission(self):
self.grant_permission(permission=permission_role_create)
role_count = Role.objects.count()
response = self._request_test_role_create_api_view()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Role.objects.count(), role_count + 1)
def _request_role_create_api_view_extra_data(self):
extra_data = {
'groups_pk_list': '{}'.format(self.test_group.pk),
'permissions_pk_list': '{}'.format(self.test_permission.pk)
}
return self._request_test_role_create_api_view(extra_data=extra_data)
def test_role_create_api_view_extra_data_no_permission(self):
self._create_test_group()
self._create_test_permission()
role_count = Role.objects.count()
response = self._request_role_create_api_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(Role.objects.count(), role_count)
def test_role_create_complex_view_with_permission(self):
self._create_test_group()
self._create_test_permission()
self.grant_permission(permission=permission_role_create)
role_count = Role.objects.count()
response = self._request_role_create_api_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Role.objects.count(), role_count + 1)
new_role = Role.objects.get(pk=response.data['id'])
self.assertTrue(
self.test_group in new_role.groups.all()
)
self.assertTrue(
self.test_permission.stored_permission in new_role.permissions.all()
)
def test_role_delete_view_no_access(self):
self._create_test_role()
role_count = Role.objects.count()
response = self._request_test_role_delete_api_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(Role.objects.count(), role_count)
def test_role_delete_view_with_access(self):
self._create_test_role()
self.grant_access(obj=self.test_role, permission=permission_role_delete)
role_count = Role.objects.count()
response = self._request_test_role_delete_api_view()
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(Role.objects.count(), role_count - 1)
def test_role_edit_via_patch_no_access(self):
self._create_test_role()
response = self._request_test_role_edit_api_view(request_type='patch')
role_label = self.test_role.label
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_role.refresh_from_db()
self.assertEqual(self.test_role.label, role_label)
def test_role_edit_via_patch_with_access(self):
self._create_test_role()
self.grant_access(obj=self.test_role, permission=permission_role_edit)
role_label = self.test_role.label
response = self._request_test_role_edit_api_view(request_type='patch')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_role.refresh_from_db()
self.assertNotEqual(self.test_role.label, role_label)
def test_role_edit_via_put_no_access(self):
self._create_test_role()
response = self._request_test_role_edit_api_view(request_type='put')
role_label = self.test_role.label
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_role.refresh_from_db()
self.assertEqual(self.test_role.label, role_label)
def test_role_edit_via_put_with_access(self):
self._create_test_role()
self.grant_access(obj=self.test_role, permission=permission_role_edit)
role_label = self.test_role.label
response = self._request_test_role_edit_api_view(request_type='put')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_role.refresh_from_db()
self.assertNotEqual(self.test_role.label, role_label)
def _request_role_edit_api_patch_view_extra_data(self):
extra_data = {
'groups_pk_list': '{}'.format(self.test_group.pk),
'permissions_pk_list': '{}'.format(self.test_permission.pk)
}
return self._request_test_role_edit_api_view(
extra_data=extra_data, request_type='patch'
)
def test_role_edit_api_patch_view_extra_data_no_access(self):
self._create_test_group()
self._create_test_permission()
self._create_test_role()
role_label = self.test_role.label
response = self._request_role_edit_api_patch_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_role.refresh_from_db()
self.assertEqual(self.test_role.label, role_label)
self.assertTrue(
self.test_group not in self.test_role.groups.all()
)
self.assertTrue(
self.test_permission.stored_permission not in self.test_role.permissions.all()
)
def test_role_edit_api_patch_view_extra_data_with_access(self):
self._create_test_group()
self._create_test_permission()
self._create_test_role()
self.grant_access(obj=self.test_role, permission=permission_role_edit)
role_label = self.test_role.label
response = self._request_role_edit_api_patch_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_role.refresh_from_db()
self.assertNotEqual(self.test_role.label, role_label)
self.assertTrue(
self.test_group in self.test_role.groups.all()
)
self.assertTrue(
self.test_permission.stored_permission in self.test_role.permissions.all()
)
def _request_role_edit_api_put_view_extra_data(self):
extra_data = {
'groups_pk_list': '{}'.format(self.test_group.pk),
'permissions_pk_list': '{}'.format(self.test_permission.pk)
}
return self._request_test_role_edit_api_view(
extra_data=extra_data, request_type='put'
)
def test_role_edit_api_put_view_extra_data_no_access(self):
self._create_test_group()
self._create_test_permission()
self._create_test_role()
role_label = self.test_role.label
response = self._request_role_edit_api_put_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_role.refresh_from_db()
self.assertEqual(self.test_role.label, role_label)
self.assertTrue(
self.test_group not in self.test_role.groups.all()
)
self.assertTrue(
self.test_permission.stored_permission not in self.test_role.permissions.all()
)
def test_role_edit_api_put_view_extra_data_with_access(self):
self._create_test_group()
self._create_test_permission()
self._create_test_role()
self.grant_access(obj=self.test_role, permission=permission_role_edit)
role_label = self.test_role.label
response = self._request_role_edit_api_put_view_extra_data()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_role.refresh_from_db()
self.assertNotEqual(self.test_role.label, role_label)
self.assertTrue(
self.test_group in self.test_role.groups.all()
)
self.assertTrue(
self.test_permission.stored_permission in self.test_role.permissions.all()
)
def test_roles_list_view_no_access(self):
self._create_test_role()
response = self._request_role_list_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 0)
def test_roles_list_view_with_access(self):
self._create_test_role()
self.grant_access(
obj=self.test_role, permission=permission_role_view
)
response = self._request_role_list_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 1)
self.assertEqual(
response.data['results'][0]['label'], self.test_role.label
)
| 34.828571 | 117 | 0.713495 | 1,238 | 9,752 | 5.168821 | 0.07189 | 0.095015 | 0.073136 | 0.077043 | 0.872636 | 0.853571 | 0.83388 | 0.825754 | 0.816065 | 0.793718 | 0 | 0.007311 | 0.200472 | 9,752 | 279 | 118 | 34.953405 | 0.81339 | 0 | 0 | 0.636364 | 0 | 0 | 0.016304 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.106061 | false | 0 | 0.040404 | 0 | 0.171717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
81ba79d95f574170c14fce38d769536078108969 | 30 | py | Python | cloudy/util/__init__.py | un33k/python-cloudy | 597e55fe308521c0df25ce5ed6c859949bd5dd17 | [
"MIT"
] | 1 | 2015-04-29T13:43:36.000Z | 2015-04-29T13:43:36.000Z | cloudy/util/__init__.py | un33k/python-cloudy | 597e55fe308521c0df25ce5ed6c859949bd5dd17 | [
"MIT"
] | null | null | null | cloudy/util/__init__.py | un33k/python-cloudy | 597e55fe308521c0df25ce5ed6c859949bd5dd17 | [
"MIT"
] | 2 | 2015-06-20T16:39:11.000Z | 2016-11-18T13:58:58.000Z | from .conf import CloudyConfig
| 15 | 29 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81bb031e9d151a3b7f05429919871b8dc4790d05 | 1,332 | py | Python | problem8.py | adhikariprajitraj/Project-Euler | c5628601c87b76f2514c245f6682928a45fb9ef6 | [
"MIT"
] | 1 | 2020-08-05T00:50:44.000Z | 2020-08-05T00:50:44.000Z | problem8.py | adhikariprajitraj/Project-Euler | c5628601c87b76f2514c245f6682928a45fb9ef6 | [
"MIT"
] | null | null | null | problem8.py | adhikariprajitraj/Project-Euler | c5628601c87b76f2514c245f6682928a45fb9ef6 | [
"MIT"
] | null | null | null | num = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
maxi = 1
# keep only the digit characters (drop the embedded newlines)
temp = [ch for ch in num if ch != "\n"]
for i in range(len(temp) - 12):  # every window of 13 adjacent digits
    result = 1
    for digit in temp[i:i + 13]:
        result *= int(digit)
    if result >= maxi:
        maxi = result
print(maxi)
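# Alternative sketch (added; not part of the original solution): the same
# 13-digit window product written with math.prod (Python 3.8+).
import math

digits = num.replace("\n", "")
best = max(math.prod(int(d) for d in digits[i:i + 13])
           for i in range(len(digits) - 12))
print(best)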
| 32.487805 | 53 | 0.871622 | 69 | 1,332 | 16.826087 | 0.637681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.831683 | 0.09009 | 1,332 | 40 | 54 | 33.3 | 0.126238 | 0 | 0 | 0 | 0 | 0 | 0.767267 | 0.750751 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.028571 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
81bc83657ff1875f91618000e05d78dc043fc1e9 | 114 | py | Python | simeng/data_containers/container.py | OWigginsHay/simulation_engine | bb2ab5578d05b0235fd90e4b858f5e83b20257cc | [
"MIT"
] | null | null | null | simeng/data_containers/container.py | OWigginsHay/simulation_engine | bb2ab5578d05b0235fd90e4b858f5e83b20257cc | [
"MIT"
] | 2 | 2021-12-01T16:58:37.000Z | 2021-12-03T23:11:56.000Z | simeng/data_containers/container.py | OWigginsHay/simulation_engine | bb2ab5578d05b0235fd90e4b858f5e83b20257cc | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
class DataContainer(ABC):
@abstractmethod
def update():
pass | 16.285714 | 35 | 0.684211 | 12 | 114 | 6.5 | 0.75 | 0.435897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245614 | 114 | 7 | 36 | 16.285714 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0.2 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
81c4ba3be62444e0fde619d35fbca6c852bd18ca | 134 | py | Python | src/decaylanguage/modeling/__init__.py | ExternalRepositories/decaylanguage | c726ed105d5cafecc0e7bca513acf83295e03bea | [
"BSD-3-Clause"
] | null | null | null | src/decaylanguage/modeling/__init__.py | ExternalRepositories/decaylanguage | c726ed105d5cafecc0e7bca513acf83295e03bea | [
"BSD-3-Clause"
] | null | null | null | src/decaylanguage/modeling/__init__.py | ExternalRepositories/decaylanguage | c726ed105d5cafecc0e7bca513acf83295e03bea | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from .amplitudechain import LS
from .amplitudechain import AmplitudeChain
__all__ = ("LS", "AmplitudeChain")
| 22.333333 | 42 | 0.723881 | 14 | 134 | 6.642857 | 0.571429 | 0.387097 | 0.516129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 0.134328 | 134 | 5 | 43 | 26.8 | 0.793103 | 0.156716 | 0 | 0 | 0 | 0 | 0.144144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81c65897c0ccd2a3e07373aa785ef891b3a85419 | 41 | py | Python | src/OddsJamClient/__init__.py | cooperbrandon1/oddsjam-api | 1d1170ba74fa1705cc786a96ab12a8cf5fa19dff | [
"MIT"
] | 3 | 2021-11-02T14:07:31.000Z | 2021-12-03T01:15:53.000Z | src/OddsJamClient/__init__.py | cooperbrandon1/oddsjam-api | 1d1170ba74fa1705cc786a96ab12a8cf5fa19dff | [
"MIT"
] | null | null | null | src/OddsJamClient/__init__.py | cooperbrandon1/oddsjam-api | 1d1170ba74fa1705cc786a96ab12a8cf5fa19dff | [
"MIT"
] | 2 | 2022-02-17T09:31:04.000Z | 2022-02-23T04:16:51.000Z | from .OddsJamClient import OddsJamClient
81dbd2b0f472198784b54000b6ad9732b5ef58b2 | 80 | py | Python | gae/wsgi.py | vb64/mfl-analyze | 4af42023fb9f104fbe9497e316fa4a9178c49cfe | [
"MIT"
] | null | null | null | gae/wsgi.py | vb64/mfl-analyze | 4af42023fb9f104fbe9497e316fa4a9178c49cfe | [
"MIT"
] | null | null | null | gae/wsgi.py | vb64/mfl-analyze | 4af42023fb9f104fbe9497e316fa4a9178c49cfe | [
"MIT"
] | null | null | null | import django.core.handlers.wsgi
app = django.core.handlers.wsgi.WSGIHandler()
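# Note (added): instantiating WSGIHandler() directly is the legacy
# App Engine-era idiom; on Django 1.4 and later the usual equivalent is
#
#   from django.core.wsgi import get_wsgi_application
#   app = get_wsgi_application()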
| 20 | 45 | 0.8 | 11 | 80 | 5.818182 | 0.636364 | 0.3125 | 0.5625 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 80 | 3 | 46 | 26.666667 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
81e2b467a00e10e633f8c92e1bb95b971249f5e2 | 12,308 | py | Python | feasability_study/WriteAllDataTypesToProtobufDataMatrix.py | totonga/wodson | 9c8cf8b54ce5c2bebebe46fecac36e1a2ad9af02 | [
"Apache-2.0"
] | 1 | 2016-01-06T13:13:00.000Z | 2016-01-06T13:13:00.000Z | feasability_study/WriteAllDataTypesToProtobufDataMatrix.py | totonga/wodson | 9c8cf8b54ce5c2bebebe46fecac36e1a2ad9af02 | [
"Apache-2.0"
] | 5 | 2016-02-16T08:11:49.000Z | 2016-03-06T19:33:43.000Z | feasability_study/WriteAllDataTypesToProtobufDataMatrix.py | totonga/wodson | 9c8cf8b54ce5c2bebebe46fecac36e1a2ad9af02 | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/env python
import sys
import time
import wodson_pb2
from google.protobuf import json_format
from google.protobuf import timestamp_pb2
def get_time_stamp(timeVal):
seconds = int(timeVal)
nanos = int((timeVal - seconds) * 10**9)
return timestamp_pb2.Timestamp(seconds=seconds, nanos=nanos)
if len(sys.argv) != 2:
print "Usage:", sys.argv[0], "No target given"
sys.exit(-1)
data_matrices = wodson_pb2.DataMatrices()
data_matrix = data_matrices.matrices.add()
data_matrix.name = "MyLocalColumn"
data_matrix.base_name = "AoLocalColumn"
column = data_matrix.columns.add()
column.name = "Name"
column.base_name = "name"
column.datatype = wodson_pb2.DT_STRING
column.dt_string.values.extend(["Time", "Revs", "Description"])
column = data_matrix.columns.add()
column.name = "Id"
column.base_name = "id"
column.datatype = wodson_pb2.DT_LONGLONG
column.dt_longlong.values.extend([4711, 4712, 4713])
column = data_matrix.columns.add()
column.name = "Flags"
column.base_name = "flags"
column.datatype = wodson_pb2.DS_SHORT
column.ds_long.values.add().values.extend([15,15,15,15,15])
column.ds_long.values.add().values.extend([15,15,15,15,15])
column.ds_long.values.add().values.extend([15,15,15,15,15])
column = data_matrix.columns.add()
column.name = "Values"
column.base_name = "values"
column.datatype = wodson_pb2.DT_UNKNOWN
timeVals = column.dt_unknown.values.add()
timeVals.datatype = wodson_pb2.DT_DATE
timeVals.dt_date.values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2), get_time_stamp(time.time() + 3), get_time_stamp(time.time() + 4)])
revsVals = column.dt_unknown.values.add()
revsVals.datatype = wodson_pb2.DT_DOUBLE
revsVals.dt_double.values.extend([1.1, 1.2, 1.3])
descriptionVals = column.dt_unknown.values.add()
descriptionVals.datatype = wodson_pb2.DT_STRING
descriptionVals.dt_string.values.extend(["first", "second", "third", "fourth", "fifth"])
# Write all datatypes
data_matrix = data_matrices.matrices.add()
data_matrix.name = "AllTypes"
# DT types
column = data_matrix.columns.add()
column.name = "DT_STRING"
column.datatype = wodson_pb2.DT_STRING
column.dt_string.values.extend(["a", "b", "c"])
column = data_matrix.columns.add()
column.name = "DT_SHORT"
column.datatype = wodson_pb2.DT_SHORT
column.dt_long.values.extend([1, 2, -1])
column = data_matrix.columns.add()
column.name = "DT_FLOAT"
column.datatype = wodson_pb2.DT_FLOAT
column.dt_float.values.extend([1.1, 2.1, -1.1])
column = data_matrix.columns.add()
column.name = "DT_BOOLEAN"
column.datatype = wodson_pb2.DT_BOOLEAN
column.dt_boolean.values.extend([True, False, True])
column = data_matrix.columns.add()
column.name = "DT_BYTE"
column.datatype = wodson_pb2.DT_BYTE
column.dt_byte.values = b'abc'
column = data_matrix.columns.add()
column.name = "DT_LONG"
column.datatype = wodson_pb2.DT_LONG
column.dt_long.values.extend([1, 2, -1])
column = data_matrix.columns.add()
column.name = "DT_DOUBLE"
column.datatype = wodson_pb2.DT_DOUBLE
column.dt_double.values.extend([1.1, 2.1, -1.1])
column = data_matrix.columns.add()
column.name = "DT_LONGLONG"
column.datatype = wodson_pb2.DT_LONGLONG
column.dt_longlong.values.extend([123, 345, 789])
column = data_matrix.columns.add()
column.name = "DT_ID"
column.datatype = wodson_pb2.DT_ID
column.dt_longlong.values.extend([123, 345, 789])
column = data_matrix.columns.add()
column.name = "DT_DATE"
column.datatype = wodson_pb2.DT_DATE
column.dt_date.values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2)])
column = data_matrix.columns.add()
column.name = "DT_BYTESTR"
column.datatype = wodson_pb2.DT_BYTESTR
column.dt_bytestr.values.extend([b'abc', b'def', b'hij'])
column = data_matrix.columns.add()
column.name = "DT_COMPLEX"
column.datatype = wodson_pb2.DT_COMPLEX
column.dt_float.values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column = data_matrix.columns.add()
column.name = "DT_DCOMPLEX"
column.datatype = wodson_pb2.DT_DCOMPLEX
column.dt_double.values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column = data_matrix.columns.add()
column.name = "DT_ENUM"
column.datatype = wodson_pb2.DT_ENUM
column.dt_long.values.extend([1, 3, 7])
column = data_matrix.columns.add()
column.name = "DT_EXTERNALREFERENCE"
column.datatype = wodson_pb2.DT_EXTERNALREFERENCE
column.dt_string.values.extend(["first picture", "image/jpg", "data/firstPic.jpg", "second picture", "image/jpg", "data/secondPic.jpg", "third picture", "image/jpg", "data/thirdPic.jpg"])
# DS types
column = data_matrix.columns.add()
column.name = "DS_STRING"
column.datatype = wodson_pb2.DS_STRING
column.ds_string.values.add().values.extend(["a", "b", "c"])
column.ds_string.values.add().values.extend(["a", "b", "c"])
column.ds_string.values.add().values.extend(["a", "b", "c"])
column = data_matrix.columns.add()
column.name = "DS_SHORT"
column.datatype = wodson_pb2.DS_SHORT
column.ds_long.values.add().values.extend([1, 2, -1])
column.ds_long.values.add().values.extend([1, 2, -1])
column.ds_long.values.add().values.extend([1, 2, -1])
column = data_matrix.columns.add()
column.name = "DS_FLOAT"
column.datatype = wodson_pb2.DS_FLOAT
column.ds_float.values.add().values.extend([1.1, 2.1, -1.1])
column.ds_float.values.add().values.extend([1.1, 2.1, -1.1])
column.ds_float.values.add().values.extend([1.1, 2.1, -1.1])
column = data_matrix.columns.add()
column.name = "DS_BOOLEAN"
column.datatype = wodson_pb2.DS_BOOLEAN
column.ds_boolean.values.add().values.extend([True, False, True])
column.ds_boolean.values.add().values.extend([True, False, True])
column.ds_boolean.values.add().values.extend([True, False, True])
column = data_matrix.columns.add()
column.name = "DS_BYTE"
column.datatype = wodson_pb2.DS_BYTE
column.ds_byte.values.add().values = b'abc'
column.ds_byte.values.add().values = b'abc'
column.ds_byte.values.add().values = b'abc'
column = data_matrix.columns.add()
column.name = "DS_LONG"
column.datatype = wodson_pb2.DS_LONG
column.ds_long.values.add().values.extend([1, 2, -1])
column.ds_long.values.add().values.extend([1, 2, -1])
column.ds_long.values.add().values.extend([1, 2, -1])
column = data_matrix.columns.add()
column.name = "DS_DOUBLE"
column.datatype = wodson_pb2.DS_DOUBLE
column.ds_double.values.add().values.extend([1.1, 2.1, -1.1])
column.ds_double.values.add().values.extend([1.1, 2.1, -1.1])
column.ds_double.values.add().values.extend([1.1, 2.1, -1.1])
column = data_matrix.columns.add()
column.name = "DS_LONGLONG"
column.datatype = wodson_pb2.DS_LONGLONG  # matches the ds_longlong values below
column.ds_longlong.values.add().values.extend([123, 345, 789])
column.ds_longlong.values.add().values.extend([123, 345, 789])
column.ds_longlong.values.add().values.extend([123, 345, 789])
column = data_matrix.columns.add()
column.name = "DS_ID"
column.datatype = wodson_pb2.DS_ID
column.ds_longlong.values.add().values.extend([123, 345, 789])
column.ds_longlong.values.add().values.extend([123, 345, 789])
column.ds_longlong.values.add().values.extend([123, 345, 789])
column = data_matrix.columns.add()
column.name = "DS_DATE"
column.datatype = wodson_pb2.DS_DATE
column.ds_date.values.add().values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2)])
column.ds_date.values.add().values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2)])
column.ds_date.values.add().values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2)])
column = data_matrix.columns.add()
column.name = "DS_BYTESTR"
column.datatype = wodson_pb2.DS_BYTESTR
column.ds_bytestr.values.add().values.extend([b'abc', b'def', b'hij'])
column.ds_bytestr.values.add().values.extend([b'abc', b'def', b'hij'])
column.ds_bytestr.values.add().values.extend([b'abc', b'def', b'hij'])
column = data_matrix.columns.add()
column.name = "DS_COMPLEX"
column.datatype = wodson_pb2.DS_COMPLEX
column.ds_float.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column.ds_float.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column.ds_float.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column = data_matrix.columns.add()
column.name = "DS_DCOMPLEX"
column.datatype = wodson_pb2.DS_DCOMPLEX
column.ds_double.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column.ds_double.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column.ds_double.values.add().values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
column = data_matrix.columns.add()
column.name = "DS_ENUM"
column.datatype = wodson_pb2.DS_ENUM
column.ds_long.values.add().values.extend([1, 3, 7])
column.ds_long.values.add().values.extend([1, 3, 7])
column.ds_long.values.add().values.extend([1, 3, 7])
column = data_matrix.columns.add()
column.name = "DS_EXTERNALREFERENCE"
column.datatype = wodson_pb2.DS_EXTERNALREFERENCE
column.ds_string.values.add().values.extend(["first picture", "image/jpg", "data/firstPic.jpg", "second picture", "image/jpg", "data/secondPic.jpg", "third picture", "image/jpg", "data/thirdPic.jpg"])
column.ds_string.values.add().values.extend(["first picture", "image/jpg", "data/firstPic.jpg", "second picture", "image/jpg", "data/secondPic.jpg", "third picture", "image/jpg", "data/thirdPic.jpg"])
column.ds_string.values.add().values.extend(["first picture", "image/jpg", "data/firstPic.jpg", "second picture", "image/jpg", "data/secondPic.jpg", "third picture", "image/jpg", "data/thirdPic.jpg"])
# UNKNOWN
data_matrix = data_matrices.matrices.add()
data_matrix.name = "UnkownTypes"
column = data_matrix.columns.add()
column.name = "Values"
column.base_name = "values"
column.datatype = wodson_pb2.DT_UNKNOWN
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_STRING
unknownVals.dt_string.values.extend(["a", "b", "c"])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_SHORT
unknownVals.dt_long.values.extend([1, 2, -1])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_FLOAT
unknownVals.dt_float.values.extend([1.1, 2.1, -1.1])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_BOOLEAN
unknownVals.dt_boolean.values.extend([True, False, True])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_BYTE
unknownVals.dt_byte.values = b'abc'
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_LONG
unknownVals.dt_long.values.extend([1, 2, -1])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_DOUBLE
unknownVals.dt_double.values.extend([1.1, 2.1, -1.1])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_LONGLONG
unknownVals.dt_longlong.values.extend([123, 345, 789])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_ID
unknownVals.dt_longlong.values.extend([123, 345, 789])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_DATE
unknownVals.dt_date.values.extend([get_time_stamp(time.time()), get_time_stamp(time.time() + 1), get_time_stamp(time.time() + 2)])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_BYTESTR
unknownVals.dt_bytestr.values.extend([b'abc', b'def', b'hij'])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_COMPLEX
unknownVals.dt_float.values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_DCOMPLEX
unknownVals.dt_double.values.extend([1.1,0.0, 2.1,0.0, -1.1,0.0])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_ENUM
unknownVals.dt_long.values.extend([1, 3, 7])
unknownVals = column.dt_unknown.values.add()
unknownVals.datatype = wodson_pb2.DT_EXTERNALREFERENCE
unknownVals.dt_string.values.extend(["first picture", "image/jpg", "data/firstPic.jpg", "second picture", "image/jpg", "data/secondPic.jpg", "third picture", "image/jpg", "data/thirdPic.jpg"])
jsonStr = json_format.MessageToJson(data_matrices, True)
print(jsonStr)
f = open(sys.argv[1] + ".pb", "wb")
f.write(data_matrices.SerializeToString())
f.close()
f = open(sys.argv[1] + ".json", "wb")
f.write(jsonStr)
f.close() | 37.072289 | 200 | 0.746831 | 1,987 | 12,308 | 4.454454 | 0.059386 | 0.105751 | 0.101796 | 0.106768 | 0.89843 | 0.757316 | 0.746808 | 0.718337 | 0.691221 | 0.675291 | 0 | 0.038878 | 0.076292 | 12,308 | 332 | 201 | 37.072289 | 0.739643 | 0.005444 | 0 | 0.492248 | 0 | 0 | 0.091281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.01938 | null | null | 0.007752 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
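# --- Read-back sketch (added; an assumption, not part of the original
# script): the serialized matrices can be restored symmetrically.
check = wodson_pb2.DataMatrices()
with open(sys.argv[1] + ".pb", "rb") as fin:
    check.ParseFromString(fin.read())
assert check == data_matrices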
c48d97343fca1c2acf5dfb7954475ee8390d7560 | 19,461 | py | Python | ultron_cli/clients.py | gitter-badger/ultron-cli | d555f7be9738a3bf86f8a7a2186eac221303acca | [
"MIT"
] | 1 | 2019-04-15T14:04:24.000Z | 2019-04-15T14:04:24.000Z | ultron_cli/clients.py | gitter-badger/ultron-cli | d555f7be9738a3bf86f8a7a2186eac221303acca | [
"MIT"
] | null | null | null | ultron_cli/clients.py | gitter-badger/ultron-cli | d555f7be9738a3bf86f8a7a2186eac221303acca | [
"MIT"
] | null | null | null | import os
import json
import logging
import requests
from attrdict import AttrDict
from cliff.lister import Lister
from cliff.command import Command
from cliff.show import ShowOne
from prompt_toolkit import prompt
sessionfile = os.path.expanduser('~/.ultron_session.json')
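# The session file is assumed to look roughly like the following (field names are
# inferred from the attributes read throughout this module; a sketch, not a schema):
# {
#   "endpoint": "https://ultron.example.com/api",
#   "username": "admin",
#   "password": "secret",
#   "inventory": "default",
#   "certfile": "/path/to/ca.pem"
# }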
class List(Lister):
"List all clients in inventory"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(List, self).get_parser(prog_name)
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, params={'fields': 'name', 'dynfields': 'groups'},
verify=session.certfile, auth=(session.username, session.password))
if result.status_code == requests.codes.ok:
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
cols = ['name', 'groups']
rows = [[x['name'], ', '.join(x['groups'])] for x in clients.values()]
return [cols, rows]
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class Show(ShowOne):
"Show details of a client"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(Show, self).get_parser(prog_name)
parser.add_argument('client')
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
parser.add_argument('-F', '--fields', nargs='*', default=[])
parser.add_argument('-D', '--dynfields', nargs='*', default=[])
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
params = {}
if len(p.fields) > 0:
params['fields'] = ','.join(p.fields)
if len(p.dynfields) > 0:
params['dynfields'] = ','.join(p.dynfields)
url = '{}/clients/{}/{}/{}'.format(session.endpoint, p.admin, p.inventory, p.client)
result = requests.get(url, params=params, verify=session.certfile)
if result.status_code == requests.codes.ok:
client = result.json().get('result',{}).get(p.client)
if not client:
raise RuntimeError('ERROR: Client not found')
return [client.keys(), client.values()]
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class New(Command):
"Add new clients to inventory"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(New, self).get_parser(prog_name)
parser.add_argument('clients', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
parser.add_argument('-P', '--props', nargs='*', default=[])
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
if len(p.clients) == 0:
clientnames = prompt('Enter hostnames and press ESC+ENTER\n> ', multiline=True).split()
else:
clientnames = p.clients
data = {'clientnames': ','.join(set(clientnames))}
if len(p.props) > 0:
try:
data['props'] = json.dumps({
x.split('=')[0]: '='.join(x.split('=')[1:]) for x in p.props
})
except Exception:
raise RuntimeError('ERROR: Invalid props format. Example format: a=123 b=abc c=xyz')
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
# Validate if already exists
result = requests.get(url, params={'clientnames': data['clientnames'], 'fields': 'name'},
verify=session.certfile)
clients = result.json().get('result', {})
if len(clients) > 0:
raise RuntimeError('ERROR: Duplicate clients: {}'.format(', '.join(clients.keys())))
result = requests.post(url, data=data, verify=session.certfile,
auth=(session.username, session.password))
if result.status_code == requests.codes.ok:
print('SUCCESS: Created new clients')
return
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class Update(Command):
"Update details of existing clients"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(Update, self).get_parser(prog_name)
parser.add_argument('clients', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
parser.add_argument('-P', '--props', nargs='*', default=[])
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
data = {}
if len(p.clients) > 0:
data = {'clientnames': ','.join(set(p.clients))}
if len(p.props) > 0:
try:
data['props'] = json.dumps({
x.split('=')[0]: '='.join(x.split('=')[1:]) for x in p.props
})
except Exception:
raise RuntimeError('ERROR: Invalid props format. Example format: a=123 b=abc c=xyz')
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
# Validate no extra clients
if len(p.clients) > 0:
result = requests.get(url, params={'clientnames': data['clientnames'], 'fields': 'name'},
verify=session.certfile)
clients = result.json().get('result', {})
if len(clients) != len(p.clients):
raise RuntimeError('ERROR: Clients not found: {}'.format(', '.join(set(p.clients) - clients.keys())))
result = requests.post(url, data=data, verify=session.certfile,
auth=(session.username, session.password))
if result.status_code == requests.codes.ok:
print('SUCCESS: Updated clients')
return
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class Delete(Command):
"Delete clients from inventory"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(Delete, self).get_parser(prog_name)
parser.add_argument('clients', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
data = {}
if len(p.clients) > 0:
data = {'clientnames': ','.join(set(p.clients))}
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
# Validate no extra clients
if len(p.clients) > 0:
result = requests.get(url, params={'clientnames': data['clientnames'], 'fields': 'name'},
verify=session.certfile)
clients = result.json().get('result', {})
if len(clients) != len(p.clients):
raise RuntimeError('ERROR: Clients not found: {}'.format(', '.join(set(p.clients) - clients.keys())))
result = requests.delete(url, data=data, verify=session.certfile,
auth=(session.username, session.password))
if result.status_code == requests.codes.ok:
print('SUCCESS: Deleted clients')
return
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class Perform(Command):
"Perform a task on all/selected clients in inventory"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(Perform, self).get_parser(prog_name)
parser.add_argument('task')
parser.add_argument('clients', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
parser.add_argument('-S', '--synchronous', action='store_true')
parser.add_argument('-K', '--kwargs', type=json.loads, help='JSON encoded key-value pairs', default={})
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
data = {'async': int(not p.synchronous), 'task': p.task}
if len(p.clients) > 0:
data['clientnames'] = ','.join(set(p.clients))
if len(p.kwargs) > 0:
if not isinstance(p.kwargs, dict):
raise RuntimeError('kwargs: Must be JSON encoded key-value pairs')
data['kwargs'] = json.dumps(p.kwargs)
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
# Validate no extra clients
if len(p.clients) > 0:
result = requests.get(url, params={'clientnames': data['clientnames'], 'fields': 'name'},
verify=session.certfile)
clients = result.json().get('result', {})
if len(clients) != len(p.clients):
raise RuntimeError('ERROR: Clients not found: {}'.format(', '.join(set(p.clients) - clients.keys())))
result = requests.post(url, data=data, verify=session.certfile,
auth=(session.username, session.password))
if result.status_code == requests.codes.ok:
print('SUCCESS: Submitted task')
return
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
class StatTasks(ShowOne):
"Show statistics of a performed tasks"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(StatTasks, self).get_parser(prog_name)
parser.add_argument('tasks', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
tasks = {}
for client in clients.values():
if not client['tasks']: continue
for k, v in client['tasks'].items():
if len(p.tasks) > 0 and k not in p.tasks: continue
if k not in tasks:
tasks[k] = {
'performed on': 0, 'success': 0,
'failed': 0, 'pending': 0
}
tasks[k]['performed on'] += 1
tasks[k][v['status'].lower()] += 1
return [tasks.keys(), tasks.values()]
class StatStates(ShowOne):
"Show statistics of a client states"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(StatStates, self).get_parser(prog_name)
parser.add_argument('states', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
states = {}
for client in clients.values():
for k, v in client['state'].items():
if len(p.states) > 0 and k not in p.states: continue
if k not in states: states[k] = {}
if v not in states[k]: states[k][v] = 0
states[k][v] += 1
for k, v in list(states.items()):  # snapshot the items so deleting keys below is safe
if len(v) > 15:
del states[k]
return [states.keys(), states.values()]
class StatProps(ShowOne):
"Show statistics of a client props"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(StatProps, self).get_parser(prog_name)
parser.add_argument('props', nargs='*', default=[])
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
props = {}
for client in clients.values():
for k, v in client['props'].items():
if len(p.props) > 0 and k not in p.props: continue
if k not in props: props[k] = {}
if v not in props[k]: props[k][v] = 0
props[k][v] += 1
for k, v in list(props.items()):  # snapshot the items so deleting keys below is safe
if len(v) > 15:
del props[k]
return [props.keys(), props.values()]
class FilterTask(Lister):
"List clients filtered by task status"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(FilterTask, self).get_parser(prog_name)
parser.add_argument('task')
parser.add_argument('value')
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
found = set()
for client in clients.values():
if not client['tasks']: continue
if p.task not in client['tasks']: continue
if p.value != 'performed on' and client['tasks'][p.task]['status'] != p.value.upper(): continue
found.add(client['name'])
return [['name'], [[x] for x in found]]
class FilterState(Lister):
"List clients filtered by state"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(FilterState, self).get_parser(prog_name)
parser.add_argument('state')
parser.add_argument('value')
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
found = set()
for client in clients.values():
if p.state not in client['state']: continue
if str(client['state'][p.state]) != p.value:
continue
found.add(client['name'])
return [['name'], [[x] for x in found]]
class FilterProp(Lister):
"List clients filtered by prop"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
with open(sessionfile) as f: session = AttrDict(json.load(f))
parser = super(FilterProp, self).get_parser(prog_name)
parser.add_argument('prop')
parser.add_argument('value')
parser.add_argument('-A', '--admin', default=session.username)
parser.add_argument('-I', '--inventory', default=session.inventory)
return parser
def take_action(self, p):
with open(sessionfile) as f: session = AttrDict(json.load(f))
url = '{}/clients/{}/{}'.format(session.endpoint, p.admin, p.inventory)
result = requests.get(url, verify=session.certfile)
if result.status_code != requests.codes.ok:
raise RuntimeError('ERROR: {}: {}'.format(result.status_code, result.json().get('message')))
clients = result.json().get('result', {})
if len(clients) == 0:
raise RuntimeError('ERROR: Clients not found')
found = set()
for client in clients.values():
if p.prop not in client['props']: continue
if str(client['props'][p.prop]) != p.value:
continue
found.add(client['name'])
return [['name'], [[x] for x in found]]
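# Example invocations (a sketch; the executable and command names depend on the entry
# points that register these classes, which live outside this module):
#   ultron list -I my_inventory
#   ultron perform reboot host1 host2 -S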
| 39.235887 | 120 | 0.586609 | 2,285 | 19,461 | 4.917287 | 0.077462 | 0.036045 | 0.068085 | 0.044856 | 0.832503 | 0.804201 | 0.793076 | 0.791296 | 0.769135 | 0.764952 | 0 | 0.003261 | 0.259442 | 19,461 | 495 | 121 | 39.315152 | 0.776367 | 0.026206 | 0 | 0.663073 | 0 | 0 | 0.124897 | 0.001137 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06469 | false | 0.013477 | 0.024259 | 0 | 0.218329 | 0.010782 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4d687fd0ecaed15c28519e763d37c7758be6505 | 132 | py | Python | tests/conftest.py | nekhaly/network-analyzer | 62cf8ae35ed8442131cc609129b7c2b0af9f9526 | [
"MIT"
] | null | null | null | tests/conftest.py | nekhaly/network-analyzer | 62cf8ae35ed8442131cc609129b7c2b0af9f9526 | [
"MIT"
] | 1 | 2021-09-28T12:49:47.000Z | 2021-09-28T12:49:47.000Z | tests/conftest.py | nekhaly/network-analyzer | 62cf8ae35ed8442131cc609129b7c2b0af9f9526 | [
"MIT"
] | 2 | 2021-11-24T08:57:59.000Z | 2022-03-08T10:25:18.000Z | import pytest
@pytest.fixture(scope="session")
def bridge_contract(deploy_contract):
return deploy_contract("TestHomeBridge")
| 18.857143 | 44 | 0.795455 | 15 | 132 | 6.8 | 0.733333 | 0.27451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098485 | 132 | 6 | 45 | 22 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c4f478ca4f9cf10ef8452424466ba4ad9e9d3a0a | 33 | py | Python | __init__.py | YusukeKambara/japan_horse_racing | 05c2e06fe265c5744b908b8575df260db18a115b | [
"MIT"
] | null | null | null | __init__.py | YusukeKambara/japan_horse_racing | 05c2e06fe265c5744b908b8575df260db18a115b | [
"MIT"
] | 1 | 2021-12-13T20:32:18.000Z | 2021-12-13T20:32:18.000Z | __init__.py | YusukeKambara/japan_horse_racing | 05c2e06fe265c5744b908b8575df260db18a115b | [
"MIT"
] | null | null | null | from japan_horse_racing import *
| 16.5 | 32 | 0.848485 | 5 | 33 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f20b0ee2d917265e3eb0a1ebf7738ee21e026e5f | 96 | py | Python | venv/lib/python3.8/site-packages/pip/_internal/commands/search.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_internal/commands/search.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_internal/commands/search.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/14/0f/b9/42574440301fbdc3e52e683cd89904d7831b1a5d8a1859d17fd9b9802c | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1eeb3f975c57a84067b954aa2ca03d94b513ebf8 | 3,389 | py | Python | test/sprot_tests.py | genomeannotation/Annie-new | 4bb39804c4d51877907f531d72e8b2e841c58243 | [
"MIT"
] | 7 | 2018-06-14T18:19:14.000Z | 2022-01-12T12:31:50.000Z | test/sprot_tests.py | genomeannotation/Annie-new | 4bb39804c4d51877907f531d72e8b2e841c58243 | [
"MIT"
] | 10 | 2015-05-01T00:42:32.000Z | 2022-02-08T17:27:31.000Z | test/sprot_tests.py | genomeannotation/Annie | 4bb39804c4d51877907f531d72e8b2e841c58243 | [
"MIT"
] | 5 | 2016-01-22T11:59:08.000Z | 2022-03-14T06:21:13.000Z | #!/usr/bin/env python
import unittest
import io
from src.annotation import Annotation
from src.sprot import read_sprot, get_gff_info
class TestSprot(unittest.TestCase):
def setUp(self):
self.blast_file = io.StringIO(\
'm.4830 sp|Q5AZY1|MRH4_EMENI 32.65 49 33 0 114 162 500 548 0.56 34.3\n'\
'm.4830 sp|Q5AZY1|ASDF1_EMENI 32.65 49 33 0 114 162 500 548 0.56 34.3')
self.gff_file = io.StringIO(\
'comp9975_c0_seq1 . mRNA 25 603 . + . ID=m.4830;Parent=g.4830')
self.fasta_file = io.StringIO(\
'>sp|Q5AZY1|MRH4_EMENI ATP-dependent RNA helicase mrh4, mitochondrial OS=Emericella nidulans (strain FGSC A4 / ATCC 38163 / CBS 112.46 / NRRL 194 / M139) GN=mrh4 PE=3 SV=1\n\
MNRLGGLSLPLRPVCLFCRAQTSLALSPLQGGQAVRSIATGRLRRRARMTLSKDVAKSSL\n\
KPKRTDRGKLGPFPNMNQTRARVREDPRSRSPAALKRSGETEEKPAMNTESPLYKALKMQ\n\
TALAPISYGKRTAIKAKIAEITSFDAFTLLPIVRNSIFSQALPGIADAVPTPIQRVAIPR\n\
LLEDAPAKKQAKKVDDDEPQYEQYLLAAETGSGKTLAYLIPVIDAIKRQEIQEKEMEKKE\n\
EERKVREREENKKNQAFDLEPEIPPPSNAGRPRAIILVPTAELVAQVGAKLKAFAHTVKF\n\
RSGIISSNLTPRRIKSTLFNPAGIDILVSTPHLLASIAKTDPYVLSRVSHLVLDEADSLM\n\
DRSFLPISTEVISKAAPSLQKLIFCSATIPRSLDSQLRKLYPDIWRLTTPNLHAIPRRVQ\n\
LGVVDIQKDPYRGNRNLACADVIWSIGKSGAGSDEAGSPWSEPKTKKILVFVNEREEADE\n\
VAQFLKSKGIDAHSFNRDSGTRKQEEILAEFTEPAAVPTAEEILLARKQQQRENINIPFV\n\
LPERTNRDTERRLDGVKVLVTTDIASRGIDTLALKTVILYHVPHTTIDFIHRLGRLGRMG\n\
KRGRAVVLVGKKDRKDVVKEVREVWFGLDS')
def test_read_sprot(self):
sprot_list = read_sprot(self.blast_file, self.gff_file, self.fasta_file)
expected = [Annotation("g.4830", "name", "mrh4"), Annotation("m.4830", "product", "ATP-dependent RNA helicase mrh4, mitochondrial")]
self.assertEqual(sprot_list, expected)
def test_read_sprot_missing_gene_name(self):
self.fasta_file = io.StringIO(\
'>sp|Q5AZY1|MRH4_EMENI ATP-dependent RNA helicase mrh4, mitochondrial OS=Emericella nidulans (strain FGSC A4 / ATCC 38163 / CBS 112.46 / NRRL 194 / M139) PE=3 SV=1\n\
MNRLGGLSLPLRPVCLFCRAQTSLALSPLQGGQAVRSIATGRLRRRARMTLSKDVAKSSL\n\
KPKRTDRGKLGPFPNMNQTRARVREDPRSRSPAALKRSGETEEKPAMNTESPLYKALKMQ\n\
TALAPISYGKRTAIKAKIAEITSFDAFTLLPIVRNSIFSQALPGIADAVPTPIQRVAIPR\n\
LLEDAPAKKQAKKVDDDEPQYEQYLLAAETGSGKTLAYLIPVIDAIKRQEIQEKEMEKKE\n\
EERKVREREENKKNQAFDLEPEIPPPSNAGRPRAIILVPTAELVAQVGAKLKAFAHTVKF\n\
RSGIISSNLTPRRIKSTLFNPAGIDILVSTPHLLASIAKTDPYVLSRVSHLVLDEADSLM\n\
DRSFLPISTEVISKAAPSLQKLIFCSATIPRSLDSQLRKLYPDIWRLTTPNLHAIPRRVQ\n\
LGVVDIQKDPYRGNRNLACADVIWSIGKSGAGSDEAGSPWSEPKTKKILVFVNEREEADE\n\
VAQFLKSKGIDAHSFNRDSGTRKQEEILAEFTEPAAVPTAEEILLARKQQQRENINIPFV\n\
LPERTNRDTERRLDGVKVLVTTDIASRGIDTLALKTVILYHVPHTTIDFIHRLGRLGRMG\n\
KRGRAVVLVGKKDRKDVVKEVREVWFGLDS')
sprot_list = read_sprot(self.blast_file, self.gff_file, self.fasta_file)
expected = [Annotation("g.4830", "name", "MRH4"), Annotation("m.4830", "product", "ATP-dependent RNA helicase mrh4, mitochondrial")]
self.assertEqual(sprot_list, expected)
def test_get_gff_info(self):
test_gff = io.StringIO('comp9975_c0_seq1 . mRNA 25 603 . + . foo=dog;Parent=g.4830;bazz=bub;ID=m.4830')
expected = {"m.4830" : "g.4830"}
self.assertEqual(get_gff_info(test_gff), expected)
##########################
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(TestSprot))
return suite
if __name__ == '__main__':
unittest.main()
| 48.414286 | 190 | 0.77663 | 339 | 3,389 | 7.619469 | 0.315634 | 0.01355 | 0.02168 | 0.035618 | 0.805265 | 0.805265 | 0.805265 | 0.805265 | 0.779714 | 0.779714 | 0 | 0.061356 | 0.129537 | 3,389 | 69 | 191 | 49.115942 | 0.814237 | 0.005901 | 0 | 0.509091 | 0 | 0.090909 | 0.131658 | 0.025135 | 0 | 0 | 0 | 0 | 0.054545 | 1 | 0.090909 | false | 0 | 0.072727 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1ef17bd19b88211fcbab7bb4b4ca4fb0317897e1 | 12,421 | py | Python | parser_util/parser_helper.py | norbit8/online-sat-solver | 92abf0541ddd214db1c8db6cab0c270081f5f2d0 | [
"Apache-2.0"
] | null | null | null | parser_util/parser_helper.py | norbit8/online-sat-solver | 92abf0541ddd214db1c8db6cab0c270081f5f2d0 | [
"Apache-2.0"
] | 2 | 2021-02-15T16:07:33.000Z | 2021-02-15T16:13:51.000Z | parser_util/parser_helper.py | norbit8/online-sat-solver | 92abf0541ddd214db1c8db6cab0c270081f5f2d0 | [
"Apache-2.0"
] | 1 | 2021-12-12T04:41:18.000Z | 2021-12-12T04:41:18.000Z | from prop_logic.formula import *
def helper_nnf(formula: Formula):
if is_variable(str(formula)) or is_constant(str(formula)):
return formula
if formula.root == "~":
removed_neg_formula = formula.first
if removed_neg_formula.root == '|':
if removed_neg_formula.first.root == "~" and removed_neg_formula.second.root == "~":
return Formula.parse(
"(" + str(removed_neg_formula.first.first) + "&" + str(removed_neg_formula.second.first) + ")")
if removed_neg_formula.first.root == "~":
return Formula.parse(
"(" + str(removed_neg_formula.first.first) + "&" + str(helper_nnf(
Formula.parse("~" + str(removed_neg_formula.second)))) + ")")
if removed_neg_formula.second.root == "~":
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("~" + str(removed_neg_formula.first)))) + "&" + str(
removed_neg_formula.second.first) + ")")
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("~" + str(removed_neg_formula.first)))) + "&" + str(
helper_nnf(Formula.parse("~" + str(removed_neg_formula.second)))) + ")")
elif removed_neg_formula.root == '&':
if removed_neg_formula.first.root == "~" and removed_neg_formula.second.root == "~":
return Formula.parse(
"(" + str(removed_neg_formula.first.first) + "|" + str(removed_neg_formula.second.first) + ")")
elif removed_neg_formula.first.root == "~":
return Formula.parse(
"(" + str(removed_neg_formula.first.first) + "|" + str(
helper_nnf(Formula.parse("~" + str(removed_neg_formula.second)))) + ")")
elif removed_neg_formula.second.root == "~":
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("~" + str(removed_neg_formula.first)))) + "|" + str(
removed_neg_formula.second.first) + ")")
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("~" + str(removed_neg_formula.first)))) + "|" + str(
helper_nnf(Formula.parse("~" + str(removed_neg_formula.second)))) + ")")
if formula.root == "->":
if formula.first.root == "~":
return Formula.parse("(" + str(formula.first.first) + "|" + str(formula.second) + ")")
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("~" + str(formula.first)))) + "|" + str(formula.second) + ")")
if formula.root == "<->":
return Formula.parse(
"(" + str(helper_nnf(Formula.parse("(" + str(formula.first) + "->" + str(formula.second) + ")"))) + "&" +
str(helper_nnf(Formula.parse("(" + str(formula.second) + "->" + str(formula.first) + ")"))) + ")")
# if formula.root == "&" or formula.root == '|'
return formula
def convert_to_nnf(formula: Formula):
print("STEP: ", formula)
if is_constant(str(formula)) or is_variable(str(formula)):
return formula
# unary operator
if formula.root == "~":
if is_constant(str(formula.first)) or is_variable(str(formula.first)):
return formula
elif formula.first.root == '~':
return convert_to_nnf(formula.first.first)
else: # Binary op is the root after ~, so we are in the case of ~(q->p) for example.
removed_negation_formula = formula.first
op = removed_negation_formula.root
return helper_nnf(Formula.parse("~" + str(helper_nnf(
Formula.parse(
"(" + str(convert_to_nnf(removed_negation_formula.first)) + op + str(
convert_to_nnf(removed_negation_formula.second)) + ")")))))
# binary operator
op = formula.root
return helper_nnf(
Formula.parse("(" + str(convert_to_nnf(formula.first)) + op + str(convert_to_nnf(formula.second)) + ")"))
def convert_to_cnf(formula: List[Formula]):
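# The input is assumed to be a Tseitin-style encoding: formula[0] is the variable that
# names the whole formula, and each later element pairs a fresh variable (.first) with
# the subformula it abbreviates (.second); each branch below emits the CNF clauses
# equivalent to that "variable <-> subformula" definition.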
new_cnf = [formula[0]]
for f in formula[1:]:
if f.second.root == "~":
dictionary = dict({'q': f.second.first, 'p': f.first})
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|~q)"), dictionary))
elif f.second.root == "->":
dictionary = dict({'r': f.second.second, 'q': f.second.first, 'p': f.first})
if dictionary['q'].root == "~" and dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(~p|q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|p)"), dictionary))
elif dictionary['q'].root == "~":
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(~p|q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|p)"), dictionary))
elif dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(~p|~q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|p)"), dictionary))
else:
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(~p|~q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|p)"), dictionary))
elif f.second.root == "&":
dictionary = dict({'r': f.second.second, 'q': f.second.first, 'p': f.first})
if dictionary['q'].root == "~" and dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|~p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|~r)"), dictionary))
elif dictionary['q'].root == "~":
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|~p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|r)"), dictionary))
elif dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|~p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|~r)"), dictionary))
else:
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|~p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|r)"), dictionary))
elif f.second.root == "|":
dictionary = dict({'r': f.second.second, 'q': f.second.first, 'p': f.first})
if dictionary['q'].root == "~" and dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(~p|~q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|p)"), dictionary))
elif dictionary['q'].root == "~":
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(~p|~q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|p)"), dictionary))
elif dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(~p|q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|p)"), dictionary))
else:
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(~p|q))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|p)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|p)"), dictionary))
elif f.second.root == "<->":
dictionary = dict({'r': f.second.second, 'q': f.second.first, 'p': f.first})
if dictionary['q'].root == "~" and dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(~q|~p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(~r|~p))"), dictionary))
elif dictionary['q'].root == "~":
dictionary['q'] = dictionary['q'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(~q|~p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(r|~p))"), dictionary))
elif dictionary['r'].root == "~":
dictionary['r'] = dictionary['r'].first
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(r|(q|~p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(~r|~p))"), dictionary))
else:
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(~r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(q|(r|p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~r|(q|~p))"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|(r|~p))"), dictionary))
else:
if f.second.root == '~':
dictionary = dict({'p': f.second.first, 'q': f.first})
new_cnf.append(Formula.substitute_variables(Formula.parse("(p|q)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(~q|~p)"), dictionary))
else:
dictionary = dict({'p': f.second, 'q': f.first})
new_cnf.append(Formula.substitute_variables(Formula.parse("(~p|q)"), dictionary))
new_cnf.append(Formula.substitute_variables(Formula.parse("(p|~q)"), dictionary))
return new_cnf
| 67.505435 | 117 | 0.579986 | 1,372 | 12,421 | 5.093294 | 0.041545 | 0.14253 | 0.099599 | 0.157699 | 0.914282 | 0.881797 | 0.881511 | 0.855753 | 0.837149 | 0.837149 | 0 | 0.000212 | 0.241365 | 12,421 | 183 | 118 | 67.874317 | 0.741377 | 0.012318 | 0 | 0.643678 | 0 | 0 | 0.052434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0 | 0.005747 | 0 | 0.132184 | 0.005747 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4800f4d99c4f587de428f1b83381397dce6a89b0 | 109 | py | Python | myprojects/views.py | albertvisser/myprojects | 121f8832cb6be288c125ec1886d9100b0bb6ba02 | [
"MIT"
] | null | null | null | myprojects/views.py | albertvisser/myprojects | 121f8832cb6be288c125ec1886d9100b0bb6ba02 | [
"MIT"
] | null | null | null | myprojects/views.py | albertvisser/myprojects | 121f8832cb6be288c125ec1886d9100b0bb6ba02 | [
"MIT"
] | null | null | null | from django.http import HttpResponseRedirect
def index(request):
return HttpResponseRedirect('/docs/')
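# A minimal URLconf sketch for wiring this view (route path and name are assumptions):
# from django.urls import path
# from . import views
# urlpatterns = [path('', views.index, name='index')]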
| 18.166667 | 44 | 0.779817 | 11 | 109 | 7.727273 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12844 | 109 | 5 | 45 | 21.8 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0.055046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
481ceace7801c610a763d365cdde8b738ad68c84 | 18,742 | py | Python | backend/tests/views/test_labeling_task.py | druzhynin-oleksii/model_garden | 3599f5d0c81bc79139ceabed5dd647c29ccadc31 | [
"MIT"
] | 8 | 2020-09-10T18:28:19.000Z | 2022-02-22T03:41:14.000Z | backend/tests/views/test_labeling_task.py | druzhynin-oleksii/model_garden | 3599f5d0c81bc79139ceabed5dd647c29ccadc31 | [
"MIT"
] | 1 | 2020-09-15T21:20:29.000Z | 2020-09-15T21:20:29.000Z | backend/tests/views/test_labeling_task.py | druzhynin-oleksii/model_garden | 3599f5d0c81bc79139ceabed5dd647c29ccadc31 | [
"MIT"
] | 8 | 2020-09-10T16:29:35.000Z | 2022-01-25T15:05:03.000Z | from unittest import mock
from rest_framework import status
from rest_framework.reverse import reverse
from model_garden.constants import LabelingTaskStatus
from model_garden.services.cvat import CVATServiceException
from tests import BaseAPITestCase
class TestLabelingTaskViewSet(BaseAPITestCase):
def setUp(self):
super().setUp()
self.cvat_service_cls_patcher = mock.patch('model_garden.views.labeling_task.CvatService')
self.cvat_service_cls_mock = self.cvat_service_cls_patcher.start()
self.cvat_service_mock = self.cvat_service_cls_mock.return_value
self.cvat_service_mock.get_root_user.return_value = {'id': 1}
self.cvat_service_mock.get_user.return_value = {'id': 3, 'username': 'test_labeler'}
self.cvat_service_mock.create_task.return_value = {'id': 1}
self.cvat_service_mock.get_task.return_value = {
'id': 1,
'status': LabelingTaskStatus.ANNOTATION,
}
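# CvatService is patched where the view module imports it, so these tests never reach
# a real CVAT server; the canned return values above stand in for its API responses.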
def tearDown(self):
self.cvat_service_cls_patcher.stop()
super().tearDown()
def test_create(self):
dataset = self.test_factory.create_dataset()
media_asset = self.test_factory.create_media_asset(dataset=dataset)
labeler = self.test_factory.create_labeler(labeler_id=3)
response = self.client.post(
path=reverse('labelingtask-list'),
data={
'task_name': 'test',
'dataset_id': dataset.id,
'assignee_id': labeler.labeler_id,
'files_in_task': 2,
'count_of_tasks': 1,
},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED, response.content)
self.cvat_service_mock.create_task.assert_called_once_with(
name='test.01',
assignee_id=3,
owner_id=1,
remote_files=[
f'https://d3o54g14k1n39o.cloudfront.net/test_path/{media_asset.filename}',
],
)
media_asset.refresh_from_db()
self.assertIsNotNone(media_asset.labeling_task)
self.assertEqual(media_asset.labeling_task.labeler.labeler_id, labeler.labeler_id)
def test_create_dataset_not_found(self):
response = self.client.post(
path=reverse('labelingtask-list'),
data={
'task_name': 'test',
'dataset_id': 1,
'assignee_id': 3,
'files_in_task': 2,
'count_of_tasks': 1,
},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.json(), {'message': "Dataset with id='1' not found"})
def test_create_labeler_not_found(self):
dataset = self.test_factory.create_dataset()
media_asset = self.test_factory.create_media_asset(dataset=dataset)
response = self.client.post(
path=reverse('labelingtask-list'),
data={
'task_name': 'test',
'dataset_id': dataset.id,
'assignee_id': 3,
'files_in_task': 2,
'count_of_tasks': 1,
},
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED, response.content)
self.cvat_service_mock.create_task.assert_called_once_with(
name='test.01',
assignee_id=3,
owner_id=1,
remote_files=[
f'https://d3o54g14k1n39o.cloudfront.net/test_path/{media_asset.filename}',
],
)
media_asset.refresh_from_db()
self.assertIsNotNone(media_asset.labeling_task)
self.assertEqual(media_asset.labeling_task.labeler.labeler_id, 3)
def test_create_cvat_user_not_found(self):
self.cvat_service_mock.get_user.side_effect = CVATServiceException("not found")
dataset = self.test_factory.create_dataset()
media_asset = self.test_factory.create_media_asset(dataset=dataset)
response = self.client.post(
path=reverse('labelingtask-list'),
data={
'task_name': 'test',
'dataset_id': dataset.id,
'assignee_id': 3,
'files_in_task': 2,
'count_of_tasks': 1,
},
)
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND, response.content)
self.assertEqual(response.json(), {'message': 'not found'})
self.cvat_service_mock.create_task.assert_not_called()
media_asset.refresh_from_db()
self.assertIsNone(media_asset.labeling_task)
def test_create_cvat_request_fails(self):
self.cvat_service_mock.create_task.side_effect = CVATServiceException("request failed")
dataset = self.test_factory.create_dataset()
self.test_factory.create_media_asset(dataset=dataset)
labeler = self.test_factory.create_labeler(labeler_id=3)
response = self.client.post(
path=reverse('labelingtask-list'),
data={
'task_name': 'test',
'dataset_id': dataset.id,
'assignee_id': labeler.labeler_id,
'files_in_task': 2,
'count_of_tasks': 1,
},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.json(), {'message': 'request failed'})
def test_list(self):
dataset = self.test_factory.create_dataset()
labeling_task = self.test_factory.create_labeling_task(name='Test labeling task')
media_asset = self.test_factory.create_media_asset(dataset=dataset)
media_asset.labeling_task = labeling_task
media_asset.save()
response = self.client.get(
path=reverse('labelingtask-list'),
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.json(),
{
"count": 1,
"next": None,
"previous": None,
"results": [
{
"id": labeling_task.id,
"name": labeling_task.name,
"dataset": dataset.path,
"labeler": labeling_task.labeler.username,
"url": 'http://localhost:8080/task/1',
"status": labeling_task.status,
"error": None,
},
],
},
)
def test_list_same_labeling_task_multiple_media_assets(self):
dataset = self.test_factory.create_dataset()
labeling_task = self.test_factory.create_labeling_task(name='Test labeling task')
media_asset1 = self.test_factory.create_media_asset(dataset=dataset)
media_asset1.labeling_task = labeling_task
media_asset1.save()
media_asset2 = self.test_factory.create_media_asset(dataset=dataset)
media_asset2.labeling_task = labeling_task
media_asset2.save()
response = self.client.get(
path=reverse('labelingtask-list'),
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
def test_list_with_name_filter(self):
t1 = self.test_factory.create_labeling_task(name='Test labeling task 1')
self.test_factory.create_labeling_task(name='Test labeling task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'name': 'Test labeling task 1',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
self.assertEqual(response.json()['results'][0]['name'], t1.name)
def test_list_with_name_filter_contains(self):
self.test_factory.create_labeling_task(name='Test labeling task 1')
t2 = self.test_factory.create_labeling_task(name='Test labeling task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'name': 'task 2',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
self.assertEqual(response.json()['results'][0]['name'], t2.name)
def test_list_with_name_filter_empty_result(self):
self.test_factory.create_labeling_task(name='Test labeling task 1')
self.test_factory.create_labeling_task(name='Test labeling task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'name': 'Test labeling task 3',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 0)
def test_list_with_dataset_filter(self):
dataset1 = self.test_factory.create_dataset(path='/test_path1')
dataset2 = self.test_factory.create_dataset(path='/test_path2')
self.test_factory.create_media_asset(dataset=dataset1, assigned=True)
media_asset2 = self.test_factory.create_media_asset(dataset=dataset2, assigned=True)
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'dataset': dataset2.path,
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
self.assertEqual(response.json()['results'][0]['name'], media_asset2.labeling_task.name)
def test_list_with_dataset_filter_empty_result(self):
dataset1 = self.test_factory.create_dataset(path='/test_path1')
dataset2 = self.test_factory.create_dataset(path='/test_path2')
self.test_factory.create_media_asset(dataset=dataset1, assigned=True)
self.test_factory.create_media_asset(dataset=dataset2, assigned=True)
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'dataset': 'test_path3',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 0)
def test_list_with_labeler_filter(self):
self.test_factory.create_labeling_task(name='Test labeling task 1')
labeling_task2 = self.test_factory.create_labeling_task(name='Test labeling task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'labeler': labeling_task2.labeler.username,
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
self.assertEqual(response.json()['results'][0]['name'], labeling_task2.name)
def test_list_with_labeler_filter_empty_result(self):
self.test_factory.create_labeling_task(name='Test labeling task 1')
self.test_factory.create_labeling_task(name='Test labeling task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'labeler': 'unknown labeler',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 0)
def test_list_with_status_filter(self):
self.test_factory.create_labeling_task(name='Test labeling task 1', status=LabelingTaskStatus.ANNOTATION)
t2 = self.test_factory.create_labeling_task(name='Test labeling task 2', status=LabelingTaskStatus.VALIDATION)
self.test_factory.create_labeling_task(name='Test labeling task 3', status=LabelingTaskStatus.COMPLETED)
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'status': LabelingTaskStatus.VALIDATION,
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 1)
self.assertEqual(response.json()['results'][0]['name'], t2.name)
def test_list_with_multiple_status_filters(self):
self.test_factory.create_labeling_task(name='Test labeling task 1', status=LabelingTaskStatus.ANNOTATION)
t2 = self.test_factory.create_labeling_task(name='Test labeling task 2', status=LabelingTaskStatus.VALIDATION)
t3 = self.test_factory.create_labeling_task(name='Test labeling task 3', status=LabelingTaskStatus.COMPLETED)
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'status': [LabelingTaskStatus.VALIDATION, LabelingTaskStatus.COMPLETED],
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 2)
self.assertEqual({t['name'] for t in response.json()['results']}, {t2.name, t3.name})
def test_list_with_status_filter_empty_result(self):
self.test_factory.create_labeling_task(name='Test labeling task 1', status=LabelingTaskStatus.ANNOTATION)
self.test_factory.create_labeling_task(name='Test labeling task 3', status=LabelingTaskStatus.COMPLETED)
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'status': LabelingTaskStatus.VALIDATION,
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 0)
def test_list_with_dataset_id_filter(self):
dataset = self.test_factory.create_dataset()
task1 = self.test_factory.create_labeling_task(name='Test labeling task1')
task2 = self.test_factory.create_labeling_task(name='Test labeling task2')
media1 = self.test_factory.create_media_asset(dataset=dataset)
media1.labeling_task = task1
media1.save()
media2 = self.test_factory.create_media_asset(dataset=dataset)
media2.labeling_task = task2
media2.save()
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'dataset_id': dataset.id,
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['count'], 2)
self.assertEqual({task['name'] for task in response.json()['results']}, {task1.name, task2.name})
def test_list_with_dataset_id_filter_empty_result(self):
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'dataset_id': 0,
},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_ordering_by_labeler_name(self):
task = self.test_factory.create_labeling_task(name='task 1')
task.labeler.username = 'zyx'
task.labeler.save()
task = self.test_factory.create_labeling_task(name='task 2')
task.labeler.username = 'abc'
task.labeler.save()
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'ordering': '-labeler',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['results'][0]['labeler'], 'zyx')
self.assertEqual(response.json()['results'][1]['labeler'], 'abc')
def test_ordering_by_name(self):
self.test_factory.create_labeling_task(name='task 1')
self.test_factory.create_labeling_task(name='task 2')
response = self.client.get(
path=reverse('labelingtask-list'),
data={
'ordering': '-name',
},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.json()['results'][0]['name'], 'task 2')
self.assertEqual(response.json()['results'][1]['name'], 'task 1')
def test_archive_without_task_id(self):
response = self.client.patch(
path=reverse('labelingtask-archive'),
data={},
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.json(), {'id': ['This field is required.']})
def test_archive_several_tasks(self):
self.cvat_service_mock.delete_task.return_value = None
tasks = [
self.test_factory.create_labeling_task(status=LabelingTaskStatus.ANNOTATION),
self.test_factory.create_labeling_task(status=LabelingTaskStatus.COMPLETED),
]
archived_ids = [t.id for t in tasks]
response = self.client.patch(
path=reverse('labelingtask-archive'),
data={'id': archived_ids},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertSetEqual(set(response.json()['archived']), set(archived_ids))
self.assertListEqual(response.json()['errors'], [])
self.assertEqual(self.cvat_service_mock.delete_task.call_count, 2)
def test_archive_save_deleted_task(self):
self.cvat_service_mock.delete_task.return_value = CVATServiceException()
tasks = [
self.test_factory.create_labeling_task(status=LabelingTaskStatus.SAVED),
]
archived_ids = [t.id for t in tasks]
response = self.client.patch(
path=reverse('labelingtask-archive'),
data={'id': archived_ids},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertSetEqual(set(response.json()['archived']), set(archived_ids))
self.assertListEqual(response.json()['errors'], [])
self.assertEqual(self.cvat_service_mock.delete_task.call_count, 1)
def test_archive_skips_already_archived_tasks(self):
self.cvat_service_mock.delete_task.return_value = None
tasks = [
self.test_factory.create_labeling_task(status=LabelingTaskStatus.ARCHIVED),
self.test_factory.create_labeling_task(status=LabelingTaskStatus.COMPLETED),
]
archived_ids = [t.id for t in tasks]
response = self.client.patch(
path=reverse('labelingtask-archive'),
data={'id': archived_ids},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertSetEqual(set(response.json()['archived']), set([archived_ids[1]]))
self.assertListEqual(response.json()['errors'], [])
self.assertEqual(self.cvat_service_mock.delete_task.call_count, 1)
def test_archive_sends_error_on_failed_cvat_calls(self):
self.cvat_service_mock.delete_task.side_effect = [
None,
CVATServiceException(),
]
tasks = [
self.test_factory.create_labeling_task(status=LabelingTaskStatus.ANNOTATION),
self.test_factory.create_labeling_task(status=LabelingTaskStatus.COMPLETED),
]
archived_ids = [t.id for t in tasks]
expected = archived_ids[:]
response = self.client.patch(
path=reverse('labelingtask-archive'),
data={'id': archived_ids},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
got_archived = response.json()['archived']
self.assertEqual(len(got_archived), 1)
self.assertIn(got_archived[0], expected)
expected.remove(got_archived[0])
got_errors = response.json()['errors']
self.assertEqual(len(got_errors), 1)
self.assertIn(got_errors[0]['id'], expected)
self.assertEqual(self.cvat_service_mock.delete_task.call_count, 2)
def test_retry(self):
labeling_task1 = self.test_factory.create_labeling_task(error='error 1')
labeling_task2 = self.test_factory.create_labeling_task(error='error 2')
response = self.client.patch(
path=reverse('labelingtask-retry'),
data={'id': [labeling_task2.pk]},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
labeling_task1.refresh_from_db()
labeling_task2.refresh_from_db()
self.assertEqual(labeling_task1.error, 'error 1')
self.assertIsNone(labeling_task2.error)
def test_retry_not_found(self):
labeling_task = self.test_factory.create_labeling_task(error='some error')
response = self.client.patch(
path=reverse('labelingtask-retry'),
data={'id': [777]},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
labeling_task.refresh_from_db()
self.assertEqual(labeling_task.error, 'some error')
| 34.966418 | 114 | 0.705528 | 2,339 | 18,742 | 5.38478 | 0.075673 | 0.079079 | 0.073839 | 0.103374 | 0.83287 | 0.804526 | 0.769671 | 0.754029 | 0.738706 | 0.693053 | 0 | 0.016493 | 0.168605 | 18,742 | 535 | 115 | 35.031776 | 0.791811 | 0 | 0 | 0.520548 | 0 | 0 | 0.119464 | 0.002348 | 0 | 0 | 0 | 0 | 0.184932 | 1 | 0.068493 | false | 0 | 0.013699 | 0 | 0.084475 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4820389d3e5123ae5cff498473a4108cdb3d45d4 | 105 | py | Python | cac/relaxed_sync/__init__.py | FujitsuLaboratories/CAC | d12df8e47f61eaf7d7b0ed355e2d1aa296453f86 | [
"Apache-2.0"
] | 8 | 2021-09-30T07:24:43.000Z | 2022-02-21T07:30:46.000Z | cac/relaxed_sync/__init__.py | FujitsuLaboratories/CAC | d12df8e47f61eaf7d7b0ed355e2d1aa296453f86 | [
"Apache-2.0"
] | null | null | null | cac/relaxed_sync/__init__.py | FujitsuLaboratories/CAC | d12df8e47f61eaf7d7b0ed355e2d1aa296453f86 | [
"Apache-2.0"
] | null | null | null | # TODO: Review and modify comments
"""
"""
from .relaxed_sync import RelaxedSyncDistributedDataParallel
| 17.5 | 60 | 0.790476 | 10 | 105 | 8.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12381 | 105 | 5 | 61 | 21 | 0.891304 | 0.304762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
483939f0010c4caa6bfc5b6ad2485e39c96f6934 | 61 | py | Python | utils.py | meetaime/keras-extensions | 95af56b499e6ac01ca187666ef7e1f0cf7a58d4a | [
"CC0-1.0"
] | null | null | null | utils.py | meetaime/keras-extensions | 95af56b499e6ac01ca187666ef7e1f0cf7a58d4a | [
"CC0-1.0"
] | null | null | null | utils.py | meetaime/keras-extensions | 95af56b499e6ac01ca187666ef7e1f0cf7a58d4a | [
"CC0-1.0"
] | null | null | null | import numpy as np
def sigmoid(x):
return 1/(1+np.exp(-x)) | 15.25 | 25 | 0.655738 | 13 | 61 | 3.076923 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.163934 | 61 | 4 | 25 | 15.25 | 0.745098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
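# Quick sanity values: sigmoid(0) == 0.5 and sigmoid(np.log(3)) ≈ 0.75,
# since 1 / (1 + exp(-ln 3)) = 1 / (1 + 1/3) = 3/4.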
484522e3a65a7e32b3ef740f66a5f03ac4b17eb1 | 2,538 | py | Python | tests/integration/genome/test_genome_assembly_metrics.py | JLSteenwyk/BioKIT | 9ca31d8003dc845bf56b2c56c87820c0b05021c4 | [
"MIT"
] | 8 | 2021-10-03T21:08:33.000Z | 2021-12-02T17:15:32.000Z | tests/integration/genome/test_genome_assembly_metrics.py | JLSteenwyk/BioKIT | 9ca31d8003dc845bf56b2c56c87820c0b05021c4 | [
"MIT"
] | null | null | null | tests/integration/genome/test_genome_assembly_metrics.py | JLSteenwyk/BioKIT | 9ca31d8003dc845bf56b2c56c87820c0b05021c4 | [
"MIT"
] | 5 | 2021-10-05T06:25:03.000Z | 2022-01-04T11:01:09.000Z | import pytest
import textwrap
from mock import patch, call
from pathlib import Path
import sys
from biokit.biokit import Biokit
here = Path(__file__)
@pytest.mark.integration
class TestGenomeAssemblyMetrics(object):
@patch("builtins.print")
def test_genome_assembly_metrics_invalid_input(self, mocked_print): # noqa
with pytest.raises(SystemExit) as pytest_wrapped_e:
Biokit()
assert pytest_wrapped_e.type == SystemExit
assert pytest_wrapped_e.value.code == 2
@pytest.mark.slow
@patch("builtins.print")
def test_genome_assembly_metrics_simple(self, mocked_print):
expected_result = textwrap.dedent(
f"\
12157105\tAssembly size\n\
6\tL50\n\
13\tL90\n\
924431\tN50\n\
439888\tN90\n\
0.3815\tGC content\n\
17\tNumber of scaffolds\n\
17\tNumber of large scaffolds\n\
12157105\tSum length of large scaffolds\n\
1531933\tLongest scaffold\n\
0.3098\tFrequency of A\n\
0.3087\tFrequency of T\n\
0.1909\tFrequency of C\n\
0.1906\tFrequency of G" # noqa
)
testargs = [
"biokit",
"genome_assembly_metrics",
f"{here.parent.parent.parent}/sample_files/GCF_000146045.2_R64_genomic.fna",
]
with patch.object(sys, "argv", testargs):
Biokit()
assert mocked_print.mock_calls == [call(expected_result)]
@pytest.mark.slow
@patch("builtins.print")
def test_genome_assembly_metrics_simple_alias(self, mocked_print):
expected_result = textwrap.dedent(
f"\
12157105\tAssembly size\n\
6\tL50\n\
13\tL90\n\
924431\tN50\n\
439888\tN90\n\
0.3815\tGC content\n\
17\tNumber of scaffolds\n\
17\tNumber of large scaffolds\n\
12157105\tSum length of large scaffolds\n\
1531933\tLongest scaffold\n\
0.3098\tFrequency of A\n\
0.3087\tFrequency of T\n\
0.1909\tFrequency of C\n\
0.1906\tFrequency of G" # noqa
)
testargs = [
"biokit",
"assembly_metrics",
f"{here.parent.parent.parent}/sample_files/GCF_000146045.2_R64_genomic.fna",
]
with patch.object(sys, "argv", testargs):
Biokit()
assert mocked_print.mock_calls == [call(expected_result)]
| 30.95122 | 88 | 0.589441 | 305 | 2,538 | 4.754098 | 0.308197 | 0.013793 | 0.057931 | 0.033103 | 0.772414 | 0.772414 | 0.772414 | 0.772414 | 0.74069 | 0.74069 | 0 | 0.100633 | 0.314815 | 2,538 | 81 | 89 | 31.333333 | 0.73318 | 0.005516 | 0 | 0.704225 | 0 | 0 | 0.097222 | 0.06627 | 0 | 0 | 0 | 0 | 0.056338 | 1 | 0.042254 | false | 0 | 0.084507 | 0 | 0.140845 | 0.112676 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
485b93888b21974875647c874e3b68242dc9326a | 137 | py | Python | Capitulo 1/11 - lower.py | mmmacedo/python | 2e7d99021342a5c7c31fe644ff194b6a8fa88a88 | [
"MIT"
] | null | null | null | Capitulo 1/11 - lower.py | mmmacedo/python | 2e7d99021342a5c7c31fe644ff194b6a8fa88a88 | [
"MIT"
] | null | null | null | Capitulo 1/11 - lower.py | mmmacedo/python | 2e7d99021342a5c7c31fe644ff194b6a8fa88a88 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
nome = "Daniel Moreno"
print nome, "em minúsculo:", nome.lower()
print nome, "em maiúsculo:", nome.upper() | 22.833333 | 42 | 0.613139 | 18 | 137 | 4.666667 | 0.666667 | 0.214286 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008929 | 0.182482 | 137 | 6 | 43 | 22.833333 | 0.741071 | 0.153285 | 0 | 0 | 0 | 0 | 0.354545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
485da1c5ba3aac9135c030c7b42074ef51dd8ea1 | 33 | py | Python | sfEQAA/__init__.py | komodo108/sfEQAA.py | aec88a8960fec946611514a83f020e875106b3c7 | [
"Apache-2.0"
] | null | null | null | sfEQAA/__init__.py | komodo108/sfEQAA.py | aec88a8960fec946611514a83f020e875106b3c7 | [
"Apache-2.0"
] | 2 | 2021-05-05T22:51:37.000Z | 2021-05-06T00:06:18.000Z | sfEQAA/__init__.py | komodo108/sfEQAA.py | aec88a8960fec946611514a83f020e875106b3c7 | [
"Apache-2.0"
] | null | null | null | from .convert import EQ_AA, AA_EQ | 33 | 33 | 0.818182 | 7 | 33 | 3.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ff69ce8a8559c0eac0e2b3df72c2a0c67c3efe7 | 25 | py | Python | kalliope/neurons/shell/__init__.py | joshuaboniface/kalliope | 0e040be3165e838485d1e5addc4d2c5df12bfd84 | [
"MIT"
] | 1 | 2020-03-30T15:03:19.000Z | 2020-03-30T15:03:19.000Z | kalliope/neurons/shell/__init__.py | joshuaboniface/kalliope | 0e040be3165e838485d1e5addc4d2c5df12bfd84 | [
"MIT"
] | 1 | 2020-06-08T23:32:48.000Z | 2020-06-08T23:32:48.000Z | kalliope/neurons/shell/__init__.py | joshuaboniface/kalliope | 0e040be3165e838485d1e5addc4d2c5df12bfd84 | [
"MIT"
] | 1 | 2021-11-21T19:08:15.000Z | 2021-11-21T19:08:15.000Z | from .shell import Shell
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b50613045bdc3a08affdd15261df02a1121740ba | 37 | py | Python | src/project/api/user/__init__.py | jSkrod/djangae-react-browser-games-app | 28c5064f0a126021afb08b195839305aba6b35a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | src/project/api/user/__init__.py | jSkrod/djangae-react-browser-games-app | 28c5064f0a126021afb08b195839305aba6b35a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | src/project/api/user/__init__.py | jSkrod/djangae-react-browser-games-app | 28c5064f0a126021afb08b195839305aba6b35a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | from project.api.user import signals
| 18.5 | 36 | 0.837838 | 6 | 37 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82f6e27076d50b11e9f0f4f685f6893ddc976e4d | 119 | py | Python | basetrainer/utils/__init__.py | PanJinquan/pytorch-base-trainer | 37799c948f72b2f9d3771ff469e06cdbff4a1d07 | [
"MIT"
] | 11 | 2022-01-18T10:07:52.000Z | 2022-03-16T02:40:31.000Z | basetrainer/utils/__init__.py | PanJinquan/pytorch-base-trainer | 37799c948f72b2f9d3771ff469e06cdbff4a1d07 | [
"MIT"
] | null | null | null | basetrainer/utils/__init__.py | PanJinquan/pytorch-base-trainer | 37799c948f72b2f9d3771ff469e06cdbff4a1d07 | [
"MIT"
] | 1 | 2022-01-26T06:31:29.000Z | 2022-01-26T06:31:29.000Z | # -*-coding: utf-8 -*-
"""
@Author : panjq
@E-mail : pan_jinquan@163.com
@Date : 2021-07-28 08:57:17
"""
| 17 | 33 | 0.512605 | 18 | 119 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202247 | 0.252101 | 119 | 6 | 34 | 19.833333 | 0.47191 | 0.815126 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2058e2db091fa0fb77e14398a08a64e9bdb3040 | 170 | py | Python | clifford/test/__init__.py | Dano-drevo/clifford | 3da06765b3c48d9840b5d55f082363ba6075a0d4 | [
"BSD-3-Clause"
] | 1 | 2019-11-11T13:07:29.000Z | 2019-11-11T13:07:29.000Z | clifford/test/__init__.py | Dano-drevo/clifford | 3da06765b3c48d9840b5d55f082363ba6075a0d4 | [
"BSD-3-Clause"
] | null | null | null | clifford/test/__init__.py | Dano-drevo/clifford | 3da06765b3c48d9840b5d55f082363ba6075a0d4 | [
"BSD-3-Clause"
] | null | null | null | import os
import pytest
def run_all_tests(*args):
""" Invoke pytest, forwarding options to pytest.main """
pytest.main([os.path.dirname(__file__)] + list(args))
| 24.285714 | 60 | 0.705882 | 24 | 170 | 4.75 | 0.708333 | 0.175439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152941 | 170 | 6 | 61 | 28.333333 | 0.791667 | 0.282353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
962855ef5863b7054bc0fcacc5786238d8cd996d | 55,737 | py | Python | heat_cfntools/tests/test_cfn_helper.py | openstack/heat-cfntools | ce3f97cecdf4e932cafdffae8f13d07756aed2db | [
"Apache-2.0"
] | 38 | 2015-01-29T20:10:40.000Z | 2021-12-07T15:17:22.000Z | heat_cfntools/tests/test_cfn_helper.py | openstack/heat-cfntools | ce3f97cecdf4e932cafdffae8f13d07756aed2db | [
"Apache-2.0"
] | null | null | null | heat_cfntools/tests/test_cfn_helper.py | openstack/heat-cfntools | ce3f97cecdf4e932cafdffae8f13d07756aed2db | [
"Apache-2.0"
] | 13 | 2015-01-06T08:49:57.000Z | 2018-11-17T22:14:38.000Z | #
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import os
import tempfile
from unittest import mock
import boto.cloudformation as cfn
import fixtures
import testtools
import testtools.matchers as ttm
from heat_cfntools.cfntools import cfn_helper
def popen_root_calls(calls, shell=False):
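    # Helper: build the mock.call objects expected for each subprocess.Popen invocation.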
kwargs = {'env': None, 'cwd': None, 'stderr': -1, 'stdout': -1,
'shell': shell}
return [
mock.call(call, **kwargs)
for call in calls
]
class FakePOpen(object):
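    # Minimal stand-in for subprocess.Popen; returns canned stdout/stderr/returncode.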
def __init__(self, stdout='', stderr='', returncode=0):
self.returncode = returncode
self.stdout = stdout
self.stderr = stderr
def communicate(self):
return (self.stdout, self.stderr)
def wait(self):
pass
@mock.patch.object(cfn_helper.pwd, 'getpwnam')
@mock.patch.object(cfn_helper.os, 'seteuid')
@mock.patch.object(cfn_helper.os, 'geteuid')
class TestCommandRunner(testtools.TestCase):
def test_command_runner(self, mock_geteuid, mock_seteuid, mock_getpwnam):
def returns(*args, **kwargs):
if args[0][0] == '/bin/command1':
return FakePOpen('All good')
elif args[0][0] == '/bin/command2':
return FakePOpen('Doing something', 'error', -1)
else:
raise Exception('This should never happen')
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
cmd2 = cfn_helper.CommandRunner(['/bin/command2'])
cmd1 = cfn_helper.CommandRunner(['/bin/command1'],
nextcommand=cmd2)
cmd1.run('root')
self.assertEqual(
'CommandRunner:\n\tcommand: [\'/bin/command1\']\n\tstdout: '
'All good',
str(cmd1))
self.assertEqual(
'CommandRunner:\n\tcommand: [\'/bin/command2\']\n\tstatus: '
'-1\n\tstdout: Doing something\n\tstderr: error',
str(cmd2))
calls = popen_root_calls([['/bin/command1'], ['/bin/command2']])
mock_popen.assert_has_calls(calls)
def test_privileges_are_lowered_for_non_root_user(self, mock_geteuid,
mock_seteuid,
mock_getpwnam):
pw_entry = mock.Mock()
pw_entry.pw_uid = 1001
mock_getpwnam.return_value = pw_entry
mock_geteuid.return_value = 0
calls = [mock.call(1001), mock.call(0)]
with mock.patch('subprocess.Popen') as mock_popen:
command = ['/bin/command', '--option=value', 'arg1', 'arg2']
cmd = cfn_helper.CommandRunner(command)
cmd.run(user='nonroot')
self.assertTrue(mock_geteuid.called)
mock_getpwnam.assert_called_once_with('nonroot')
mock_seteuid.assert_has_calls(calls)
self.assertTrue(mock_popen.called)
def test_run_returns_when_cannot_set_privileges(self, mock_geteuid,
mock_seteuid,
mock_getpwnam):
msg = '[Error 1] Permission Denied'
mock_seteuid.side_effect = Exception(msg)
with mock.patch('subprocess.Popen') as mock_popen:
command = ['/bin/command2']
cmd = cfn_helper.CommandRunner(command)
cmd.run(user='nonroot')
self.assertTrue(mock_getpwnam.called)
self.assertTrue(mock_seteuid.called)
self.assertFalse(mock_popen.called)
self.assertEqual(126, cmd.status)
self.assertEqual(msg, cmd.stderr)
def test_privileges_are_restored_for_command_failure(self, mock_geteuid,
mock_seteuid,
mock_getpwnam):
pw_entry = mock.Mock()
pw_entry.pw_uid = 1001
mock_getpwnam.return_value = pw_entry
mock_geteuid.return_value = 0
calls = [mock.call(1001), mock.call(0)]
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = ValueError('Something wrong')
command = ['/bin/command', '--option=value', 'arg1', 'arg2']
cmd = cfn_helper.CommandRunner(command)
self.assertRaises(ValueError, cmd.run, user='nonroot')
self.assertTrue(mock_geteuid.called)
mock_getpwnam.assert_called_once_with('nonroot')
mock_seteuid.assert_has_calls(calls)
self.assertTrue(mock_popen.called)
@mock.patch.object(cfn_helper, 'controlled_privileges')
class TestPackages(testtools.TestCase):
def test_yum_install(self, mock_cp):
def returns(*args, **kwargs):
if args[0][0] == 'rpm' and args[0][1] == '-q':
return FakePOpen(returncode=1)
else:
return FakePOpen(returncode=0)
calls = [['which', 'yum']]
for pack in ('httpd', 'wordpress', 'mysql-server'):
calls.append(['rpm', '-q', pack])
calls.append(['yum', '-y', '--showduplicates', 'list',
'available', pack])
calls = popen_root_calls(calls)
packages = {
"yum": {
"mysql-server": [],
"httpd": [],
"wordpress": []
}
}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
cfn_helper.PackagesHandler(packages).apply_packages()
mock_popen.assert_has_calls(calls, any_order=True)
def test_dnf_install_yum_unavailable(self, mock_cp):
def returns(*args, **kwargs):
if ((args[0][0] == 'rpm' and args[0][1] == '-q')
or (args[0][0] == 'which' and args[0][1] == 'yum')):
return FakePOpen(returncode=1)
else:
return FakePOpen(returncode=0)
calls = [['which', 'yum']]
for pack in ('httpd', 'wordpress', 'mysql-server'):
calls.append(['rpm', '-q', pack])
calls.append(['dnf', '-y', '--showduplicates', 'list',
'available', pack])
calls = popen_root_calls(calls)
packages = {
"yum": {
"mysql-server": [],
"httpd": [],
"wordpress": []
}
}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
cfn_helper.PackagesHandler(packages).apply_packages()
mock_popen.assert_has_calls(calls, any_order=True)
def test_dnf_install(self, mock_cp):
def returns(*args, **kwargs):
if args[0][0] == 'rpm' and args[0][1] == '-q':
return FakePOpen(returncode=1)
else:
return FakePOpen(returncode=0)
calls = []
for pack in ('httpd', 'wordpress', 'mysql-server'):
calls.append(['rpm', '-q', pack])
calls.append(['dnf', '-y', '--showduplicates', 'list',
'available', pack])
calls = popen_root_calls(calls)
packages = {
"dnf": {
"mysql-server": [],
"httpd": [],
"wordpress": []
}
}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
cfn_helper.PackagesHandler(packages).apply_packages()
mock_popen.assert_has_calls(calls, any_order=True)
def test_zypper_install(self, mock_cp):
def returns(*args, **kwargs):
if args[0][0].startswith('rpm') and args[0][1].startswith('-q'):
return FakePOpen(returncode=1)
else:
return FakePOpen(returncode=0)
calls = []
for pack in ('httpd', 'wordpress', 'mysql-server'):
calls.append(['rpm', '-q', pack])
calls.append(['zypper', '-n', '--no-refresh', 'search', pack])
calls = popen_root_calls(calls)
packages = {
"zypper": {
"mysql-server": [],
"httpd": [],
"wordpress": []
}
}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
cfn_helper.PackagesHandler(packages).apply_packages()
mock_popen.assert_has_calls(calls, any_order=True)
def test_apt_install(self, mock_cp):
packages = {
"apt": {
"mysql-server": [],
"httpd": [],
"wordpress": []
}
}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen(returncode=0)
cfn_helper.PackagesHandler(packages).apply_packages()
self.assertTrue(mock_popen.called)
@mock.patch.object(cfn_helper, 'controlled_privileges')
class TestServicesHandler(testtools.TestCase):
def test_services_handler_systemd(self, mock_cp):
calls = []
returns = []
# apply_services
calls.append(['/bin/systemctl', 'enable', 'httpd.service'])
returns.append(FakePOpen())
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/bin/systemctl', 'start', 'httpd.service'])
returns.append(FakePOpen())
calls.append(['/bin/systemctl', 'enable', 'mysqld.service'])
returns.append(FakePOpen())
calls.append(['/bin/systemctl', 'status', 'mysqld.service'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/bin/systemctl', 'start', 'mysqld.service'])
returns.append(FakePOpen())
# monitor_services not running
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/bin/systemctl', 'start', 'httpd.service'])
returns.append(FakePOpen())
calls = popen_root_calls(calls)
calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True))
returns.append(FakePOpen())
calls.extend(popen_root_calls([['/bin/systemctl', 'status',
'mysqld.service']]))
returns.append(FakePOpen(returncode=-1))
calls.extend(popen_root_calls([['/bin/systemctl', 'start',
'mysqld.service']]))
returns.append(FakePOpen())
calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True))
returns.append(FakePOpen())
# monitor_services running
calls.extend(popen_root_calls([['/bin/systemctl', 'status',
'httpd.service']]))
returns.append(FakePOpen())
calls.extend(popen_root_calls([['/bin/systemctl', 'status',
'mysqld.service']]))
returns.append(FakePOpen())
services = {
"systemd": {
"mysqld": {"enabled": "true", "ensureRunning": "true"},
"httpd": {"enabled": "true", "ensureRunning": "true"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = True
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
# services not running
sh.monitor_services()
# services running
sh.monitor_services()
mock_popen.assert_has_calls(calls, any_order=True)
mock_exists.assert_called_with('/bin/systemctl')
def test_services_handler_systemd_disabled(self, mock_cp):
calls = []
# apply_services
calls.append(['/bin/systemctl', 'disable', 'httpd.service'])
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
calls.append(['/bin/systemctl', 'stop', 'httpd.service'])
calls.append(['/bin/systemctl', 'disable', 'mysqld.service'])
calls.append(['/bin/systemctl', 'status', 'mysqld.service'])
calls.append(['/bin/systemctl', 'stop', 'mysqld.service'])
calls = popen_root_calls(calls)
services = {
"systemd": {
"mysqld": {"enabled": "false", "ensureRunning": "false"},
"httpd": {"enabled": "false", "ensureRunning": "false"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = True
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen()
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
mock_popen.assert_has_calls(calls, any_order=True)
mock_exists.assert_called_with('/bin/systemctl')
def test_services_handler_sysv_service_chkconfig(self, mock_cp):
def exists(*args, **kwargs):
return args[0] != '/bin/systemctl'
calls = []
returns = []
# apply_services
calls.append(['/sbin/chkconfig', 'httpd', 'on'])
returns.append(FakePOpen())
calls.append(['/sbin/service', 'httpd', 'status'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/sbin/service', 'httpd', 'start'])
returns.append(FakePOpen())
# monitor_services not running
calls.append(['/sbin/service', 'httpd', 'status'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/sbin/service', 'httpd', 'start'])
returns.append(FakePOpen())
calls = popen_root_calls(calls)
calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True))
returns.append(FakePOpen())
# monitor_services running
calls.extend(popen_root_calls([['/sbin/service', 'httpd', 'status']]))
returns.append(FakePOpen())
services = {
"sysvinit": {
"httpd": {"enabled": "true", "ensureRunning": "true"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.side_effect = exists
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
# services not running
sh.monitor_services()
# services running
sh.monitor_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_any_call('/bin/systemctl')
mock_exists.assert_any_call('/sbin/service')
mock_exists.assert_any_call('/sbin/chkconfig')
def test_services_handler_sysv_disabled_service_chkconfig(self, mock_cp):
def exists(*args, **kwargs):
return args[0] != '/bin/systemctl'
calls = []
# apply_services
calls.append(['/sbin/chkconfig', 'httpd', 'off'])
calls.append(['/sbin/service', 'httpd', 'status'])
calls.append(['/sbin/service', 'httpd', 'stop'])
calls = popen_root_calls(calls)
services = {
"sysvinit": {
"httpd": {"enabled": "false", "ensureRunning": "false"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.side_effect = exists
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen()
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_any_call('/bin/systemctl')
mock_exists.assert_any_call('/sbin/service')
mock_exists.assert_any_call('/sbin/chkconfig')
def test_services_handler_sysv_systemctl(self, mock_cp):
calls = []
returns = []
# apply_services
calls.append(['/bin/systemctl', 'enable', 'httpd.service'])
returns.append(FakePOpen())
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/bin/systemctl', 'start', 'httpd.service'])
returns.append(FakePOpen())
# monitor_services not running
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/bin/systemctl', 'start', 'httpd.service'])
returns.append(FakePOpen())
shell_calls = []
shell_calls.append('/bin/services_restarted')
returns.append(FakePOpen())
calls = popen_root_calls(calls)
calls.extend(popen_root_calls(shell_calls, shell=True))
# monitor_services running
calls.extend(popen_root_calls([['/bin/systemctl', 'status',
'httpd.service']]))
returns.append(FakePOpen())
services = {
"sysvinit": {
"httpd": {"enabled": "true", "ensureRunning": "true"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = True
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
# services not running
sh.monitor_services()
# services running
sh.monitor_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_called_with('/bin/systemctl')
def test_services_handler_sysv_disabled_systemctl(self, mock_cp):
calls = []
# apply_services
calls.append(['/bin/systemctl', 'disable', 'httpd.service'])
calls.append(['/bin/systemctl', 'status', 'httpd.service'])
calls.append(['/bin/systemctl', 'stop', 'httpd.service'])
calls = popen_root_calls(calls)
services = {
"sysvinit": {
"httpd": {"enabled": "false", "ensureRunning": "false"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = True
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen()
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_called_with('/bin/systemctl')
def test_services_handler_sysv_service_updaterc(self, mock_cp):
calls = []
returns = []
# apply_services
calls.append(['/usr/sbin/update-rc.d', 'httpd', 'enable'])
returns.append(FakePOpen())
calls.append(['/usr/sbin/service', 'httpd', 'status'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/usr/sbin/service', 'httpd', 'start'])
returns.append(FakePOpen())
# monitor_services not running
calls.append(['/usr/sbin/service', 'httpd', 'status'])
returns.append(FakePOpen(returncode=-1))
calls.append(['/usr/sbin/service', 'httpd', 'start'])
returns.append(FakePOpen())
shell_calls = []
shell_calls.append('/bin/services_restarted')
returns.append(FakePOpen())
calls = popen_root_calls(calls)
calls.extend(popen_root_calls(shell_calls, shell=True))
# monitor_services running
calls.extend(popen_root_calls([['/usr/sbin/service', 'httpd',
'status']]))
returns.append(FakePOpen())
services = {
"sysvinit": {
"httpd": {"enabled": "true", "ensureRunning": "true"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = False
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
# services not running
sh.monitor_services()
# services running
sh.monitor_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_any_call('/bin/systemctl')
mock_exists.assert_any_call('/sbin/service')
mock_exists.assert_any_call('/sbin/chkconfig')
def test_services_handler_sysv_disabled_service_updaterc(self, mock_cp):
calls = []
returns = []
# apply_services
calls.append(['/usr/sbin/update-rc.d', 'httpd', 'disable'])
returns.append(FakePOpen())
calls.append(['/usr/sbin/service', 'httpd', 'status'])
returns.append(FakePOpen())
calls.append(['/usr/sbin/service', 'httpd', 'stop'])
returns.append(FakePOpen())
calls = popen_root_calls(calls)
services = {
"sysvinit": {
"httpd": {"enabled": "false", "ensureRunning": "false"}
}
}
hooks = [
cfn_helper.Hook(
'hook1',
'service.restarted',
'Resources.resource1.Metadata',
'root',
'/bin/services_restarted')
]
with mock.patch('os.path.exists') as mock_exists:
mock_exists.return_value = False
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
sh = cfn_helper.ServicesHandler(services, 'resource1', hooks)
sh.apply_services()
mock_popen.assert_has_calls(calls)
mock_exists.assert_any_call('/bin/systemctl')
mock_exists.assert_any_call('/sbin/service')
mock_exists.assert_any_call('/sbin/chkconfig')
class TestHupConfig(testtools.TestCase):
def test_load_main_section(self):
fcreds = tempfile.NamedTemporaryFile()
fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8'))
fcreds.flush()
main_conf = tempfile.NamedTemporaryFile()
main_conf.write(('''[main]
stack=teststack
credential-file=%s''' % fcreds.name).encode('UTF-8'))
main_conf.flush()
mainconfig = cfn_helper.HupConfig([open(main_conf.name)])
self.assertEqual(
'{stack: teststack, credential_file: %s, '
'region: nova, interval:10}' % fcreds.name,
str(mainconfig))
main_conf.close()
main_conf = tempfile.NamedTemporaryFile()
main_conf.write(('''[main]
stack=teststack
region=region1
credential-file=%s-invalid
interval=120''' % fcreds.name).encode('UTF-8'))
main_conf.flush()
e = self.assertRaises(cfn_helper.InvalidCredentialsException,
cfn_helper.HupConfig,
[open(main_conf.name)])
self.assertIn('invalid credentials file', str(e))
fcreds.close()
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_hup_config(self, mock_cp):
hooks_conf = tempfile.NamedTemporaryFile()
def write_hook_conf(f, name, triggers, path, action):
f.write((
'[%s]\ntriggers=%s\npath=%s\naction=%s\nrunas=root\n\n' % (
name, triggers, path, action)).encode('UTF-8'))
write_hook_conf(
hooks_conf,
'hook2',
'service2.restarted',
'Resources.resource2.Metadata',
'/bin/hook2')
write_hook_conf(
hooks_conf,
'hook1',
'service1.restarted',
'Resources.resource1.Metadata',
'/bin/hook1')
write_hook_conf(
hooks_conf,
'hook3',
'service3.restarted',
'Resources.resource3.Metadata',
'/bin/hook3')
write_hook_conf(
hooks_conf,
'cfn-http-restarted',
'service.restarted',
'Resources.resource.Metadata',
'/bin/cfn-http-restarted')
hooks_conf.flush()
fcreds = tempfile.NamedTemporaryFile()
fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8'))
fcreds.flush()
main_conf = tempfile.NamedTemporaryFile()
main_conf.write(('''[main]
stack=teststack
credential-file=%s
region=region1
interval=120''' % fcreds.name).encode('UTF-8'))
main_conf.flush()
mainconfig = cfn_helper.HupConfig([
open(main_conf.name),
open(hooks_conf.name)])
unique_resources = mainconfig.unique_resources_get()
self.assertThat([
'resource',
'resource1',
'resource2',
'resource3',
], ttm.Equals(sorted(unique_resources)))
hooks = sorted(mainconfig.hooks,
key=lambda hook: hook.resource_name_get())
self.assertEqual(len(hooks), 4)
self.assertEqual(
'{cfn-http-restarted, service.restarted,'
' Resources.resource.Metadata, root, /bin/cfn-http-restarted}',
str(hooks[0]))
self.assertEqual(
'{hook1, service1.restarted, Resources.resource1.Metadata,'
' root, /bin/hook1}', str(hooks[1]))
self.assertEqual(
'{hook2, service2.restarted, Resources.resource2.Metadata,'
' root, /bin/hook2}', str(hooks[2]))
self.assertEqual(
'{hook3, service3.restarted, Resources.resource3.Metadata,'
' root, /bin/hook3}', str(hooks[3]))
calls = []
calls.extend(popen_root_calls(['/bin/cfn-http-restarted'], shell=True))
calls.extend(popen_root_calls(['/bin/hook1'], shell=True))
calls.extend(popen_root_calls(['/bin/hook2'], shell=True))
calls.extend(popen_root_calls(['/bin/hook3'], shell=True))
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('All good')
for hook in hooks:
hook.event(hook.triggers, None, hook.resource_name_get())
hooks_conf.close()
fcreds.close()
main_conf.close()
mock_popen.assert_has_calls(calls)
class TestCfnHelper(testtools.TestCase):
def _check_metadata_content(self, content, value):
with tempfile.NamedTemporaryFile() as metadata_info:
metadata_info.write(content.encode('UTF-8'))
metadata_info.flush()
port = cfn_helper.metadata_server_port(metadata_info.name)
self.assertEqual(value, port)
def test_metadata_server_port(self):
self._check_metadata_content("http://172.20.42.42:8000\n", 8000)
def test_metadata_server_port_https(self):
self._check_metadata_content("https://abc.foo.bar:6969\n", 6969)
def test_metadata_server_port_noport(self):
self._check_metadata_content("http://172.20.42.42\n", None)
def test_metadata_server_port_justip(self):
self._check_metadata_content("172.20.42.42", None)
def test_metadata_server_port_weird(self):
self._check_metadata_content("::::", None)
self._check_metadata_content("beforecolons:aftercolons", None)
def test_metadata_server_port_emptyfile(self):
self._check_metadata_content("\n", None)
self._check_metadata_content("", None)
def test_metadata_server_nofile(self):
random_filename = self.getUniqueString()
self.assertIsNone(cfn_helper.metadata_server_port(random_filename))
def test_to_boolean(self):
self.assertTrue(cfn_helper.to_boolean(True))
self.assertTrue(cfn_helper.to_boolean('true'))
self.assertTrue(cfn_helper.to_boolean('yes'))
self.assertTrue(cfn_helper.to_boolean('1'))
self.assertTrue(cfn_helper.to_boolean(1))
self.assertFalse(cfn_helper.to_boolean(False))
self.assertFalse(cfn_helper.to_boolean('false'))
self.assertFalse(cfn_helper.to_boolean('no'))
self.assertFalse(cfn_helper.to_boolean('0'))
self.assertFalse(cfn_helper.to_boolean(0))
self.assertFalse(cfn_helper.to_boolean(None))
self.assertFalse(cfn_helper.to_boolean('fingle'))
def test_parse_creds_file(self):
def parse_creds_test(file_contents, creds_match):
with tempfile.NamedTemporaryFile(mode='w') as fcreds:
fcreds.write(file_contents)
fcreds.flush()
creds = cfn_helper.parse_creds_file(fcreds.name)
self.assertThat(creds_match, ttm.Equals(creds))
parse_creds_test(
'AWSAccessKeyId=foo\nAWSSecretKey=bar\n',
{'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'}
)
parse_creds_test(
'AWSAccessKeyId =foo\nAWSSecretKey= bar\n',
{'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'}
)
parse_creds_test(
'AWSAccessKeyId = foo\nAWSSecretKey = bar\n',
{'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'}
)
class TestMetadataRetrieve(testtools.TestCase):
def setUp(self):
super(TestMetadataRetrieve, self).setUp()
self.tdir = self.useFixture(fixtures.TempDir())
self.last_file = os.path.join(self.tdir.path, 'last_metadata')
def test_metadata_retrieve_files(self):
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
with tempfile.NamedTemporaryFile(mode='w+') as default_file:
default_file.write(md_str)
default_file.flush()
self.assertThat(default_file.name, ttm.FileContains(md_str))
self.assertTrue(
md.retrieve(default_path=default_file.name,
last_path=self.last_file))
self.assertThat(self.last_file, ttm.FileContains(md_str))
self.assertThat(md_data, ttm.Equals(md._metadata))
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(default_path=default_file.name,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
def test_metadata_retrieve_none(self):
md = cfn_helper.Metadata('teststack', None)
default_file = os.path.join(self.tdir.path, 'default_file')
self.assertFalse(md.retrieve(default_path=default_file,
last_path=self.last_file))
self.assertIsNone(md._metadata)
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display()
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "")
def test_metadata_retrieve_passed(self):
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display()
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(),
"{\"AWS::CloudFormation::Init\": {\"config\": {"
"\"files\": {\"/tmp/foo\": {\"content\": \"bar\"}"
"}}}}\n")
def test_metadata_retrieve_by_key_passed(self):
md_data = {"foo": {"bar": {"fred.1": "abcd"}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display("foo")
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(),
"{\"bar\": {\"fred.1\": \"abcd\"}}\n")
def test_metadata_retrieve_by_nested_key_passed(self):
md_data = {"foo": {"bar": {"fred.1": "abcd"}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display("foo.bar.'fred.1'")
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(),
'"abcd"\n')
def test_metadata_retrieve_key_none(self):
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display("no_key")
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "")
def test_metadata_retrieve_by_nested_key_none(self):
md_data = {"foo": {"bar": {"fred.1": "abcd"}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display("foo.fred")
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "")
def test_metadata_retrieve_by_nested_key_none_with_matching_string(self):
md_data = {"foo": "bar"}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(md.retrieve(meta_str=md_data,
last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertEqual(md_str, str(md))
displayed = self.useFixture(fixtures.StringStream('stdout'))
fake_stdout = displayed.stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout))
md.display("foo.bar")
fake_stdout.flush()
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "")
def test_metadata_creates_cache(self):
temp_home = tempfile.mkdtemp()
def cleanup_temp_home(thome):
os.unlink(os.path.join(thome, 'cache', 'last_metadata'))
os.rmdir(os.path.join(thome, 'cache'))
os.rmdir(os.path.join(thome))
self.addCleanup(cleanup_temp_home, temp_home)
last_path = os.path.join(temp_home, 'cache', 'last_metadata')
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
md_str = json.dumps(md_data)
md = cfn_helper.Metadata('teststack', None)
self.assertFalse(os.path.exists(last_path),
"last_metadata file already exists")
self.assertTrue(md.retrieve(meta_str=md_str, last_path=last_path))
self.assertTrue(os.path.exists(last_path),
"last_metadata file should exist")
# Ensure created dirs and file have right perms
self.assertTrue(os.stat(last_path).st_mode & 0o600 == 0o600)
self.assertTrue(
os.stat(os.path.dirname(last_path)).st_mode & 0o700 == 0o700)
def test_is_valid_metadata(self):
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(
md.retrieve(meta_str=md_data, last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
self.assertTrue(md._is_valid_metadata())
self.assertThat(
md_data['AWS::CloudFormation::Init'], ttm.Equals(md._metadata))
def test_remote_metadata(self):
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
"/tmp/foo": {"content": "bar"}}}}}
with mock.patch.object(
cfn.CloudFormationConnection, 'describe_stack_resource'
) as mock_dsr:
mock_dsr.return_value = {
'DescribeStackResourceResponse': {
'DescribeStackResourceResult': {
'StackResourceDetail': {'Metadata': md_data}}}}
md = cfn_helper.Metadata(
'teststack',
None,
access_key='foo',
secret_key='bar')
self.assertTrue(md.retrieve(last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
with tempfile.NamedTemporaryFile(mode='w') as fcreds:
fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n')
fcreds.flush()
md = cfn_helper.Metadata(
'teststack', None, credentials_file=fcreds.name)
self.assertTrue(md.retrieve(last_path=self.last_file))
self.assertThat(md_data, ttm.Equals(md._metadata))
def test_nova_meta_with_cache(self):
meta_in = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f",
"availability_zone": "nova",
"hostname": "as-wikidatabase-4ykioj3lgi57.novalocal",
"launch_index": 0,
"meta": {},
"public_keys": {"heat_key": "ssh-rsa etc...\n"},
"name": "as-WikiDatabase-4ykioj3lgi57"}
md_str = json.dumps(meta_in)
md = cfn_helper.Metadata('teststack', None)
with tempfile.NamedTemporaryFile(mode='w+') as default_file:
default_file.write(md_str)
default_file.flush()
self.assertThat(default_file.name, ttm.FileContains(md_str))
meta_out = md.get_nova_meta(cache_path=default_file.name)
self.assertEqual(meta_in, meta_out)
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_nova_meta_curl(self, mock_cp):
url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json'
temp_home = tempfile.mkdtemp()
cache_path = os.path.join(temp_home, 'meta_data.json')
def cleanup_temp_home(thome):
os.unlink(cache_path)
os.rmdir(thome)
self.addCleanup(cleanup_temp_home, temp_home)
meta_in = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f",
"availability_zone": "nova",
"hostname": "as-wikidatabase-4ykioj3lgi57.novalocal",
"launch_index": 0,
"meta": {"freddy": "is hungry"},
"public_keys": {"heat_key": "ssh-rsa etc...\n"},
"name": "as-WikiDatabase-4ykioj3lgi57"}
md_str = json.dumps(meta_in)
def write_cache_file(*params, **kwargs):
with open(cache_path, 'w+') as cache_file:
cache_file.write(md_str)
cache_file.flush()
self.assertThat(cache_file.name, ttm.FileContains(md_str))
return FakePOpen('Downloaded', '', 0)
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = write_cache_file
md = cfn_helper.Metadata('teststack', None)
meta_out = md.get_nova_meta(cache_path=cache_path)
self.assertEqual(meta_in, meta_out)
mock_popen.assert_has_calls(
popen_root_calls([['curl', '-o', cache_path, url]]))
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_nova_meta_curl_corrupt(self, mock_cp):
url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json'
temp_home = tempfile.mkdtemp()
cache_path = os.path.join(temp_home, 'meta_data.json')
def cleanup_temp_home(thome):
os.unlink(cache_path)
os.rmdir(thome)
self.addCleanup(cleanup_temp_home, temp_home)
md_str = "this { is not really json"
def write_cache_file(*params, **kwargs):
with open(cache_path, 'w+') as cache_file:
cache_file.write(md_str)
cache_file.flush()
self.assertThat(cache_file.name, ttm.FileContains(md_str))
return FakePOpen('Downloaded', '', 0)
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = write_cache_file
md = cfn_helper.Metadata('teststack', None)
meta_out = md.get_nova_meta(cache_path=cache_path)
self.assertIsNone(meta_out)
mock_popen.assert_has_calls(
popen_root_calls([['curl', '-o', cache_path, url]]))
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_nova_meta_curl_failed(self, mock_cp):
url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json'
temp_home = tempfile.mkdtemp()
cache_path = os.path.join(temp_home, 'meta_data.json')
def cleanup_temp_home(thome):
os.rmdir(thome)
self.addCleanup(cleanup_temp_home, temp_home)
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('Failed', '', 1)
md = cfn_helper.Metadata('teststack', None)
meta_out = md.get_nova_meta(cache_path=cache_path)
self.assertIsNone(meta_out)
mock_popen.assert_has_calls(
popen_root_calls([['curl', '-o', cache_path, url]]))
def test_get_tags(self):
fake_tags = {'foo': 'fee',
'apple': 'red'}
md_data = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f",
"availability_zone": "nova",
"hostname": "as-wikidatabase-4ykioj3lgi57.novalocal",
"launch_index": 0,
"meta": fake_tags,
"public_keys": {"heat_key": "ssh-rsa etc...\n"},
"name": "as-WikiDatabase-4ykioj3lgi57"}
tags_expect = fake_tags
tags_expect['InstanceId'] = md_data['uuid']
md = cfn_helper.Metadata('teststack', None)
with mock.patch.object(md, 'get_nova_meta') as mock_method:
mock_method.return_value = md_data
tags = md.get_tags()
mock_method.assert_called_once_with()
self.assertEqual(tags_expect, tags)
def test_get_instance_id(self):
uuid = "f9431d18-d971-434d-9044-5b38f5b4646f"
md_data = {"uuid": uuid,
"availability_zone": "nova",
"hostname": "as-wikidatabase-4ykioj3lgi57.novalocal",
"launch_index": 0,
"public_keys": {"heat_key": "ssh-rsa etc...\n"},
"name": "as-WikiDatabase-4ykioj3lgi57"}
md = cfn_helper.Metadata('teststack', None)
with mock.patch.object(md, 'get_nova_meta') as mock_method:
mock_method.return_value = md_data
self.assertEqual(md.get_instance_id(), uuid)
mock_method.assert_called_once_with()
class TestCfnInit(testtools.TestCase):
def setUp(self):
super(TestCfnInit, self).setUp()
self.tdir = self.useFixture(fixtures.TempDir())
self.last_file = os.path.join(self.tdir.path, 'last_metadata')
def test_cfn_init(self):
with tempfile.NamedTemporaryFile(mode='w+') as foo_file:
md_data = {"AWS::CloudFormation::Init": {"config": {"files": {
foo_file.name: {"content": "bar"}}}}}
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(
md.retrieve(meta_str=md_data, last_path=self.last_file))
md.cfn_init()
self.assertThat(foo_file.name, ttm.FileContains('bar'))
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_cfn_init_with_ignore_errors_false(self, mock_cp):
md_data = {"AWS::CloudFormation::Init": {"config": {"commands": {
"00_foo": {"command": "/bin/command1",
"ignoreErrors": "false"}}}}}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('Doing something', 'error', -1)
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(
md.retrieve(meta_str=md_data, last_path=self.last_file))
self.assertRaises(cfn_helper.CommandsHandlerRunError, md.cfn_init)
mock_popen.assert_has_calls(popen_root_calls(['/bin/command1'],
shell=True))
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_cfn_init_with_ignore_errors_true(self, mock_cp):
calls = []
returns = []
calls.extend(popen_root_calls(['/bin/command1'], shell=True))
returns.append(FakePOpen('Doing something', 'error', -1))
calls.extend(popen_root_calls(['/bin/command2'], shell=True))
returns.append(FakePOpen('All good'))
md_data = {"AWS::CloudFormation::Init": {"config": {"commands": {
"00_foo": {"command": "/bin/command1",
"ignoreErrors": "true"},
"01_bar": {"command": "/bin/command2",
"ignoreErrors": "false"}
}}}}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(
md.retrieve(meta_str=md_data, last_path=self.last_file))
md.cfn_init()
mock_popen.assert_has_calls(calls)
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_cfn_init_runs_list_commands_without_shell(self, mock_cp):
calls = []
returns = []
# command supplied as list shouldn't run on shell
calls.extend(popen_root_calls([['/bin/command1', 'arg']], shell=False))
returns.append(FakePOpen('Doing something'))
# command supplied as string should run on shell
calls.extend(popen_root_calls(['/bin/command2'], shell=True))
returns.append(FakePOpen('All good'))
md_data = {"AWS::CloudFormation::Init": {"config": {"commands": {
"00_foo": {"command": ["/bin/command1", "arg"]},
"01_bar": {"command": "/bin/command2"}
}}}}
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.side_effect = returns
md = cfn_helper.Metadata('teststack', None)
self.assertTrue(
md.retrieve(meta_str=md_data, last_path=self.last_file))
md.cfn_init()
mock_popen.assert_has_calls(calls)
class TestSourcesHandler(testtools.TestCase):
def test_apply_sources_empty(self):
sh = cfn_helper.SourcesHandler({})
sh.apply_sources()
def _test_apply_sources(self, url, end_file):
dest = tempfile.mkdtemp()
self.addCleanup(os.rmdir, dest)
sources = {dest: url}
td = os.path.dirname(end_file)
er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -"
calls = popen_root_calls([er % (dest, dest, url)], shell=True)
with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp:
mock_mkdtemp.return_value = td
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('Curl good')
sh = cfn_helper.SourcesHandler(sources)
sh.apply_sources()
mock_popen.assert_has_calls(calls)
mock_mkdtemp.assert_called_with()
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_apply_sources_github(self, mock_cp):
url = "https://github.com/NoSuchProject/tarball/NoSuchTarball"
dest = tempfile.mkdtemp()
self.addCleanup(os.rmdir, dest)
sources = {dest: url}
er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -"
calls = popen_root_calls([er % (dest, dest, url)], shell=True)
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('Curl good')
sh = cfn_helper.SourcesHandler(sources)
sh.apply_sources()
mock_popen.assert_has_calls(calls)
@mock.patch.object(cfn_helper, 'controlled_privileges')
def test_apply_sources_general(self, mock_cp):
url = "https://website.no.existe/a/b/c/file.tar.gz"
dest = tempfile.mkdtemp()
self.addCleanup(os.rmdir, dest)
sources = {dest: url}
er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -"
calls = popen_root_calls([er % (dest, dest, url)], shell=True)
with mock.patch('subprocess.Popen') as mock_popen:
mock_popen.return_value = FakePOpen('Curl good')
sh = cfn_helper.SourcesHandler(sources)
sh.apply_sources()
mock_popen.assert_has_calls(calls)
def test_apply_source_cmd(self):
sh = cfn_helper.SourcesHandler({})
er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s | tar -xvf -"
dest = '/tmp'
# test tgz
url = 'http://www.example.com/a.tgz'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "gunzip"), cmd)
# test tar.gz
url = 'http://www.example.com/a.tar.gz'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "gunzip"), cmd)
# test github - tarball 1
url = 'https://github.com/openstack/heat-cfntools/tarball/master'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "gunzip"), cmd)
# test github - tarball 2
url = 'https://github.com/openstack/heat-cfntools/tarball/master/'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "gunzip"), cmd)
# test tbz2
url = 'http://www.example.com/a.tbz2'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd)
# test tar.bz2
url = 'http://www.example.com/a.tar.bz2'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd)
# test zip
er = "mkdir -p '%s'; cd '%s'; curl -s -o '%s' '%s' && unzip -o '%s'"
url = 'http://www.example.com/a.zip'
d = "/tmp/tmp2I0yNK"
tmp = "%s/a.zip" % d
with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp:
mock_mkdtemp.return_value = d
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, tmp, url, tmp), cmd)
# test gz
er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s > '%s'"
url = 'http://www.example.com/a.sh.gz'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "gunzip", "a.sh"), cmd)
# test bz2
url = 'http://www.example.com/a.sh.bz2'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual(er % (dest, dest, url, "bunzip2", "a.sh"), cmd)
# test other
url = 'http://www.example.com/a.sh'
cmd = sh._apply_source_cmd(dest, url)
self.assertEqual("", cmd)
mock_mkdtemp.assert_called_with()
| 39.30677 | 79 | 0.578467 | 6,015 | 55,737 | 5.151288 | 0.084954 | 0.025851 | 0.029821 | 0.020042 | 0.830595 | 0.795901 | 0.754462 | 0.727739 | 0.703857 | 0.691464 | 0 | 0.011982 | 0.288767 | 55,737 | 1,417 | 80 | 39.33451 | 0.769638 | 0.024346 | 0 | 0.671353 | 0 | 0.008787 | 0.178015 | 0.039778 | 0 | 0 | 0 | 0 | 0.136204 | 1 | 0.066784 | false | 0.003515 | 0.007909 | 0.002636 | 0.096661 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
966a780ae8ab7bc07530140e071455dacca94155 | 361 | py | Python | crowdsource/tests/test_all.py | timkpaine/crowdsource | fa6acd9572812876421229ce3543db0606ceb445 | [
"Apache-2.0"
] | null | null | null | crowdsource/tests/test_all.py | timkpaine/crowdsource | fa6acd9572812876421229ce3543db0606ceb445 | [
"Apache-2.0"
] | 46 | 2017-09-30T04:01:00.000Z | 2021-12-12T20:26:10.000Z | crowdsource/tests/test_all.py | timkpaine/crowdsource | fa6acd9572812876421229ce3543db0606ceb445 | [
"Apache-2.0"
] | 1 | 2019-11-12T00:53:31.000Z | 2019-11-12T00:53:31.000Z | # Import every public module so coverage measurement touches the whole package
from crowdsource.client import *
from crowdsource.client.samples import *
from crowdsource.client.samples_mixin import *
from crowdsource.handlers import *
from crowdsource.persistence import *
from crowdsource.persistence.models import *
from crowdsource.enums import *
from crowdsource.exceptions import *
from crowdsource.server import *
| 32.818182 | 46 | 0.831025 | 42 | 361 | 7.119048 | 0.333333 | 0.451505 | 0.561873 | 0.180602 | 0.227425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108033 | 361 | 10 | 47 | 36.1 | 0.928571 | 0.047091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96765c12a9749f2b243964eb1d07e072ed85b3ca | 137 | py | Python | oscarapi/views/mixin.py | dhaker-ramnivas/ecomAPI | 5b029974d16aad4f4b7955d52e190169590136a9 | [
"BSD-3-Clause"
] | 3 | 2020-03-30T13:11:57.000Z | 2020-04-22T13:55:31.000Z | oscarapi/views/mixin.py | dhaker-ramnivas/ecomAPI | 5b029974d16aad4f4b7955d52e190169590136a9 | [
"BSD-3-Clause"
] | 9 | 2020-10-29T08:03:28.000Z | 2021-09-08T01:21:10.000Z | oscarapi/views/mixin.py | dhaker-ramnivas/ecomAPI | 5b029974d16aad4f4b7955d52e190169590136a9 | [
"BSD-3-Clause"
] | 2 | 2021-01-06T19:25:07.000Z | 2021-05-14T02:00:19.000Z | class PutIsPatchMixin(object):
def put(self, request, *args, **kwargs):
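        # Treat PUT as a partial update (PATCH semantics); assumes a DRF-style partial_update.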
return self.partial_update(request, *args, **kwargs)
| 34.25 | 60 | 0.686131 | 16 | 137 | 5.8125 | 0.75 | 0.236559 | 0.365591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167883 | 137 | 3 | 61 | 45.666667 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
967b18fd328ee91377f911d60eab0cf56b2e282f | 4,708 | py | Python | tests/test_cli.py | sz3/pog | e7d53a425f29c61e243a500d12f6830c9c69461b | [
"MIT"
] | 3 | 2020-02-22T21:58:04.000Z | 2021-07-18T04:23:09.000Z | tests/test_cli.py | sz3/pog | e7d53a425f29c61e243a500d12f6830c9c69461b | [
"MIT"
] | null | null | null | tests/test_cli.py | sz3/pog | e7d53a425f29c61e243a500d12f6830c9c69461b | [
"MIT"
] | null | null | null | from os import environ
from subprocess import PIPE
from unittest import TestCase
from unittest.mock import patch
from .helpers import TestDirMixin
from pog.cli import PogCli, POG_ROOT
class PogCliTest(TestDirMixin, TestCase):
    # test for the PogCli class
    def test_keyfiles(self):
        cli = PogCli()

        # pick decryption file
        for i in range(3):
            cli.set_keyfiles('foo.encrypt', 'foo.decrypt', 'other.keyfile')
            self.assertEqual(cli.config.get('decryption-keyfile'), 'foo.decrypt')
            self.assertEqual(cli.config.get('encryption-keyfile'), 'foo.encrypt')
            self.assertEqual(cli.config.get('keyfile'), None)

            cli.set_keyfiles('foo.encrypt', 'other.keyfile')
            self.assertEqual(cli.config.get('decryption-keyfile'), None)
            self.assertEqual(cli.config.get('encryption-keyfile'), 'foo.encrypt')
            self.assertEqual(cli.config.get('keyfile'), None)

            cli.set_keyfiles('other.keyfile', 'second.keyfile')
            self.assertEqual(cli.config.get('decryption-keyfile'), None)
            self.assertEqual(cli.config.get('encryption-keyfile'), None)
            self.assertEqual(cli.config.get('keyfile'), 'other.keyfile')

            # clear
            cli.set_keyfiles()
            self.assertEqual(cli.config.get('decryption-keyfile'), None)
            self.assertEqual(cli.config.get('encryption-keyfile'), None)
            self.assertEqual(cli.config.get('keyfile'), None)

    @patch('pog.cli.Popen', autospec=True)
    def test_dump_manifest(self, mock_run):
        mock_run.return_value = mock_run
        mock_run.__enter__.return_value = mock_run
        mock_run.stdout = [
            b'* 1.txt:\n',
            b'abcdef12345\n',
            b'fghjkl34567\n',
        ]

        cli = PogCli()
        cli.set_keyfiles('foo.decrypt')
        res = cli.dumpManifest('my.mfn')
        self.assertEqual(res, {'1.txt': ['abcdef12345', 'fghjkl34567']})

        env = dict(environ)
        env['PYTHONPATH'] = POG_ROOT
        mock_run.assert_called_once_with(
            ['python', '-u', '-m', 'pog.pog', '--dump-manifest', 'my.mfn', '--decryption-keyfile=foo.decrypt'],
            env=env, stdout=PIPE,
        )

    @patch('pog.cli.Popen', autospec=True)
    def test_dump_manifest_index(self, mock_run):
        mock_run.return_value = mock_run
        mock_run.__enter__.return_value = mock_run
        mock_run.stdout = [
            b'abcdef12345\n',
            b'fghjkl34567\n',
        ]

        cli = PogCli()
        cli.set_keyfiles('foo.encrypt')
        res = list(cli.dumpManifestIndex('my.mfn'))
        self.assertEqual(res, ['abcdef12345', 'fghjkl34567'])

        env = dict(environ)
        env['PYTHONPATH'] = POG_ROOT
        mock_run.assert_called_once_with(
            ['python', '-u', '-m', 'pog.pog', '--dump-manifest-index', 'my.mfn', '--encryption-keyfile=foo.encrypt'],
            env=env, stdout=PIPE,
        )

    @patch('pog.cli.Popen', autospec=True)
    def test_decrypt(self, mock_run):
        mock_run.return_value = mock_run
        mock_run.__enter__.return_value = mock_run
        mock_run.stdout = [
            b'*** 1/2: foo.txt\n',
            b'*** 2/2: bar.txt\n',
        ]

        cli = PogCli()
        cli.set_keyfiles('foo.decrypt')
        res = list(cli.decrypt('my.mfn'))
        self.assertEqual(res, [
            {'current': 1, 'filename': 'foo.txt', 'total': 2},
            {'current': 2, 'filename': 'bar.txt', 'total': 2}
        ])

        env = dict(environ)
        env['PYTHONPATH'] = POG_ROOT
        mock_run.assert_called_once_with(
            ['python', '-u', '-m', 'pog.pog', '--decrypt', 'my.mfn', '--decryption-keyfile=foo.decrypt'],
            env=env, stdout=PIPE,
        )

    @patch('pog.cli.Popen', autospec=True)
    def test_encrypt(self, mock_run):
        mock_run.return_value = mock_run
        mock_run.__enter__.return_value = mock_run
        mock_run.stdout = [
            b'*** 1/2: foo.txt\n',
            b'12345abcdefh\n',
            b'*** 2/2: bar.txt\n',
            b'abcdefg12345\n',
        ]

        cli = PogCli()
        cli.set_keyfiles('foo.decrypt', 'foo.encrypt')
        res = list(cli.encrypt(['foo.txt', 'bar.txt'], ['b2://bucket', 's3:bucket']))
        self.assertEqual(res, [
            {'current': 1, 'filename': 'foo.txt', 'total': 2},
            {'current': 2, 'filename': 'bar.txt', 'total': 2}
        ])

        env = dict(environ)
        env['PYTHONPATH'] = POG_ROOT
        mock_run.assert_called_once_with(
            ['python', '-u', '-m', 'pog.pog', '--save-to=b2://bucket,s3:bucket', 'foo.txt', 'bar.txt',
             '--encryption-keyfile=foo.encrypt'], env=env, stdout=PIPE,
        )
| 36.215385 | 117 | 0.580501 | 566 | 4,708 | 4.678445 | 0.157244 | 0.074018 | 0.081571 | 0.108761 | 0.803248 | 0.757175 | 0.757175 | 0.748489 | 0.709215 | 0.668429 | 0 | 0.020875 | 0.257222 | 4,708 | 129 | 118 | 36.496124 | 0.736345 | 0.011045 | 0 | 0.598131 | 0 | 0 | 0.242046 | 0.038693 | 0 | 0 | 0 | 0 | 0.186916 | 1 | 0.046729 | false | 0 | 0.056075 | 0 | 0.11215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
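These tests lean on one mocking trick that is easy to miss: a single mock stands in for the Popen class, the object it returns, and the context manager that object yields, so `with Popen(...) as proc` and iteration over `proc.stdout` all hit the same mock. A minimal self-contained sketch of the same pattern (read_lines and the patch target are invented for illustration; only the mock wiring mirrors the tests above):

# Sketch of the Popen mocking pattern used in PogCliTest (assumed names).
from subprocess import PIPE, Popen
from unittest.mock import patch

def read_lines(cmd):
    # Hypothetical code under test: stream a subprocess's stdout lines.
    with Popen(cmd, stdout=PIPE) as proc:
        return [line.decode().strip() for line in proc.stdout]

with patch('__main__.Popen', autospec=True) as mock_run:
    mock_run.return_value = mock_run            # Popen(...) returns the mock itself
    mock_run.__enter__.return_value = mock_run  # `with ... as proc` yields it too
    mock_run.stdout = [b'abc\n', b'def\n']      # iterating stdout yields these lines

    assert read_lines(['some', 'cmd']) == ['abc', 'def']
    mock_run.assert_called_once_with(['some', 'cmd'], stdout=PIPE)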
7366aaa284e7dc7b5dbf03067f63bcfb1f5d5f29 | 175 | py | Python | testing/tests/001-main/005-unittests/002-api/013-filechange.py | piwaniuk/critic | 28ed20bb8032d7cc5aa23de98da51e619fd84164 | [
"Apache-2.0"
] | 216 | 2015-01-05T12:48:10.000Z | 2022-03-08T00:12:23.000Z | testing/tests/001-main/005-unittests/002-api/013-filechange.py | piwaniuk/critic | 28ed20bb8032d7cc5aa23de98da51e619fd84164 | [
"Apache-2.0"
] | 55 | 2015-02-28T12:10:26.000Z | 2020-11-18T17:45:16.000Z | testing/tests/001-main/005-unittests/002-api/013-filechange.py | piwaniuk/critic | 28ed20bb8032d7cc5aa23de98da51e619fd84164 | [
"Apache-2.0"
] | 34 | 2015-05-02T15:15:10.000Z | 2020-06-15T19:20:37.000Z | instance.unittest("api.filechange", ["pre"])
instance.synchronize_service("changeset") # wait for changeset creation to finish
instance.unittest("api.filechange", ["post"])
| 29.166667 | 81 | 0.76 | 20 | 175 | 6.6 | 0.7 | 0.242424 | 0.287879 | 0.439394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 175 | 5 | 82 | 35 | 0.819876 | 0.211429 | 0 | 0 | 0 | 0 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
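The interesting line in this row is the synchronize_service call sandwiched between the two unittest invocations: changeset creation happens in a background service, so the test must wait for that service to drain before the "post" assertions can observe the result. A generic illustration of the same wait-then-assert pattern (the queue-based worker below is invented; critic's actual service machinery is more involved):

# Invented illustration: block on a worker queue before asserting on its output.
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    while True:
        job = tasks.get()
        job()                       # e.g. create a changeset record
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

tasks.put(lambda: results.append('changeset'))
tasks.join()                        # synchronize: wait for creation to finish
assert results == ['changeset']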