hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
836d8ad70b847fb5700fcd667575b2609a5d51d0 | 14,307 | py | Python | pybind/nos/v7_1_0/rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v7_1_0/rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v7_1_0/rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class database_filter(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-rbridge - based on the path /rbridge-id/interface/ve/ip/interface-vlan-ospf-conf/ospf-interface-config/database-filter. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__all_out','__all_external','__all_summary_external',)
_yang_name = 'database-filter'
_rest_name = 'database-filter'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__all_external = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-external", rest_name="all-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
self.__all_summary_external = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-summary-external", rest_name="all-summary-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all summary external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
self.__all_out = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="all-out", rest_name="all-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'filter all LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'rbridge-id', u'interface', u've', u'ip', u'interface-vlan-ospf-conf', u'ospf-interface-config', u'database-filter']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'rbridge-id', u'interface', u'Ve', u'ip', u'ospf', u'database-filter']
def _get_all_out(self):
"""
Getter method for all_out, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_out (empty)
"""
return self.__all_out
def _set_all_out(self, v, load=False):
"""
Setter method for all_out, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_out (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_all_out is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_all_out() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="all-out", rest_name="all-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'filter all LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """all_out must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="all-out", rest_name="all-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'filter all LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)""",
})
self.__all_out = t
if hasattr(self, '_set'):
self._set()
def _unset_all_out(self):
self.__all_out = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="all-out", rest_name="all-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'filter all LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
def _get_all_external(self):
"""
Getter method for all_external, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_external (database-filter-options)
"""
return self.__all_external
def _set_all_external(self, v, load=False):
"""
Setter method for all_external, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_external (database-filter-options)
If this variable is read-only (config: false) in the
source YANG file, then _set_all_external is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_all_external() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-external", rest_name="all-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """all_external must be of a type compatible with database-filter-options""",
'defined-type': "brocade-ospf:database-filter-options",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-external", rest_name="all-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)""",
})
self.__all_external = t
if hasattr(self, '_set'):
self._set()
def _unset_all_external(self):
self.__all_external = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-external", rest_name="all-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
def _get_all_summary_external(self):
"""
Getter method for all_summary_external, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_summary_external (database-filter-options)
"""
return self.__all_summary_external
def _set_all_summary_external(self, v, load=False):
"""
Setter method for all_summary_external, mapped from YANG variable /rbridge_id/interface/ve/ip/interface_vlan_ospf_conf/ospf_interface_config/database_filter/all_summary_external (database-filter-options)
If this variable is read-only (config: false) in the
source YANG file, then _set_all_summary_external is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_all_summary_external() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-summary-external", rest_name="all-summary-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all summary external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """all_summary_external must be of a type compatible with database-filter-options""",
'defined-type': "brocade-ospf:database-filter-options",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-summary-external", rest_name="all-summary-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all summary external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)""",
})
self.__all_summary_external = t
if hasattr(self, '_set'):
self._set()
def _unset_all_summary_external(self):
self.__all_summary_external = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'allow-default-out': {'value': 1}, u'allow-default-and-type4-out': {'value': 2}, u'out': {'value': 3}},), is_leaf=True, yang_name="all-summary-external", rest_name="all-summary-external", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Filter all summary external LSAs'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='database-filter-options', is_config=True)
all_out = __builtin__.property(_get_all_out, _set_all_out)
all_external = __builtin__.property(_get_all_external, _set_all_external)
all_summary_external = __builtin__.property(_get_all_summary_external, _set_all_summary_external)
_pyangbind_elements = {'all_out': all_out, 'all_external': all_external, 'all_summary_external': all_summary_external, }
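
# --- Illustrative usage (editor's sketch, not part of the generated module) ---
# Assumes pyangbind is installed and this Python 2 module imports cleanly; the
# leaf values shown are taken from the restriction dicts defined above.
if __name__ == '__main__':
    flt = database_filter()
    flt.all_out = True           # YANG 'empty' leaf, modelled as a YANGBool
    flt.all_external = u'out'    # one of the database-filter-options keys
    print(flt._path())           # [u'rbridge-id', ..., u'database-filter']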
| 74.129534 | 729 | 0.703781 | 1,931 | 14,307 | 4.967374 | 0.095287 | 0.039616 | 0.06005 | 0.017515 | 0.807652 | 0.764283 | 0.746351 | 0.740826 | 0.738741 | 0.732902 | 0 | 0.003173 | 0.162997 | 14,307 | 192 | 730 | 74.515625 | 0.797829 | 0.148599 | 0 | 0.40625 | 0 | 0.023438 | 0.363001 | 0.1543 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.0625 | 0 | 0.289063 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79055eadfcf0cb8d1cb96dc6ff1085b7d3f4d342 | 849 | py | Python | src/AuShadha/demographics/guardian/dijit_fields_constants.py | GosthMan/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 46 | 2015-03-04T14:19:47.000Z | 2021-12-09T02:58:46.000Z | src/AuShadha/demographics/guardian/dijit_fields_constants.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 2 | 2015-06-05T10:29:04.000Z | 2015-12-06T16:54:10.000Z | src/AuShadha/demographics/guardian/dijit_fields_constants.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 24 | 2015-03-23T01:38:11.000Z | 2022-01-24T16:23:42.000Z | GUARDIAN_FORM_CONSTANTS = {
'guardian_name':{'max_length': 30,
"data-dojo-type": "dijit.form.ValidationTextBox",
"data-dojo-props": r"'required' :'true' ,'regExp':'[\\w]+','invalidMessage':'Invalid Character' "
},
'relation_to_guardian':{
'max_length': 30,
"data-dojo-type": "dijit.form.Select",
"data-dojo-props": r"'required' : 'true' ,'regExp':'[\\w]+','invalidMessage' : 'Invalid Character'"
},
'guardian_phone':{
'max_length': 30,
"data-dojo-type": "dijit.form.ValidationTextBox",
"data-dojo-props": r"'required' : 'true' ,'regExp':'[\\w]+','invalidMessage' : 'Invalid Character'"
}
} | 42.45 | 123 | 0.468787 | 70 | 849 | 5.557143 | 0.357143 | 0.123393 | 0.084833 | 0.115681 | 0.820051 | 0.820051 | 0.820051 | 0.820051 | 0.737789 | 0.737789 | 0 | 0.010969 | 0.355713 | 849 | 20 | 124 | 42.45 | 0.700183 | 0 | 0 | 0.4375 | 0 | 0 | 0.548235 | 0.2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f71f2df9d428bc57d6384865ba76147ba636e02e | 178 | py | Python | prvsnlib/tasks/hostname.py | acoomans/prvsn | af6b313c2e779ae4e3a9cdba0b1c3a1f4b4c085e | [
"BSD-2-Clause"
] | null | null | null | prvsnlib/tasks/hostname.py | acoomans/prvsn | af6b313c2e779ae4e3a9cdba0b1c3a1f4b4c085e | [
"BSD-2-Clause"
] | null | null | null | prvsnlib/tasks/hostname.py | acoomans/prvsn | af6b313c2e779ae4e3a9cdba0b1c3a1f4b4c085e | [
"BSD-2-Clause"
] | null | null | null | import logging
from prvsnlib.utils.run import Run
def hostname(name, secure=False):
    # NOTE (editor): 'header' is not a stdlib logging method; prvsnlib is
    # assumed to register it as a custom logging level elsewhere in the package.
    logging.header('Hostname ' + name)
Run(['hostnamectl', 'set-hostname', name]).run()
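
# Illustrative call (editor's sketch): renames the current host via
# hostnamectl, so it assumes a systemd-based Linux host and root privileges.
if __name__ == '__main__':
    hostname('example-host')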
| 22.25 | 52 | 0.702247 | 23 | 178 | 5.434783 | 0.608696 | 0.288 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146067 | 178 | 7 | 53 | 25.428571 | 0.822368 | 0 | 0 | 0 | 0 | 0 | 0.179775 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f7501259c15280902a6f3ae33e990bbd04d27ee9 | 75 | py | Python | astropy_helpers/src/setup_package.py | migueldvb/astropy-helpers | 950358a24ce74be14a1679732bd8c94e6f5854d6 | [
"PSF-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 1 | 2020-06-17T00:44:39.000Z | 2020-06-17T00:44:39.000Z | astropy_helpers/src/setup_package.py | fred3m/astropy-helpers | 19bb078dcd8c9dd08122da5c4b51f3703c3cc21c | [
"PSF-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | astropy_helpers/src/setup_package.py | fred3m/astropy-helpers | 19bb078dcd8c9dd08122da5c4b51f3703c3cc21c | [
"PSF-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | def get_package_data():
return {'astropy_helpers.src': ['compiler.c']}
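
# Illustrative call (editor's sketch): the hook returns the package_data
# mapping consumed by the astropy-helpers setup machinery.
if __name__ == '__main__':
    print(get_package_data())  # {'astropy_helpers.src': ['compiler.c']}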
| 25 | 50 | 0.693333 | 10 | 75 | 4.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 75 | 2 | 51 | 37.5 | 0.742424 | 0 | 0 | 0 | 0 | 0 | 0.386667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f7557e97156941e604a93790a20470223828e158 | 447 | py | Python | xsconnect/peripheralboards/StickIt_Buttons_V2.py | xesscorp/xsconnect | b08b2e24e7a017d9f87ad4651f82915179f1fd52 | [
"MIT"
] | null | null | null | xsconnect/peripheralboards/StickIt_Buttons_V2.py | xesscorp/xsconnect | b08b2e24e7a017d9f87ad4651f82915179f1fd52 | [
"MIT"
] | 1 | 2017-01-26T12:09:44.000Z | 2021-03-07T14:13:14.000Z | xsconnect/peripheralboards/StickIt_Buttons_V2.py | xesscorp/xsconnect | b08b2e24e7a017d9f87ad4651f82915179f1fd52 | [
"MIT"
] | 1 | 2021-08-16T07:31:24.000Z | 2021-08-16T07:31:24.000Z | brd = {
'name': ('StickIt! Buttons V2'),
'port': {
'pmod': {
'default' : {
'b0': 'd0',
'b1': 'd1',
'b2': 'd2',
'b3': 'd3'
}
},
'wing': {
'default' : {
'b0': 'd0',
'b1': 'd1',
'b2': 'd2',
'b3': 'd3'
}
}
}
}
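
# Illustrative lookup (editor's sketch, not part of the original module):
# resolve a button net to its connector pin for a given port type.
if __name__ == '__main__':
    print(brd['port']['pmod']['default']['b0'])  # -> 'd0'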
| 20.318182 | 37 | 0.181208 | 26 | 447 | 3.115385 | 0.653846 | 0.222222 | 0.271605 | 0.320988 | 0.567901 | 0.567901 | 0.567901 | 0.567901 | 0.567901 | 0 | 0 | 0.10303 | 0.630872 | 447 | 21 | 38 | 21.285714 | 0.387879 | 0 | 0 | 0.47619 | 0 | 0 | 0.190141 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f7987da0e8917a85ee03c731b634c8f07d074415 | 1,352 | py | Python | src/arangomlFeatureStore/managed_service_conn_parameters.py | rajivsam/arangomlFeatureStore | f4a63c6cfdf6871cb50bd7382d65786a40ab6450 | [
"MIT"
] | null | null | null | src/arangomlFeatureStore/managed_service_conn_parameters.py | rajivsam/arangomlFeatureStore | f4a63c6cfdf6871cb50bd7382d65786a40ab6450 | [
"MIT"
] | null | null | null | src/arangomlFeatureStore/managed_service_conn_parameters.py | rajivsam/arangomlFeatureStore | f4a63c6cfdf6871cb50bd7382d65786a40ab6450 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Dec 5 09:05:19 2019
@author: Rajiv Sambasivan
"""
class ManagedServiceConnParam:
@property
def DB_SERVICE_HOST(self):
return "DB_service_host"
@property
def DB_SERVICE_END_POINT(self):
return "DB_end_point"
@property
def DB_SERVICE_NAME(self):
return "DB_service_name"
@property
def DB_SERVICE_PORT(self):
return "DB_service_port"
@property
def DB_NAME(self):
return "dbName"
@property
def DB_REPLICATION_FACTOR(self):
return "arangodb_replication_factor"
@property
def DB_USER_NAME(self):
return "username"
@property
def DB_PASSWORD(self):
return "password"
@property
def DB_ROOT_USER(self):
return "root_user"
@property
def DB_ROOT_USER_PASSWORD(self):
return "root_user_password"
@property
def DB_CONN_PROTOCOL(self):
return "conn_protocol"
@property
def DB_NOTIFICATION_EMAIL(self):
return "email"
@property
def OASIS_HOST(self):
return "hostname"
@property
def OASIS_PORT(self):
return "port"
@property
def OASIS_CONN_PROTOCOL(self):
return "protocol"
@property
def OASIS_FS_GRAPH(self):
return "graph_name"
| 18.777778 | 44 | 0.633876 | 162 | 1,352 | 5.012346 | 0.296296 | 0.216749 | 0.192118 | 0.098522 | 0.051724 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013292 | 0.276627 | 1,352 | 71 | 45 | 19.042254 | 0.816973 | 0.078402 | 0 | 0.326531 | 0 | 0 | 0.146559 | 0.021862 | 0 | 0 | 0 | 0 | 0 | 1 | 0.326531 | false | 0.081633 | 0 | 0.326531 | 0.673469 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
541a0dc1f26455fbfb601b7efb2fcc6c0f611460 | 80 | py | Python | Logic-1/near_ten.py | VivekM27/Coding-Bat-Python-Solutions | 14d5c6ccaa2129e56a5898374dec60740fe6761b | [
"Apache-2.0"
] | null | null | null | Logic-1/near_ten.py | VivekM27/Coding-Bat-Python-Solutions | 14d5c6ccaa2129e56a5898374dec60740fe6761b | [
"Apache-2.0"
] | null | null | null | Logic-1/near_ten.py | VivekM27/Coding-Bat-Python-Solutions | 14d5c6ccaa2129e56a5898374dec60740fe6761b | [
"Apache-2.0"
] | null | null | null | # NEAR_TEN
def near_ten(num):
return True if num%10<3 or num%10>7 else False | 26.666667 | 48 | 0.7125 | 18 | 80 | 3.055556 | 0.722222 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 0.1875 | 80 | 3 | 48 | 26.666667 | 0.753846 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
543120f114a5a4ccce456d3f5ba79efaba22b56b | 61 | py | Python | forced_phot/__init__.py | askap-vast/forced_phot | 8f4307825781743755d189418a9cb9111aaf0b63 | [
"MIT"
] | null | null | null | forced_phot/__init__.py | askap-vast/forced_phot | 8f4307825781743755d189418a9cb9111aaf0b63 | [
"MIT"
] | null | null | null | forced_phot/__init__.py | askap-vast/forced_phot | 8f4307825781743755d189418a9cb9111aaf0b63 | [
"MIT"
] | null | null | null | from forced_phot.forced_phot import ForcedPhot # noqa: F401
| 30.5 | 60 | 0.819672 | 9 | 61 | 5.333333 | 0.777778 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 0.131148 | 61 | 1 | 61 | 61 | 0.849057 | 0.163934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54699c3efe84c25db863658e3841e59ed835c589 | 23,624 | py | Python | tests/test_server.py | m-novikov/websockets | 668f320e0547d80afe6529528e1ecc6088955cdc | [
"BSD-3-Clause"
] | 3,909 | 2015-01-02T02:35:09.000Z | 2022-03-31T14:03:01.000Z | tests/test_server.py | m-novikov/websockets | 668f320e0547d80afe6529528e1ecc6088955cdc | [
"BSD-3-Clause"
] | 1,066 | 2015-01-19T07:32:35.000Z | 2022-03-26T15:00:11.000Z | tests/test_server.py | m-novikov/websockets | 668f320e0547d80afe6529528e1ecc6088955cdc | [
"BSD-3-Clause"
] | 549 | 2015-01-10T10:23:42.000Z | 2022-03-25T16:38:32.000Z | import http
import logging
import unittest
import unittest.mock
from websockets.connection import CONNECTING, OPEN
from websockets.datastructures import Headers
from websockets.exceptions import InvalidHeader, InvalidOrigin, InvalidUpgrade
from websockets.frames import OP_TEXT, Frame
from websockets.http import USER_AGENT
from websockets.http11 import Request, Response
from websockets.server import *
from .extensions.utils import (
OpExtension,
Rsv2Extension,
ServerOpExtensionFactory,
ServerRsv2ExtensionFactory,
)
from .test_utils import ACCEPT, KEY
from .utils import DATE
class ConnectTests(unittest.TestCase):
def test_receive_connect(self):
server = ServerConnection()
server.receive_data(
(
f"GET /test HTTP/1.1\r\n"
f"Host: example.com\r\n"
f"Upgrade: websocket\r\n"
f"Connection: Upgrade\r\n"
f"Sec-WebSocket-Key: {KEY}\r\n"
f"Sec-WebSocket-Version: 13\r\n"
f"User-Agent: {USER_AGENT}\r\n"
f"\r\n"
).encode(),
)
[request] = server.events_received()
self.assertIsInstance(request, Request)
def test_connect_request(self):
server = ServerConnection()
server.receive_data(
(
f"GET /test HTTP/1.1\r\n"
f"Host: example.com\r\n"
f"Upgrade: websocket\r\n"
f"Connection: Upgrade\r\n"
f"Sec-WebSocket-Key: {KEY}\r\n"
f"Sec-WebSocket-Version: 13\r\n"
f"User-Agent: {USER_AGENT}\r\n"
f"\r\n"
).encode(),
)
[request] = server.events_received()
self.assertEqual(request.path, "/test")
self.assertEqual(
request.headers,
Headers(
{
"Host": "example.com",
"Upgrade": "websocket",
"Connection": "Upgrade",
"Sec-WebSocket-Key": KEY,
"Sec-WebSocket-Version": "13",
"User-Agent": USER_AGENT,
}
),
)
class AcceptRejectTests(unittest.TestCase):
def make_request(self):
return Request(
path="/test",
headers=Headers(
{
"Host": "example.com",
"Upgrade": "websocket",
"Connection": "Upgrade",
"Sec-WebSocket-Key": KEY,
"Sec-WebSocket-Version": "13",
"User-Agent": USER_AGENT,
}
),
)
def test_send_accept(self):
server = ServerConnection()
with unittest.mock.patch("email.utils.formatdate", return_value=DATE):
response = server.accept(self.make_request())
self.assertIsInstance(response, Response)
server.send_response(response)
self.assertEqual(
server.data_to_send(),
[
f"HTTP/1.1 101 Switching Protocols\r\n"
f"Date: {DATE}\r\n"
f"Upgrade: websocket\r\n"
f"Connection: Upgrade\r\n"
f"Sec-WebSocket-Accept: {ACCEPT}\r\n"
f"Server: {USER_AGENT}\r\n"
f"\r\n".encode()
],
)
self.assertEqual(server.state, OPEN)
def test_send_reject(self):
server = ServerConnection()
with unittest.mock.patch("email.utils.formatdate", return_value=DATE):
response = server.reject(http.HTTPStatus.NOT_FOUND, "Sorry folks.\n")
self.assertIsInstance(response, Response)
server.send_response(response)
self.assertEqual(
server.data_to_send(),
[
f"HTTP/1.1 404 Not Found\r\n"
f"Date: {DATE}\r\n"
f"Connection: close\r\n"
f"Content-Length: 13\r\n"
f"Content-Type: text/plain; charset=utf-8\r\n"
f"Server: {USER_AGENT}\r\n"
f"\r\n"
f"Sorry folks.\n".encode(),
b"",
],
)
self.assertEqual(server.state, CONNECTING)
def test_accept_response(self):
server = ServerConnection()
with unittest.mock.patch("email.utils.formatdate", return_value=DATE):
response = server.accept(self.make_request())
self.assertIsInstance(response, Response)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.reason_phrase, "Switching Protocols")
self.assertEqual(
response.headers,
Headers(
{
"Date": DATE,
"Upgrade": "websocket",
"Connection": "Upgrade",
"Sec-WebSocket-Accept": ACCEPT,
"Server": USER_AGENT,
}
),
)
self.assertIsNone(response.body)
def test_reject_response(self):
server = ServerConnection()
with unittest.mock.patch("email.utils.formatdate", return_value=DATE):
response = server.reject(http.HTTPStatus.NOT_FOUND, "Sorry folks.\n")
self.assertIsInstance(response, Response)
self.assertEqual(response.status_code, 404)
self.assertEqual(response.reason_phrase, "Not Found")
self.assertEqual(
response.headers,
Headers(
{
"Date": DATE,
"Connection": "close",
"Content-Length": "13",
"Content-Type": "text/plain; charset=utf-8",
"Server": USER_AGENT,
}
),
)
self.assertEqual(response.body, b"Sorry folks.\n")
def test_basic(self):
server = ServerConnection()
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
def test_unexpected_exception(self):
server = ServerConnection()
request = self.make_request()
with unittest.mock.patch(
"websockets.server.ServerConnection.process_request",
side_effect=Exception("BOOM"),
):
response = server.accept(request)
self.assertEqual(response.status_code, 500)
with self.assertRaises(Exception) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "BOOM")
def test_missing_connection(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Connection"]
response = server.accept(request)
self.assertEqual(response.status_code, 426)
self.assertEqual(response.headers["Upgrade"], "websocket")
with self.assertRaises(InvalidUpgrade) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "missing Connection header")
def test_invalid_connection(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Connection"]
request.headers["Connection"] = "close"
response = server.accept(request)
self.assertEqual(response.status_code, 426)
self.assertEqual(response.headers["Upgrade"], "websocket")
with self.assertRaises(InvalidUpgrade) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "invalid Connection header: close")
def test_missing_upgrade(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Upgrade"]
response = server.accept(request)
self.assertEqual(response.status_code, 426)
self.assertEqual(response.headers["Upgrade"], "websocket")
with self.assertRaises(InvalidUpgrade) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "missing Upgrade header")
def test_invalid_upgrade(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Upgrade"]
request.headers["Upgrade"] = "h2c"
response = server.accept(request)
self.assertEqual(response.status_code, 426)
self.assertEqual(response.headers["Upgrade"], "websocket")
with self.assertRaises(InvalidUpgrade) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "invalid Upgrade header: h2c")
def test_missing_key(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Sec-WebSocket-Key"]
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "missing Sec-WebSocket-Key header")
def test_multiple_key(self):
server = ServerConnection()
request = self.make_request()
request.headers["Sec-WebSocket-Key"] = KEY
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception),
"invalid Sec-WebSocket-Key header: "
"more than one Sec-WebSocket-Key header found",
)
def test_invalid_key(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Sec-WebSocket-Key"]
request.headers["Sec-WebSocket-Key"] = "not Base64 data!"
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception), "invalid Sec-WebSocket-Key header: not Base64 data!"
)
def test_truncated_key(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Sec-WebSocket-Key"]
request.headers["Sec-WebSocket-Key"] = KEY[
:16
] # 12 bytes instead of 16, Base64-encoded
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception), f"invalid Sec-WebSocket-Key header: {KEY[:16]}"
)
def test_missing_version(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Sec-WebSocket-Version"]
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "missing Sec-WebSocket-Version header")
def test_multiple_version(self):
server = ServerConnection()
request = self.make_request()
request.headers["Sec-WebSocket-Version"] = "11"
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception),
"invalid Sec-WebSocket-Version header: "
"more than one Sec-WebSocket-Version header found",
)
def test_invalid_version(self):
server = ServerConnection()
request = self.make_request()
del request.headers["Sec-WebSocket-Version"]
request.headers["Sec-WebSocket-Version"] = "11"
response = server.accept(request)
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception), "invalid Sec-WebSocket-Version header: 11"
)
def test_no_origin(self):
server = ServerConnection(origins=["https://example.com"])
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 403)
with self.assertRaises(InvalidOrigin) as raised:
raise request.exception
self.assertEqual(str(raised.exception), "missing Origin header")
def test_origin(self):
server = ServerConnection(origins=["https://example.com"])
request = self.make_request()
request.headers["Origin"] = "https://example.com"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(server.origin, "https://example.com")
def test_unexpected_origin(self):
server = ServerConnection(origins=["https://example.com"])
request = self.make_request()
request.headers["Origin"] = "https://other.example.com"
response = server.accept(request)
self.assertEqual(response.status_code, 403)
with self.assertRaises(InvalidOrigin) as raised:
raise request.exception
self.assertEqual(
str(raised.exception), "invalid Origin header: https://other.example.com"
)
def test_multiple_origin(self):
server = ServerConnection(
origins=["https://example.com", "https://other.example.com"]
)
request = self.make_request()
request.headers["Origin"] = "https://example.com"
request.headers["Origin"] = "https://other.example.com"
response = server.accept(request)
# This is prohibited by the HTTP specification, so the return code is
# 400 Bad Request rather than 403 Forbidden.
self.assertEqual(response.status_code, 400)
with self.assertRaises(InvalidHeader) as raised:
raise request.exception
self.assertEqual(
str(raised.exception),
"invalid Origin header: more than one Origin header found",
)
def test_supported_origin(self):
server = ServerConnection(
origins=["https://example.com", "https://other.example.com"]
)
request = self.make_request()
request.headers["Origin"] = "https://other.example.com"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(server.origin, "https://other.example.com")
def test_unsupported_origin(self):
server = ServerConnection(
origins=["https://example.com", "https://other.example.com"]
)
request = self.make_request()
request.headers["Origin"] = "https://original.example.com"
response = server.accept(request)
self.assertEqual(response.status_code, 403)
with self.assertRaises(InvalidOrigin) as raised:
raise request.exception
self.assertEqual(
str(raised.exception), "invalid Origin header: https://original.example.com"
)
def test_no_origin_accepted(self):
server = ServerConnection(origins=[None])
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertIsNone(server.origin)
def test_no_extensions(self):
server = ServerConnection()
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Extensions", response.headers)
self.assertEqual(server.extensions, [])
def test_no_extension(self):
server = ServerConnection(extensions=[ServerOpExtensionFactory()])
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Extensions", response.headers)
self.assertEqual(server.extensions, [])
def test_extension(self):
server = ServerConnection(extensions=[ServerOpExtensionFactory()])
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Extensions"], "x-op; op")
self.assertEqual(server.extensions, [OpExtension()])
def test_unexpected_extension(self):
server = ServerConnection()
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Extensions", response.headers)
self.assertEqual(server.extensions, [])
def test_unsupported_extension(self):
server = ServerConnection(extensions=[ServerRsv2ExtensionFactory()])
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Extensions", response.headers)
self.assertEqual(server.extensions, [])
def test_supported_extension_parameters(self):
server = ServerConnection(extensions=[ServerOpExtensionFactory("this")])
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op=this"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Extensions"], "x-op; op=this")
self.assertEqual(server.extensions, [OpExtension("this")])
def test_unsupported_extension_parameters(self):
server = ServerConnection(extensions=[ServerOpExtensionFactory("this")])
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op=that"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Extensions", response.headers)
self.assertEqual(server.extensions, [])
def test_multiple_supported_extension_parameters(self):
server = ServerConnection(
extensions=[
ServerOpExtensionFactory("this"),
ServerOpExtensionFactory("that"),
]
)
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op=that"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Extensions"], "x-op; op=that")
self.assertEqual(server.extensions, [OpExtension("that")])
def test_multiple_extensions(self):
server = ServerConnection(
extensions=[ServerOpExtensionFactory(), ServerRsv2ExtensionFactory()]
)
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-op; op"
request.headers["Sec-WebSocket-Extensions"] = "x-rsv2"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(
response.headers["Sec-WebSocket-Extensions"], "x-op; op, x-rsv2"
)
self.assertEqual(server.extensions, [OpExtension(), Rsv2Extension()])
def test_multiple_extensions_order(self):
server = ServerConnection(
extensions=[ServerOpExtensionFactory(), ServerRsv2ExtensionFactory()]
)
request = self.make_request()
request.headers["Sec-WebSocket-Extensions"] = "x-rsv2"
request.headers["Sec-WebSocket-Extensions"] = "x-op; op"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(
response.headers["Sec-WebSocket-Extensions"], "x-rsv2, x-op; op"
)
self.assertEqual(server.extensions, [Rsv2Extension(), OpExtension()])
def test_no_subprotocols(self):
server = ServerConnection()
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Protocol", response.headers)
self.assertIsNone(server.subprotocol)
def test_no_subprotocol(self):
server = ServerConnection(subprotocols=["chat"])
request = self.make_request()
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Protocol", response.headers)
self.assertIsNone(server.subprotocol)
def test_subprotocol(self):
server = ServerConnection(subprotocols=["chat"])
request = self.make_request()
request.headers["Sec-WebSocket-Protocol"] = "chat"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Protocol"], "chat")
self.assertEqual(server.subprotocol, "chat")
def test_unexpected_subprotocol(self):
server = ServerConnection()
request = self.make_request()
request.headers["Sec-WebSocket-Protocol"] = "chat"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Protocol", response.headers)
self.assertIsNone(server.subprotocol)
def test_multiple_subprotocols(self):
server = ServerConnection(subprotocols=["superchat", "chat"])
request = self.make_request()
request.headers["Sec-WebSocket-Protocol"] = "superchat"
request.headers["Sec-WebSocket-Protocol"] = "chat"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Protocol"], "superchat")
self.assertEqual(server.subprotocol, "superchat")
def test_supported_subprotocol(self):
server = ServerConnection(subprotocols=["superchat", "chat"])
request = self.make_request()
request.headers["Sec-WebSocket-Protocol"] = "chat"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertEqual(response.headers["Sec-WebSocket-Protocol"], "chat")
self.assertEqual(server.subprotocol, "chat")
def test_unsupported_subprotocol(self):
server = ServerConnection(subprotocols=["superchat", "chat"])
request = self.make_request()
request.headers["Sec-WebSocket-Protocol"] = "otherchat"
response = server.accept(request)
self.assertEqual(response.status_code, 101)
self.assertNotIn("Sec-WebSocket-Protocol", response.headers)
self.assertIsNone(server.subprotocol)
class MiscTests(unittest.TestCase):
def test_bypass_handshake(self):
server = ServerConnection(state=OPEN)
server.receive_data(b"\x81\x86\x00\x00\x00\x00Hello!")
[frame] = server.events_received()
self.assertEqual(frame, Frame(OP_TEXT, b"Hello!"))
def test_custom_logger(self):
logger = logging.getLogger("test")
with self.assertLogs("test", logging.DEBUG) as logs:
ServerConnection(logger=logger)
self.assertEqual(len(logs.records), 1)
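
# Editor's addition (conventional, optional): allow running this test module
# directly with the standard unittest runner.
if __name__ == "__main__":
    unittest.main()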
| 38.103226 | 88 | 0.625635 | 2,365 | 23,624 | 6.162791 | 0.077378 | 0.09777 | 0.088371 | 0.077599 | 0.821955 | 0.796501 | 0.786552 | 0.765146 | 0.761509 | 0.744151 | 0 | 0.011396 | 0.260836 | 23,624 | 619 | 89 | 38.164782 | 0.823273 | 0.006307 | 0 | 0.63327 | 0 | 0 | 0.159303 | 0.05151 | 0 | 0 | 0 | 0 | 0.249527 | 1 | 0.086957 | false | 0.00189 | 0.026465 | 0.00189 | 0.120983 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5484b80cc57ac94f4cb897290f2c6167dfeff4b3 | 5,905 | py | Python | liteflow/tests/test_losses.py | petrux/LiteFlowX | 96197bf4b5a87e682c980d303a0e6429cdb34964 | [
"Apache-2.0"
] | 2 | 2017-07-11T13:14:48.000Z | 2017-12-10T22:14:06.000Z | liteflow/tests/test_losses.py | petrux/LiteFlowX | 96197bf4b5a87e682c980d303a0e6429cdb34964 | [
"Apache-2.0"
] | null | null | null | liteflow/tests/test_losses.py | petrux/LiteFlowX | 96197bf4b5a87e682c980d303a0e6429cdb34964 | [
"Apache-2.0"
] | 1 | 2019-11-13T02:15:51.000Z | 2019-11-13T02:15:51.000Z | """Test module for liteflow.losses module."""
import math
import mock
import tensorflow as tf
from liteflow import losses
from liteflow import streaming
from liteflow.losses import categorical_crossentropy as xentropy
class TestStreamingLoss(tf.test.TestCase):
"""Test case for the liteflow.metrics.StreamingMetric class."""
def test_default(self):
"""Default test case."""
scope = 'StreamingLossScope'
targets = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
predictions = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
weights = tf.constant([[1, 1, 1], [0, 0, 1]], dtype=tf.float32)
values = tf.constant([5, 6, 7], dtype=tf.float32)
weights_out = tf.constant([1, 0, 1], dtype=tf.float32)
func = mock.Mock()
func.side_effect = [(values, weights_out)]
avg = streaming.StreamingAverage()
avg.compute = mock.MagicMock()
loss = losses.StreamingLoss(func, avg)
loss.compute(targets, predictions, weights, scope=scope)
func.assert_called_once_with(targets, predictions, weights)
avg.compute.assert_called_once()
args, kwargs = avg.compute.call_args
act_values, = args
self.assertEqual(act_values, values)
self.assertIn('weights', kwargs)
self.assertEqual(kwargs.pop('weights'), weights_out)
self.assertIn('scope', kwargs)
self.assertEqual(kwargs.pop('scope').name, scope)
def test_weights_in_none(self):
"""Test case with no weights passed to the wrapped function."""
scope = 'StreamingLossScope'
targets = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
predictions = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
values = tf.constant([5, 6, 7], dtype=tf.float32)
weights_out = tf.constant([1, 0, 1], dtype=tf.float32)
func = mock.Mock()
func.side_effect = [(values, weights_out)]
avg = streaming.StreamingAverage()
avg.compute = mock.MagicMock()
loss = losses.StreamingLoss(func, avg)
loss.compute(targets, predictions, scope=scope)
func.assert_called_once_with(targets, predictions, None)
avg.compute.assert_called_once()
args, kwargs = avg.compute.call_args
act_values, = args
self.assertEqual(act_values, values)
self.assertIn('weights', kwargs)
self.assertEqual(kwargs.pop('weights'), weights_out)
self.assertIn('scope', kwargs)
self.assertEqual(kwargs.pop('scope').name, scope)
def test_weights_out_none(self):
"""Test case with no weights returned by the wrapped function."""
scope = 'StreamingLossScope'
targets = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
predictions = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
weights = tf.constant([[1, 1, 1], [0, 0, 1]], dtype=tf.float32)
values = tf.constant([5, 6, 7], dtype=tf.float32)
func = mock.Mock()
func.side_effect = [(values, None)]
avg = streaming.StreamingAverage()
avg.compute = mock.MagicMock()
loss = losses.StreamingLoss(func, avg)
loss.compute(targets, predictions, weights, scope=scope)
func.assert_called_once_with(targets, predictions, weights)
avg.compute.assert_called_once()
args, kwargs = avg.compute.call_args
act_values, = args
self.assertEqual(act_values, values)
self.assertIn('weights', kwargs)
self.assertEqual(kwargs.pop('weights'), None)
self.assertIn('scope', kwargs)
self.assertEqual(kwargs.pop('scope').name, scope)
def test_weights_in_out_none(self):
"""Test case with no weights at all."""
scope = 'StreamingLossScope'
targets = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
predictions = tf.constant([[0, 1, 2], [0, 9, 23]], dtype=tf.int32)
values = tf.constant([5, 6, 7], dtype=tf.float32)
func = mock.Mock()
func.side_effect = [(values, None)]
avg = streaming.StreamingAverage()
avg.compute = mock.MagicMock()
loss = losses.StreamingLoss(func, avg)
loss.compute(targets, predictions, scope=scope)
func.assert_called_once_with(targets, predictions, None)
avg.compute.assert_called_once()
args, kwargs = avg.compute.call_args
act_values, = args
self.assertEqual(act_values, values)
self.assertIn('weights', kwargs)
self.assertEqual(kwargs.pop('weights'), None)
self.assertIn('scope', kwargs)
self.assertEqual(kwargs.pop('scope').name, scope)
class TestCategoricalCrossentropy(tf.test.TestCase):
"""Test case for the liteflow.losses.categorical_crossentropy function."""
def test_default(self):
"""Default test for liteflow.losses.categorical_crossentropy function."""
targets = tf.constant([[0, 1, 2, 0]], dtype=tf.int32)
predictions = tf.constant(
[[[0.5, 0.3, 0.2],
[0.5, 0.3, 0.2],
[0.5, 0.3, 0.2],
[0.9, 0.1, 0.0]]],
dtype=tf.float32)
weights = tf.constant([[1.0, 1.0, 1.0, 0.0]], dtype=tf.float32)
loss_t, weights_out_t = xentropy(targets, predictions, weights)
exp_loss = [[-math.log(0.5), -math.log(0.3), -math.log(0.2), 0.0]]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
exp_weights_out = sess.run(weights)
loss, weights_out = sess.run([loss_t, weights_out_t])
self.assertAllClose(loss, exp_loss)
self.assertAllEqual(weights_out, exp_weights_out)
def test_no_weights(self):
"""Test for liteflow.losses.categorical_crossentropy function with no weights."""
pass
if __name__ == '__main__':
tf.test.main() | 36.90625 | 89 | 0.622523 | 748 | 5,905 | 4.802139 | 0.124332 | 0.052895 | 0.030624 | 0.030067 | 0.832962 | 0.801225 | 0.786192 | 0.734131 | 0.696269 | 0.696269 | 0 | 0.0373 | 0.237257 | 5,905 | 160 | 90 | 36.90625 | 0.760213 | 0.081456 | 0 | 0.743363 | 0 | 0 | 0.03272 | 0 | 0 | 0 | 0 | 0 | 0.265487 | 1 | 0.053097 | false | 0.00885 | 0.053097 | 0 | 0.123894 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54adbaa85b59e636c78d0b11f2bf7a413da65f95 | 508 | py | Python | sickbeard/lib/hachoir_metadata/__init__.py | Branlala/docker-sickbeardfr | 3ac85092dc4cc8a4171fb3c83e9682162245e13e | [
"MIT"
] | null | null | null | sickbeard/lib/hachoir_metadata/__init__.py | Branlala/docker-sickbeardfr | 3ac85092dc4cc8a4171fb3c83e9682162245e13e | [
"MIT"
] | null | null | null | sickbeard/lib/hachoir_metadata/__init__.py | Branlala/docker-sickbeardfr | 3ac85092dc4cc8a4171fb3c83e9682162245e13e | [
"MIT"
] | null | null | null | from lib.hachoir_metadata.version import VERSION as __version__
from lib.hachoir_metadata.metadata import extractMetadata
# Just import the modules;
# each module uses the registerExtractor() method
import lib.hachoir_metadata.archive
import lib.hachoir_metadata.audio
import lib.hachoir_metadata.file_system
import lib.hachoir_metadata.image
import lib.hachoir_metadata.jpeg
import lib.hachoir_metadata.misc
import lib.hachoir_metadata.program
import lib.hachoir_metadata.riff
import lib.hachoir_metadata.video
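
# Illustrative usage (editor's sketch): extractMetadata() consumes a parser
# built by hachoir's createParser(); the lib.hachoir_parser import path is an
# assumption matching this vendored layout.
if __name__ == '__main__':
    from lib.hachoir_parser import createParser
    print(extractMetadata(createParser(u'example.jpg')))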
| 31.75 | 63 | 0.866142 | 71 | 508 | 5.971831 | 0.352113 | 0.259434 | 0.466981 | 0.509434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080709 | 508 | 15 | 64 | 33.866667 | 0.907923 | 0.129921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49d537571a62cff7330ff87b7cc0f432a81a9ba8 | 43 | py | Python | python/fedml/simulation/single_process/fednova/__init__.py | ray-ruisun/FedML | 24ff30d636bb70f64e94e9ca205375033597d3dd | [
"Apache-2.0"
] | null | null | null | python/fedml/simulation/single_process/fednova/__init__.py | ray-ruisun/FedML | 24ff30d636bb70f64e94e9ca205375033597d3dd | [
"Apache-2.0"
] | null | null | null | python/fedml/simulation/single_process/fednova/__init__.py | ray-ruisun/FedML | 24ff30d636bb70f64e94e9ca205375033597d3dd | [
"Apache-2.0"
] | null | null | null | from .fednova_trainer import FedNovaTrainer | 43 | 43 | 0.906977 | 5 | 43 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49f7cd558c796c2beb2908f4f90e082a3ec3357e | 248 | py | Python | toontown/cogdominium/CogdoCraneGameBase.py | LittleNed/toontown-stride | 1252a8f9a8816c1810106006d09c8bdfe6ad1e57 | [
"Apache-2.0"
] | 4 | 2019-07-01T15:46:43.000Z | 2021-07-23T16:26:48.000Z | toontown/cogdominium/CogdoCraneGameBase.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | 1 | 2019-06-29T03:40:05.000Z | 2021-06-13T01:15:16.000Z | toontown/cogdominium/CogdoCraneGameBase.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | 4 | 2019-07-28T21:18:46.000Z | 2021-02-25T06:37:25.000Z | from toontown.cogdominium import CogdoCraneGameSpec
from toontown.cogdominium import CogdoCraneGameConsts as Consts
class CogdoCraneGameBase:
    def getConsts(self):
        return Consts

    def getSpec(self):
        return CogdoCraneGameSpec | 24.8 | 63 | 0.778226 | 24 | 248 | 8.041667 | 0.625 | 0.124352 | 0.238342 | 0.300518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185484 | 248 | 10 | 64 | 24.8 | 0.955446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0.285714 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b703d544b48ba15412189f1979163b3caed750fb | 149 | py | Python | recommender/contrib/financialmodelingprep/__init__.py | stungkit/stock_trend_analysis | e9d3f2db19a9af93cc8dc55c2394ae88c1b3ee6e | [
"MIT"
] | 7 | 2020-04-16T18:25:15.000Z | 2022-02-20T03:57:31.000Z | recommender/contrib/financialmodelingprep/__init__.py | stungkit/stock_trend_analysis | e9d3f2db19a9af93cc8dc55c2394ae88c1b3ee6e | [
"MIT"
] | 4 | 2020-04-10T05:40:48.000Z | 2022-01-13T01:40:24.000Z | recommender/contrib/financialmodelingprep/__init__.py | stungkit/stock_trend_analysis | e9d3f2db19a9af93cc8dc55c2394ae88c1b3ee6e | [
"MIT"
] | 4 | 2020-11-30T06:43:42.000Z | 2021-03-12T05:42:13.000Z | '''Set up all relevant packages.'''
from . import utils
from . import statements
from . import profile
from . import indicators
from . import prices
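
# A hedged usage sketch of the facade assembled above (the package path
# comes from this repo's layout; the attribute access is illustrative):
#
#     from recommender.contrib import financialmodelingprep as fmp
#     fmp.prices      # submodule re-exported by this __init__
#     fmp.statements  # likewise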
| 18.625 | 34 | 0.751678 | 19 | 149 | 5.894737 | 0.578947 | 0.446429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167785 | 149 | 7 | 35 | 21.285714 | 0.903226 | 0.187919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ff69e1934c86cad0cc97b878ae3bfd4144bdc9d | 2,429 | py | Python | yt/visualization/tests/test_normal_plot_api.py | jisuoqing/yt | e86179e6bd1b75c863ae638bdbc566d9dc241d94 | [
"BSD-3-Clause-Clear"
] | null | null | null | yt/visualization/tests/test_normal_plot_api.py | jisuoqing/yt | e86179e6bd1b75c863ae638bdbc566d9dc241d94 | [
"BSD-3-Clause-Clear"
] | null | null | null | yt/visualization/tests/test_normal_plot_api.py | jisuoqing/yt | e86179e6bd1b75c863ae638bdbc566d9dc241d94 | [
"BSD-3-Clause-Clear"
] | null | null | null | import pytest
from yt._maintenance.deprecation import VisibleDeprecationWarning
from yt.testing import fake_amr_ds
from yt.visualization.plot_window import ProjectionPlot, SlicePlot


@pytest.fixture(scope="module")
def ds():
    return fake_amr_ds(geometry="cartesian")


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_normalplot_all_positional_args(ds, plot_cls):
    plot_cls(ds, "z", ("stream", "Density"))


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_normalplot_normal_kwarg(ds, plot_cls):
    plot_cls(ds, normal="z", fields=("stream", "Density"))


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_normalplot_axis_kwarg(ds, plot_cls):
    with pytest.warns(
        VisibleDeprecationWarning,
        match=(
            "Argument 'axis' is a deprecated alias for 'normal'.\n"
            "Deprecated since yt 4.1.0\n"
            "This feature is planned for removal in yt 4.2.0"
        ),
    ):
        plot_cls(ds, axis="z", fields=("stream", "Density"))


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_error_with_missing_fields_and_normal(ds, plot_cls):
    with pytest.raises(
        TypeError,
        match="missing 2 required positional arguments: 'normal' and 'fields'",
    ):
        plot_cls(ds)


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_error_with_missing_fields_with_axis_kwarg(ds, plot_cls):
    with pytest.warns(
        VisibleDeprecationWarning,
        match=(
            "Argument 'axis' is a deprecated alias for 'normal'.\n"
            "Deprecated since yt 4.1.0\n"
            "This feature is planned for removal in yt 4.2.0"
        ),
    ):
        with pytest.raises(
            TypeError, match="missing required positional argument: 'fields'"
        ):
            plot_cls(ds, axis="z")


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_error_with_missing_fields_with_normal_kwarg(ds, plot_cls):
    with pytest.raises(
        TypeError, match="missing required positional argument: 'fields'"
    ):
        plot_cls(ds, normal="z")


@pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot))
def test_error_with_missing_fields_with_positional(ds, plot_cls):
    with pytest.raises(
        TypeError, match="missing required positional argument: 'fields'"
    ):
        plot_cls(ds, "z")
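

# A hedged sketch (not yt's actual implementation) of the deprecated-alias
# pattern the tests above exercise: accept the old 'axis' keyword, warn,
# and forward it to 'normal'. The helper name below is illustrative only.
import warnings


def _resolve_normal(normal=None, axis=None):
    """Map the hypothetical deprecated 'axis' alias onto 'normal'."""
    if axis is not None:
        warnings.warn(
            "Argument 'axis' is a deprecated alias for 'normal'.",
            DeprecationWarning,
            stacklevel=2,
        )
        if normal is None:
            normal = axis
    if normal is None:
        raise TypeError("missing required positional argument: 'normal'")
    return normal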
| 29.987654 | 79 | 0.687114 | 301 | 2,429 | 5.335548 | 0.215947 | 0.091532 | 0.091532 | 0.108966 | 0.808219 | 0.779577 | 0.754047 | 0.754047 | 0.754047 | 0.725405 | 0 | 0.006646 | 0.19473 | 2,429 | 80 | 80 | 30.3625 | 0.814417 | 0 | 0 | 0.596491 | 0 | 0 | 0.234664 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.140351 | false | 0 | 0.070175 | 0.017544 | 0.22807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b7746166136d6846c59132d8e8062fd6a49ab331 | 12,900 | py | Python | unit_tests/test_tlslite_utils_aesccm.py | t184256/tlslite-ng | cef7d75f79fd746c001b339399257098a00b46be | [
"Unlicense"
] | null | null | null | unit_tests/test_tlslite_utils_aesccm.py | t184256/tlslite-ng | cef7d75f79fd746c001b339399257098a00b46be | [
"Unlicense"
] | null | null | null | unit_tests/test_tlslite_utils_aesccm.py | t184256/tlslite-ng | cef7d75f79fd746c001b339399257098a00b46be | [
"Unlicense"
] | null | null | null | # For compatibility with Python 2.6 we need the unittest2 package,
# which is not available on 3.3 or 3.4.
try:
    import unittest2 as unittest
except ImportError:
    import unittest

from tlslite.utils.rijndael import Rijndael
from tlslite.utils.aesccm import AESCCM
from tlslite.utils.cipherfactory import createAESCCM, createAESCCM_8


class TestAESCCM(unittest.TestCase):
    def test___init__128(self):
        key = bytearray(16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        self.assertIsNotNone(aesCCM)
        self.assertEqual(aesCCM.name, "aes128ccm")

    def test___init__128_8(self):
        key = bytearray(16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        self.assertIsNotNone(aesCCM)
        self.assertEqual(aesCCM.name, "aes128ccm_8")

    def test___init__256(self):
        key = bytearray(32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        self.assertIsNotNone(aesCCM)
        self.assertEqual(aesCCM.name, "aes256ccm")

    def test___init__256_8(self):
        key = bytearray(32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        self.assertIsNotNone(aesCCM)
        self.assertEqual(aesCCM.name, "aes256ccm_8")

    def test___init___with_invalid_key(self):
        key = bytearray(8)
        with self.assertRaises(AssertionError):
            aesCCM = AESCCM(key, "python", Rijndael(bytearray(16), 16).encrypt)

    def test_default_implementation(self):
        key = bytearray(16)
        aesCCM = createAESCCM(key)
        self.assertEqual(aesCCM.implementation, "python")

    def test_default_implementation_small_tag(self):
        key = bytearray(16)
        aesCCM = createAESCCM_8(key)
        self.assertEqual(aesCCM.implementation, "python")

    def test_seal(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        plaintext = bytearray(b'text to encrypt.')
        self.assertEqual(len(plaintext), 16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'%}Q.\x99\xa3\r\xae\xcbMc\xf2\x16,^\xff'
                                   b'\xa0I\x8e\xf9\xc9F>\xbf\xa4\x00Y\x02p'
                                   b'\xe3\xb8\xa2'), encData)

    def test_seal_256(self):
        key = bytearray(b'\x01'*32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        plaintext = bytearray(b'text to encrypt.')
        self.assertEqual(len(plaintext), 16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'IN\x1c\x06\xb8\x0b9SD<\xf8RL'
                                   b'\xb4,=\xd6&d\xae^1\xf8\xbf'
                                   b'\xfa8D\x98\xdd\x14\xb51'), encData)

    def test_seal_small_tag(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        nonce = bytearray(b'\x02'*12)
        plaintext = bytearray(b'text to encrypt.')
        self.assertEqual(len(plaintext), 16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'%}Q.\x99\xa3\r\xae\xcbMc\xf2\x16,^\xff'
                                   b'\x14\xb8-?\x7f\xac\x8bI'), encData)

    def test_seal_256_small_tag(self):
        key = bytearray(b'\x01'*32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        nonce = bytearray(b'\x02'*12)
        plaintext = bytearray(b'text to encrypt.')
        self.assertEqual(len(plaintext), 16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'IN\x1c\x06\xb8\x0b9SD<\xf8RL'
                                   b'\xb4,=\xa2\x91\x84j1*\x0f\xeb'), encData)

    def test_seal_with_invalid_nonce(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*11)
        plaintext = bytearray(b'text to encrypt.')
        self.assertEqual(len(plaintext), 16)
        with self.assertRaises(ValueError) as err:
            aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual("Bad nonce length", str(err.exception))

    def test_open(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(b'%}Q.\x99\xa3\r\xae\xcbMc\xf2\x16,^\xff\xa0I'
                               b'\x8e\xf9\xc9F>\xbf\xa4\x00Y\x02p\xe3\xb8\xa2')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertEqual(plaintext, bytearray(b'text to encrypt.'))

    def test_open_256(self):
        key = bytearray(b'\x01'*32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(b'IN\x1c\x06\xb8\x0b9SD<\xf8RL'
                               b'\xb4,=\xd6&d\xae^1\xf8\xbf'
                               b'\xfa8D\x98\xdd\x14\xb51')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertEqual(plaintext, bytearray(b'text to encrypt.'))

    def test_open_small_tag(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(b'%}Q.\x99\xa3\r\xae\xcbMc\xf2\x16,^\xff\x14'
                               b'\xb8-?\x7f\xac\x8bI')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertEqual(plaintext, bytearray(b'text to encrypt.'))

    def test_open_256_small_tag(self):
        key = bytearray(b'\x01'*32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt, 8)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(b'IN\x1c\x06\xb8\x0b9SD<\xf8RL'
                               b'\xb4,=\xa2\x91\x84j1*\x0f\xeb')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertEqual(plaintext, bytearray(b'text to encrypt.'))

    def test_open_with_incorrect_key(self):
        key = bytearray(b'\x01'*15 + b'\x00')
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(
            b'\'\x81h\x17\xe6Z)\\\xf2\x8emF\xcb\x91\x0eu'
            b'z1:\xf6}\xa7\\@\xba\x11\xd8r\xdf#K\xd4')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertIsNone(plaintext)

    def test_open_with_incorrect_nonce(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*11 + b'\x01')
        ciphertext = bytearray(
            b'\'\x81h\x17\xe6Z)\\\xf2\x8emF\xcb\x91\x0eu'
            b'z1:\xf6}\xa7\\@\xba\x11\xd8r\xdf#K\xd4')
        plaintext = aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertIsNone(plaintext)

    def test_open_with_invalid_nonce(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*11)
        ciphertext = bytearray(
            b'\'\x81h\x17\xe6Z)\\\xf2\x8emF\xcb\x91\x0eu'
            b'z1:\xf6}\xa7\\@\xba\x11\xd8r\xdf#K\xd4')
        with self.assertRaises(ValueError) as err:
            aesCCM.open(nonce, ciphertext, bytearray(0))
        self.assertEqual("Bad nonce length", str(err.exception))

    def test_open_with_invalid_ciphertext(self):
        key = bytearray(b'\x01'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x02'*12)
        ciphertext = bytearray(
            b'\xff'*15)
        self.assertIsNone(aesCCM.open(nonce, ciphertext, bytearray(0)))

    def test_seal_with_test_vector_1(self):
        key = bytearray(b'\x00'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x00'*12)
        plaintext = bytearray(b'')
        self.assertEqual(len(plaintext), 0)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'\xb9\xf6P\xfb<9\xbb\x1b\xee\x0e)\x1d3'
                                   b'\xf6\xae('), encData)

    def test_seal_with_test_vector_2(self):
        key = bytearray(b'\x00'*16)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\x00'*12)
        plaintext = bytearray(b'\x00'*16)
        self.assertEqual(len(plaintext), 16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b'n\xc7_\xb2\xe2\xb4\x87F\x1e\xdd\xcb\xb8'
                                   b'\x97\x11\x92\xbaMO\xa3\xaf\x0b\xf6\xd3E'
                                   b'Aq0o\xfa\xdd\x9a\xfd'), encData)

    def test_seal_with_test_vector_3(self):
        key = bytearray(b'\xfe\xff\xe9\x92\x86\x65\x73\x1c'
                        b'\x6d\x6a\x8f\x94\x67\x30\x83\x08')
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\xca\xfe\xba\xbe\xfa\xce\xdb\xad\xde\xca\xf8\x88')
        plaintext = bytearray(b'\xd9\x31\x32\x25\xf8\x84\x06\xe5'
                              b'\xa5\x59\x09\xc5\xaf\xf5\x26\x9a'
                              b'\x86\xa7\xa9\x53\x15\x34\xf7\xda'
                              b'\x2e\x4c\x30\x3d\x8a\x31\x8a\x72'
                              b'\x1c\x3c\x0c\x95\x95\x68\x09\x53'
                              b'\x2f\xcf\x0e\x24\x49\xa6\xb5\x25'
                              b'\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57'
                              b'\xba\x63\x7b\x39\x1a\xaf\xd2\x55')
        self.assertEqual(len(plaintext), 4*16)
        encData = aesCCM.seal(nonce, plaintext, bytearray(0))
        self.assertEqual(bytearray(b"\x08\x93\xe9K\x91H\x80\x1a\xf0\xf74&"
                                   b"\xab\xb0\x0e<\xa4\x9b\xf0\x9dy\xa2"
                                   b"\x01\'\xa7\xeb\x19&\xfa\x89\x057\x87"
                                   b"\xff\x02\xd0}q\x81;\x88[\x85\xe7\xf9"
                                   b"lN\xed\xf4 \xdb\x12j\x04Q\xce\x13\xbdA"
                                   b"\xba\x01\x8d\x1b\xa7\xfc\xece\x99Dg\xa7"
                                   b"{\x8b&B\xde\x91,\x01."), encData)

    def test_seal_with_test_vector_4(self):
        key = bytearray(b'\xfe\xff\xe9\x92\x86\x65\x73\x1c' +
                        b'\x6d\x6a\x8f\x94\x67\x30\x83\x08')
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(b'\xca\xfe\xba\xbe\xfa\xce\xdb\xad\xde\xca\xf8\x88')
        plaintext = bytearray(b'\xd9\x31\x32\x25\xf8\x84\x06\xe5'
                              b'\xa5\x59\x09\xc5\xaf\xf5\x26\x9a'
                              b'\x86\xa7\xa9\x53\x15\x34\xf7\xda'
                              b'\x2e\x4c\x30\x3d\x8a\x31\x8a\x72'
                              b'\x1c\x3c\x0c\x95\x95\x68\x09\x53'
                              b'\x2f\xcf\x0e\x24\x49\xa6\xb5\x25'
                              b'\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57'
                              b'\xba\x63\x7b\x39')
        data = bytearray(b'\xfe\xed\xfa\xce\xde\xad\xbe\xef'
                         b'\xfe\xed\xfa\xce\xde\xad\xbe\xef'
                         b'\xab\xad\xda\xd2')
        encData = aesCCM.seal(nonce, plaintext, data)
        self.assertEqual(bytearray(b'\x08\x93\xe9K\x91H\x80\x1a\xf0\xf74&\xab'
                                   b'\xb0\x0e<\xa4\x9b\xf0\x9dy\xa2\x01\'\xa7'
                                   b'\xeb\x19&\xfa\x89\x057\x87\xff\x02\xd0}q'
                                   b'\x81;\x88[\x85\xe7\xf9lN\xed\xf4 \xdb'
                                   b'\x12j\x04Q\xce\x13\xbdA\xba\x028\xc3&'
                                   b'\xb4{4\xf7\x8fe\x9eu'
                                   b'\x10\x96\xcd"'), encData)

    def test_seal_with_test_vector_5(self):
        key = bytearray(32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(12)
        plaintext = bytearray(0)
        data = bytearray(0)
        encData = aesCCM.seal(nonce, plaintext, data)
        self.assertEqual(bytearray(b'\xa8\x90&^C\xa2hU\xf2i'
                                   b'\xb9?\xf4\xdd\xde\xf6'), encData)

    def test_seal_with_test_vector_6(self):
        key = bytearray(32)
        aesCCM = AESCCM(key, "python", Rijndael(key, 16).encrypt)
        nonce = bytearray(12)
        plaintext = bytearray(16)
        data = bytearray(0)
        encData = aesCCM.seal(nonce, plaintext, data)
        self.assertEqual(bytearray(b'\xc1\x94@D\xc8\xe7\xaa\x95\xd2\xde\x95'
                                   b'\x13\xc7\xf3\xdd\x8cK\n>^Q\xf1Q\xeb\x0f'
                                   b'\xfa\xe7\xc4=\x01\x0f\xdb'), encData)
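

# A hedged round-trip sketch distilled from the tests above; it relies only
# on the seal()/open() API those tests already exercise, with the same
# key/nonce/message values used in test_seal and test_open.
def _roundtrip_sketch():
    key = bytearray(b'\x01' * 16)
    ccm = AESCCM(key, "python", Rijndael(key, 16).encrypt)
    nonce = bytearray(b'\x02' * 12)
    message = bytearray(b'text to encrypt.')
    sealed = ccm.seal(nonce, message, bytearray(0))
    # open() returns the plaintext on success and None when the
    # authentication tag does not verify.
    assert ccm.open(nonce, sealed, bytearray(0)) == message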
| 36.752137 | 79 | 0.562791 | 1,654 | 12,900 | 4.322249 | 0.1711 | 0.092321 | 0.05819 | 0.070499 | 0.839558 | 0.81438 | 0.79046 | 0.745419 | 0.719261 | 0.719261 | 0 | 0.095644 | 0.29 | 12,900 | 350 | 80 | 36.857143 | 0.6849 | 0.007985 | 0 | 0.594142 | 0 | 0.029289 | 0.214163 | 0.163202 | 0 | 0 | 0 | 0 | 0.167364 | 1 | 0.108787 | false | 0 | 0.025105 | 0 | 0.138075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b77e7dbdc5561e0b3a690261f7f32b4ef320783b | 44 | py | Python | codes/hook_bc/examplar.py | thautwarm/AOP | 32925fceacb43f34e3156b52ccdf9001870fbcbf | [
"MIT"
] | 4 | 2019-12-22T12:16:46.000Z | 2020-05-20T05:30:36.000Z | codes/hook_bc/examplar.py | thautwarm/AOP | 32925fceacb43f34e3156b52ccdf9001870fbcbf | [
"MIT"
] | null | null | null | codes/hook_bc/examplar.py | thautwarm/AOP | 32925fceacb43f34e3156b52ccdf9001870fbcbf | [
"MIT"
] | null | null | null | def f(x):
    return x + 1
a = print(f(1)) | 8.8 | 16 | 0.477273 | 10 | 44 | 2.1 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.318182 | 44 | 5 | 17 | 8.8 | 0.633333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b7889e2a1f8f6808b55fba5389938a51a0e277cb | 27 | py | Python | src/__init__.py | KermitPurple/pycoord | 18db2841104aee00b6a352bd367921af72390321 | [
"MIT"
] | null | null | null | src/__init__.py | KermitPurple/pycoord | 18db2841104aee00b6a352bd367921af72390321 | [
"MIT"
] | null | null | null | src/__init__.py | KermitPurple/pycoord | 18db2841104aee00b6a352bd367921af72390321 | [
"MIT"
] | null | null | null | from .pycoord import Coord
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d150535db22b45b57874a4156a68c6156cc5c1b | 50,029 | py | Python | logging/unit_tests/test__gax.py | ammayathrajeshnair/googlecloudpython | 22ded3be30dda0206e23a7846b5883a2caeeeddc | [
"Apache-2.0"
] | null | null | null | logging/unit_tests/test__gax.py | ammayathrajeshnair/googlecloudpython | 22ded3be30dda0206e23a7846b5883a2caeeeddc | [
"Apache-2.0"
] | null | null | null | logging/unit_tests/test__gax.py | ammayathrajeshnair/googlecloudpython | 22ded3be30dda0206e23a7846b5883a2caeeeddc | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import mock
try:
    # pylint: disable=unused-import
    import google.cloud.logging._gax
    # pylint: enable=unused-import
except ImportError:  # pragma: NO COVER
    _HAVE_GAX = False
else:
    _HAVE_GAX = True

from google.cloud._testing import _GAXBaseAPI


def _make_credentials():
    # pylint: disable=redefined-outer-name
    import google.auth.credentials
    # pylint: enable=redefined-outer-name
    return mock.Mock(spec=google.auth.credentials.Credentials)


class _Base(object):
    PROJECT = 'PROJECT'
    PROJECT_PATH = 'projects/%s' % (PROJECT,)
    FILTER = 'logName:syslog AND severity>=ERROR'

    def _make_one(self, *args, **kw):
        return self._get_target_class()(*args, **kw)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_LoggingAPI(_Base, unittest.TestCase):
    LOG_NAME = 'log_name'
    LOG_PATH = 'projects/%s/logs/%s' % (_Base.PROJECT, LOG_NAME)

    @staticmethod
    def _get_target_class():
        from google.cloud.logging._gax import _LoggingAPI
        return _LoggingAPI

    def test_ctor(self):
        gax_api = _GAXLoggingAPI()
        client = object()
        api = self._make_one(gax_api, client)
        self.assertIs(api._gax_api, gax_api)
        self.assertIs(api._client, client)

    def test_list_entries_no_paging(self):
        import datetime
        from google.api.monitored_resource_pb2 import MonitoredResource
        from google.gax import INITIAL_PAGE
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        from google.cloud._helpers import _datetime_to_pb_timestamp
        from google.cloud._helpers import UTC
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging import DESCENDING
        from google.cloud.logging.client import Client
        from google.cloud.logging.entries import TextEntry
        from google.cloud.logging.logger import Logger
        TOKEN = 'TOKEN'
        TEXT = 'TEXT'
        resource_pb = MonitoredResource(type='global')
        timestamp = datetime.datetime.utcnow().replace(tzinfo=UTC)
        timestamp_pb = _datetime_to_pb_timestamp(timestamp)
        entry_pb = LogEntry(log_name=self.LOG_PATH,
                            resource=resource_pb,
                            timestamp=timestamp_pb,
                            text_payload=TEXT)
        response = _GAXPageIterator([entry_pb], page_token=TOKEN)
        gax_api = _GAXLoggingAPI(_list_log_entries_response=response)
        client = Client(project=self.PROJECT, credentials=_make_credentials(),
                        use_gax=True)
        api = self._make_one(gax_api, client)
        iterator = api.list_entries(
            [self.PROJECT], self.FILTER, DESCENDING)
        entries = list(iterator)
        next_token = iterator.next_page_token
        # First check the token.
        self.assertEqual(next_token, TOKEN)
        # Then check the entries returned.
        self.assertEqual(len(entries), 1)
        entry = entries[0]
        self.assertIsInstance(entry, TextEntry)
        self.assertEqual(entry.payload, TEXT)
        self.assertIsInstance(entry.logger, Logger)
        self.assertEqual(entry.logger.name, self.LOG_NAME)
        self.assertIsNone(entry.insert_id)
        self.assertEqual(entry.timestamp, timestamp)
        self.assertIsNone(entry.labels)
        self.assertIsNone(entry.severity)
        self.assertIsNone(entry.http_request)
        resource_names, projects, filter_, order_by, page_size, options = (
            gax_api._list_log_entries_called_with)
        self.assertEqual(resource_names, [])
        self.assertEqual(projects, [self.PROJECT])
        self.assertEqual(filter_, self.FILTER)
        self.assertEqual(order_by, DESCENDING)
        self.assertEqual(page_size, 0)
        self.assertIs(options.page_token, INITIAL_PAGE)

    def _list_entries_with_paging_helper(self, payload, struct_pb):
        import datetime
        from google.api.monitored_resource_pb2 import MonitoredResource
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        from google.cloud._helpers import _datetime_to_pb_timestamp
        from google.cloud._helpers import UTC
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.client import Client
        from google.cloud.logging.entries import StructEntry
        from google.cloud.logging.logger import Logger
        SIZE = 23
        TOKEN = 'TOKEN'
        NEW_TOKEN = 'NEW_TOKEN'
        resource_pb = MonitoredResource(type='global')
        timestamp = datetime.datetime.utcnow().replace(tzinfo=UTC)
        timestamp_pb = _datetime_to_pb_timestamp(timestamp)
        entry_pb = LogEntry(log_name=self.LOG_PATH,
                            resource=resource_pb,
                            timestamp=timestamp_pb,
                            json_payload=struct_pb)
        response = _GAXPageIterator([entry_pb], page_token=NEW_TOKEN)
        gax_api = _GAXLoggingAPI(_list_log_entries_response=response)
        client = Client(project=self.PROJECT, credentials=_make_credentials(),
                        use_gax=True)
        api = self._make_one(gax_api, client)
        iterator = api.list_entries(
            [self.PROJECT], page_size=SIZE, page_token=TOKEN)
        entries = list(iterator)
        next_token = iterator.next_page_token
        # First check the token.
        self.assertEqual(next_token, NEW_TOKEN)
        self.assertEqual(len(entries), 1)
        entry = entries[0]
        self.assertIsInstance(entry, StructEntry)
        self.assertEqual(entry.payload, payload)
        self.assertIsInstance(entry.logger, Logger)
        self.assertEqual(entry.logger.name, self.LOG_NAME)
        self.assertIsNone(entry.insert_id)
        self.assertEqual(entry.timestamp, timestamp)
        self.assertIsNone(entry.labels)
        self.assertIsNone(entry.severity)
        self.assertIsNone(entry.http_request)
        resource_names, projects, filter_, order_by, page_size, options = (
            gax_api._list_log_entries_called_with)
        self.assertEqual(resource_names, [])
        self.assertEqual(projects, [self.PROJECT])
        self.assertEqual(filter_, '')
        self.assertEqual(order_by, '')
        self.assertEqual(page_size, SIZE)
        self.assertEqual(options.page_token, TOKEN)

    def test_list_entries_with_paging(self):
        from google.protobuf.struct_pb2 import Struct
        from google.protobuf.struct_pb2 import Value
        payload = {'message': 'MESSAGE', 'weather': 'sunny'}
        struct_pb = Struct(fields={
            key: Value(string_value=value) for key, value in payload.items()
        })
        self._list_entries_with_paging_helper(payload, struct_pb)

    def test_list_entries_with_paging_nested_payload(self):
        from google.protobuf.struct_pb2 import Struct
        from google.protobuf.struct_pb2 import Value
        payload = {}
        struct_fields = {}
        # Add a simple key.
        key = 'message'
        payload[key] = 'MESSAGE'
        struct_fields[key] = Value(string_value=payload[key])
        # Add a nested key.
        key = 'weather'
        sub_value = {}
        sub_fields = {}
        sub_key = 'temperature'
        sub_value[sub_key] = 75
        sub_fields[sub_key] = Value(number_value=sub_value[sub_key])
        sub_key = 'precipitation'
        sub_value[sub_key] = False
        sub_fields[sub_key] = Value(bool_value=sub_value[sub_key])
        # Update the parent payload.
        payload[key] = sub_value
        struct_fields[key] = Value(struct_value=Struct(fields=sub_fields))
        # Make the struct_pb for our dict.
        struct_pb = Struct(fields=struct_fields)
        self._list_entries_with_paging_helper(payload, struct_pb)

    def _make_log_entry_with_extras(self, labels, iid, type_url, now):
        from google.api.monitored_resource_pb2 import MonitoredResource
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        from google.cloud.grpc.logging.v2.log_entry_pb2 import (
            LogEntryOperation)
        from google.logging.type.http_request_pb2 import HttpRequest
        from google.logging.type.log_severity_pb2 import WARNING
        from google.protobuf.any_pb2 import Any
        from google.cloud._helpers import _datetime_to_pb_timestamp
        resource_pb = MonitoredResource(
            type='global', labels=labels)
        proto_payload = Any(type_url=type_url)
        timestamp_pb = _datetime_to_pb_timestamp(now)
        request_pb = HttpRequest(
            request_url='http://example.com/requested',
            request_method='GET',
            status=200,
            referer='http://example.com/referer',
            user_agent='AGENT',
            cache_hit=True,
            request_size=256,
            response_size=1024,
            remote_ip='1.2.3.4',
        )
        operation_pb = LogEntryOperation(
            producer='PRODUCER',
            first=True,
            last=True,
            id='OPID',
        )
        entry_pb = LogEntry(log_name=self.LOG_PATH,
                            resource=resource_pb,
                            proto_payload=proto_payload,
                            timestamp=timestamp_pb,
                            severity=WARNING,
                            insert_id=iid,
                            http_request=request_pb,
                            labels=labels,
                            operation=operation_pb)
        return entry_pb

    def test_list_entries_with_extra_properties(self):
        import datetime
        # Import the wrappers to register the type URL for BoolValue
        # pylint: disable=unused-variable
        from google.protobuf import wrappers_pb2
        # pylint: enable=unused-variable
        from google.cloud._helpers import UTC
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.client import Client
        from google.cloud.logging.entries import ProtobufEntry
        from google.cloud.logging.logger import Logger
        NOW = datetime.datetime.utcnow().replace(tzinfo=UTC)
        SIZE = 23
        TOKEN = 'TOKEN'
        NEW_TOKEN = 'NEW_TOKEN'
        SEVERITY = 'WARNING'
        LABELS = {
            'foo': 'bar',
        }
        IID = 'IID'
        bool_type_url = 'type.googleapis.com/google.protobuf.BoolValue'
        entry_pb = self._make_log_entry_with_extras(
            LABELS, IID, bool_type_url, NOW)
        response = _GAXPageIterator([entry_pb], page_token=NEW_TOKEN)
        gax_api = _GAXLoggingAPI(_list_log_entries_response=response)
        client = Client(project=self.PROJECT, credentials=_make_credentials(),
                        use_gax=True)
        api = self._make_one(gax_api, client)
        iterator = api.list_entries(
            [self.PROJECT], page_size=SIZE, page_token=TOKEN)
        entries = list(iterator)
        next_token = iterator.next_page_token
        # First check the token.
        self.assertEqual(next_token, NEW_TOKEN)
        # Then check the entries returned.
        self.assertEqual(len(entries), 1)
        entry = entries[0]
        self.assertIsInstance(entry, ProtobufEntry)
        self.assertEqual(entry.payload, {
            '@type': bool_type_url,
            'value': False,
        })
        self.assertIsInstance(entry.logger, Logger)
        self.assertEqual(entry.logger.name, self.LOG_NAME)
        self.assertEqual(entry.insert_id, IID)
        self.assertEqual(entry.timestamp, NOW)
        self.assertEqual(entry.labels, {'foo': 'bar'})
        self.assertEqual(entry.severity, SEVERITY)
        self.assertEqual(entry.http_request, {
            'requestMethod': entry_pb.http_request.request_method,
            'requestUrl': entry_pb.http_request.request_url,
            'status': entry_pb.http_request.status,
            'requestSize': str(entry_pb.http_request.request_size),
            'responseSize': str(entry_pb.http_request.response_size),
            'referer': entry_pb.http_request.referer,
            'userAgent': entry_pb.http_request.user_agent,
            'remoteIp': entry_pb.http_request.remote_ip,
            'cacheHit': entry_pb.http_request.cache_hit,
        })
        resource_names, projects, filter_, order_by, page_size, options = (
            gax_api._list_log_entries_called_with)
        self.assertEqual(resource_names, [])
        self.assertEqual(projects, [self.PROJECT])
        self.assertEqual(filter_, '')
        self.assertEqual(order_by, '')
        self.assertEqual(page_size, SIZE)
        self.assertEqual(options.page_token, TOKEN)

    def test_write_entries_single(self):
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        TEXT = 'TEXT'
        ENTRY = {
            'logName': self.LOG_PATH,
            'resource': {'type': 'global'},
            'textPayload': TEXT,
        }
        gax_api = _GAXLoggingAPI()
        api = self._make_one(gax_api, None)
        api.write_entries([ENTRY])
        entries, log_name, resource, labels, partial_success, options = (
            gax_api._write_log_entries_called_with)
        self.assertEqual(len(entries), 1)
        entry = entries[0]
        self.assertIsInstance(entry, LogEntry)
        self.assertEqual(entry.log_name, self.LOG_PATH)
        self.assertEqual(entry.resource.type, 'global')
        self.assertEqual(entry.labels, {})
        self.assertEqual(entry.text_payload, TEXT)
        self.assertIsNone(log_name)
        self.assertIsNone(resource)
        self.assertIsNone(labels)
        self.assertEqual(partial_success, False)
        self.assertIsNone(options)

    def test_write_entries_w_extra_properties(self):
        # pylint: disable=too-many-statements
        from datetime import datetime
        from google.logging.type.log_severity_pb2 import WARNING
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        from google.cloud._helpers import UTC, _pb_timestamp_to_datetime
        NOW = datetime.utcnow().replace(tzinfo=UTC)
        TEXT = 'TEXT'
        SEVERITY = 'WARNING'
        LABELS = {
            'foo': 'bar',
        }
        IID = 'IID'
        REQUEST_METHOD = 'GET'
        REQUEST_URL = 'http://example.com/requested'
        STATUS = 200
        REQUEST_SIZE = 256
        RESPONSE_SIZE = 1024
        REFERRER_URL = 'http://example.com/referer'
        USER_AGENT = 'Agent/1.0'
        REMOTE_IP = '1.2.3.4'
        REQUEST = {
            'requestMethod': REQUEST_METHOD,
            'requestUrl': REQUEST_URL,
            'status': STATUS,
            'requestSize': REQUEST_SIZE,
            'responseSize': RESPONSE_SIZE,
            'referer': REFERRER_URL,
            'userAgent': USER_AGENT,
            'remoteIp': REMOTE_IP,
            'cacheHit': False,
        }
        PRODUCER = 'PRODUCER'
        OPID = 'OPID'
        OPERATION = {
            'producer': PRODUCER,
            'id': OPID,
            'first': False,
            'last': True,
        }
        ENTRY = {
            'logName': self.LOG_PATH,
            'resource': {'type': 'global'},
            'textPayload': TEXT,
            'severity': SEVERITY,
            'labels': LABELS,
            'insertId': IID,
            'timestamp': NOW,
            'httpRequest': REQUEST,
            'operation': OPERATION,
        }
        gax_api = _GAXLoggingAPI()
        api = self._make_one(gax_api, None)
        api.write_entries([ENTRY])
        entries, log_name, resource, labels, partial_success, options = (
            gax_api._write_log_entries_called_with)
        self.assertEqual(len(entries), 1)
        entry = entries[0]
        self.assertIsInstance(entry, LogEntry)
        self.assertEqual(entry.log_name, self.LOG_PATH)
        self.assertEqual(entry.resource.type, 'global')
        self.assertEqual(entry.text_payload, TEXT)
        self.assertEqual(entry.severity, WARNING)
        self.assertEqual(entry.labels, LABELS)
        self.assertEqual(entry.insert_id, IID)
        stamp = _pb_timestamp_to_datetime(entry.timestamp)
        self.assertEqual(stamp, NOW)
        request = entry.http_request
        self.assertEqual(request.request_method, REQUEST_METHOD)
        self.assertEqual(request.request_url, REQUEST_URL)
        self.assertEqual(request.status, STATUS)
        self.assertEqual(request.request_size, REQUEST_SIZE)
        self.assertEqual(request.response_size, RESPONSE_SIZE)
        self.assertEqual(request.referer, REFERRER_URL)
        self.assertEqual(request.user_agent, USER_AGENT)
        self.assertEqual(request.remote_ip, REMOTE_IP)
        self.assertEqual(request.cache_hit, False)
        operation = entry.operation
        self.assertEqual(operation.producer, PRODUCER)
        self.assertEqual(operation.id, OPID)
        self.assertFalse(operation.first)
        self.assertTrue(operation.last)
        self.assertIsNone(log_name)
        self.assertIsNone(resource)
        self.assertIsNone(labels)
        self.assertEqual(partial_success, False)
        self.assertIsNone(options)
        # pylint: enable=too-many-statements

    def _write_entries_multiple_helper(self, json_payload, json_struct_pb):
        # pylint: disable=too-many-statements
        import datetime
        from google.logging.type.log_severity_pb2 import WARNING
        from google.cloud.grpc.logging.v2.log_entry_pb2 import LogEntry
        from google.protobuf.any_pb2 import Any
        from google.cloud._helpers import _datetime_to_rfc3339
        from google.cloud._helpers import UTC
        TEXT = 'TEXT'
        NOW = datetime.datetime.utcnow().replace(tzinfo=UTC)
        TIMESTAMP_TYPE_URL = 'type.googleapis.com/google.protobuf.Timestamp'
        PROTO = {
            '@type': TIMESTAMP_TYPE_URL,
            'value': _datetime_to_rfc3339(NOW),
        }
        PRODUCER = 'PRODUCER'
        OPID = 'OPID'
        URL = 'http://example.com/'
        ENTRIES = [
            {'textPayload': TEXT,
             'severity': WARNING},
            {'jsonPayload': json_payload,
             'operation': {'producer': PRODUCER, 'id': OPID}},
            {'protoPayload': PROTO,
             'httpRequest': {'requestUrl': URL}},
        ]
        RESOURCE = {
            'type': 'global',
        }
        LABELS = {
            'foo': 'bar',
        }
        gax_api = _GAXLoggingAPI()
        api = self._make_one(gax_api, None)
        api.write_entries(ENTRIES, self.LOG_PATH, RESOURCE, LABELS)
        entries, log_name, resource, labels, partial_success, options = (
            gax_api._write_log_entries_called_with)
        self.assertEqual(len(entries), len(ENTRIES))
        entry = entries[0]
        self.assertIsInstance(entry, LogEntry)
        self.assertEqual(entry.log_name, '')
        self.assertEqual(entry.resource.type, '')
        self.assertEqual(entry.labels, {})
        self.assertEqual(entry.text_payload, TEXT)
        self.assertEqual(entry.severity, WARNING)
        entry = entries[1]
        self.assertIsInstance(entry, LogEntry)
        self.assertEqual(entry.log_name, '')
        self.assertEqual(entry.resource.type, '')
        self.assertEqual(entry.labels, {})
        self.assertEqual(entry.json_payload, json_struct_pb)
        operation = entry.operation
        self.assertEqual(operation.producer, PRODUCER)
        self.assertEqual(operation.id, OPID)
        entry = entries[2]
        self.assertIsInstance(entry, LogEntry)
        self.assertEqual(entry.log_name, '')
        self.assertEqual(entry.resource.type, '')
        self.assertEqual(entry.labels, {})
        proto = entry.proto_payload
        self.assertIsInstance(proto, Any)
        self.assertEqual(proto.type_url, TIMESTAMP_TYPE_URL)
        request = entry.http_request
        self.assertEqual(request.request_url, URL)
        self.assertEqual(log_name, self.LOG_PATH)
        self.assertEqual(resource, RESOURCE)
        self.assertEqual(labels, LABELS)
        self.assertEqual(partial_success, False)
        self.assertIsNone(options)
        # pylint: enable=too-many-statements

    def test_write_entries_multiple(self):
        from google.protobuf.struct_pb2 import Struct
        from google.protobuf.struct_pb2 import Value
        json_payload = {'payload': 'PAYLOAD', 'type': 'json'}
        json_struct_pb = Struct(fields={
            key: Value(string_value=value)
            for key, value in json_payload.items()
        })
        self._write_entries_multiple_helper(json_payload, json_struct_pb)

    def test_write_entries_multiple_nested_payload(self):
        from google.protobuf.struct_pb2 import Struct
        from google.protobuf.struct_pb2 import Value
        json_payload = {}
        struct_fields = {}
        # Add a simple key.
        key = 'hello'
        json_payload[key] = 'me you looking for'
        struct_fields[key] = Value(string_value=json_payload[key])
        # Add a nested key.
        key = 'everything'
        sub_value = {}
        sub_fields = {}
        sub_key = 'answer'
        sub_value[sub_key] = 42
        sub_fields[sub_key] = Value(number_value=sub_value[sub_key])
        sub_key = 'really?'
        sub_value[sub_key] = False
        sub_fields[sub_key] = Value(bool_value=sub_value[sub_key])
        # Update the parent payload.
        json_payload[key] = sub_value
        struct_fields[key] = Value(struct_value=Struct(fields=sub_fields))
        # Make the struct_pb for our dict.
        json_struct_pb = Struct(fields=struct_fields)
        self._write_entries_multiple_helper(json_payload, json_struct_pb)

    def test_logger_delete(self):
        gax_api = _GAXLoggingAPI()
        api = self._make_one(gax_api, None)
        api.logger_delete(self.PROJECT, self.LOG_NAME)
        log_name, options = gax_api._delete_log_called_with
        self.assertEqual(log_name, self.LOG_PATH)
        self.assertIsNone(options)

    def test_logger_delete_not_found(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXLoggingAPI(_delete_not_found=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.logger_delete(self.PROJECT, self.LOG_NAME)
        log_name, options = gax_api._delete_log_called_with
        self.assertEqual(log_name, self.LOG_PATH)
        self.assertIsNone(options)

    def test_logger_delete_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXLoggingAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.logger_delete(self.PROJECT, self.LOG_NAME)
        log_name, options = gax_api._delete_log_called_with
        self.assertEqual(log_name, self.LOG_PATH)
        self.assertIsNone(options)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_SinksAPI(_Base, unittest.TestCase):
    SINK_NAME = 'sink_name'
    SINK_PATH = 'projects/%s/sinks/%s' % (_Base.PROJECT, SINK_NAME)
    DESTINATION_URI = 'faux.googleapis.com/destination'

    @staticmethod
    def _get_target_class():
        from google.cloud.logging._gax import _SinksAPI
        return _SinksAPI

    def test_ctor(self):
        gax_api = _GAXSinksAPI()
        client = object()
        api = self._make_one(gax_api, client)
        self.assertIs(api._gax_api, gax_api)
        self.assertIs(api._client, client)

    def test_list_sinks_no_paging(self):
        import six
        from google.gax import INITIAL_PAGE
        from google.cloud.grpc.logging.v2.logging_config_pb2 import LogSink
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.sink import Sink
        TOKEN = 'TOKEN'
        sink_pb = LogSink(name=self.SINK_PATH,
                          destination=self.DESTINATION_URI,
                          filter=self.FILTER)
        response = _GAXPageIterator([sink_pb], page_token=TOKEN)
        gax_api = _GAXSinksAPI(_list_sinks_response=response)
        client = object()
        api = self._make_one(gax_api, client)
        iterator = api.list_sinks(self.PROJECT)
        page = six.next(iterator.pages)
        sinks = list(page)
        token = iterator.next_page_token
        # First check the token.
        self.assertEqual(token, TOKEN)
        # Then check the sinks returned.
        self.assertEqual(len(sinks), 1)
        sink = sinks[0]
        self.assertIsInstance(sink, Sink)
        self.assertEqual(sink.name, self.SINK_PATH)
        self.assertEqual(sink.filter_, self.FILTER)
        self.assertEqual(sink.destination, self.DESTINATION_URI)
        self.assertIs(sink.client, client)
        project, page_size, options = gax_api._list_sinks_called_with
        self.assertEqual(project, self.PROJECT_PATH)
        self.assertEqual(page_size, 0)
        self.assertEqual(options.page_token, INITIAL_PAGE)

    def test_list_sinks_w_paging(self):
        from google.cloud.grpc.logging.v2.logging_config_pb2 import LogSink
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.sink import Sink
        TOKEN = 'TOKEN'
        PAGE_SIZE = 42
        sink_pb = LogSink(name=self.SINK_PATH,
                          destination=self.DESTINATION_URI,
                          filter=self.FILTER)
        response = _GAXPageIterator([sink_pb])
        gax_api = _GAXSinksAPI(_list_sinks_response=response)
        client = object()
        api = self._make_one(gax_api, client)
        iterator = api.list_sinks(
            self.PROJECT, page_size=PAGE_SIZE, page_token=TOKEN)
        sinks = list(iterator)
        token = iterator.next_page_token
        # First check the token.
        self.assertIsNone(token)
        # Then check the sinks returned.
        self.assertEqual(len(sinks), 1)
        sink = sinks[0]
        self.assertIsInstance(sink, Sink)
        self.assertEqual(sink.name, self.SINK_PATH)
        self.assertEqual(sink.filter_, self.FILTER)
        self.assertEqual(sink.destination, self.DESTINATION_URI)
        self.assertIs(sink.client, client)
        project, page_size, options = gax_api._list_sinks_called_with
        self.assertEqual(project, self.PROJECT_PATH)
        self.assertEqual(page_size, PAGE_SIZE)
        self.assertEqual(options.page_token, TOKEN)

    def test_sink_create_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXSinksAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.sink_create(
                self.PROJECT, self.SINK_NAME, self.FILTER,
                self.DESTINATION_URI)

    def test_sink_create_conflict(self):
        from google.cloud.exceptions import Conflict
        gax_api = _GAXSinksAPI(_create_sink_conflict=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(Conflict):
            api.sink_create(
                self.PROJECT, self.SINK_NAME, self.FILTER,
                self.DESTINATION_URI)

    def test_sink_create_ok(self):
        from google.cloud.grpc.logging.v2.logging_config_pb2 import LogSink
        gax_api = _GAXSinksAPI()
        api = self._make_one(gax_api, None)
        api.sink_create(
            self.PROJECT, self.SINK_NAME, self.FILTER, self.DESTINATION_URI)
        parent, sink, options = (
            gax_api._create_sink_called_with)
        self.assertEqual(parent, self.PROJECT_PATH)
        self.assertIsInstance(sink, LogSink)
        self.assertEqual(sink.name, self.SINK_NAME)
        self.assertEqual(sink.filter, self.FILTER)
        self.assertEqual(sink.destination, self.DESTINATION_URI)
        self.assertIsNone(options)

    def test_sink_get_error(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXSinksAPI()
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.sink_get(self.PROJECT, self.SINK_NAME)

    def test_sink_get_miss(self):
        from google.gax.errors import GaxError
        gax_api = _GAXSinksAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.sink_get(self.PROJECT, self.SINK_NAME)

    def test_sink_get_hit(self):
        from google.cloud.grpc.logging.v2.logging_config_pb2 import LogSink
        RESPONSE = {
            'name': self.SINK_PATH,
            'filter': self.FILTER,
            'destination': self.DESTINATION_URI,
        }
        sink_pb = LogSink(name=self.SINK_PATH,
                          destination=self.DESTINATION_URI,
                          filter=self.FILTER)
        gax_api = _GAXSinksAPI(_get_sink_response=sink_pb)
        api = self._make_one(gax_api, None)
        response = api.sink_get(self.PROJECT, self.SINK_NAME)
        self.assertEqual(response, RESPONSE)
        sink_name, options = gax_api._get_sink_called_with
        self.assertEqual(sink_name, self.SINK_PATH)
        self.assertIsNone(options)

    def test_sink_update_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXSinksAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.sink_update(
                self.PROJECT, self.SINK_NAME, self.FILTER,
                self.DESTINATION_URI)

    def test_sink_update_miss(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXSinksAPI()
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.sink_update(
                self.PROJECT, self.SINK_NAME, self.FILTER,
                self.DESTINATION_URI)

    def test_sink_update_hit(self):
        from google.cloud.grpc.logging.v2.logging_config_pb2 import LogSink
        response = LogSink(name=self.SINK_NAME,
                           destination=self.DESTINATION_URI,
                           filter=self.FILTER)
        gax_api = _GAXSinksAPI(_update_sink_response=response)
        api = self._make_one(gax_api, None)
        api.sink_update(
            self.PROJECT, self.SINK_NAME, self.FILTER, self.DESTINATION_URI)
        sink_name, sink, options = (
            gax_api._update_sink_called_with)
        self.assertEqual(sink_name, self.SINK_PATH)
        self.assertIsInstance(sink, LogSink)
        self.assertEqual(sink.name, self.SINK_PATH)
        self.assertEqual(sink.filter, self.FILTER)
        self.assertEqual(sink.destination, self.DESTINATION_URI)
        self.assertIsNone(options)

    def test_sink_delete_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXSinksAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.sink_delete(self.PROJECT, self.SINK_NAME)

    def test_sink_delete_miss(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXSinksAPI(_sink_not_found=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.sink_delete(self.PROJECT, self.SINK_NAME)

    def test_sink_delete_hit(self):
        gax_api = _GAXSinksAPI()
        api = self._make_one(gax_api, None)
        api.sink_delete(self.PROJECT, self.SINK_NAME)
        sink_name, options = gax_api._delete_sink_called_with
        self.assertEqual(sink_name, self.SINK_PATH)
        self.assertIsNone(options)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_MetricsAPI(_Base, unittest.TestCase):
    METRIC_NAME = 'metric_name'
    METRIC_PATH = 'projects/%s/metrics/%s' % (_Base.PROJECT, METRIC_NAME)
    DESCRIPTION = 'Description'

    @staticmethod
    def _get_target_class():
        from google.cloud.logging._gax import _MetricsAPI
        return _MetricsAPI

    def test_ctor(self):
        gax_api = _GAXMetricsAPI()
        api = self._make_one(gax_api, None)
        self.assertIs(api._gax_api, gax_api)

    def test_list_metrics_no_paging(self):
        import six
        from google.gax import INITIAL_PAGE
        from google.cloud.grpc.logging.v2.logging_metrics_pb2 import LogMetric
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.metric import Metric
        TOKEN = 'TOKEN'
        metric_pb = LogMetric(name=self.METRIC_PATH,
                              description=self.DESCRIPTION,
                              filter=self.FILTER)
        response = _GAXPageIterator([metric_pb], page_token=TOKEN)
        gax_api = _GAXMetricsAPI(_list_log_metrics_response=response)
        client = object()
        api = self._make_one(gax_api, client)
        iterator = api.list_metrics(self.PROJECT)
        page = six.next(iterator.pages)
        metrics = list(page)
        token = iterator.next_page_token
        # First check the token.
        self.assertEqual(token, TOKEN)
        # Then check the metrics returned.
        self.assertEqual(len(metrics), 1)
        metric = metrics[0]
        self.assertIsInstance(metric, Metric)
        self.assertEqual(metric.name, self.METRIC_PATH)
        self.assertEqual(metric.filter_, self.FILTER)
        self.assertEqual(metric.description, self.DESCRIPTION)
        self.assertIs(metric.client, client)
        project, page_size, options = gax_api._list_log_metrics_called_with
        self.assertEqual(project, self.PROJECT_PATH)
        self.assertEqual(page_size, 0)
        self.assertEqual(options.page_token, INITIAL_PAGE)

    def test_list_metrics_w_paging(self):
        from google.cloud.grpc.logging.v2.logging_metrics_pb2 import LogMetric
        from google.cloud._testing import _GAXPageIterator
        from google.cloud.logging.metric import Metric
        TOKEN = 'TOKEN'
        PAGE_SIZE = 42
        metric_pb = LogMetric(name=self.METRIC_PATH,
                              description=self.DESCRIPTION,
                              filter=self.FILTER)
        response = _GAXPageIterator([metric_pb])
        gax_api = _GAXMetricsAPI(_list_log_metrics_response=response)
        client = object()
        api = self._make_one(gax_api, client)
        iterator = api.list_metrics(
            self.PROJECT, page_size=PAGE_SIZE, page_token=TOKEN)
        metrics = list(iterator)
        token = iterator.next_page_token
        # First check the token.
        self.assertIsNone(token)
        # Then check the metrics returned.
        self.assertEqual(len(metrics), 1)
        metric = metrics[0]
        self.assertIsInstance(metric, Metric)
        self.assertEqual(metric.name, self.METRIC_PATH)
        self.assertEqual(metric.filter_, self.FILTER)
        self.assertEqual(metric.description, self.DESCRIPTION)
        self.assertIs(metric.client, client)
        project, page_size, options = gax_api._list_log_metrics_called_with
        self.assertEqual(project, self.PROJECT_PATH)
        self.assertEqual(page_size, PAGE_SIZE)
        self.assertEqual(options.page_token, TOKEN)

    def test_metric_create_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXMetricsAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.metric_create(
                self.PROJECT, self.METRIC_NAME, self.FILTER,
                self.DESCRIPTION)

    def test_metric_create_conflict(self):
        from google.cloud.exceptions import Conflict
        gax_api = _GAXMetricsAPI(_create_log_metric_conflict=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(Conflict):
            api.metric_create(
                self.PROJECT, self.METRIC_NAME, self.FILTER,
                self.DESCRIPTION)

    def test_metric_create_ok(self):
        from google.cloud.grpc.logging.v2.logging_metrics_pb2 import LogMetric
        gax_api = _GAXMetricsAPI()
        api = self._make_one(gax_api, None)
        api.metric_create(
            self.PROJECT, self.METRIC_NAME, self.FILTER, self.DESCRIPTION)
        parent, metric, options = (
            gax_api._create_log_metric_called_with)
        self.assertEqual(parent, self.PROJECT_PATH)
        self.assertIsInstance(metric, LogMetric)
        self.assertEqual(metric.name, self.METRIC_NAME)
        self.assertEqual(metric.filter, self.FILTER)
        self.assertEqual(metric.description, self.DESCRIPTION)
        self.assertIsNone(options)

    def test_metric_get_error(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXMetricsAPI()
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.metric_get(self.PROJECT, self.METRIC_NAME)

    def test_metric_get_miss(self):
        from google.gax.errors import GaxError
        gax_api = _GAXMetricsAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.metric_get(self.PROJECT, self.METRIC_NAME)

    def test_metric_get_hit(self):
        from google.cloud.grpc.logging.v2.logging_metrics_pb2 import LogMetric
        RESPONSE = {
            'name': self.METRIC_PATH,
            'filter': self.FILTER,
            'description': self.DESCRIPTION,
        }
        metric_pb = LogMetric(name=self.METRIC_PATH,
                              description=self.DESCRIPTION,
                              filter=self.FILTER)
        gax_api = _GAXMetricsAPI(_get_log_metric_response=metric_pb)
        api = self._make_one(gax_api, None)
        response = api.metric_get(self.PROJECT, self.METRIC_NAME)
        self.assertEqual(response, RESPONSE)
        metric_name, options = gax_api._get_log_metric_called_with
        self.assertEqual(metric_name, self.METRIC_PATH)
        self.assertIsNone(options)

    def test_metric_update_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXMetricsAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.metric_update(
                self.PROJECT, self.METRIC_NAME, self.FILTER,
                self.DESCRIPTION)

    def test_metric_update_miss(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXMetricsAPI()
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.metric_update(
                self.PROJECT, self.METRIC_NAME, self.FILTER,
                self.DESCRIPTION)

    def test_metric_update_hit(self):
        from google.cloud.grpc.logging.v2.logging_metrics_pb2 import LogMetric
        response = LogMetric(name=self.METRIC_NAME,
                             description=self.DESCRIPTION,
                             filter=self.FILTER)
        gax_api = _GAXMetricsAPI(_update_log_metric_response=response)
        api = self._make_one(gax_api, None)
        api.metric_update(
            self.PROJECT, self.METRIC_NAME, self.FILTER, self.DESCRIPTION)
        metric_name, metric, options = (
            gax_api._update_log_metric_called_with)
        self.assertEqual(metric_name, self.METRIC_PATH)
        self.assertIsInstance(metric, LogMetric)
        self.assertEqual(metric.name, self.METRIC_PATH)
        self.assertEqual(metric.filter, self.FILTER)
        self.assertEqual(metric.description, self.DESCRIPTION)
        self.assertIsNone(options)

    def test_metric_delete_error(self):
        from google.gax.errors import GaxError
        gax_api = _GAXMetricsAPI(_random_gax_error=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(GaxError):
            api.metric_delete(self.PROJECT, self.METRIC_NAME)

    def test_metric_delete_miss(self):
        from google.cloud.exceptions import NotFound
        gax_api = _GAXMetricsAPI(_log_metric_not_found=True)
        api = self._make_one(gax_api, None)
        with self.assertRaises(NotFound):
            api.metric_delete(self.PROJECT, self.METRIC_NAME)

    def test_metric_delete_hit(self):
        gax_api = _GAXMetricsAPI()
        api = self._make_one(gax_api, None)
        api.metric_delete(self.PROJECT, self.METRIC_NAME)
        metric_name, options = gax_api._delete_log_metric_called_with
        self.assertEqual(metric_name, self.METRIC_PATH)
        self.assertIsNone(options)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_make_gax_logging_api(unittest.TestCase):
    def _call_fut(self, client):
        from google.cloud.logging._gax import make_gax_logging_api
        return make_gax_logging_api(client)

    def test_it(self):
        from google.cloud.logging._gax import _LoggingAPI
        from google.cloud.logging._gax import DEFAULT_USER_AGENT
        creds = object()
        conn = mock.Mock(credentials=creds, spec=['credentials'])
        client = mock.Mock(_connection=conn, spec=['_connection'])
        channels = []
        channel_args = []
        channel_obj = object()
        generated = object()

        def make_channel(*args):
            channel_args.append(args)
            return channel_obj

        def generated_api(channel=None):
            channels.append(channel)
            return generated

        host = 'foo.apis.invalid'
        generated_api.SERVICE_ADDRESS = host
        patch = mock.patch.multiple(
            'google.cloud.logging._gax',
            LoggingServiceV2Client=generated_api,
            make_secure_channel=make_channel)
        with patch:
            logging_api = self._call_fut(client)
        self.assertEqual(channels, [channel_obj])
        self.assertEqual(channel_args,
                         [(creds, DEFAULT_USER_AGENT, host)])
        self.assertIsInstance(logging_api, _LoggingAPI)
        self.assertIs(logging_api._gax_api, generated)
        self.assertIs(logging_api._client, client)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_make_gax_metrics_api(unittest.TestCase):
    def _call_fut(self, client):
        from google.cloud.logging._gax import make_gax_metrics_api
        return make_gax_metrics_api(client)

    def test_it(self):
        from google.cloud.logging._gax import _MetricsAPI
        from google.cloud.logging._gax import DEFAULT_USER_AGENT
        creds = object()
        conn = mock.Mock(credentials=creds, spec=['credentials'])
        client = mock.Mock(_connection=conn, spec=['_connection'])
        channels = []
        channel_args = []
        channel_obj = object()
        generated = object()

        def make_channel(*args):
            channel_args.append(args)
            return channel_obj

        def generated_api(channel=None):
            channels.append(channel)
            return generated

        host = 'foo.apis.invalid'
        generated_api.SERVICE_ADDRESS = host
        patch = mock.patch.multiple(
            'google.cloud.logging._gax',
            MetricsServiceV2Client=generated_api,
            make_secure_channel=make_channel)
        with patch:
            metrics_api = self._call_fut(client)
        self.assertEqual(channels, [channel_obj])
        self.assertEqual(channel_args,
                         [(creds, DEFAULT_USER_AGENT, host)])
        self.assertIsInstance(metrics_api, _MetricsAPI)
        self.assertIs(metrics_api._gax_api, generated)
        self.assertIs(metrics_api._client, client)


@unittest.skipUnless(_HAVE_GAX, 'No gax-python')
class Test_make_gax_sinks_api(unittest.TestCase):
    def _call_fut(self, client):
        from google.cloud.logging._gax import make_gax_sinks_api
        return make_gax_sinks_api(client)

    def test_it(self):
        from google.cloud.logging._gax import _SinksAPI
        from google.cloud.logging._gax import DEFAULT_USER_AGENT
        creds = object()
        conn = mock.Mock(credentials=creds, spec=['credentials'])
        client = mock.Mock(_connection=conn, spec=['_connection'])
        channels = []
        channel_args = []
        channel_obj = object()
        generated = object()

        def make_channel(*args):
            channel_args.append(args)
            return channel_obj

        def generated_api(channel=None):
            channels.append(channel)
            return generated

        host = 'foo.apis.invalid'
        generated_api.SERVICE_ADDRESS = host
        patch = mock.patch.multiple(
            'google.cloud.logging._gax',
            ConfigServiceV2Client=generated_api,
            make_secure_channel=make_channel)
        with patch:
            sinks_api = self._call_fut(client)
        self.assertEqual(channels, [channel_obj])
        self.assertEqual(channel_args,
                         [(creds, DEFAULT_USER_AGENT, host)])
        self.assertIsInstance(sinks_api, _SinksAPI)
        self.assertIs(sinks_api._gax_api, generated)
        self.assertIs(sinks_api._client, client)


class _GAXLoggingAPI(_GAXBaseAPI):

    _delete_not_found = False

    def list_log_entries(
            self, resource_names, project_ids, filter_,
            order_by, page_size, options):
        self._list_log_entries_called_with = (
            resource_names, project_ids, filter_,
            order_by, page_size, options)
        return self._list_log_entries_response

    def write_log_entries(self, entries, log_name, resource, labels,
                          partial_success, options):
        self._write_log_entries_called_with = (
            entries, log_name, resource, labels, partial_success, options)

    def delete_log(self, log_name, options):
        from google.gax.errors import GaxError

        self._delete_log_called_with = log_name, options
        if self._random_gax_error:
            raise GaxError('error')
        if self._delete_not_found:
            raise GaxError('notfound', self._make_grpc_not_found())


class _GAXSinksAPI(_GAXBaseAPI):

    _create_sink_conflict = False
    _sink_not_found = False

    def list_sinks(self, parent, page_size, options):
        self._list_sinks_called_with = parent, page_size, options
        return self._list_sinks_response

    def create_sink(self, parent, sink, options):
        from google.gax.errors import GaxError

        self._create_sink_called_with = parent, sink, options
        if self._random_gax_error:
            raise GaxError('error')
        if self._create_sink_conflict:
            raise GaxError('conflict', self._make_grpc_failed_precondition())

    def get_sink(self, sink_name, options):
        from google.gax.errors import GaxError

        self._get_sink_called_with = sink_name, options
        if self._random_gax_error:
            raise GaxError('error')
        try:
            return self._get_sink_response
        except AttributeError:
            raise GaxError('notfound', self._make_grpc_not_found())

    def update_sink(self, sink_name, sink, options=None):
        from google.gax.errors import GaxError

        self._update_sink_called_with = sink_name, sink, options
        if self._random_gax_error:
            raise GaxError('error')
        try:
            return self._update_sink_response
        except AttributeError:
            raise GaxError('notfound', self._make_grpc_not_found())

    def delete_sink(self, sink_name, options=None):
        from google.gax.errors import GaxError

        self._delete_sink_called_with = sink_name, options
        if self._random_gax_error:
            raise GaxError('error')
        if self._sink_not_found:
            raise GaxError('notfound', self._make_grpc_not_found())


class _GAXMetricsAPI(_GAXBaseAPI):

    _create_log_metric_conflict = False
    _log_metric_not_found = False

    def list_log_metrics(self, parent, page_size, options):
        self._list_log_metrics_called_with = parent, page_size, options
        return self._list_log_metrics_response

    def create_log_metric(self, parent, metric, options):
        from google.gax.errors import GaxError

        self._create_log_metric_called_with = parent, metric, options
        if self._random_gax_error:
            raise GaxError('error')
        if self._create_log_metric_conflict:
            raise GaxError('conflict', self._make_grpc_failed_precondition())

    def get_log_metric(self, metric_name, options):
        from google.gax.errors import GaxError

        self._get_log_metric_called_with = metric_name, options
        if self._random_gax_error:
            raise GaxError('error')
        try:
            return self._get_log_metric_response
        except AttributeError:
            raise GaxError('notfound', self._make_grpc_not_found())

    def update_log_metric(self, metric_name, metric, options=None):
        from google.gax.errors import GaxError

        self._update_log_metric_called_with = metric_name, metric, options
        if self._random_gax_error:
            raise GaxError('error')
        try:
            return self._update_log_metric_response
        except AttributeError:
            raise GaxError('notfound', self._make_grpc_not_found())

    def delete_log_metric(self, metric_name, options=None):
        from google.gax.errors import GaxError

        self._delete_log_metric_called_with = metric_name, options
        if self._random_gax_error:
            raise GaxError('error')
        if self._log_metric_not_found:
            raise GaxError('notfound', self._make_grpc_not_found())
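
# The fakes above record their call arguments and replay canned responses.
# A minimal usage sketch, assuming _GAXBaseAPI follows the common
# ``__init__(self, **kw)`` stub pattern of this test module (the project
# name below is illustrative only):
#
#     gax_api = _GAXSinksAPI(_list_sinks_response=object())
#     response = gax_api.list_sinks('projects/my-project', 10, None)
#     assert response is gax_api._list_sinks_response
#     assert gax_api._list_sinks_called_with == ('projects/my-project', 10, None)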
| 36.597659 | 78 | 0.652262 | 5,783 | 50,029 | 5.357946 | 0.057755 | 0.069711 | 0.033403 | 0.018073 | 0.822011 | 0.792868 | 0.763466 | 0.738841 | 0.723253 | 0.702888 | 0 | 0.003701 | 0.260029 | 50,029 | 1,366 | 79 | 36.624451 | 0.833279 | 0.030143 | 0 | 0.650704 | 0 | 0 | 0.033007 | 0.004497 | 0 | 0 | 0 | 0 | 0.218779 | 1 | 0.070423 | false | 0 | 0.112676 | 0.000939 | 0.228169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d2d16146b1ad8abacdcecd1f011baaaa7a60e95 | 48 | py | Python | nr_pypackage/blueprints/blueprint_database/__init__.py | nitred/nr-pypackage | 426fa4ffca5a69ca4ed6f70cdf9d9d6be76e4e45 | [
"MIT"
] | 1 | 2019-07-10T07:00:24.000Z | 2019-07-10T07:00:24.000Z | nr_pypackage/blueprints/blueprint_database/__init__.py | nitred/nr-pypackage | 426fa4ffca5a69ca4ed6f70cdf9d9d6be76e4e45 | [
"MIT"
] | 9 | 2018-09-27T10:33:58.000Z | 2018-10-26T12:47:57.000Z | nr_pypackage/blueprints/blueprint_database/__init__.py | nitred/nr-pypackage | 426fa4ffca5a69ca4ed6f70cdf9d9d6be76e4e45 | [
"MIT"
] | null | null | null | from .blueprint_database import handle_database
| 24 | 47 | 0.895833 | 6 | 48 | 6.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
4d33a35bea05a38c647ad2612606a7c4bad6b5bb | 168 | py | Python | school/admin.py | manisharmagarg/Torrins | a468677d91699795ed82d5896199b197a6771fe2 | [
"Apache-2.0"
] | null | null | null | school/admin.py | manisharmagarg/Torrins | a468677d91699795ed82d5896199b197a6771fe2 | [
"Apache-2.0"
] | null | null | null | school/admin.py | manisharmagarg/Torrins | a468677d91699795ed82d5896199b197a6771fe2 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import (
    SchoolProfile,
)
admin.site.register(SchoolProfile) | 18.666667 | 34 | 0.779762 | 21 | 168 | 6.238095 | 0.52381 | 0.183206 | 0.274809 | 0.335878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 168 | 9 | 34 | 18.666667 | 0.909722 | 0.315476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4d6c30fbb1e0206060a055a77257d131b3674f22 | 101 | py | Python | sentry_mattermost/__init__.py | dbalagansky/sentry-mattermost | 4bb7796650fcf73b9f8465c9d2bc6b5942df025a | [
"MIT"
] | 1 | 2019-02-23T22:49:53.000Z | 2019-02-23T22:49:53.000Z | sentry_mattermost/__init__.py | dbalagansky/sentry-mattermost | 4bb7796650fcf73b9f8465c9d2bc6b5942df025a | [
"MIT"
] | null | null | null | sentry_mattermost/__init__.py | dbalagansky/sentry-mattermost | 4bb7796650fcf73b9f8465c9d2bc6b5942df025a | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from sentry_plugins.base import assert_package_not_installed
| 25.25 | 60 | 0.90099 | 14 | 101 | 5.857143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089109 | 101 | 3 | 61 | 33.666667 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d75728d0b7d4c38c46a3e9b9f10300ebdfcbaeb | 1,242 | py | Python | tests/data_sources/sun/test_sun_data_source.py | rohancalum/nowcasting_dataset | b88e31f4b381d97bb06274f357992108472a3f07 | [
"MIT"
] | null | null | null | tests/data_sources/sun/test_sun_data_source.py | rohancalum/nowcasting_dataset | b88e31f4b381d97bb06274f357992108472a3f07 | [
"MIT"
] | null | null | null | tests/data_sources/sun/test_sun_data_source.py | rohancalum/nowcasting_dataset | b88e31f4b381d97bb06274f357992108472a3f07 | [
"MIT"
] | null | null | null | import pandas as pd
from nowcasting_dataset.data_sources.sun.sun_data_source import SunDataSource


def test_init(test_data_folder):
    zarr_path = test_data_folder + "/sun/test.zarr"
    _ = SunDataSource(zarr_path=zarr_path, history_minutes=30, forecast_minutes=60)


def test_get_example(test_data_folder):
    zarr_path = test_data_folder + "/sun/test.zarr"
    sun_data_source = SunDataSource(zarr_path=zarr_path, history_minutes=30, forecast_minutes=60)
    x = 256895.63164759654
    y = 666180.3018829626
    start_dt = pd.Timestamp("2019-01-01 12:00:00.000")

    example = sun_data_source.get_example(t0_dt=start_dt, x_meters_center=x, y_meters_center=y)

    assert len(example.elevation) == 19
    assert len(example.azimuth) == 19


def test_get_example_different_year(test_data_folder):
    zarr_path = test_data_folder + "/sun/test.zarr"
    sun_data_source = SunDataSource(zarr_path=zarr_path, history_minutes=30, forecast_minutes=60)
    x = 256895.63164759654
    y = 666180.3018829626
    start_dt = pd.Timestamp("2021-01-01 12:00:00.000")

    example = sun_data_source.get_example(t0_dt=start_dt, x_meters_center=x, y_meters_center=y)

    assert len(example.elevation) == 19
    assert len(example.azimuth) == 19
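
# These tests rely on a ``test_data_folder`` fixture supplied by the suite's
# conftest. A minimal sketch of such a fixture, assuming the sample data sits
# in a ``data`` directory next to the tests (the exact layout is an
# assumption, not taken from this file):
#
#     import os
#     import pytest
#
#     @pytest.fixture()
#     def test_data_folder():
#         return os.path.join(os.path.dirname(__file__), "data")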
| 31.05 | 97 | 0.756039 | 190 | 1,242 | 4.610526 | 0.263158 | 0.082192 | 0.09589 | 0.061644 | 0.829909 | 0.829909 | 0.829909 | 0.829909 | 0.829909 | 0.829909 | 0 | 0.114662 | 0.143317 | 1,242 | 39 | 98 | 31.846154 | 0.708647 | 0 | 0 | 0.652174 | 0 | 0 | 0.070853 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d800148e413f6659e339b1258a8bd8b68562296 | 160 | py | Python | wydatki/admin.py | ciszko/Splitistic | bd2924d04f3629b377915af6e3641030ad0a941c | [
"MIT"
] | null | null | null | wydatki/admin.py | ciszko/Splitistic | bd2924d04f3629b377915af6e3641030ad0a941c | [
"MIT"
] | null | null | null | wydatki/admin.py | ciszko/Splitistic | bd2924d04f3629b377915af6e3641030ad0a941c | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Zakup, Uzytkownik
# Register your models here.
admin.site.register(Zakup)
#admin.site.register(Uzytkownik) | 26.666667 | 37 | 0.8125 | 22 | 160 | 5.909091 | 0.545455 | 0.138462 | 0.261538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 160 | 6 | 38 | 26.666667 | 0.902778 | 0.35625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4da4a0b74535c34e80ced3e1d0620b1cddc358b1 | 37 | py | Python | exporter/__init__.py | piotrb5e3/wason-w-interval | 5139352c63f4eb155e9d1f0afde48de1aadce1a3 | [
"MIT"
] | 1,450 | 2019-03-04T15:47:38.000Z | 2022-03-30T03:33:35.000Z | exporter/__init__.py | piotrb5e3/wason-w-interval | 5139352c63f4eb155e9d1f0afde48de1aadce1a3 | [
"MIT"
] | 34 | 2019-03-05T09:50:38.000Z | 2021-08-31T15:20:27.000Z | jsuarez/extra/embyr_deprecated/embyr2d/__init__.py | LaudateCorpus1/neural-mmo | a9a7c34a1fb24fbf252e2958bdb869c213e580a3 | [
"MIT"
] | 164 | 2019-03-04T16:09:19.000Z | 2022-02-26T15:43:40.000Z | from .application import Application
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4deadbe3a4bdcbef8ba231cca06d13587b44f582 | 30 | py | Python | snetwork/utils/__init__.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | snetwork/utils/__init__.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | snetwork/utils/__init__.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | def status():
    return True
| 10 | 15 | 0.633333 | 4 | 30 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 30 | 2 | 16 | 15 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
128cf35f710a65f04e898f4357b717c929ec3f0e | 53 | py | Python | src/kol/bot/__init__.py | thedufer/pykol | bad8ff4bf2f4bc6a7a5b6dbbd9333ef5aaf3432a | [
"BSD-3-Clause"
] | 1 | 2016-05-08T12:10:32.000Z | 2016-05-08T12:10:32.000Z | src/kol/bot/__init__.py | ZJ/pykol | c0523a4a4d09bcdf16f8c86c78da96914e961076 | [
"BSD-3-Clause"
] | null | null | null | src/kol/bot/__init__.py | ZJ/pykol | c0523a4a4d09bcdf16f8c86c78da96914e961076 | [
"BSD-3-Clause"
] | null | null | null | # $Id: __init__.py 475 2008-06-28 14:37:00Z scelis $
| 26.5 | 52 | 0.679245 | 11 | 53 | 2.909091 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.377778 | 0.150943 | 53 | 1 | 53 | 53 | 0.333333 | 0.943396 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
12dc663e25aff381f223e67d70a9c263e7abce50 | 32 | py | Python | boxbox/utils/__init__.py | jbwang1997/BoxBox | 827d4af74a645da832282a2faa65a59240b82240 | [
"Apache-2.0"
] | 1 | 2021-08-14T07:19:26.000Z | 2021-08-14T07:19:26.000Z | boxbox/utils/__init__.py | jbwang1997/BoxBox | 827d4af74a645da832282a2faa65a59240b82240 | [
"Apache-2.0"
] | null | null | null | boxbox/utils/__init__.py | jbwang1997/BoxBox | 827d4af74a645da832282a2faa65a59240b82240 | [
"Apache-2.0"
] | null | null | null | from .registry import Registrar
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12f548814aeec702cc58b58f44952a54e555a015 | 2,355 | py | Python | test-framework/test-suites/integration/tests/disable/test_disable_cart.py | shivanshs9/stacki | 258740748281dfe89b0f566261eaf23102f91aa4 | [
"BSD-3-Clause"
] | null | null | null | test-framework/test-suites/integration/tests/disable/test_disable_cart.py | shivanshs9/stacki | 258740748281dfe89b0f566261eaf23102f91aa4 | [
"BSD-3-Clause"
] | null | null | null | test-framework/test-suites/integration/tests/disable/test_disable_cart.py | shivanshs9/stacki | 258740748281dfe89b0f566261eaf23102f91aa4 | [
"BSD-3-Clause"
] | null | null | null | import json
from textwrap import dedent


class TestDisableCart:

    def test_disable_cart_no_args(self, host):
        result = host.run('stack disable cart')
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "cart" argument is required
            {cart ...} [box=string]
        ''')

    def test_disable_cart_invalid_cart(self, host):
        result = host.run('stack disable cart test')
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "test" argument is not a valid cart
            {cart ...} [box=string]
        ''')

    def test_disable_cart_invalid_box(self, host):
        result = host.run('stack disable cart test box=test')
        assert result.rc == 255
        assert result.stderr == 'error - unknown box "test"\n'

    def test_disable_cart_default_box(self, host):
        # Add our test cart
        result = host.run('stack add cart test')
        assert result.rc == 0

        # Add the cart to the default box
        result = host.run('stack enable cart test')
        assert result.rc == 0

        # Confirm it is in the box now
        result = host.run('stack list cart test output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'name': 'test',
                'boxes': 'default'
            }
        ]

        # Disable the cart
        result = host.run('stack disable cart test')
        assert result.rc == 0

        # Confirm it isn't in the box now
        result = host.run('stack list cart test output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'name': 'test',
                'boxes': ''
            }
        ]

    def test_disable_cart_with_box(self, host):
        # Add our test box
        result = host.run('stack add box test')
        assert result.rc == 0

        # Add our test cart
        result = host.run('stack add cart test')
        assert result.rc == 0

        # Add the cart to the test box
        result = host.run('stack enable cart test box=test')
        assert result.rc == 0

        # Confirm it is in the box now
        result = host.run('stack list cart test output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'name': 'test',
                'boxes': 'test'
            }
        ]

        # Disable the cart
        result = host.run('stack disable cart test box=test')
        assert result.rc == 0

        # Confirm it isn't in the box now
        result = host.run('stack list cart test output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'name': 'test',
                'boxes': ''
            }
        ]
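
# The ``host`` fixture used above is testinfra-style: ``host.run()`` returns
# an object exposing ``rc``, ``stdout`` and ``stderr``. A minimal conftest
# sketch (the local backend is an assumption; the real suite may target a
# remote frontend instead):
#
#     import pytest
#     import testinfra
#
#     @pytest.fixture()
#     def host():
#         return testinfra.get_host("local://")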
| 24.53125 | 62 | 0.649682 | 350 | 2,355 | 4.314286 | 0.157143 | 0.135099 | 0.12053 | 0.166887 | 0.854967 | 0.85298 | 0.800662 | 0.800662 | 0.67351 | 0.611258 | 0 | 0.01087 | 0.218684 | 2,355 | 95 | 63 | 24.789474 | 0.809783 | 0.114225 | 0 | 0.57971 | 0 | 0 | 0.304725 | 0 | 0 | 0 | 0 | 0 | 0.304348 | 1 | 0.072464 | false | 0 | 0.028986 | 0 | 0.115942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
420661fae1deeefacc4778493ef26f9b783f2fa0 | 117 | py | Python | Chapter02/sources/matplot-ex1.py | gabrielmahia/AIWuShu | 0c2507e812ab0824f50e44c17470ba15fc1042d2 | [
"MIT"
] | 63 | 2019-08-26T04:52:37.000Z | 2022-02-16T19:04:46.000Z | Chapter02/sources/matplot-ex1.py | urantialife/Hands-On-Artificial-Intelligence-for-Cybersecurity | 507e736b23b8c62ded7f544763edaf3ccaba506d | [
"MIT"
] | null | null | null | Chapter02/sources/matplot-ex1.py | urantialife/Hands-On-Artificial-Intelligence-for-Cybersecurity | 507e736b23b8c62ded7f544763edaf3ccaba506d | [
"MIT"
] | 47 | 2019-08-15T21:46:01.000Z | 2022-03-08T01:12:23.000Z | import numpy as np
import matplotlib.pyplot as plt
plt.plot(np.arange(15), np.arange(15))
plt.show()
| 13 | 39 | 0.649573 | 19 | 117 | 4 | 0.578947 | 0.210526 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.230769 | 117 | 8 | 40 | 14.625 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
42486c43c635a53d63e421a44fa8dab10d71812d | 197 | py | Python | Library_version.py | fulcrum101/LAPO_fire_in_the_server_room | e8e6216679dd79d64e0748ee76857846d8ac439b | [
"MIT"
] | null | null | null | Library_version.py | fulcrum101/LAPO_fire_in_the_server_room | e8e6216679dd79d64e0748ee76857846d8ac439b | [
"MIT"
] | null | null | null | Library_version.py | fulcrum101/LAPO_fire_in_the_server_room | e8e6216679dd79d64e0748ee76857846d8ac439b | [
"MIT"
] | null | null | null | import pygame
import sys
import firebase_admin
print(f'Python version: {sys.version}')
print(f'Pygame version: {pygame.__version__}')
print(f'Firebase_admin version: {firebase_admin.__version__}') | 28.142857 | 62 | 0.80203 | 27 | 197 | 5.444444 | 0.333333 | 0.265306 | 0.176871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076142 | 197 | 7 | 62 | 28.142857 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0.590909 | 0.141414 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
42957c806d01ed0750dc6dc606e6c9d78c89c7bc | 812 | py | Python | heimdal/cloudwatch.py | dbuksbaum/heimdal | 7de7b9b93abdb03082b5234bbf0b45e20e90c789 | [
"MIT"
] | null | null | null | heimdal/cloudwatch.py | dbuksbaum/heimdal | 7de7b9b93abdb03082b5234bbf0b45e20e90c789 | [
"MIT"
] | null | null | null | heimdal/cloudwatch.py | dbuksbaum/heimdal | 7de7b9b93abdb03082b5234bbf0b45e20e90c789 | [
"MIT"
] | null | null | null | from collectors import BaseCollector


class AlarmsCollector(BaseCollector):

    @property
    def region_name(self):
        return self.region

    def __init__(self, region=None):
        super().__init__()
        self.region = region

    def list_all_alarms(self):
        return self.getResource(serviceName='cloudwatch', region=self.region).alarms.all()

    def collect(self):
        return self.list_all_alarms()


class MetricsCollector(BaseCollector):

    @property
    def region_name(self):
        return self.region

    def __init__(self, region=None):
        super().__init__()
        self.region = region

    def list_all_metrics(self):
        return self.getResource(serviceName='cloudwatch', region=self.region).metrics.all()

    def collect(self):
        return self.list_all_metrics()
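
# Usage sketch, assuming BaseCollector.getResource wraps boto3.resource()
# with the usual credential handling (the base class lives in the collectors
# module and is not shown here; the region name is illustrative):
#
#     collector = AlarmsCollector(region='us-east-1')
#     for alarm in collector.collect():
#         print(alarm.name)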
| 24.606061 | 91 | 0.678571 | 92 | 812 | 5.706522 | 0.26087 | 0.152381 | 0.16 | 0.114286 | 0.784762 | 0.784762 | 0.784762 | 0.784762 | 0.655238 | 0.419048 | 0 | 0 | 0.21798 | 812 | 32 | 92 | 25.375 | 0.826772 | 0 | 0 | 0.608696 | 0 | 0 | 0.024631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.347826 | false | 0 | 0.043478 | 0.26087 | 0.73913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4298b25ab62127f106d6780fa2bd886bfcb78ae5 | 1,997 | py | Python | pirates/leveleditor/worldData/int_battle_test.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 3 | 2021-02-25T06:38:13.000Z | 2022-03-22T07:00:15.000Z | pirates/leveleditor/worldData/int_battle_test.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | null | null | null | pirates/leveleditor/worldData/int_battle_test.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 1 | 2021-02-25T06:38:17.000Z | 2021-02-25T06:38:17.000Z | # uncompyle6 version 3.2.0
# Python bytecode 2.4 (62061)
# Decompiled from: Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]
# Embedded file name: pirates.leveleditor.worldData.int_battle_test
from pandac.PandaModules import Point3, VBase3, Vec4
objectStruct = {'Objects': {'1155771754.68fxlara0': {'Type': 'Building Interior', 'Name': '', 'Instanced': False, 'Objects': {'1222901694.02akelts': {'Type': 'Door Locator Node', 'Name': 'door_locator', 'Hpr': VBase3(-180.0, 0.0, 0.0), 'Pos': Point3(-13.419, 47.56, 5.309), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1222901713.91akelts': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '60.0000', 'DropOff': '0.0000', 'FlickRate': '0.5000', 'Hpr': Point3(0.0, 0.0, 0.0), 'Intensity': '1.0000', 'LightType': 'POINT', 'Pos': Point3(-0.778, -10.188, 20.811), 'Scale': VBase3(1.0, 1.0, 1.0), 'VisSize': '', 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'models/props/light_tool_bulb'}}, '1222901747.67akelts': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '60.0000', 'DropOff': '0.0000', 'FlickRate': '0.5000', 'Hpr': Point3(0.0, 0.0, 0.0), 'Intensity': '0.6506', 'LightType': 'POINT', 'Pos': Point3(1.468, 21.11, 22.163), 'Scale': VBase3(1.0, 1.0, 1.0), 'VisSize': '', 'Visual': {'Color': (0.88, 0.98, 1.0, 1.0), 'Model': 'models/props/light_tool_bulb'}}}, 'VisSize': '', 'Visual': {'Model': 'models/buildings/interior_spanish_store'}}}, 'Node Links': [], 'Layers': {}, 'ObjectIds': {'1155771754.68fxlara0': '["Objects"]["1155771754.68fxlara0"]', '1222901694.02akelts': '["Objects"]["1155771754.68fxlara0"]["Objects"]["1222901694.02akelts"]', '1222901713.91akelts': '["Objects"]["1155771754.68fxlara0"]["Objects"]["1222901713.91akelts"]', '1222901747.67akelts': '["Objects"]["1155771754.68fxlara0"]["Objects"]["1222901747.67akelts"]'}}
extraInfo = {'camPos': Point3(-112.106, -38.8109, 72.0423), 'camHpr': VBase3(-62.9996, -22.1646, 0), 'focalLength': 1.39999997616, 'skyState': -1, 'fog': 0} | 285.285714 | 1,557 | 0.648473 | 278 | 1,997 | 4.625899 | 0.453237 | 0.021773 | 0.025661 | 0.024883 | 0.265941 | 0.262053 | 0.216952 | 0.216952 | 0.203733 | 0.203733 | 0 | 0.237493 | 0.089134 | 1,997 | 7 | 1,558 | 285.285714 | 0.469489 | 0.110666 | 0 | 0 | 0 | 0 | 0.534989 | 0.190181 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
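
# A hedged sketch of how this level-editor dump is typically read back; the
# accessor code is an assumption, only the dict layout above comes from the
# file itself:
#
#     print(extraInfo['camPos'])   # initial camera position (Point3)
#     print(extraInfo['camHpr'])   # initial camera orientation (VBase3)
#     for obj_id, obj in objectStruct['Objects'].items():
#         print(obj_id, obj['Type'])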
c43a38079548b095266e9b4e77fbf9173e50adac | 2,552 | py | Python | epytope/Data/pssms/smmpmbec/mat/B_35_03_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/B_35_03_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/B_35_03_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_35_03_10 = {0: {'A': 0.153, 'C': -0.022, 'E': 0.021, 'D': 0.033, 'G': 0.035, 'F': -0.253, 'I': -0.048, 'H': -0.005, 'K': 0.125, 'M': -0.088, 'L': -0.127, 'N': -0.005, 'Q': 0.038, 'P': 0.097, 'S': 0.143, 'R': 0.08, 'T': 0.098, 'W': -0.122, 'V': 0.027, 'Y': -0.179}, 1: {'A': 0.078, 'C': -0.001, 'E': -0.031, 'D': -0.09, 'G': 0.014, 'F': 0.126, 'I': 0.023, 'H': 0.039, 'K': 0.006, 'M': 0.132, 'L': 0.141, 'N': -0.005, 'Q': -0.095, 'P': -0.556, 'S': 0.041, 'R': -0.012, 'T': 0.049, 'W': -0.018, 'V': 0.046, 'Y': 0.113}, 2: {'A': 0.033, 'C': 0.002, 'E': 0.004, 'D': 0.011, 'G': 0.031, 'F': -0.044, 'I': -0.068, 'H': 0.007, 'K': -0.006, 'M': -0.028, 'L': -0.044, 'N': 0.022, 'Q': 0.036, 'P': 0.019, 'S': 0.047, 'R': 0.013, 'T': 0.039, 'W': -0.016, 'V': -0.019, 'Y': -0.039}, 3: {'A': -0.013, 'C': 0.006, 'E': 0.004, 'D': -0.011, 'G': 0.003, 'F': 0.002, 'I': -0.025, 'H': 0.006, 'K': 0.014, 'M': 0.005, 'L': 0.005, 'N': 0.024, 'Q': 0.009, 'P': -0.008, 'S': -0.005, 'R': 0.022, 'T': -0.024, 'W': 0.002, 'V': -0.014, 'Y': -0.001}, 4: {'A': -0.003, 'C': 0.002, 'E': 0.002, 'D': 0.0, 'G': 0.003, 'F': 0.01, 'I': 0.01, 'H': -0.008, 'K': -0.021, 'M': 0.005, 'L': 0.009, 'N': -0.001, 'Q': 0.004, 'P': 0.001, 'S': -0.01, 'R': -0.017, 'T': -0.005, 'W': 0.005, 'V': 0.003, 'Y': 0.01}, 5: {'A': 0.057, 'C': -0.006, 'E': -0.038, 'D': -0.011, 'G': -0.002, 'F': 0.001, 'I': 0.028, 'H': -0.004, 'K': 0.039, 'M': -0.014, 'L': -0.016, 'N': -0.029, 'Q': -0.07, 'P': 0.016, 'S': 0.009, 'R': 0.02, 'T': 0.016, 'W': -0.024, 'V': 0.03, 'Y': -0.002}, 6: {'A': -0.02, 'C': 0.006, 'E': 0.016, 'D': 0.012, 'G': -0.017, 'F': 0.006, 'I': -0.007, 'H': 0.017, 'K': 0.022, 'M': -0.005, 'L': -0.021, 'N': -0.003, 'Q': -0.001, 'P': -0.003, 'S': 0.0, 'R': 0.03, 'T': -0.013, 'W': 0.007, 'V': -0.033, 'Y': 0.009}, 7: {'A': 0.021, 'C': 0.001, 'E': 0.029, 'D': 0.022, 'G': -0.003, 'F': -0.022, 'I': -0.015, 'H': 0.016, 'K': 0.025, 'M': -0.026, 'L': -0.009, 'N': 0.008, 'Q': 0.007, 'P': -0.019, 'S': 0.018, 'R': 0.026, 'T': 0.018, 'W': -0.031, 'V': 0.002, 'Y': -0.067}, 8: {'A': 0.083, 'C': 0.039, 'E': -0.062, 'D': -0.004, 'G': 0.021, 'F': 0.207, 'I': -0.171, 'H': 0.218, 'K': 0.262, 'M': -0.172, 'L': -0.345, 'N': -0.013, 'Q': -0.244, 'P': -0.17, 'S': 0.127, 'R': 0.31, 'T': -0.03, 'W': 0.034, 'V': -0.191, 'Y': 0.101}, 9: {'A': 0.093, 'C': 0.055, 'E': 0.014, 'D': 0.071, 'G': 0.084, 'F': -0.074, 'I': 0.041, 'H': -0.035, 'K': 0.064, 'M': -0.152, 'L': -0.34, 'N': -0.003, 'Q': -0.064, 'P': 0.063, 'S': 0.099, 'R': -0.176, 'T': 0.086, 'W': 0.066, 'V': 0.006, 'Y': 0.101}, -1: {'con': 4.67913}} | 2,552 | 2,552 | 0.39616 | 618 | 2,552 | 1.631068 | 0.205502 | 0.019841 | 0.014881 | 0.017857 | 0.18254 | 0.02381 | 0.02381 | 0.02381 | 0 | 0 | 0 | 0.376052 | 0.162226 | 2,552 | 1 | 2,552 | 2,552 | 0.095416 | 0 | 0 | 0 | 0 | 0 | 0.079514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
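
# The B_35_03_10 matrix above maps 10-mer positions (keys 0-9) to per-residue
# weights, with key -1 holding a constant offset. A hedged scoring sketch in
# the usual SMM/SMMPMBEC style (interpreting the sum as a log-scaled IC50 is
# an assumption based on this predictor family's conventions, and the peptide
# is illustrative):
#
#     def score_peptide(matrix, peptide):
#         assert len(peptide) == len(matrix) - 1
#         return sum(matrix[pos][aa] for pos, aa in enumerate(peptide)) + matrix[-1]['con']
#
#     score_peptide(B_35_03_10, 'ASILKKAVTY')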
c46056feb8a519f07d5801acec3d3d94325f1f04 | 330 | py | Python | bitmovin_api_sdk/encoding/configurations/audio/opus/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/configurations/audio/opus/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/configurations/audio/opus/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.configurations.audio.opus.opus_api import OpusApi
from bitmovin_api_sdk.encoding.configurations.audio.opus.customdata.customdata_api import CustomdataApi
from bitmovin_api_sdk.encoding.configurations.audio.opus.opus_audio_configuration_list_query_params import OpusAudioConfigurationListQueryParams
| 82.5 | 144 | 0.915152 | 41 | 330 | 7.04878 | 0.414634 | 0.124567 | 0.155709 | 0.186851 | 0.536332 | 0.536332 | 0.536332 | 0.536332 | 0.366782 | 0 | 0 | 0 | 0.036364 | 330 | 3 | 145 | 110 | 0.908805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4644a6aebcd9b79b9336a571248958756e60277 | 148 | py | Python | client/rewards/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 80 | 2019-01-30T13:14:11.000Z | 2022-02-14T08:51:01.000Z | client/rewards/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 8 | 2019-02-03T18:21:36.000Z | 2020-10-23T00:51:30.000Z | client/rewards/__init__.py | tbienhoff/carla-rl | 51960c8ce3b7e90cdd6c3ab5e18721d1969e1b50 | [
"MIT"
] | 27 | 2019-03-15T08:22:19.000Z | 2022-03-20T05:37:48.000Z | from .carla_reward import CarlaReward
from .sparse_reward import SparseReward
from .cirl_reward import CIRLReward
from .her_reward import HERReward
| 29.6 | 39 | 0.864865 | 20 | 148 | 6.2 | 0.55 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 148 | 4 | 40 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67a0296dc458848f6a88abcc99528b8ccbf4c373 | 63 | py | Python | desper/glet/__init__.py | Ball-Man/monospace | 570faa0b800b95e5305e83542512c38ff500b3b2 | [
"MIT"
] | 1 | 2021-06-19T00:24:17.000Z | 2021-06-19T00:24:17.000Z | desper/glet/__init__.py | Ball-Man/monospace | 570faa0b800b95e5305e83542512c38ff500b3b2 | [
"MIT"
] | null | null | null | desper/glet/__init__.py | Ball-Man/monospace | 570faa0b800b95e5305e83542512c38ff500b3b2 | [
"MIT"
] | null | null | null | from .ecs import *
from .gamemodel import *
from .res import *
| 15.75 | 24 | 0.714286 | 9 | 63 | 5 | 0.555556 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 63 | 3 | 25 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db3385c1f2e6a612afbe3ee002afae556f143fb9 | 12,053 | py | Python | test/test_assert_callback_registration.py | lifeisafractal/Appdaemon-Test-Framework | 9821439b8a8e362b6117d7aa7ad503fb9bdba9e7 | [
"MIT"
] | 37 | 2018-08-08T10:48:13.000Z | 2022-03-09T22:31:11.000Z | test/test_assert_callback_registration.py | lifeisafractal/Appdaemon-Test-Framework | 9821439b8a8e362b6117d7aa7ad503fb9bdba9e7 | [
"MIT"
] | 58 | 2018-10-05T13:36:57.000Z | 2022-02-06T11:37:20.000Z | test/test_assert_callback_registration.py | lifeisafractal/Appdaemon-Test-Framework | 9821439b8a8e362b6117d7aa7ad503fb9bdba9e7 | [
"MIT"
] | 13 | 2018-12-04T19:22:23.000Z | 2022-02-06T10:32:04.000Z | from datetime import time, datetime
import appdaemon.plugins.hass.hassapi as hass
import pytest
from pytest import mark

from appdaemontestframework import automation_fixture


class MockAutomation(hass.Hass):
    should_listen_state = False
    should_listen_event = False
    should_register_run_daily = False
    should_register_run_minutely = False
    should_register_run_at = False

    def initialize(self):
        if self.should_listen_state:
            self.listen_state(self._my_listen_state_callback, 'some_entity', new='off')
        if self.should_listen_event:
            self.listen_event(self._my_listen_event_callback, 'zwave.scene_activated', scene_id=3)
        if self.should_register_run_daily:
            self.run_daily(self._my_run_daily_callback, time(hour=3, minute=7), extra_param='ok')
        if self.should_register_run_minutely:
            self.run_minutely(self._my_run_minutely_callback, time(hour=3, minute=7), extra_param='ok')
        if self.should_register_run_at:
            self.run_at(self._my_run_at_callback, datetime(2019, 11, 5, 22, 43, 0, 0), extra_param='ok')

    def _my_listen_state_callback(self, entity, attribute, old, new, kwargs):
        pass

    def _my_listen_event_callback(self, event_name, data, kwargs):
        pass

    def _my_run_daily_callback(self, kwargs):
        pass

    def _my_run_minutely_callback(self, kwargs):
        pass

    def _my_run_at_callback(self, kwargs):
        pass

    def _some_other_function(self, entity, attribute, old, new, kwargs):
        pass

    def enable_listen_state_during_initialize(self):
        self.should_listen_state = True

    def enable_listen_event_during_initialize(self):
        self.should_listen_event = True

    def enable_register_run_daily_during_initialize(self):
        self.should_register_run_daily = True

    def enable_register_run_minutely_during_initialize(self):
        self.should_register_run_minutely = True

    def enable_register_run_at_during_initialize(self):
        self.should_register_run_at = True


@automation_fixture(MockAutomation)
def automation():
    pass


class TestAssertListenState:

    def test_success(self, automation: MockAutomation, assert_that):
        automation.enable_listen_state_during_initialize()

        assert_that(automation) \
            .listens_to.state('some_entity', new='off') \
            .with_callback(automation._my_listen_state_callback)

    def test_failure__not_listening(self, automation: MockAutomation, assert_that):
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.state('some_entity', new='off') \
                .with_callback(automation._my_listen_state_callback)

    def test_failure__wrong_entity(self, automation: MockAutomation, assert_that):
        automation.enable_listen_state_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.state('WRONG', new='on') \
                .with_callback(automation._my_listen_state_callback)

    def test_failure__wrong_kwargs(self, automation: MockAutomation, assert_that):
        automation.enable_listen_state_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.state('some_entity', new='WRONG') \
                .with_callback(automation._my_listen_state_callback)
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.state('some_entity', wrong='off') \
                .with_callback(automation._my_listen_state_callback)

    def test_failure__wrong_callback(self, automation: MockAutomation, assert_that):
        automation.enable_listen_state_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.state('some_entity', new='on') \
                .with_callback(automation._some_other_function)


class TestAssertListenEvent:

    def test_success(self, automation: MockAutomation, assert_that):
        automation.enable_listen_event_during_initialize()

        assert_that(automation) \
            .listens_to.event('zwave.scene_activated', scene_id=3) \
            .with_callback(automation._my_listen_event_callback)

    def test_failure__not_listening(self, automation: MockAutomation, assert_that):
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.event('zwave.scene_activated', scene_id=3) \
                .with_callback(automation._my_listen_event_callback)

    def test_failure__wrong_event(self, automation: MockAutomation, assert_that):
        # Register the event listener so the failure comes from the wrong
        # event name rather than from nothing being registered at all.
        automation.enable_listen_event_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.event('WRONG', scene_id=3) \
                .with_callback(automation._my_listen_event_callback)

    def test_failure__wrong_kwargs(self, automation: MockAutomation, assert_that):
        automation.enable_listen_event_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.event('zwave.scene_activated', scene_id='WRONG') \
                .with_callback(automation._my_listen_event_callback)
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.event('zwave.scene_activated', wrong=3) \
                .with_callback(automation._my_listen_event_callback)

    def test_failure__wrong_callback(self, automation: MockAutomation, assert_that):
        automation.enable_listen_event_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .listens_to.event('zwave.scene_activated', scene_id=3) \
                .with_callback(automation._some_other_function)


class TestRegisteredRunDaily:

    def test_success(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_daily_during_initialize()

        assert_that(automation) \
            .registered.run_daily(time(hour=3, minute=7), extra_param='ok') \
            .with_callback(automation._my_run_daily_callback)

    def test_failure__not_listening(self, automation: MockAutomation, assert_that):
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_daily(time(hour=3, minute=7), extra_param='ok') \
                .with_callback(automation._my_run_daily_callback)

    def test_failure__wrong_time(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_daily_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_daily(time(hour=4), extra_param='ok') \
                .with_callback(automation._my_run_daily_callback)

    def test_failure__wrong_kwargs(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_daily_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_daily(time(hour=3, minute=7), extra_param='WRONG') \
                .with_callback(automation._my_run_daily_callback)
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_daily(time(hour=3, minute=7), wrong='ok') \
                .with_callback(automation._my_run_daily_callback)

    def test_failure__wrong_callback(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_daily_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_daily(time(hour=3, minute=7), extra_param='ok') \
                .with_callback(automation._some_other_function)


class TestRegisteredRunMinutely:

    def test_success(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_minutely_during_initialize()

        assert_that(automation) \
            .registered.run_minutely(time(hour=3, minute=7), extra_param='ok') \
            .with_callback(automation._my_run_minutely_callback)

    def test_failure__not_listening(self, automation: MockAutomation, assert_that):
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_minutely(time(hour=3, minute=7), extra_param='ok') \
                .with_callback(automation._my_run_minutely_callback)

    def test_failure__wrong_time(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_minutely_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_minutely(time(hour=4), extra_param='ok') \
                .with_callback(automation._my_run_minutely_callback)

    def test_failure__wrong_kwargs(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_minutely_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_minutely(time(hour=3, minute=7), extra_param='WRONG') \
                .with_callback(automation._my_run_minutely_callback)
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_minutely(time(hour=3, minute=7), wrong='ok') \
                .with_callback(automation._my_run_minutely_callback)

    def test_failure__wrong_callback(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_minutely_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_minutely(time(hour=3, minute=7), extra_param='ok') \
                .with_callback(automation._some_other_function)


class TestRegisteredRunAt:

    def test_success(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_at_during_initialize()

        assert_that(automation) \
            .registered.run_at(datetime(2019, 11, 5, 22, 43, 0, 0), extra_param='ok') \
            .with_callback(automation._my_run_at_callback)

    def test_failure__not_listening(self, automation: MockAutomation, assert_that):
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_at(datetime(2019, 11, 5, 22, 43, 0, 0), extra_param='ok') \
                .with_callback(automation._my_run_at_callback)

    def test_failure__wrong_time(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_at_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_at(datetime(2019, 11, 5, 20, 43, 0, 0), extra_param='ok') \
                .with_callback(automation._my_run_at_callback)

    def test_failure__wrong_kwargs(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_at_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_at(datetime(2019, 11, 5, 22, 43, 0, 0), extra_param='WRONG') \
                .with_callback(automation._my_run_at_callback)
        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_at(datetime(2019, 11, 5, 22, 43, 0, 0), wrong='ok') \
                .with_callback(automation._my_run_at_callback)

    def test_failure__wrong_callback(self, automation: MockAutomation, assert_that):
        automation.enable_register_run_at_during_initialize()

        with pytest.raises(AssertionError):
            assert_that(automation) \
                .registered.run_at(datetime(2019, 11, 5, 22, 43, 0, 0), extra_param='ok') \
                .with_callback(automation._some_other_function)
| 42.291228 | 103 | 0.697171 | 1,366 | 12,053 | 5.74451 | 0.062958 | 0.07009 | 0.127437 | 0.108322 | 0.890914 | 0.870141 | 0.84784 | 0.803237 | 0.789474 | 0.787307 | 0 | 0.012977 | 0.21364 | 12,053 | 284 | 104 | 42.440141 | 0.81494 | 0 | 0 | 0.698113 | 0 | 0 | 0.023148 | 0.010454 | 0 | 0 | 0 | 0 | 0.386792 | 1 | 0.179245 | false | 0.033019 | 0.023585 | 0 | 0.254717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c03ba4fade45b3ab7f0dc3c839327d4eeb5422fd | 9,317 | py | Python | boa3_test/tests/compiler_tests/test_types.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/tests/compiler_tests/test_types.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/tests/compiler_tests/test_types.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | import ast
from boa3.analyser.analyser import Analyser
from boa3.analyser.typeanalyser import TypeAnalyser
from boa3.model.type.annotation.uniontype import UnionType
from boa3.model.type.collection.sequence.mutable.listtype import ListType
from boa3.model.type.collection.sequence.tupletype import TupleType
from boa3.model.type.type import Type
from boa3_test.tests.boa_test import BoaTest


class TestTypes(BoaTest):

    def test_small_integer_constant(self):
        input = 42
        node = ast.parse(str(input)).body[0].value
        expected_output = Type.int
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_negative_integer_constant(self):
        input = -10
        node = ast.parse(str(input)).body[0].value
        expected_output = Type.int
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_boolean_constant(self):
        input = True
        node = ast.parse(str(input)).body[0].value
        expected_output = Type.bool
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_string_constant(self):
        input = 'unit_test'
        node = ast.parse(str(input)).body[0].value
        expected_output = Type.str
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_integer_tuple_constant(self):
        input = (1, 2, 3)
        node = ast.parse(str(input)).body[0].value
        expected_output = TupleType(Type.int)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_boolean_tuple_constant(self):
        input = (True, False)
        node = ast.parse(str(input)).body[0].value
        expected_output = TupleType(Type.bool)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_string_tuple_constant(self):
        input = (1, 2, 3)
        node = ast.parse(str(input)).body[0].value
        expected_output = TupleType(Type.int)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_any_tuple_constant(self):
        input = (1, '2', False)
        node = ast.parse(str(input)).body[0].value
        expected_output = TupleType(UnionType.build([Type.str, Type.int]))
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_integer_list_constant(self):
        input = [1, 2, 3]
        node = ast.parse(str(input)).body[0].value
        expected_output = ListType(Type.int)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_boolean_list_constant(self):
        input = [True, False]
        node = ast.parse(str(input)).body[0].value
        expected_output = ListType(Type.bool)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_string_list_constant(self):
        input = [1, 2, 3]
        node = ast.parse(str(input)).body[0].value
        expected_output = ListType(Type.int)
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_any_list_constant(self):
        input = [1, '2', False]
        node = ast.parse(str(input)).body[0].value
        expected_output = ListType(UnionType.build([Type.int, Type.str]))
        typeanalyser = TypeAnalyser(Analyser(node), {})
        output = typeanalyser.get_type(input)
        self.assertEqual(expected_output, output)

    def test_sequence_any_is_type_of_str(self):
        sequence_type = Type.sequence
        str_type = Type.str
        self.assertTrue(sequence_type.is_type_of(str_type))

    def test_sequence_any_is_type_of_tuple_any(self):
        sequence_type = Type.sequence
        tuple_type = Type.tuple
        self.assertTrue(sequence_type.is_type_of(tuple_type))

    def test_sequence_int_is_type_of_tuple_any(self):
        sequence_type = Type.sequence.build_collection(Type.int)
        tuple_type = Type.tuple
        self.assertFalse(sequence_type.is_type_of(tuple_type))

    def test_sequence_any_is_type_of_tuple_int(self):
        sequence_type = Type.sequence
        tuple_type = Type.tuple.build_collection(Type.int)
        self.assertTrue(sequence_type.is_type_of(tuple_type))

    def test_sequence_any_is_type_of_list_any(self):
        sequence_type = Type.sequence
        list_type = Type.list
        self.assertTrue(sequence_type.is_type_of(list_type))

    def test_sequence_int_is_type_of_list_any(self):
        sequence_type = Type.sequence.build_collection(Type.int)
        list_type = Type.list
        self.assertFalse(sequence_type.is_type_of(list_type))

    def test_sequence_any_is_type_of_list_int(self):
        sequence_type = Type.sequence
        list_type = Type.list.build_collection(Type.int)
        self.assertTrue(sequence_type.is_type_of(list_type))

    def test_tuple_any_is_type_of_sequence(self):
        sequence_type = Type.sequence
        tuple_type = Type.tuple
        self.assertFalse(tuple_type.is_type_of(sequence_type))

    def test_tuple_any_is_type_of_tuple_int(self):
        tuple_type = Type.tuple
        tuple_int_type = Type.tuple.build_collection(Type.int)
        self.assertTrue(tuple_type.is_type_of(tuple_int_type))

    def test_tuple_int_is_type_of_tuple_any(self):
        tuple_type = Type.tuple.build_collection(Type.int)
        tuple_any_type = Type.tuple
        self.assertFalse(tuple_type.is_type_of(tuple_any_type))

    def test_list_any_is_type_of_sequence(self):
        list_type = Type.list
        sequence_type = Type.sequence
        self.assertFalse(list_type.is_type_of(sequence_type))

    def test_list_any_is_type_of_list_int(self):
        list_type = Type.list
        list_int_type = Type.list.build_collection(Type.int)
        self.assertTrue(list_type.is_type_of(list_int_type))

    def test_list_int_is_type_of_list_any(self):
        list_type = Type.list.build_collection(Type.int)
        list_any_type = Type.list
        self.assertFalse(list_type.is_type_of(list_any_type))

    def test_str_any_is_type_of_sequence(self):
        sequence_type = Type.sequence
        str_type = Type.str
        self.assertFalse(str_type.is_type_of(sequence_type))

    def test_str_any_is_type_of_sequence_str(self):
        sequence_type = Type.sequence.build_collection(Type.str)
        str_type = Type.str
        self.assertFalse(str_type.is_type_of(sequence_type))

    def test_optional_is_type_of_union(self):
        optional_type = Type.optional.build(Type.str)
        union_type = Type.union.build({Type.str, Type.none})
        self.assertTrue(optional_type.is_type_of(union_type))
        self.assertTrue(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build({Type.str, Type.int, Type.bool, Type.bytes})
        union_type = Type.union.build({Type.str, Type.int, Type.bool, Type.bytes, Type.none})
        self.assertTrue(optional_type.is_type_of(union_type))
        self.assertTrue(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build(Type.str)
        union_type = Type.union.build({Type.str, Type.int, Type.bool, Type.bytes, Type.none})
        self.assertFalse(optional_type.is_type_of(union_type))
        self.assertTrue(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build({Type.str, Type.int, Type.bool, Type.bytes})
        union_type = Type.union.build({Type.str})
        self.assertTrue(optional_type.is_type_of(union_type))
        self.assertFalse(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build({Type.str, Type.int, Type.bool, Type.bytes})
        union_type = Type.union.build({Type.str, Type.int, Type.bool, Type.bytes})
        self.assertTrue(optional_type.is_type_of(union_type))
        self.assertFalse(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build(Type.any)
        union_type = Type.union.build({Type.str, Type.int, Type.bool, Type.bytes, Type.none})
        self.assertTrue(optional_type.is_type_of(union_type))
        self.assertFalse(union_type.is_type_of(optional_type))

        optional_type = Type.optional.build({Type.str, Type.int, Type.bool, Type.bytes, Type.none})
        union_type = Type.union.build(Type.any)
        self.assertFalse(optional_type.is_type_of(union_type))
        self.assertTrue(union_type.is_type_of(optional_type))
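
    # The assertions above mirror the identity from Python's own typing
    # module, which boa3's Type.optional/Type.union reproduce:
    #
    #     from typing import Optional, Union
    #     assert Optional[str] == Union[str, None]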
| 37.873984 | 99 | 0.688097 | 1,231 | 9,317 | 4.918765 | 0.050366 | 0.059455 | 0.059455 | 0.057473 | 0.908175 | 0.889843 | 0.857969 | 0.824443 | 0.78431 | 0.737903 | 0 | 0.005292 | 0.20908 | 9,317 | 245 | 100 | 38.028571 | 0.816393 | 0 | 0 | 0.603261 | 0 | 0 | 0.001181 | 0 | 0 | 0 | 0 | 0 | 0.222826 | 1 | 0.152174 | false | 0 | 0.043478 | 0 | 0.201087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c04365bd2cf9fe088d5a540a3e55d47d4e26614a | 170 | py | Python | gan_training/metrics/__init__.py | AlamiMejjati/controllable_image_synthesis | 06f81359d5f10854af275cd313023d9f1e0afd4c | [
"MIT"
] | 55 | 2020-03-19T11:27:52.000Z | 2022-03-24T06:43:55.000Z | gan_training/metrics/__init__.py | AlamiMejjati/controllable_image_synthesis | 06f81359d5f10854af275cd313023d9f1e0afd4c | [
"MIT"
] | 1 | 2021-01-24T12:47:46.000Z | 2021-01-24T12:47:46.000Z | gan_training/metrics/__init__.py | AlamiMejjati/controllable_image_synthesis | 06f81359d5f10854af275cd313023d9f1e0afd4c | [
"MIT"
] | 11 | 2020-06-22T09:17:23.000Z | 2022-02-26T09:18:54.000Z | from gan_training.metrics.inception_score import inception_score
from gan_training.metrics.fid_score import FIDEvaluator
__all__ = [
inception_score,
FIDEvaluator
]
| 21.25 | 64 | 0.841176 | 21 | 170 | 6.333333 | 0.47619 | 0.315789 | 0.225564 | 0.330827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111765 | 170 | 7 | 65 | 24.285714 | 0.880795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fbec9af1364a6fbb823e05886687b547d630b52d | 25 | py | Python | nu/v1/Membranes/Firings/__init__.py | bullgom/pysnn2 | dad5ae26b029afd5c5bf76fe141249b0f7b7a36c | [
"MIT"
] | null | null | null | nu/v1/Membranes/Firings/__init__.py | bullgom/pysnn2 | dad5ae26b029afd5c5bf76fe141249b0f7b7a36c | [
"MIT"
] | null | null | null | nu/v1/Membranes/Firings/__init__.py | bullgom/pysnn2 | dad5ae26b029afd5c5bf76fe141249b0f7b7a36c | [
"MIT"
] | null | null | null | from .Fixed import Fixed
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
220fa874eee05fcaae99e670e8a73cf6b8112c44 | 144 | py | Python | dee/tasks/__init__.py | Nero0017/DocEE | c7e8a8f8c8fc3b6c39d498ec2d8f61eef47d60c2 | [
"MIT"
] | 1 | 2022-02-22T09:54:21.000Z | 2022-02-22T09:54:21.000Z | dee/tasks/__init__.py | Nero0017/DocEE | c7e8a8f8c8fc3b6c39d498ec2d8f61eef47d60c2 | [
"MIT"
] | null | null | null | dee/tasks/__init__.py | Nero0017/DocEE | c7e8a8f8c8fc3b6c39d498ec2d8f61eef47d60c2 | [
"MIT"
] | null | null | null | from .dee_task import DEETask, DEETaskSetting
from .base_task import BasePytorchTask, TaskSetting
from .ner_task import NERTask, NERTaskSetting
| 36 | 51 | 0.854167 | 18 | 144 | 6.666667 | 0.666667 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 144 | 3 | 52 | 48 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
97fa08f7610c2c4a8b9f392e95b3e3e98c306967 | 28 | py | Python | src/lablet_generalization_benchmark/__init__.py | floringogianu/InDomainGeneralizationBenchmark | eca354723e1685d05d8d114fc9d6e4ef880b63dc | [
"Apache-2.0"
] | 496 | 2019-04-08T18:36:03.000Z | 2022-03-28T08:31:53.000Z | src/lablet_generalization_benchmark/__init__.py | floringogianu/InDomainGeneralizationBenchmark | eca354723e1685d05d8d114fc9d6e4ef880b63dc | [
"Apache-2.0"
] | 70 | 2021-03-31T17:10:18.000Z | 2022-03-31T15:04:45.000Z | src/lablet_generalization_benchmark/__init__.py | floringogianu/InDomainGeneralizationBenchmark | eca354723e1685d05d8d114fc9d6e4ef880b63dc | [
"Apache-2.0"
] | 106 | 2020-10-01T13:46:36.000Z | 2022-03-28T18:17:10.000Z | # Implement your code here.
| 14 | 27 | 0.75 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 1 | 28 | 28 | 0.913043 | 0.892857 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f189c296c97d31413302dcff9622e86e702ed5f | 8,891 | py | Python | modules/segmentation_unet.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | 1 | 2020-01-25T19:43:45.000Z | 2020-01-25T19:43:45.000Z | modules/segmentation_unet.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | null | null | null | modules/segmentation_unet.py | jperezvisaires/tfg-intphys | 8c32c383bf00c00b0fc627ba7bf1192bc3011c40 | [
"MIT"
] | null | null | null | import tensorflow as tf
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.layers import MaxPool2D, UpSampling2D, Concatenate
from tensorflow.keras.layers import BatchNormalization, Activation, Dropout
# Unet Layers.
def unet_conv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm):
x = Conv2D(filters=filters,
kernel_size=kernel_size,
kernel_initializer=kernel_initializer,
padding="same")(x)
x = Activation(activation)(x)
if batch_norm:
x = BatchNormalization()(x)
return x
def unet_max_layer(x):
x = MaxPool2D(pool_size=2,
padding="same")(x)
return x
def unet_up_layer(x):
x = UpSampling2D(size=2)(x)
return x
def unet_downconv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm):
x = Conv2D(filters=filters,
kernel_size=kernel_size,
kernel_initializer=kernel_initializer,
strides=2,
padding="same")(x)
x = Activation(activation)(x)
if batch_norm:
x = BatchNormalization()(x)
return x
def unet_transconv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm):
x = Conv2DTranspose(filters=filters,
kernel_size=kernel_size,
kernel_initializer=kernel_initializer,
strides=2,
padding = "same")(x)
x = Activation(activation)(x)
if batch_norm:
x = BatchNormalization()(x)
return x
def unet_final_layer(x, final_filters, final_activation, kernel_size, kernel_initializer):
    # Output head: always a 1x1 convolution, so the kernel_size argument is
    # accepted here but unused.
    x = Conv2D(filters=final_filters,
               kernel_size=1,
               kernel_initializer=kernel_initializer,
               activation=final_activation)(x)
    return x
# Unet Blocks.
def unet_conv_block(x, filters, kernel_size, kernel_initializer, activation, batch_norm):
conv1 = unet_conv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm)
conv2 = unet_conv_layer(conv1, filters, kernel_size, kernel_initializer, activation, batch_norm)
return conv2, filters
def unet_convpool_block(x, dropout, filters, kernel_size, kernel_initializer, activation, batch_norm):
conv1 = unet_conv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm)
conv2 = unet_conv_layer(conv1, filters, kernel_size, kernel_initializer, activation, batch_norm)
pool = unet_max_layer(conv2)
drop = Dropout(dropout)(pool)
return drop, conv2, filters
def unet_downconv_block(x, dropout, filters, kernel_size, kernel_initializer, activation, batch_norm):
conv1 = unet_conv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm)
conv2 = unet_conv_layer(conv1, filters, kernel_size, kernel_initializer, activation, batch_norm)
downconv = unet_downconv_layer(conv2, filters, kernel_size, kernel_initializer, activation, batch_norm)
drop = Dropout(dropout)(downconv)
return drop, conv2, filters
def unet_upconv_block(x, dropout, filters, kernel_size, kernel_initializer, activation, batch_norm):
    # Upsample by 2, then a 2x2 convolution (the incoming kernel_size is
    # overridden); note the dropout argument is accepted but not applied here.
    up = unet_up_layer(x)
    kernel_size = 2
    conv = unet_conv_layer(up, filters, kernel_size, kernel_initializer, activation, batch_norm)
    return conv
def unet_transconv_block(x, dropout, filters, kernel_size, kernel_initializer, activation, batch_norm):
    # Strided transposed convolution doubles the resolution, followed by a 2x2
    # convolution; as in unet_upconv_block, dropout is accepted but unused.
    transconv = unet_transconv_layer(x, filters, kernel_size, kernel_initializer, activation, batch_norm)
    kernel_size = 2
    conv = unet_conv_layer(transconv, filters, kernel_size, kernel_initializer, activation, batch_norm)
    return conv
def unet_concat_block(x1, x2, dropout, filters, kernel_size, kernel_initializer, activation, batch_norm):
concat = Concatenate(axis=3)([x1,x2])
conv1 = unet_conv_layer(concat, filters, kernel_size, kernel_initializer, activation, batch_norm)
conv2 = unet_conv_layer(conv1, filters, kernel_size, kernel_initializer, activation, batch_norm)
drop = Dropout(dropout)(conv2)
return drop, filters
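# Editorial note: each block above pairs two same-padding convolutions. The
# pooled (MaxPool) and strided-convolution variants halve the spatial size on
# the encoder path, the upsampling/transposed-convolution variants double it
# on the decoder path, and the concat block merges the matching skip
# connection before applying Dropout.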
# Generic Unet model.
def unet_model_standard(input_size,
scale,
filters,
kernel_size,
kernel_initializer,
activation,
final_filters,
final_activation,
dropout,
batch_norm,
use_input):
params = {"kernel_size": kernel_size,
"kernel_initializer": kernel_initializer,
"activation": activation,
"batch_norm": batch_norm}
scaled_input = (int(input_size[0] * scale), int(input_size[1] * scale), input_size[2])
    unet_input = Input(shape=scaled_input)
drop1, conv1, filters = unet_convpool_block(x=unet_input, filters=filters*2, dropout=dropout*0.5, **params)
drop2, conv2, filters = unet_convpool_block(x=drop1, filters=filters*2, dropout=dropout, **params)
drop3, conv3, filters = unet_convpool_block(x=drop2, filters=filters*2, dropout=dropout, **params)
conv4, filters = unet_conv_block(x=drop3, filters=filters*2, **params)
    # use floor division so filter counts stay integers on the decoder path
    upconv5 = unet_upconv_block(x=conv4, filters=filters // 2, dropout=dropout, **params)
    concat5, filters = unet_concat_block(x1=conv3, x2=upconv5, filters=filters // 2, dropout=dropout, **params)
    upconv6 = unet_upconv_block(x=concat5, filters=filters // 2, dropout=dropout, **params)
    concat6, filters = unet_concat_block(x1=conv2, x2=upconv6, filters=filters // 2, dropout=dropout, **params)
    upconv7 = unet_upconv_block(x=concat6, filters=filters // 2, dropout=dropout, **params)
    concat7, filters = unet_concat_block(x1=conv1, x2=upconv7, filters=filters // 2, dropout=dropout, **params)
unet_output = unet_final_layer(concat7, final_filters, final_activation, kernel_size, kernel_initializer)
if use_input:
unet_output = Concatenate(axis=3)([unet_input, unet_output])
model = Model(inputs=unet_input, outputs=unet_output)
return model
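# Minimal usage sketch (editorial addition, not part of the original module):
# builds the standard variant with the same hyperparameters used in
# get_unet_segmentation() below; wrapped in a function so nothing runs at
# import time.
def _demo_unet_model_standard():
    model = unet_model_standard(input_size=(288, 288, 3),
                                scale=0.5,
                                filters=8,
                                kernel_size=3,
                                kernel_initializer="he_normal",
                                activation="relu",
                                final_filters=1,
                                final_activation="sigmoid",
                                dropout=0.05,
                                batch_norm=True,
                                use_input=False)
    model.summary()
    return model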
def unet_model_convolutional(input_size,
scale,
filters,
kernel_size,
kernel_initializer,
activation,
final_filters,
final_activation,
dropout,
batch_norm,
use_input):
params = {"kernel_size": kernel_size,
"kernel_initializer": kernel_initializer,
"activation": activation,
"batch_norm": batch_norm}
scaled_input = (int(input_size[0] * scale), int(input_size[1] * scale), input_size[2])
    unet_input = Input(shape=scaled_input)
drop1, conv1, filters = unet_downconv_block(x=unet_input, filters=filters*2, dropout=dropout*0.5, **params)
drop2, conv2, filters = unet_downconv_block(x=drop1, filters=filters*2, dropout=dropout, **params)
drop3, conv3, filters = unet_downconv_block(x=drop2, filters=filters*2, dropout=dropout, **params)
conv4, filters = unet_conv_block(x=drop3, filters=filters*2, **params)
    # use floor division so filter counts stay integers on the decoder path
    transconv5 = unet_transconv_block(x=conv4, filters=filters // 2, dropout=dropout, **params)
    concat5, filters = unet_concat_block(x1=conv3, x2=transconv5, filters=filters // 2, dropout=dropout, **params)
    transconv6 = unet_transconv_block(x=concat5, filters=filters // 2, dropout=dropout, **params)
    concat6, filters = unet_concat_block(x1=conv2, x2=transconv6, filters=filters // 2, dropout=dropout, **params)
    transconv7 = unet_transconv_block(x=concat6, filters=filters // 2, dropout=dropout, **params)
    concat7, filters = unet_concat_block(x1=conv1, x2=transconv7, filters=filters // 2, dropout=dropout, **params)
unet_output = unet_final_layer(concat7, final_filters, final_activation, kernel_size, kernel_initializer)
if use_input:
unet_output = Concatenate(axis=3)([unet_input, unet_output])
model = Model(inputs=unet_input, outputs=unet_output)
return model
def get_unet_segmentation():
params = {'input_size': (288, 288, 3),
'scale': 0.5,
'filters': 8,
'kernel_size': 3,
'kernel_initializer': "he_normal",
'activation': "relu",
'final_filters': 1,
'final_activation': "sigmoid",
'use_input': False,
'dropout': 0.05,
'batch_norm': True}
model = unet_model_standard(**params)
return model | 36.142276 | 111 | 0.655607 | 1,022 | 8,891 | 5.445205 | 0.09002 | 0.071878 | 0.103504 | 0.150404 | 0.813477 | 0.774843 | 0.739263 | 0.7292 | 0.719497 | 0.707457 | 0 | 0.022632 | 0.249578 | 8,891 | 246 | 112 | 36.142276 | 0.811451 | 0.005061 | 0 | 0.56051 | 0 | 0 | 0.028271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095541 | false | 0 | 0.031847 | 0 | 0.22293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f625996d3e706007a7e8c77d789780034e06c86 | 29 | py | Python | rlib/algorithms/trpo/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 1 | 2019-09-08T08:33:13.000Z | 2019-09-08T08:33:13.000Z | rlib/algorithms/trpo/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 26 | 2019-03-15T03:11:21.000Z | 2022-03-11T23:42:46.000Z | rlib/algorithms/trpo/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | null | null | null | from .agent import TRPOAgent
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f67adbd21d07e36912480f25f68402867bfaf2a | 206 | py | Python | config.py | loblab/numexam | ea50f144f0a8917535a04246ca26b6d2bc906f4b | [
"Apache-2.0"
] | null | null | null | config.py | loblab/numexam | ea50f144f0a8917535a04246ca26b6d2bc906f4b | [
"Apache-2.0"
] | null | null | null | config.py | loblab/numexam | ea50f144f0a8917535a04246ca26b6d2bc906f4b | [
"Apache-2.0"
] | null | null | null | USER_NAME = "Kitty"
MAX_ANWSER = 9999
EXAM_ITEMS = 5
EXAM_TIME = 120
EXAM_TYPES = [
"99 * 99",
"99 +- 99",
"9999 / 99",
"999 +- 999",
"999 +- 999 +- 999",
"999 +-*/ (199 +-* 99)",
]
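# Editorial note: each EXAM_TYPES entry appears to be a question template,
# where a run of 9s presumably encodes the magnitude of a random operand
# (e.g. "99 * 99" -> two two-digit factors) and "+-*/" lists allowed operators.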
| 15.846154 | 28 | 0.475728 | 28 | 206 | 3.321429 | 0.535714 | 0.322581 | 0.387097 | 0.387097 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0.300971 | 206 | 12 | 29 | 17.166667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0.373786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
58abcb904de8f124d125e5ca1384c856745219a0 | 2,040 | py | Python | farmacia/migrations/0001_initial.py | Italo-Carvalho/farmacia | db4cab3e024b73286107f1f742d407ccf939dcb0 | [
"MIT"
] | null | null | null | farmacia/migrations/0001_initial.py | Italo-Carvalho/farmacia | db4cab3e024b73286107f1f742d407ccf939dcb0 | [
"MIT"
] | null | null | null | farmacia/migrations/0001_initial.py | Italo-Carvalho/farmacia | db4cab3e024b73286107f1f742d407ccf939dcb0 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.4 on 2021-06-04 23:56
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Cliente',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nome', models.CharField(max_length=100, verbose_name='Nome')),
('sobrenome', models.CharField(max_length=180, verbose_name='Sobrenome')),
('criado_em', models.DateTimeField(auto_now_add=True, verbose_name='Criado em')),
],
options={
'ordering': ['-criado_em'],
},
),
migrations.CreateModel(
name='Funcionario',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nome', models.CharField(max_length=100, verbose_name='Nome')),
('sobrenome', models.CharField(max_length=180, verbose_name='Sobrenome')),
('criado_em', models.DateTimeField(auto_now_add=True, verbose_name='Criado em')),
],
options={
'ordering': ['-criado_em'],
},
),
migrations.CreateModel(
name='Produto',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nome', models.CharField(max_length=100, verbose_name='Nome')),
('preco', models.DecimalField(decimal_places=2, max_digits=7, verbose_name='Preço')),
('imagem', models.ImageField(upload_to='produtos/', verbose_name='Image')),
('criado_em', models.DateTimeField(auto_now_add=True, verbose_name='Criado em')),
],
options={
'ordering': ['-criado_em'],
},
),
]
| 39.230769 | 117 | 0.556373 | 197 | 2,040 | 5.563452 | 0.329949 | 0.130474 | 0.082117 | 0.109489 | 0.725365 | 0.725365 | 0.725365 | 0.725365 | 0.725365 | 0.725365 | 0 | 0.022409 | 0.3 | 2,040 | 51 | 118 | 40 | 0.745098 | 0.022059 | 0 | 0.659091 | 1 | 0 | 0.117913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022727 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
452d682283c86b4dc0977089ed3b3bb1689a4d11 | 26 | py | Python | app/core/__init__.py | sumitsawant/electionguard-api-python | 17d98336b7188a446fa2746a531a04f25b9edd1a | [
"MIT"
] | 1 | 2021-07-06T16:18:50.000Z | 2021-07-06T16:18:50.000Z | app/core/__init__.py | QPC-database/electionguard-api-python | bee0f8d2e4982df6e11d09322065e22ebd26e2c2 | [
"MIT"
] | null | null | null | app/core/__init__.py | QPC-database/electionguard-api-python | bee0f8d2e4982df6e11d09322065e22ebd26e2c2 | [
"MIT"
] | null | null | null | from .repository import *
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18b6685c8a50ead3c696d19cac6ac339644b0845 | 188 | py | Python | utils/dataset.py | hagerrady13/ERFNet-PyTorch | 0892636d270e854093ed45bd9fa2b91133406caf | [
"MIT"
] | 30 | 2018-07-30T11:46:28.000Z | 2022-01-24T02:46:43.000Z | utils/dataset.py | hagerrady13/ERFNet-PyTorch | 0892636d270e854093ed45bd9fa2b91133406caf | [
"MIT"
] | 7 | 2019-07-23T08:03:59.000Z | 2022-03-11T23:30:25.000Z | utils/dataset.py | hagerrady13/ERFNet-PyTorch | 0892636d270e854093ed45bd9fa2b91133406caf | [
"MIT"
] | 9 | 2018-07-29T21:47:39.000Z | 2021-05-14T10:51:04.000Z | import numpy as np
def calc_dataset_stats(dataset, axis=0, ep=1e-7):
    # Per-axis mean and std of a [0, 255] image dataset, rescaled to [0, 1].
    # Note that adding ep to the data leaves the std unchanged (std is
    # shift-invariant); a zero-std guard would add ep to the computed std.
return (np.mean(dataset, axis=axis) / 255.0).tolist(), (
np.std(dataset + ep, axis=axis) / 255.0).tolist() | 37.6 | 60 | 0.648936 | 32 | 188 | 3.75 | 0.5625 | 0.183333 | 0.183333 | 0.2 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070513 | 0.170213 | 188 | 5 | 61 | 37.6 | 0.698718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
18c3ebf77b53b684bbbff077114ba90331767483 | 38 | py | Python | utils/models/other/bdclstm/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | 3 | 2022-01-18T19:25:46.000Z | 2022-02-05T18:53:24.000Z | utils/models/other/bdclstm/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | utils/models/other/bdclstm/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | from .model import BDCLSTM, UNetSmall
| 19 | 37 | 0.815789 | 5 | 38 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18fb36c8df8372ead7304cc10cf8ed8e60d208ce | 101 | py | Python | office365/sharepoint/storagemetrics/storage_metrics.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | null | null | null | office365/sharepoint/storagemetrics/storage_metrics.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | null | null | null | office365/sharepoint/storagemetrics/storage_metrics.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | null | null | null | from office365.sharepoint.base_entity import BaseEntity
class StorageMetrics(BaseEntity):
pass
| 16.833333 | 55 | 0.821782 | 11 | 101 | 7.454545 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034091 | 0.128713 | 101 | 5 | 56 | 20.2 | 0.897727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e15407023adc68daf9690140a1c645fbb4ca93d8 | 3,881 | py | Python | classifier/helperfunctions.py | shivammehta007/NLPinEnglishLearning | ae869d868e39df9b1787134ba6e964acd385dd2e | [
"Apache-2.0"
] | 1 | 2020-05-27T22:21:33.000Z | 2020-05-27T22:21:33.000Z | classifier/helperfunctions.py | shivammehta007/NLPinEnglishLearning | ae869d868e39df9b1787134ba6e964acd385dd2e | [
"Apache-2.0"
] | null | null | null | classifier/helperfunctions.py | shivammehta007/NLPinEnglishLearning | ae869d868e39df9b1787134ba6e964acd385dd2e | [
"Apache-2.0"
] | null | null | null | """
Helper Functions containing training and evaluation methods
"""
import torch
from tqdm.auto import tqdm
from utility import categorical_accuracy, other_evaluations
from config.root import device
def train(model, iterator, optimizer, criterion, dataset_tag):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in tqdm(iterator, total=len(iterator)):
optimizer.zero_grad()
text, text_lengths = get_batch_data(batch, dataset_tag)
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def get_batch_data(batch, dataset_tag):
if dataset_tag == "multi":
(question, question_len), (key, key_len), (answer, answer_len) = (
batch.question,
batch.key,
batch.answer,
)
text = torch.cat((question, key, answer), dim=0)
text_lengths = question_len + key_len + answer_len
else:
text, text_lengths = batch.text
return text, text_lengths
def evaluate(model, iterator, criterion, dataset_tag):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in tqdm(iterator, total=len(iterator)):
text, text_lengths = get_batch_data(batch, dataset_tag)
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def train_tag_model(model, iterator, optimizer, criterion, tag_field):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in tqdm(iterator, total=len(iterator)):
optimizer.zero_grad()
text, text_lengths, tag = get_batch_data_and_tag(batch, tag_field)
predictions = model(text, text_lengths, tag).squeeze(1)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def get_batch_data_and_tag(batch, tag_field):
(question, question_len), (key, key_len), (answer, answer_len) = (
batch.question,
batch.key,
batch.answer,
)
question_tag = torch.full_like(
question, tag_field.vocab.stoi["Q"], dtype=torch.long, device=device
)
key_tag = torch.full_like(
key, tag_field.vocab.stoi["K"], dtype=torch.long, device=device
)
answer_tag = torch.full_like(
answer, tag_field.vocab.stoi["A"], dtype=torch.long, device=device
)
tag = torch.cat((question_tag, key_tag, answer_tag), dim=0)
text = torch.cat((question, key, answer), dim=0)
text_lengths = question_len + key_len + answer_len
return text, text_lengths, tag
def evaluate_tag_model(model, iterator, criterion, tag_field):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in tqdm(iterator, total=len(iterator)):
text, text_lengths, tag = get_batch_data_and_tag(batch, tag_field)
predictions = model(text, text_lengths, tag).squeeze(1)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
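# Usage sketch (editorial addition): wires the train/evaluate pair above into
# a simple loop; the model, iterators, optimizer and criterion are assumed to
# be constructed elsewhere (e.g. from torchtext), as in the rest of this project.
def run_training(model, train_iterator, test_iterator, optimizer, criterion, dataset_tag, n_epochs=5):
    for epoch in range(n_epochs):
        train_loss, train_acc = train(model, train_iterator, optimizer, criterion, dataset_tag)
        test_loss, test_acc = evaluate(model, test_iterator, criterion, dataset_tag)
        print(
            "Epoch {}: train loss {:.4f} acc {:.4f} | test loss {:.4f} acc {:.4f}".format(
                epoch + 1, train_loss, train_acc, test_loss, test_acc
            )
        )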
| 23.95679 | 78 | 0.644937 | 486 | 3,881 | 4.936214 | 0.150206 | 0.059608 | 0.068779 | 0.037516 | 0.769904 | 0.737391 | 0.731138 | 0.731138 | 0.723218 | 0.723218 | 0 | 0.005144 | 0.248647 | 3,881 | 161 | 79 | 24.10559 | 0.817558 | 0.015202 | 0 | 0.703297 | 0 | 0 | 0.002098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065934 | false | 0 | 0.043956 | 0 | 0.175824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e16029695f46e622ec14b8258d5b39f60d36feb5 | 36 | py | Python | python/testData/refactoring/move/staleFromImportsRemovedWhenSeveralMovedSymbolsUsedInSameModule/before/src/importing.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2018-12-29T09:53:39.000Z | 2018-12-29T09:53:42.000Z | python/testData/refactoring/move/staleFromImportsRemovedWhenSeveralMovedSymbolsUsedInSameModule/before/src/importing.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/move/staleFromImportsRemovedWhenSeveralMovedSymbolsUsedInSameModule/before/src/importing.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from a import A, B
print(A(), B())
| 9 | 18 | 0.555556 | 8 | 36 | 2.5 | 0.625 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 36 | 3 | 19 | 12 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e164ca37db0a4b3cefbec917e7c37880b233db17 | 126 | py | Python | inheritance_exercise/players_and_monsters/project/blade__knight.py | Veselin-Stoilov/software-university-OOP | 452a77cabf2e7d93f30f629c67c6b22682eb255d | [
"MIT"
] | null | null | null | inheritance_exercise/players_and_monsters/project/blade__knight.py | Veselin-Stoilov/software-university-OOP | 452a77cabf2e7d93f30f629c67c6b22682eb255d | [
"MIT"
] | null | null | null | inheritance_exercise/players_and_monsters/project/blade__knight.py | Veselin-Stoilov/software-university-OOP | 452a77cabf2e7d93f30f629c67c6b22682eb255d | [
"MIT"
] | null | null | null | from inheritance_exercise.players_and_monsters.project.dark_knight import DarkKnight
class SoulMaster(DarkKnight):
pass
| 21 | 84 | 0.849206 | 15 | 126 | 6.866667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103175 | 126 | 5 | 85 | 25.2 | 0.911504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
bed1e9b6edd55d34d2ca2fcee7b9c6c15eda692e | 58 | py | Python | stellar_account_prometheus_exporter/__init__.py | AYCH-Inc/aych.lum.acmonitor | 332f450009810499103dd4935e314f0fd6621d83 | [
"Apache-2.0"
] | 9 | 2020-05-22T18:37:02.000Z | 2022-01-28T20:37:33.000Z | stellar_account_prometheus_exporter/__init__.py | AYCH-Inc/aych.lum.acmonitor | 332f450009810499103dd4935e314f0fd6621d83 | [
"Apache-2.0"
] | 4 | 2020-04-30T17:31:07.000Z | 2022-02-10T16:03:57.000Z | stellar_account_prometheus_exporter/__init__.py | AYCH-Inc/aych.lum.acmonitor | 332f450009810499103dd4935e314f0fd6621d83 | [
"Apache-2.0"
] | 12 | 2019-12-02T13:26:05.000Z | 2022-02-03T17:16:06.000Z | def run():
from . import exporter
exporter.main()
| 14.5 | 26 | 0.62069 | 7 | 58 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258621 | 58 | 3 | 27 | 19.333333 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bed5a6d1d42412c31dd7ab822465a0be8d8958c8 | 18,716 | py | Python | python/test/test_mpsa.py | dalexa10/puma | ca02309c9f5c71e2e80ad8d64155dd6ca936c667 | [
"NASA-1.3"
] | null | null | null | python/test/test_mpsa.py | dalexa10/puma | ca02309c9f5c71e2e80ad8d64155dd6ca936c667 | [
"NASA-1.3"
] | null | null | null | python/test/test_mpsa.py | dalexa10/puma | ca02309c9f5c71e2e80ad8d64155dd6ca936c667 | [
"NASA-1.3"
] | null | null | null | import unittest
import numpy as np
import pumapy as puma
from pumapy.physicsmodels.mpsa_elasticity import Elasticity
import scipy.sparse
class TestElasticity(unittest.TestCase):
def setUp(self):
self.X = 6
self.Y = 4
self.Z = 8
self.ws_homoiso = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
self.elast_map_homoiso = puma.ElasticityMap()
self.elast_map_homoiso.add_material((1, 1),
1, 0.1, 0.2, 0.3, 0.4, 0.5,
2, 0.6, 0.7, 0.8, 0.9,
3, 0.11, 0.12, 0.13,
4, 0.14, 0.15,
5, 0.16,
6)
# elasticity solution tensor for homogeneous
# 1 0.1 0.2 0.3 0.4 0.5
# 0.1 2 0.6 0.7 0.8 0.9
# 0.2 0.6 3 0.11 0.12 0.13
# 0.3 0.7 0.11 4 0.14 0.15
# 0.4 0.8 0.12 0.14 5 0.16
# 0.5 0.9 0.13 0.15 0.16 6
self.solution_homoiso_x = np.zeros((self.X, self.Y, self.Z), dtype=float)
for i in range(self.X):
self.solution_homoiso_x[i] = i / (self.X - 1.)
self.solution_homoiso_y = np.zeros((self.X, self.Y, self.Z), dtype=float)
for j in range(self.Y):
self.solution_homoiso_y[:, j] = j / (self.Y - 1.)
self.solution_homoiso_z = np.zeros((self.X, self.Y, self.Z), dtype=float)
for k in range(self.Z):
self.solution_homoiso_z[:, :, k] = k / (self.Z - 1.)
self.ws_matSeriesInx = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
self.ws_matSeriesInx[int(self.ws_matSeriesInx.matrix.shape[0] / 2.):, :] = 2 # in series in x
self.ws_matSeriesIny = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
self.ws_matSeriesIny[:, int(self.ws_matSeriesIny.matrix.shape[1] / 2.):] = 2 # in series in y
self.ws_matSeriesInz = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
self.ws_matSeriesInz[:, :, int(self.ws_matSeriesInz.matrix.shape[2] / 2.):] = 2 # in series in z
self.elast_matSeries = puma.ElasticityMap()
self.elast_matSeries.add_material((1, 1),
10, 0.2, 0.3, 0, 0, 0,
10, 0.5, 0, 0, 0,
10, 0, 0, 0,
0.5, 0, 0,
0.5, 0,
0.5)
self.elast_matSeries.add_material((2, 2),
1, 0.2, 0.3, 0, 0, 0,
1, 0.5, 0, 0, 0,
1, 0, 0, 0,
0.5, 0, 0,
0.5, 0,
0.5)
# elasticity solution tensor for mat in series along x
# 1.8181 0.2 0.3
# 0.2 5.5 0.5
# 0.3 0.5 5.5
# elasticity solution tensor for mat in series along y
# 5.5 0.2 0.3
# 0.2 1.8181 0.5
# 0.3 0.5 5.5
# elasticity solution tensor for mat in series along z
# 5.5 0.2 0.3
# 0.2 5.5 0.5
# 0.3 0.5 1.8181
def test_homoiso_x(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_homoiso, self.elast_map_homoiso, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [1, 0.1, 0.2, 0.3, 0.4, 0.5])
np.testing.assert_array_almost_equal(u[:, :, :, 0], self.solution_homoiso_x)
def test_homoiso_y(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_homoiso, self.elast_map_homoiso, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.1, 2, 0.6, 0.7, 0.8, 0.9])
np.testing.assert_array_almost_equal(u[:, :, :, 1], self.solution_homoiso_y)
def test_homoiso_z(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_homoiso, self.elast_map_homoiso, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.2, 0.6, 3, 0.11, 0.12, 0.13])
np.testing.assert_array_almost_equal(u[:, :, :, 2], self.solution_homoiso_z)
def test_matSeriesInx_x_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [1.818181818, 0.2, 0.3, 0, 0, 0])
def test_matSeriesInx_y_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.2, 5.5, 0.5, 0, 0, 0])
def test_matSeriesInx_z_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.3, 0.5, 5.5, 0, 0, 0])
def test_matSeriesIny_x_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesIny, self.elast_matSeries, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [5.5, 0.2, 0.3, 0, 0, 0])
def test_matSeriesIny_y_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesIny, self.elast_matSeries, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.2, 1.818181818, 0.5, 0, 0, 0])
def test_matSeriesIny_z_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesIny, self.elast_matSeries, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.3, 0.5, 5.5, 0, 0, 0])
def test_matSeriesInz_x_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInz, self.elast_matSeries, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [5.5, 0.2, 0.3, 0, 0, 0])
def test_matSeriesInz_y_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInz, self.elast_matSeries, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.2, 5.5, 0.5, 0, 0, 0])
def test_matSeriesInz_z_per(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInz, self.elast_matSeries, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.3, 0.5, 1.818181818, 0, 0, 0])
def test_matSeriesInx_x_sym(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'x', 's', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [1.818181818, 0.2, 0.3, 0, 0, 0])
def test_matSeriesInx_y_sym(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'y', 's', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.2, 5.5, 0.5, 0, 0, 0])
def test_matSeriesInx_z_sym(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'z', 's', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [0.3, 0.5, 5.5, 0, 0, 0])
def test_matSeriesInx_x_sym_bicgstab(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'x', 's', solver_type='bicgstab')
np.testing.assert_array_almost_equal(Ceff, [1.818181818, 0.2, 0.3, 0, 0, 0], decimal=4)
def test_matSeriesInx_y_sym_bicgstab(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'y', 's', solver_type='bicgstab')
np.testing.assert_array_almost_equal(Ceff, [0.2, 5.5, 0.5, 0, 0, 0], decimal=4)
def test_matSeriesInx_z_sym_bicgstab(self):
Ceff, u, _, _ = puma.compute_elasticity(self.ws_matSeriesInx, self.elast_matSeries, 'z', 's', solver_type='bicgstab')
np.testing.assert_array_almost_equal(Ceff, [0.3, 0.5, 5.5, 0, 0, 0], decimal=4)
def test_symmetry(self):
X, Y, Z = (8, 6, 4)
ws = puma.Workspace.from_array(np.ones((X, Y, Z)))
bc = puma.ElasticityBC.from_workspace(ws)
elast_map = puma.ElasticityMap()
elast_map.add_isotropic_material((1, 1), 10, 0.3)
elast_map.add_isotropic_material((2, 2), 7.3, 0.23)
# Along x
bc[0, :, :, 0] = -1.
bc[-1, :, :, 0] = 1.
bc[0, :, :, [1, 2]] = 0.
bc[-1, :, :, [1, 2]] = 0.
# puma.Workspace.show_orientation(bc)
ws[:, :int(Y/2)] = 2
# ws.show_matrix()
u, _, _ = puma.compute_stress_analysis(ws, elast_map, bc, side_bc='p', solver_type='direct')
# puma.Workspace.show_orientation(u, 5)
np.testing.assert_array_almost_equal(u[:int(X / 2), :, :, 0], -u[int(X / 2):, :, :, 0][::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:int(X / 2), :, :, 1], u[int(X / 2):, :, :, 1][::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:int(X / 2), :, :, 2], u[int(X / 2):, :, :, 2][::-1], decimal=4)
np.testing.assert_array_almost_equal(np.array([[0.14197716, 0.04368197, 0.],
[0.42584737, 0.04142283, 0.],
[0.71028350, 0.03215080, 0.],
[1.00000000, 0.00000000, 0.]], dtype=float), u[int(X / 2):, 2, 2], decimal=7)
# Along y
bc = puma.ElasticityBC.from_workspace(ws)
bc[:, 0, :, 1] = -1.
bc[:, -1, :, 1] = 1.
bc[:, 0, :, [0, 2]] = 0.
bc[:, -1, :, [0, 2]] = 0.
# puma.Workspace.show_orientation(bc)
ws = puma.Workspace.from_array(np.ones((X, Y, Z)))
ws[:int(X/2)] = 2
# ws.show_matrix()
u, _, _ = puma.compute_stress_analysis(ws, elast_map, bc, side_bc='p', solver_type='direct')
# puma.Workspace.show_orientation(u, 5)
np.testing.assert_array_almost_equal(u[:, :int(Y / 2), :, 0], u[:, int(Y / 2):, :, 0][:, ::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:, :int(Y / 2), :, 1], -u[:, int(Y / 2):, :, 1][:, ::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:, :int(Y / 2), :, 2], u[:, int(Y / 2):, :, 2][:, ::-1], decimal=4)
# Along z
bc = puma.ElasticityBC.from_workspace(ws)
bc[:, :, 0, 2] = -1.
bc[:, :, -1, 2] = 1.
bc[:, :, 0, [0, 1]] = 0.
bc[:, :, -1, [0, 1]] = 0.
# puma.Workspace.show_orientation(bc)
u, _, _ = puma.compute_stress_analysis(ws, elast_map, bc, side_bc='p', solver_type='direct')
# puma.Workspace.show_orientation(u, 5)
np.testing.assert_array_almost_equal(u[:, :, :int(Z / 2), 0], u[:, :, int(Z / 2):, 0][:, :, ::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:, :, :int(Z / 2), 1], u[:, :, int(Z / 2):, 1][:, :, ::-1], decimal=4)
np.testing.assert_array_almost_equal(u[:, :, :int(Z / 2), 2], -u[:, :, int(Z / 2):, 2][:, :, ::-1], decimal=4)
def test_tensor_rotation_x(self):
ws = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
ws.set(orientation_value=(1, 0, 0))
elast_map = puma.ElasticityMap()
elast_map.add_material_to_orient((1, 1), 10, 20, 0.23, 0.3, 50)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [14.33251433, 9.41850942, 9.41850942, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [9.41850942, 28.16732817, 12.78271278, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [9.41850942, 12.78271278, 28.16732817, 0, 0, 0], decimal=7)
def test_tensor_rotation_y(self):
ws = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
ws.set(orientation_value=(0, 1, 0))
elast_map = puma.ElasticityMap()
elast_map.add_material_to_orient((1, 1), 10, 20, 0.23, 0.3, 50)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [28.167328167328126, 9.418509418509416, 12.782712782712776, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [9.418509418509423, 14.33251433251433, 9.418509418509421, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [12.78271278271278, 9.418509418509414, 28.16732816732822, 0, 0, 0], decimal=7)
def test_tensor_rotation_z(self):
ws = puma.Workspace.from_array(np.ones((self.X, self.Y, self.Z)))
ws.set(orientation_value=(0, 0, 1))
elast_map = puma.ElasticityMap()
elast_map.add_material_to_orient((1, 1), 10, 20, 0.23, 0.3, 50)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'x', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [28.167328167328208, 12.782712782712784, 9.418509418509416, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'y', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [12.782712782712778, 28.167328167328137, 9.418509418509416, 0, 0, 0], decimal=7)
Ceff, u, _, _ = puma.compute_elasticity(ws, elast_map, 'z', 'p', solver_type='direct')
np.testing.assert_array_almost_equal(Ceff, [9.418509418509425, 9.418509418509425, 14.33251433251433, 0, 0, 0], decimal=7)
def test_Amat_builtinbeam596(self):
ws = puma.Workspace.from_shape_value((5, 9, 6), 1)
elast_map = puma.ElasticityMap()
elast_map.add_isotropic_material((1, 1), 200, 0.3)
bc = puma.ElasticityBC.from_workspace(ws)
bc[0] = 0
bc[-1] = [0, 1, 0]
solver = Elasticity(ws, elast_map, None, 'f', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_xf.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 'p', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_xp.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 's', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_xs.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
bc = puma.ElasticityBC.from_workspace(ws)
bc[:, 0] = 0
bc[:, -1] = [0, 0, 1]
solver = Elasticity(ws, elast_map, None, 'f', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_yf.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 'p', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_yp.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 's', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_ys.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
bc = puma.ElasticityBC.from_workspace(ws)
bc[:, :, 0] = 0
bc[:, :, -1] = [1, 0, 0]
solver = Elasticity(ws, elast_map, None, 'f', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_zf.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 'p', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_zp.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
solver = Elasticity(ws, elast_map, None, 's', bc, None, None, "direct", True, (0, 0, 0, 0, 0))
solver.error_check()
solver.initialize()
solver.assemble_bvector()
solver.assemble_Amatrix()
Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/Amat_builtinbeam596_zs.npz')
test_Amat = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
self.assertAlmostEqual(test_Amat.max(), 0, 10)
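    # Editorial sketch (not used by the assertions above): the nine
    # solver/boundary-condition checks in test_Amat_builtinbeam596 follow one
    # pattern and could be collapsed into a helper like this.
    def _check_amat(self, ws, elast_map, bc, side_bc, npz_file):
        solver = Elasticity(ws, elast_map, None, side_bc, bc, None, None, "direct", True, (0, 0, 0, 0, 0))
        solver.error_check()
        solver.initialize()
        solver.assemble_bvector()
        solver.assemble_Amatrix()
        Amat_correct = scipy.sparse.load_npz('testdata/mpsa_Amat/' + npz_file)
        diff = np.abs(solver.Amat.toarray() - Amat_correct.toarray())
        self.assertAlmostEqual(diff.max(), 0, 10)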
if __name__ == '__main__':
unittest.main()
| 50.583784 | 132 | 0.585114 | 2,689 | 18,716 | 3.859799 | 0.061361 | 0.022353 | 0.017632 | 0.077079 | 0.855285 | 0.825128 | 0.819058 | 0.799788 | 0.772136 | 0.757106 | 0 | 0.091392 | 0.256358 | 18,716 | 369 | 133 | 50.720867 | 0.654333 | 0.050438 | 0 | 0.413534 | 0 | 0 | 0.040534 | 0.022832 | 0 | 0 | 0 | 0 | 0.184211 | 1 | 0.090226 | false | 0 | 0.018797 | 0 | 0.112782 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
834e1fdba8616bc095b05bbeddd01a503a993975 | 78 | py | Python | organization/organization/app.py | estebistec/morepath-sandbox | 9b6167ece8a831fcb80d339e9437e6d482069b73 | [
"BSD-3-Clause"
] | 1 | 2019-06-23T09:15:16.000Z | 2019-06-23T09:15:16.000Z | organization/organization/app.py | estebistec/morepath-sandbox | 9b6167ece8a831fcb80d339e9437e6d482069b73 | [
"BSD-3-Clause"
] | null | null | null | organization/organization/app.py | estebistec/morepath-sandbox | 9b6167ece8a831fcb80d339e9437e6d482069b73 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import morepath
class App(morepath.App):
pass
| 8.666667 | 24 | 0.602564 | 10 | 78 | 4.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.230769 | 78 | 8 | 25 | 9.75 | 0.766667 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
83540f5b1238e5192c01126914be5fb64e3ccd3d | 75 | py | Python | src/applications/profile/views/__init__.py | tgrx/obliviscor | 31f9a4476892460c931b9a8fc5403c3afcc47607 | [
"Apache-2.0"
] | null | null | null | src/applications/profile/views/__init__.py | tgrx/obliviscor | 31f9a4476892460c931b9a8fc5403c3afcc47607 | [
"Apache-2.0"
] | 20 | 2020-04-16T23:45:50.000Z | 2020-05-05T14:22:03.000Z | src/applications/profile/views/__init__.py | tgrx/obliviscor | 31f9a4476892460c931b9a8fc5403c3afcc47607 | [
"Apache-2.0"
] | null | null | null | from .profile import ProfileView
from .profile_edit import ProfileEditView
| 25 | 41 | 0.866667 | 9 | 75 | 7.111111 | 0.666667 | 0.34375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 75 | 2 | 42 | 37.5 | 0.955224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83631e9cd68f3376ae4c4f48a4ac9be3ed77d8a6 | 12,892 | py | Python | thesis_util/experiment_eval/compare_all_models.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | 14 | 2020-06-28T15:38:48.000Z | 2021-12-05T01:49:50.000Z | thesis_util/experiment_eval/compare_all_models.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | null | null | null | thesis_util/experiment_eval/compare_all_models.py | duennbart/masterthesis_VAE | 1a161bc5c234acc0a021d84cde8cd69e784174e1 | [
"BSD-3-Clause"
] | 3 | 2020-06-28T15:38:49.000Z | 2022-02-13T22:04:34.000Z | # create recon and sample images for all models
from thesis_util.thesis_util import stack_trials, create_eval_recon_all_imgs
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import tikzplotlib
from scipy import signal
# for my pc
#path_to_git = r'C:\GIT'
# for tum pc
path_to_git = r'C:\Users\ga45tis\GIT'
save_path = path_to_git + r"\masterthesisgeneral\latex\900 Report\images\experiments\\"
title = 'Reconstruction of Test Data'
pdf_file_name='recon_all_experiments_test'
# create figure for test images
data = [
( path_to_git + r"\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\recon_test_epoch_197.png", r'Input', 0),
(path_to_git + r"\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\recon_test_epoch_197.png", r'$\textrm{VAE}_{50}$',1),
(path_to_git + r"\mastherthesiseval\experiments\VAE8192_02_14AM on November 28, 2019\imgs\recon_test_epoch_223.png",r'$\text{VAE}_{8192}$',1),
(path_to_git + r"\mastherthesiseval\experiments\SpatialVAE_02_14AM on November 21, 2019\imgs\recon_test_epoch_291.png",r'$\text{SVAE}_{3 \times 3 \times 9}$',1),
(path_to_git + r"\mastherthesiseval\experiments\SpatialVAE161632adpt_05_35PM on November 28, 2019\imgs\recon_test_epoch_292.png",r'$\text{SVAE}_{16 \times 16 \times 32}$',1),
(path_to_git + r"\mastherthesiseval\experiments\VPGA_04_50AM on November 27, 2019\imgs\recon_test_epoch_247.png",r'$\text{VPGA}_{50}$',1),
(path_to_git + r"\mastherthesiseval\experiments\VQVAE_10_30AM on November 24, 2019\imgs\recon_test_epoch_243.png",r'$\text{VQ-VAE}_{std}$',1),
(path_to_git + r"\mastherthesiseval\experiments\VQVAEadapt_06_25AM on November 25, 2019\imgs\recon_test_epoch_292.png",r'$\text{VQ-VAE}_{adpt}$',1),
(path_to_git + r"\mastherthesiseval\experiments\IntroVAE_02_17AM on November 20, 2019\imgs\recon_test_epoch_300.png",r'$\text{IntroVAE}_{50}$',1)
]
#create_eval_recon_all_imgs(data,title,pdf_file_name,save_directory=save_path,prefix_4include=r"images/experiments/")
# create for training data
data = [
( path_to_git + r'\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\recon_train_epoch_290.png', r'Input', 0),
(path_to_git + r'\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\recon_train_epoch_290.png', r'$\textrm{VAE}_{50}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VAE8192_02_14AM on November 28, 2019\imgs\recon_train_epoch_293.png',r'$\text{VAE}_{8192}$',1),
(path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_02_14AM on November 21, 2019\imgs\recon_train_epoch_300.png',r'$\text{SVAE}_{3 \times 3 \times 9}$',1),
(path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_05_35PM on November 28, 2019\imgs\recon_train_epoch_281.png',r'$\text{SVAE}_{16 \times 16 \times 32}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VPGA_04_50AM on November 27, 2019\imgs\recon_train_epoch_287.png',r'$\text{VPGA}_{50}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VQVAE_10_30AM on November 24, 2019\imgs\recon_train_epoch_282.png',r'$\text{VQ-VAE}_{std}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VQVAEadapt_06_25AM on November 25, 2019\imgs\recon_train_epoch_256.png',r'$\text{VQ-VAE}_{adpt}$',1),
(path_to_git + r'\mastherthesiseval\experiments\IntroVAE_02_17AM on November 20, 2019\imgs\recon_train_epoch_300.png',r'$\text{IntroVAE}_{50}$',1)
]
title = 'Reconstruction of Training Data'
pdf_file_name='recon_all_experiments_train'
#create_eval_recon_all_imgs(data,title,pdf_file_name,save_directory=save_path,prefix_4include=r"images/experiments/")
# create figure for randomly generated samples
data = [
( path_to_git + r'\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\generated_sample_epoch_300.png', r'Input', 0),
(path_to_git + r'\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019\imgs\generated_sample_epoch_300.png', r'$\textrm{VAE}_{50}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VAE8192_02_14AM on November 28, 2019\imgs\generated_sample_epoch_300.png',r'$\text{VAE}_{8192}$',1),
(path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_02_14AM on November 21, 2019\imgs\generated_sample_epoch_300.png',r'$\text{SVAE}_{3 \times 3 \times 9}$',1),
(path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_05_35PM on November 28, 2019\imgs\generated_sample_epoch_292.png',r'$\text{SVAE}_{16 \times 16 \times 32}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VPGA_04_50AM on November 27, 2019\imgs\generated_sample_epoch_300.png',r'$\text{VPGA}_{50}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VQVAE_10_30AM on November 24, 2019\imgs\generated_sample_epoch_282.png',r'$\text{VQ-VAE}_{std}$',1),
(path_to_git + r'\mastherthesiseval\experiments\VQVAEadapt_06_25AM on November 25, 2019\imgs\generated_sample_epoch_300.png',r'$\text{VQ-VAE}_{adpt}$',1),
(path_to_git + r'\mastherthesiseval\experiments\IntroVAE_02_17AM on November 20, 2019\imgs\generated_sample_epoch_299.png',r'$\text{IntroVAE}_{50}$',1)
]
title = 'Random Generated Samples'
pdf_file_name='random_generated_all_experiments'
#create_eval_recon_all_imgs(data,title,pdf_file_name,save_directory=save_path,prefix_4include=r"images/experiments/",add_kl_class=False)
# create learning curve figures for all experiments
experiments = []
# VAE_50
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\VAE_01_02PM on November 20, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE_04_12PM on November 20, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE_07_12PM on November 20, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE_11_20PM on November 19, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize$\textrm{VAE}_{50}$}'}
experiments.append(model)
# VAE_8192
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\VAE8192_02_14AM on November 28, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE8192_05_56AM on November 28, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE8192_09_41AM on November 28, 2019',
path_to_git + r'\mastherthesiseval\experiments\VAE8192_10_33PM on November 27, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{VAE}_{8192}$}'}
experiments.append(model)
# SVAE_339
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_02_14AM on November 21, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_05_12AM on November 21, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_06_52AM on November 20, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE_11_16PM on November 20, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{SVAE}_{3 \times 3 \times 9}$}'}
experiments.append(model)
# SVAE_161632
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_01_08AM on November 29, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_01_50PM on November 28, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_05_35PM on November 28, 2019',
path_to_git + r'\mastherthesiseval\experiments\SpatialVAE161632adpt_09_26PM on November 28, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{SVAE}_{16 \times 16 \times 32}$}'}
experiments.append(model)
# VPGA_50
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\VPGA_01_32PM on November 27, 2019',
path_to_git + r'\mastherthesiseval\experiments\VPGA_04_50AM on November 27, 2019',
path_to_git + r'\mastherthesiseval\experiments\VPGA_06_51PM on November 26, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{VPGA}_{50}$}'}
experiments.append(model)
# VQ-VAE std
pathes_2_experiments = [path_to_git +r'\mastherthesiseval\experiments\VQVAE_01_23PM on November 24, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAE_04_16PM on November 24, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAE_07_09PM on November 24, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAE_10_30AM on November 24, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{VQ-VAE}_{std}$}'}
experiments.append(model)
# VQ-VAE adpt
pathes_2_experiments = [path_to_git +r'\mastherthesiseval\experiments\VQVAEadapt_06_25AM on November 25, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAEadapt_07_24PM on November 25, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAEadapt_12_05AM on November 25, 2019',
path_to_git +r'\mastherthesiseval\experiments\VQVAEadapt_12_49PM on November 25, 2019']
model = {"paths": pathes_2_experiments,
"title": r'{\normalsize $\textrm{VQ-VAE}_{adpt}$}'}
experiments.append(model)
# intro vae
pathes_2_experiments = [path_to_git + r'\mastherthesiseval\experiments\IntroVAE_02_17AM on November 20, 2019',
path_to_git + r'\mastherthesiseval\experiments\IntroVAE_02_36PM on November 21, 2019',
path_to_git + r'\mastherthesiseval\experiments\IntroVAE_09_57AM on November 21, 2019']
model = {"paths": pathes_2_experiments,
"title":r'{\normalsize $\textrm{IntroVAE}_{50}$}'}
experiments.append(model)
# create MSE plot
xlabel = "Epoch"
ylabel = "MSE"
plot_title = "Average Learning Curve for Test Data "
legend_position='upper right'
epochs = np.arange(0, 300)
fig, ax = plt.subplots()
matplotlib.rcParams['text.usetex'] = True
b, a = signal.butter(1, 0.07)
for element in experiments:
paths = element["paths"]
# mse
result_path_test = [x + r'\results\mse_test300.npy' for x in paths]
mse = stack_trials(result_path_test)
mse = mse.mean(axis=0)
mse = mse[:300]
print(mse.shape)
title = element["title"]
mse = signal.filtfilt(b, a, mse)
ax.plot(epochs, mse, label=title, linewidth=2.5)
#ax.legend(loc=legend_position,ncol=1,bbox_to_anchor=(1.55, 0.8))
ax.grid()
ax.yaxis.set_ticks_position('both')
ax.xaxis.set_ticks_position('both')
ax.set_ylim([0,0.006])
ax.set_xlim([0,300])
ax.ticklabel_format(style='sci',scilimits=(0,0),axis='y')
plt.xlabel(xlabel)
#plt.ylabel(ylabel)
plt.title(ylabel)
tikzplotlib.save(save_path + "mse_all_test.tex")
plt.show()
# create MSSIM plot
xlabel = "Epoch"
ylabel = "MS-SSIM"
plot_title = "Average Learning Curve for Test Data "
legend_position='upper right'
epochs = np.arange(0, 300)
fig, ax = plt.subplots()
matplotlib.rcParams['text.usetex'] = True
for element in experiments:
paths = element["paths"]
    # ms-ssim
result_path_test = [x + r'\results\msssim_test300.npy' for x in paths]
mse = stack_trials(result_path_test)
mse = mse.mean(axis=0)
mse = mse[:300]
mse = signal.filtfilt(b, a, mse)
print(mse.shape)
title = element["title"]
ax.plot(epochs, mse, label=title, linewidth=2.5)
#ax.legend(loc=legend_position,ncol=1,bbox_to_anchor=(1.55, 0.8))
ax.grid()
ax.yaxis.set_ticks_position('both')
ax.xaxis.set_ticks_position('both')
ax.set_ylim([0.7,1])
ax.set_xlim([0,300])
ax.ticklabel_format(style='sci',scilimits=(0,0),axis='y')
plt.xlabel(xlabel)
#plt.ylabel(ylabel)
plt.title(ylabel)
tikzplotlib.save(save_path + "msssim_all_test.tex")
plt.show()
# create FID plot
xlabel = "Epoch"
ylabel = "FID"
plot_title = "Average Learning Curve for Test Data "
legend_position='upper right'
epochs = np.arange(0, 300)
fig, ax = plt.subplots()
matplotlib.rcParams['text.usetex'] = True
for element in experiments:
paths = element["paths"]
    # fid
result_path_test = [x + r'\results\fid_score300.npy' for x in paths]
mse = stack_trials(result_path_test)
mse = mse.mean(axis=0)
mse = mse[:300]
mse = signal.filtfilt(b, a, mse)
print(mse.shape)
title = element["title"]
ax.plot(epochs, mse, label=title, linewidth=2.5)
ax.legend(loc=legend_position,ncol=1,bbox_to_anchor=(1.55, 0.8))
ax.grid()
ax.yaxis.set_ticks_position('both')
ax.xaxis.set_ticks_position('both')
#ax.set_ylim([0.7,1])
ax.set_xlim([0,300])
ax.ticklabel_format(style='sci',scilimits=(0,0),axis='y')
plt.xlabel(xlabel)
#plt.ylabel(ylabel)
plt.title(ylabel)
tikzplotlib.save(save_path + "fid_all_test.tex")
plt.show()
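# Editorial sketch (not invoked above): the three plotting sections share one
# pattern; assuming the experiments list, smoothing coefficients (b, a) and
# save_path defined earlier, they could be factored like this.
def plot_metric(result_file, ylabel, tex_name, ylim=None):
    fig, ax = plt.subplots()
    for element in experiments:
        curves = stack_trials([x + '\\results\\' + result_file for x in element["paths"]])
        curve = signal.filtfilt(b, a, curves.mean(axis=0)[:300])
        ax.plot(np.arange(0, 300), curve, label=element["title"], linewidth=2.5)
    ax.grid()
    ax.yaxis.set_ticks_position('both')
    ax.xaxis.set_ticks_position('both')
    if ylim is not None:
        ax.set_ylim(ylim)
    ax.set_xlim([0, 300])
    ax.ticklabel_format(style='sci', scilimits=(0, 0), axis='y')
    plt.xlabel("Epoch")
    plt.title(ylabel)
    tikzplotlib.save(save_path + tex_name)
    plt.show()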
| 53.272727 | 184 | 0.728359 | 1,907 | 12,892 | 4.660724 | 0.120608 | 0.041179 | 0.061769 | 0.068632 | 0.883663 | 0.865662 | 0.848447 | 0.836521 | 0.811769 | 0.77689 | 0 | 0.086291 | 0.138846 | 12,892 | 241 | 185 | 53.493776 | 0.714286 | 0.07206 | 0 | 0.475676 | 0 | 0 | 0.541111 | 0.359484 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032432 | 0 | 0.032432 | 0.016216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55d87bf85cbe9d9f98bfddf53e2646db789742ca | 222 | py | Python | dns/rdtypes/ANY/SMIMEA.py | Ashiq5/dnspython | 5449af5318d88bada34f661247f3bcb16f58f057 | [
"ISC"
] | 1,666 | 2015-01-02T17:46:14.000Z | 2022-03-30T07:27:32.000Z | dns/rdtypes/ANY/SMIMEA.py | felixonmars/dnspython | 2691834df42aab74914883fdf26109aeb62ec647 | [
"ISC"
] | 591 | 2015-01-16T12:19:49.000Z | 2022-03-30T21:32:11.000Z | dns/rdtypes/ANY/SMIMEA.py | felixonmars/dnspython | 2691834df42aab74914883fdf26109aeb62ec647 | [
"ISC"
] | 481 | 2015-01-14T04:14:43.000Z | 2022-03-30T19:28:52.000Z | # Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
import dns.immutable
import dns.rdtypes.tlsabase
@dns.immutable.immutable
class SMIMEA(dns.rdtypes.tlsabase.TLSABase):
"""SMIMEA record"""
| 22.2 | 75 | 0.774775 | 29 | 222 | 5.931034 | 0.62069 | 0.104651 | 0.209302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126126 | 222 | 9 | 76 | 24.666667 | 0.886598 | 0.396396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36056c20194b21d8b1a1e189117c8df54fc4525a | 30 | py | Python | hello/hellorh.py | djdibyo90/DO400-apps-external | 59dd75cced4771bfb164394c906924cc21b47e42 | [
"Apache-2.0"
] | null | null | null | hello/hellorh.py | djdibyo90/DO400-apps-external | 59dd75cced4771bfb164394c906924cc21b47e42 | [
"Apache-2.0"
] | null | null | null | hello/hellorh.py | djdibyo90/DO400-apps-external | 59dd75cced4771bfb164394c906924cc21b47e42 | [
"Apache-2.0"
] | 1 | 2021-05-25T01:59:34.000Z | 2021-05-25T01:59:34.000Z | print("Hello RedHat DevOps!")
| 15 | 29 | 0.733333 | 4 | 30 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
36454933c4f7a500c067185aad9e6251194f7e7d | 610 | py | Python | deploy/support.py | danielSbastos/blue-green-aws | bb78448edd2496d69f4802ef7337ce648a965be3 | [
"MIT"
] | null | null | null | deploy/support.py | danielSbastos/blue-green-aws | bb78448edd2496d69f4802ef7337ce648a965be3 | [
"MIT"
] | null | null | null | deploy/support.py | danielSbastos/blue-green-aws | bb78448edd2496d69f4802ef7337ce648a965be3 | [
"MIT"
] | null | null | null | import boto3
from settings import SETTINGS
class Resource:
    @staticmethod
    def ec2():
        return boto3.resource('ec2', region_name=SETTINGS['region'])

    @staticmethod
    def s3():
        return boto3.resource('s3', region_name=SETTINGS['region'])


class Client:
    @staticmethod
    def ec2():
        return boto3.client('ec2', region_name=SETTINGS['region'])

    @staticmethod
    def auto_scaling():
        return boto3.client('autoscaling', region_name=SETTINGS['region'])

    @staticmethod
    def load_balancer():
        return boto3.client('elb', region_name=SETTINGS['region'])
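# Usage sketch (hypothetical: assumes settings.py defines
# SETTINGS = {'region': 'us-east-1'} and AWS credentials are configured):
#     ec2_resource = Resource.ec2()       # boto3 EC2 resource in that region
#     asg_client = Client.auto_scaling()  # boto3 Auto Scaling client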
| 21.785714 | 74 | 0.663934 | 67 | 610 | 5.940299 | 0.298507 | 0.188442 | 0.226131 | 0.301508 | 0.454774 | 0.309045 | 0.211055 | 0 | 0 | 0 | 0 | 0.024896 | 0.209836 | 610 | 27 | 75 | 22.592593 | 0.80083 | 0 | 0 | 0.368421 | 0 | 0 | 0.085246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | true | 0 | 0.105263 | 0.263158 | 0.736842 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
36ae4b68d0f0468284db4adde76121c4b9c9cc87 | 170 | py | Python | ml-intro/quiz-classifier/ClassifyNB.py | moreirab/udacity-nfml-quizzes | 91e5fc8ce0bee835bb7eec324669dd5c0c85a702 | [
"MIT"
] | null | null | null | ml-intro/quiz-classifier/ClassifyNB.py | moreirab/udacity-nfml-quizzes | 91e5fc8ce0bee835bb7eec324669dd5c0c85a702 | [
"MIT"
] | null | null | null | ml-intro/quiz-classifier/ClassifyNB.py | moreirab/udacity-nfml-quizzes | 91e5fc8ce0bee835bb7eec324669dd5c0c85a702 | [
"MIT"
] | null | null | null | from sklearn.naive_bayes import GaussianNB
def classify(features_train, labels_train):
    clf = GaussianNB()
    clf.fit(features_train, labels_train)
    return clf
| 28.333333 | 46 | 0.758824 | 22 | 170 | 5.636364 | 0.636364 | 0.209677 | 0.306452 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170588 | 170 | 6 | 47 | 28.333333 | 0.879433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
36b41f97568d72863cbee2bf59de00bac22e3a83 | 23,199 | py | Python | aptenodytes/main.py | yongrenjie/aptenodytes | 0eb33b89c2358be42e9c3c4aa554618c6b2809e2 | [
"MIT"
] | null | null | null | aptenodytes/main.py | yongrenjie/aptenodytes | 0eb33b89c2358be42e9c3c4aa554618c6b2809e2 | [
"MIT"
] | null | null | null | aptenodytes/main.py | yongrenjie/aptenodytes | 0eb33b89c2358be42e9c3c4aa554618c6b2809e2 | [
"MIT"
] | null | null | null | """
main.py
-------
All the functionality is in this one file. These routines are for personal use
and are largely undocumented. Use at your own risk!
"""
import os
from pathlib import Path
from typing import List, Tuple, Optional, Sequence, Any, Union, Generator
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import penguins as pg
from penguins import dataset as ds # for type annotations
# These are pure convenience routines for my personal use.
# Default save location for plots
dsl = Path("/Users/yongrenjie/Desktop/a_plot.png")
# Path to NMR spectra. The $nmrd environment variable should resolve to
# .../dphil/expn/nmr. On my Mac this is set to my SSD.
def __getenv(key):
    value = os.getenv(key)
    if value is not None:
        x = Path(value)
        if x.exists():
            return x
    raise FileNotFoundError(f"${key} does not point to a valid location.")


def nmrd():
    return __getenv("nmrd")
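# Usage sketch: nmrd() resolves the $nmrd directory, so a spectrum folder can
# be addressed as nmrd() / "some_expt_folder" (folder name purely illustrative).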
# -- Seaborn plotting functions for SNR comparisons
def hsqc_stripplot(molecule: Any,
datasets: Union[ds.Dataset2D, Sequence[ds.Dataset2D]],
ref_dataset: ds.Dataset2D,
expt_labels: Union[str, Sequence[str]],
xlabel: str = "Experiment",
ylabel: str = "Intensity",
title: str = "",
edited: bool = False,
show_averages: bool = True,
ncol: int = 3,
loc: str = "upper center",
ax: Optional[Any] = None,
**kwargs: Any,
) -> Tuple[Any, Any]:
"""
Plot HSQC strip plots (i.e. plot relative intensities, split by
multiplicity).
Parameters
----------
molecule : pg.private.Andrographolide or pg.private.Zolmitriptan
The class from which the hsqc attribute will be taken
datasets : pg.Dataset2D or sequence of pg.Dataset2D
Dataset(s) to analyse intensities of
ref_dataset : pg.Dataset2D
Reference dataset
expt_labels : str or sequence of strings
Labels for the analysed datasets
xlabel : str, optional
Axes x-label, defaults to "Experiment"
ylabel : str, optional
Axes y-label, defaults to "Intensity"
title : str, optional
Axes title, defaults to empty string
edited : bool, default False
Whether editing is enabled or not.
show_averages : bool, default True
Whether to indicate averages in each category using sns.pointplot.
ncol : int, optional
Passed to ax.legend(). Defaults to 3.
loc : str, optional
Passed to ax.legend(). Defaults to "upper center".
ax : matplotlib.axes.Axes, optional
Axes instance to plot on. If not provided, uses plt.gca().
kwargs : dict, optional
Keywords passed on to sns.stripplot().
Returns
-------
(fig, ax).
"""
# Stick dataset/label into a list if needed
if isinstance(datasets, ds.Dataset2D):
datasets = [datasets]
if isinstance(expt_labels, str):
expt_labels = [expt_labels]
# Calculate dataframes of relative intensities.
rel_ints_dfs = [molecule.hsqc.rel_ints_df(dataset=ds,
ref_dataset=ref_dataset,
label=label,
edited=edited)
for (ds, label) in zip(datasets, expt_labels)]
all_dfs = pd.concat(rel_ints_dfs)
# Calculate the average integrals by multiplicity
avgd_ints = pd.concat((df.groupby("mult").mean() for df in rel_ints_dfs),
axis=1)
avgd_ints.drop(columns=["f1", "f2"], inplace=True)
# Get currently active axis if none provided
if ax is None:
ax = plt.gca()
# Plot the intensities.
stripplot_alpha = 0.3 if show_averages else 0.8
sns.stripplot(x="expt", y="int", hue="mult",
zorder=0, alpha=stripplot_alpha,
dodge=True, data=all_dfs, ax=ax, **kwargs)
if show_averages:
sns.pointplot(x="expt", y="int", hue="mult", zorder=1,
dodge=0.5, data=all_dfs, ax=ax, join=False,
markers='_', palette="dark", ci=None, scale=1.25)
# Customise the plot
ax.set(xlabel=xlabel, ylabel=ylabel, title=title)
handles, _ = ax.get_legend_handles_labels()
l = ax.legend(ncol=ncol, loc=loc,
markerscale=0.4,
handles=handles[0:3],
labels=["CH", r"CH$_2$", r"CH$_3$"])
ax.axhline(y=1, color="grey", linewidth=0.5, linestyle="--")
# Set y-limits. We need to expand it by ~20% to make space for the legend,
# as well as the averaged values.
EXPANSION_FACTOR = 1.2
ymin, ymax = ax.get_ylim()
ymean = (ymin + ymax)/2
ylength = (ymax - ymin)/2
new_ymin = ymean - (EXPANSION_FACTOR * ylength)
new_ymax = ymean + (EXPANSION_FACTOR * ylength)
ax.set_ylim((new_ymin, new_ymax))
# add the text
for x, (_, expt_avgs) in enumerate(avgd_ints.items()):
for i, ((_, avg), color) in enumerate(zip(expt_avgs.items(),
sns.color_palette("deep"))):
ax.text(x=x-0.25+i*0.25, y=0.02, s=f"({avg:.2f})",
color=color, horizontalalignment="center",
transform=ax.get_xaxis_transform())
pg.style_axes(ax, "plot")
return plt.gcf(), ax
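# Usage sketch (illustrative names only: `andro` stands for one of the molecule
# classes named in the docstring, the datasets for penguins Dataset2D objects):
#     fig, ax = hsqc_stripplot(andro, datasets=expt_hsqc, ref_dataset=ref_hsqc,
#                              expt_labels="NOAH HSQC", edited=True)
#     fig.savefig(dsl)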
def cosy_stripplot(molecule: Any,
datasets: Union[ds.Dataset2D, Sequence[ds.Dataset2D]],
ref_dataset: ds.Dataset2D,
expt_labels: Union[str, Sequence[str]],
xlabel: str = "Experiment",
ylabel: str = "Intensity",
title: str = "",
ncol: int = 2,
separate_type: bool = True,
loc: str = "upper center",
ax: Optional[Any] = None,
**kwargs: Any,
) -> Tuple[Any, Any]:
"""
Plot COSY strip plots (i.e. plot relative intensities, split by peak type).
Parameters
----------
molecule : pg.private.Andrographolide or pg.private.Zolmitriptan
The class from which the cosy attribute will be taken
datasets : pg.Dataset2D or sequence of pg.Dataset2D
Dataset(s) to analyse intensities of
ref_dataset : pg.Dataset2D
Reference dataset
expt_labels : str or sequence of strings
Labels for the analysed datasets
xlabel : str, optional
Axes x-label, defaults to "Experiment"
ylabel : str, optional
Axes y-label, defaults to "Intensity"
title : str, optional
Axes title, defaults to empty string
ncol : int, optional
Passed to ax.legend(). Defaults to 2.
loc : str, optional
Passed to ax.legend(). Defaults to "upper center".
ax : matplotlib.axes.Axes, optional
Axes instance to plot on. If not provided, uses plt.gca().
kwargs : dict, optional
Keywords passed on to sns.stripplot().
Returns
-------
(fig, ax).
"""
# Stick dataset/label into a list if needed
if isinstance(datasets, ds.Dataset2D):
datasets = [datasets]
if isinstance(expt_labels, str):
expt_labels = [expt_labels]
# Calculate dataframes of relative intensities.
rel_ints_dfs = [molecule.cosy.rel_ints_df(dataset=ds,
ref_dataset=ref_dataset,
label=label)
for (ds, label) in zip(datasets, expt_labels)]
if not separate_type:
rel_ints_dfs = [rel_int_df.assign(type="cosy")
for rel_int_df in rel_ints_dfs]
all_dfs = pd.concat(rel_ints_dfs)
# Calculate the average integrals by type
avgd_ints = pd.concat((df.groupby("type").mean() for df in rel_ints_dfs),
axis=1)
avgd_ints.drop(columns=["f1", "f2"], inplace=True)
# Get currently active axis if none provided
if ax is None:
ax = plt.gca()
# Plot the intensities.
sns.stripplot(x="expt", y="int", hue="type",
dodge=True, data=all_dfs, ax=ax,
palette=sns.color_palette("deep")[3:], **kwargs)
# Customise the plot
ax.set(xlabel=xlabel, ylabel=ylabel, title=title)
if separate_type:
ax.legend(ncol=ncol, loc=loc,
labels=["diagonal", "cross"]).set(title=None)
else:
ax.legend().set_visible(False)
ax.axhline(y=1, color="grey", linewidth=0.5, linestyle="--")
# Set y-limits. We need to expand it by ~20% to make space for the legend,
# as well as the averaged values.
EXPANSION_FACTOR = 1.2
ymin, ymax = ax.get_ylim()
ymean = (ymin + ymax)/2
ylength = (ymax - ymin)/2
new_ymin = ymean - (EXPANSION_FACTOR * ylength)
new_ymax = ymean + (EXPANSION_FACTOR * ylength)
ax.set_ylim((new_ymin, new_ymax))
# add the text
offset = -0.2 if separate_type else 0
dx = 0.4 if separate_type else 1
for x, (_, expt_avgs) in enumerate(avgd_ints.items()):
for i, ((_, avg), color) in enumerate(zip(
expt_avgs.items(), sns.color_palette("deep")[3:])):
ax.text(x=x-offset+i*dx, y=0.02, s=f"({avg:.2f})",
color=color, horizontalalignment="center",
transform=ax.get_xaxis_transform())
pg.style_axes(ax, "plot")
return plt.gcf(), ax
def hsqc_cosy_stripplot(molecule: Any,
datasets: Sequence[ds.Dataset2D],
ref_datasets: Sequence[ds.Dataset2D],
xlabel: str = "Experiment",
ylabel: str = "Intensity",
title: str = "",
edited: bool = False,
show_averages: bool = True,
separate_mult: bool = True,
ncol: int = 4,
loc: str = "upper center",
ax: Optional[Any] = None,
font_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> Tuple[Any, Any]:
"""
Plot HSQC and COSY relative intensities on the same Axes. HSQC peaks are
split by multiplicity, COSY peaks are not split.
Parameters
----------
molecule : pg.private.Andrographolide or pg.private.Zolmitriptan
The class from which the hsqc and cosy attributes will be taken
datasets : (pg.Dataset2D, pg.Dataset2D)
HSQC and COSY dataset(s) to analyse intensities of
ref_datasets : (pg.Dataset2D, pg.Dataset2D)
Reference HSQC and COSY datasets
xlabel : str, optional
Axes x-label, defaults to "Experiment"
ylabel : str, optional
Axes y-label, defaults to "Intensity"
title : str, optional
Axes title, defaults to empty string
edited : bool, default False
Whether editing in the HSQC is enabled or not.
show_averages : bool, default True
Whether to indicate averages in each category using sns.pointplot.
ncol : int, optional
Passed to ax.legend(). Defaults to 4.
loc : str, optional
Passed to ax.legend(). Defaults to "upper center".
ax : matplotlib.axes.Axes, optional
Axes instance to plot on. If not provided, uses plt.gca().
kwargs : dict, optional
Keywords passed on to sns.stripplot().
Returns
-------
(fig, ax).
"""
# Set up default font_kwargs if not provided.
font_kwargs = font_kwargs or {}
# Calculate dataframes of relative intensities.
hsqc_rel_ints_df = molecule.hsqc.rel_ints_df(dataset=datasets[0],
ref_dataset=ref_datasets[0],
edited=edited)
# Rename mult -> type to match COSY
hsqc_rel_ints_df = hsqc_rel_ints_df.rename(columns={"mult": "type"})
# Remove multiplicity information if separation is not desired
if not separate_mult:
hsqc_rel_ints_df = hsqc_rel_ints_df.assign(type="hsqc")
cosy_rel_ints_df = molecule.cosy.rel_ints_df(dataset=datasets[1],
ref_dataset=ref_datasets[1])
cosy_rel_ints_df = cosy_rel_ints_df.assign(type="cosy")
rel_ints_df = pd.concat((hsqc_rel_ints_df, cosy_rel_ints_df))
# Calculate the average integrals by multiplicity
avgd_ints = rel_ints_df.groupby("type").mean()
# Fix the order if we need to (because by default it would be alphabetical)
if not separate_mult:
avgd_ints = avgd_ints.reindex(["hsqc", "cosy"])
avgd_ints.drop(columns=["f1", "f2"], inplace=True)
# Get currently active axis if none provided
if ax is None:
ax = plt.gca()
# Plot the intensities.
stripplot_alpha = 0.3 if show_averages else 0.8
sns.stripplot(x="expt", y="int", hue="type",
zorder=0, alpha=stripplot_alpha,
dodge=True, data=rel_ints_df, ax=ax, **kwargs)
if show_averages:
dodge = 0.6 if separate_mult else 0.4
sns.pointplot(x="expt", y="int", hue="type", zorder=1,
dodge=dodge, data=rel_ints_df, ax=ax, join=False,
markers='_', palette="dark", ci=None, scale=1.25)
# Customise the plot
ax.set(xlabel=xlabel, ylabel=ylabel, title=title, xticks=[])
# Setting the handles manually ensures that we get stripplot handles
# rather than the pointplot ones (if present).
handles, _ = ax.get_legend_handles_labels()
l = ax.legend(ncol=ncol, loc=loc,
markerscale=0.4,
handles=handles[0:4],
labels=["HSQC CH", r"HSQC CH$_2$", r"HSQC CH$_3$", "COSY"])
l.set(title=None)
ax.axhline(y=1, color="grey", linewidth=0.5, linestyle="--")
# Set y-limits. We need to expand it by ~20% to make space for the legend,
# as well as the averaged values.
EXPANSION_FACTOR = 1.2
ymin, ymax = ax.get_ylim()
ymean = (ymin + ymax)/2
ylength = (ymax - ymin)/2
new_ymin = ymean - (EXPANSION_FACTOR * ylength)
new_ymax = ymean + (EXPANSION_FACTOR * ylength)
ax.set_ylim((new_ymin, new_ymax))
# Add the text and averages
x0 = -0.3 if separate_mult else -0.2
dx = 0.2 if separate_mult else 0.4
for x, (_, expt_avgs) in enumerate(avgd_ints.items()):
for i, ((_, avg), deep) in enumerate(zip(expt_avgs.items(),
sns.color_palette("deep"))):
ax.text(x=x+x0+i*dx, y=0.02, s=f"({avg:.2f})",
color=deep, horizontalalignment="center",
transform=ax.get_xaxis_transform(),
**font_kwargs)
pg.style_axes(ax, "plot")
return plt.gcf(), ax
def hsqcc_stripplot(molecule: Any,
datasets: Union[ds.Dataset2D, Sequence[ds.Dataset2D]],
ref_dataset: ds.Dataset2D,
expt_labels: Union[str, Sequence[str]],
xlabel: str = "Experiment",
ylabel: str = "Intensity",
title: str = "",
edited: bool = True,
show_averages: bool = True,
ncol: int = 2,
loc: str = "upper center",
ax: Optional[Any] = None,
**kwargs: Any,
) -> Tuple[Any, Any]:
"""
Plot HSQC-COSY strip plots (i.e. plot relative intensities, split by peak
type).
Parameters
----------
molecule : pg.private.Andrographolide
The class from which the hsqc_cosy attribute will be taken
datasets : pg.Dataset2D or sequence of pg.Dataset2D
Dataset(s) to analyse intensities of
ref_dataset : pg.Dataset2D
Reference dataset
expt_labels : str or sequence of strings
Labels for the analysed datasets
xlabel : str, optional
Axes x-label, defaults to "Experiment"
ylabel : str, optional
Axes y-label, defaults to "Intensity"
title : str, optional
Axes title, defaults to empty string
edited : bool, default True
Whether editing is enabled or not.
show_averages : bool, default True
Whether to indicate averages in each category using sns.pointplot.
ncol : int, optional
Passed to ax.legend(). Defaults to 2.
loc : str, optional
Passed to ax.legend(). Defaults to "upper center".
ax : matplotlib.axes.Axes, optional
Axes instance to plot on. If not provided, uses plt.gca().
kwargs : dict, optional
Keywords passed on to sns.stripplot().
Returns
-------
(fig, ax).
"""
# Stick dataset/label into a list if needed
if isinstance(datasets, ds.Dataset2D):
datasets = [datasets]
if isinstance(expt_labels, str):
expt_labels = [expt_labels]
# Calculate dataframes of relative intensities.
rel_ints_dfs = [molecule.hsqc_cosy.rel_ints_df(dataset=ds,
ref_dataset=ref_dataset,
label=label,
edited=edited)
for (ds, label) in zip(datasets, expt_labels)]
all_dfs = pd.concat(rel_ints_dfs)
# Calculate the average integrals by multiplicity
avgd_ints = pd.concat((df.groupby("type").mean() for df in rel_ints_dfs),
axis=1)
avgd_ints.drop(columns=["f1", "f2"], inplace=True)
# Get currently active axis if none provided
if ax is None:
ax = plt.gca()
# Plot the intensities.
stripplot_alpha = 0.3 if show_averages else 0.8
sns.stripplot(x="expt", y="int", hue="type",
zorder=0, alpha=stripplot_alpha,
dodge=True, data=all_dfs, ax=ax, **kwargs)
if show_averages:
sns.pointplot(x="expt", y="int", hue="type", zorder=1,
dodge=0.4, data=all_dfs, ax=ax, join=False,
markers='_', palette="dark", ci=None, scale=1.25)
# Customise the plot
ax.set(xlabel=xlabel, ylabel=ylabel, title=title)
handles, _ = ax.get_legend_handles_labels()
l = ax.legend(ncol=ncol, loc=loc,
markerscale=0.4,
handles=handles[0:3],
labels=["direct", "indirect"])
ax.axhline(y=1, color="grey", linewidth=0.5, linestyle="--")
# Set y-limits. We need to expand it by ~20% to make space for the legend,
# as well as the averaged values.
EXPANSION_FACTOR = 1.2
ymin, ymax = ax.get_ylim()
ymean = (ymin + ymax)/2
ylength = (ymax - ymin)/2
new_ymin = ymean - (EXPANSION_FACTOR * ylength)
new_ymax = ymean + (EXPANSION_FACTOR * ylength)
ax.set_ylim((new_ymin, new_ymax))
# add the text
for x, (_, expt_avgs) in enumerate(avgd_ints.items()):
for i, ((_, avg), color) in enumerate(zip(expt_avgs.items(),
sns.color_palette("deep"))):
ax.text(x=x-0.2+i*0.4, y=0.02, s=f"({avg:.2f})",
color=color, horizontalalignment="center",
transform=ax.get_xaxis_transform())
pg.style_axes(ax, "plot")
return plt.gcf(), ax
def generic_stripplot(experiment: Any,
datasets: Union[ds.Dataset2D, Sequence[ds.Dataset2D]],
ref_dataset: ds.Dataset2D,
expt_labels: Union[str, Sequence[str]],
xlabel: str = "Experiment",
ylabel: str = "Intensity",
title: str = "",
show_averages: bool = True,
ncol: int = 2,
loc: str = "upper center",
ax: Optional[Any] = None,
**kwargs: Any,
) -> Tuple[Any, Any]:
# Stick dataset/label into a list if needed
if isinstance(datasets, ds.Dataset2D):
datasets = [datasets]
if isinstance(expt_labels, str):
expt_labels = [expt_labels]
# Calculate dataframes of relative intensities.
rel_ints_dfs = [experiment.rel_ints_df(dataset=ds,
ref_dataset=ref_dataset,
label=label)
for (ds, label) in zip(datasets, expt_labels)]
all_dfs = pd.concat(rel_ints_dfs)
# Calculate the average integrals
avgd_ints = pd.concat((df[["int"]].mean() for df in rel_ints_dfs), axis=1).transpose()
avgd_ints.drop(columns=["f1", "f2"], inplace=True)
# Get currently active axis if none provided
if ax is None:
ax = plt.gca()
# Plot the intensities.
sns.stripplot(x="expt", y="int",
dodge=True, data=all_dfs, ax=ax,
palette=sns.color_palette("deep"), **kwargs)
# Customise the plot
ax.set(xlabel=xlabel, ylabel=ylabel, title=title)
ax.axhline(y=1, color="grey", linewidth=0.5, linestyle="--")
# Set y-limits. We need to expand it by ~20% to make space for the legend,
# as well as the averaged values.
EXPANSION_FACTOR = 1.2
ymin, ymax = ax.get_ylim()
ymean = (ymin + ymax)/2
ylength = (ymax - ymin)/2
new_ymin = ymean - (EXPANSION_FACTOR * ylength)
new_ymax = ymean + (EXPANSION_FACTOR * ylength)
ax.set_ylim((new_ymin, new_ymax))
# add the text
for x, (_, expt_avgs) in enumerate(avgd_ints.items()):
for i, ((_, avg), color) in enumerate(zip(
expt_avgs.items(), sns.color_palette("deep"))):
ax.text(x=x+i, y=0.02, s=f"({avg:.2f})",
color=color, horizontalalignment="center",
transform=ax.get_xaxis_transform())
pg.style_axes(ax, "plot")
return plt.gcf(), ax
def make_colorbar(cs, ax):
"""
Quickly add a colour bar to a contour plot or similar.
You can get the first argument as the return value of contour() or
contourf(). imshow() also works. The second argument is the Axes.
"""
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
plt.colorbar(cs, cax=cax)
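# e.g. (illustrative): cs = ax.contourf(Z); make_colorbar(cs, ax)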
def enzip(*iterables) -> Generator[tuple, None, None]:
for i, t in enumerate(zip(*iterables)):
yield (i, *t)
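# enzip is enumerate + zip with the index flattened into each tuple, e.g.
# list(enzip("ab", [10, 20])) == [(0, 'a', 10), (1, 'b', 20)]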
# -- Styling
def fira() -> None:
plt.rcParams['font.family'] = 'Fira Sans'
plt.rcParams['mathtext.fontset'] = 'custom'
plt.rcParams['mathtext.rm'] = 'Fira Sans'
plt.rcParams['mathtext.it'] = 'Fira Sans:italic'
plt.rcParams['font.size'] = 12
plt.rcParams['savefig.dpi'] = 600
def source_serif() -> None:
plt.rcParams['font.family'] = 'Source Serif Pro'
plt.rcParams['mathtext.fontset'] = 'custom'
plt.rcParams['mathtext.rm'] = 'Source Serif Pro'
plt.rcParams['mathtext.it'] = 'Source Serif Pro'
plt.rcParams['font.size'] = 12
plt.rcParams['savefig.dpi'] = 600
| 40.067358 | 90 | 0.579508 | 2,944 | 23,199 | 4.459239 | 0.121264 | 0.018129 | 0.013711 | 0.020567 | 0.822441 | 0.797913 | 0.781307 | 0.767748 | 0.754418 | 0.741164 | 0 | 0.01349 | 0.309798 | 23,199 | 578 | 91 | 40.136678 | 0.806395 | 0.30191 | 0 | 0.685976 | 0 | 0 | 0.056412 | 0.002313 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033537 | false | 0 | 0.030488 | 0.003049 | 0.085366 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36c6919e661907c75a5540d103cedb829d516c7c | 17,637 | py | Python | gym_minigrid/envs/fourrooms.py | lorenzosteccanella/gym-minigrid | 8e99cc7c2a6161de31c0df71c958de7e6933dc80 | [
"Apache-2.0"
] | null | null | null | gym_minigrid/envs/fourrooms.py | lorenzosteccanella/gym-minigrid | 8e99cc7c2a6161de31c0df71c958de7e6933dc80 | [
"Apache-2.0"
] | null | null | null | gym_minigrid/envs/fourrooms.py | lorenzosteccanella/gym-minigrid | 8e99cc7c2a6161de31c0df71c958de7e6933dc80 | [
"Apache-2.0"
] | null | null | null | from gym_minigrid.minigrid import *
from gym_minigrid.register import register
import random
class FourRoomsEnv(MiniGridEnv):
"""
Classic 4 rooms gridworld environment.
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=15, max_steps=1000)
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
room_w = width // 2
room_h = height // 2
# For each row of rooms
for j in range(0, 2):
# For each column
for i in range(0, 2):
xL = i * room_w
yT = j * room_h
xR = xL + room_w
yB = yT + room_h
# Bottom wall and door
if i + 1 < 2:
self.grid.vert_wall(xR, yT, room_h)
pos = (xR, self._rand_int(yT + 1, yB))
self.grid.set(*pos, None)
# Bottom wall and door
if j + 1 < 2:
self.grid.horz_wall(xL, yB, room_w)
pos = (self._rand_int(xL + 1, xR), yB)
self.grid.set(*pos, None)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
else:
self.place_obj(Goal())
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class NoRoomsDetEnv(MiniGridEnv):
"""
Deterministic gridworld environment with no interior walls (the grid is one
open room bounded by the outer walls).
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=19, max_steps=1000)
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0) # obj_type=Lava)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class SmallNoRoomsDetEnv(MiniGridEnv):
"""
Small deterministic gridworld environment (8x8) with no interior walls.
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=8, max_steps=1000)
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0) # obj_type=Lava)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
else:
self.place_obj(Goal())
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class FourRoomsDetEnv(MiniGridEnv):
"""
Deterministic 4 rooms gridworld environment.
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=9, max_steps=1000)
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0) #obj_type=Lava)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
# 4 rooms
self.grid.horz_wall(0, 4)
self.grid.vert_wall(4, 0)
# gates
self.grid.set(2, 4, None)
# self.grid.set(4, 7, None)
self.grid.set(4, 2, None)
# self.grid.set(7, 4, None)
# self.grid.set(7, 10, None)
self.grid.set(4, 6, None)
# self.grid.set(10, 7, None)
self.grid.set(6, 4, None)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class NineRoomsDetEnv(MiniGridEnv):
"""
Deterministic 9 rooms gridworld environment.
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=19, max_steps=1000)  # always need to set this to more than the wrapper one
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
# 9 rooms
self.grid.horz_wall(0, 6)
self.grid.vert_wall(6, 0)
self.grid.horz_wall(0, 12)
self.grid.vert_wall(12, 0)
# self.grid.horz_wall(0, 4)
# self.grid.vert_wall(4, 0)
# # gates
self.grid.set(3, 6, None)
self.grid.set(6, 3, None)
self.grid.set(3, 12, None)
self.grid.set(12, 3, None)
self.grid.set(9, 6, None)
self.grid.set(6, 9, None)
self.grid.set(9, 12, None)
self.grid.set(12, 9, None)
self.grid.set(15, 12, None)
self.grid.set(15, 6, None)
self.grid.set(12, 15, None)
self.grid.set(6, 15, None)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = 0 #self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class NineRoomsDetEnv_v2(MiniGridEnv):
"""
Deterministic 9 rooms gridworld environment with alternative wall and gate
layouts (see the commented-out setups in _gen_grid).
Can specify agent and goal positions; if not, they are set at random.
"""
def __init__(self, agent_pos=None, goal_pos=None):
self._agent_default_pos = agent_pos
self._goal_default_pos = goal_pos
super().__init__(grid_size=19, max_steps=1000)  # always need to set this to more than the wrapper one
def _gen_grid(self, width, height):
# Create the grid
self.grid = Grid(width, height)
# Generate the surrounding walls
self.grid.horz_wall(0, 0)
self.grid.horz_wall(0, height - 1)
self.grid.vert_wall(0, 0)
self.grid.vert_wall(width - 1, 0)
# 9 rooms
# #original
# self.grid.horz_wall(0, 6)
# self.grid.vert_wall(6, 0)
# self.grid.horz_wall(0, 12)
# self.grid.vert_wall(12, 0)
#
# # # gates
# self.grid.set(12, 3, None)
# self.grid.set(6, 3, None)
# self.grid.set(3, 6, None)
#
# self.grid.set(9, 6, None)
# self.grid.set(9, 14, None)
# self.grid.set(6, 9, None)
# self.grid.set(9, 12, None)
# self.grid.set(12, 9, None)
#
# self.grid.set(15, 12, None)
# self.grid.set(15, 6, None)
#
# self.grid.set(3, 12, None)
# self.grid.set(12, 15, None)
# self.grid.set(6, 15, None)
#1 setup
self.grid.horz_wall(0, 4)
self.grid.vert_wall(6, 0)
self.grid.horz_wall(0, 12)
self.grid.vert_wall(12, 0)
# # gates
self.grid.set(3, 4, None)
self.grid.set(8, 4, None)
self.grid.set(3, 12, None)
self.grid.set(12, 3, None)
self.grid.set(6, 2, None)
self.grid.set(15, 4, None)
self.grid.set(15, 12, None)
self.grid.set(12, 15, None)
self.grid.set(6, 15, None)
self.grid.set(6, 9, None)
self.grid.set(9, 12, None)
self.grid.set(12, 9, None)
# # # 2 setup
# self.grid.horz_wall(0, 4, length=7)
# self.grid.horz_wall(0, 12, length=7)
# self.grid.horz_wall(6, 6, length=7)
# self.grid.horz_wall(6, 14, length=7)
# self.grid.horz_wall(12, 4, length=7)
# self.grid.horz_wall(12, 12, length=7)
# self.grid.vert_wall(6, 0)
# self.grid.vert_wall(12, 0)
#
# # # gates
# self.grid.set(3, 4, None)
# self.grid.set(3, 12, None)
# self.grid.set(12, 3, None)
# self.grid.set(6, 2, None)
# self.grid.set(15, 4, None)
# self.grid.set(9, 6, None)
# self.grid.set(9, 14, None)
# self.grid.set(6, 9, None)
# self.grid.set(12, 9, None)
# self.grid.set(12, 15, None)
# self.grid.set(6, 15, None)
# # 3 setup
# self.grid.horz_wall(0, 3, length=3)
# self.grid.vert_wall(3, 0, length=4)
# self.grid.horz_wall(0, 6, length=3)
# self.grid.vert_wall(3, 3, length=4)
# self.grid.horz_wall(4, 3, length=3)
# self.grid.vert_wall(6, 0, length=4)
# self.grid.horz_wall(4, 6, length=3)
# self.grid.vert_wall(6, 3, length=4)
# self.grid.horz_wall(2, 9, length=2)
# self.grid.vert_wall(3, 8, length=2)
# self.grid.horz_wall(5, 9, length=2)
# self.grid.horz_wall(0, 9)
# self.grid.vert_wall(6, 0)
#
#
# # # gates
# self.grid.set(3, 1, None)
# self.grid.set(3, 4, None)
# self.grid.set(1, 3, None)
# self.grid.set(4, 3, None)
# self.grid.set(1, 9, None)
# self.grid.set(4, 9, None)
# self.grid.set(6, 7, None)
# self.grid.set(4, 6, None)
# self.grid.set(6, 4, None)
# self.grid.set(6, 1, None)
# self.grid.set(1, 6, None)
# self.grid.set(12, 9, None)
# self.grid.set(6, 12, None)
# Randomize the player start position and orientation
if self._agent_default_pos is not None:
self.agent_pos = self._agent_default_pos
self.grid.set(*self._agent_default_pos, None)
self.agent_dir = 0 #self._rand_int(0, 4) # assuming random start direction
else:
self.place_agent()
if self._goal_default_pos is not None:
goal = Goal()
self.put_obj(goal, *self._goal_default_pos)
goal.init_pos, goal.cur_pos = self._goal_default_pos
self.mission = 'Reach the goal'
def step(self, action):
obs, reward, done, info = MiniGridEnv.step(self, action)
return obs, reward, done, info
class NoRoomsDetEnv(NoRoomsDetEnv):
    # Shadows the NoRoomsDetEnv above; forward the kwargs so the GoalFixed
    # subclasses below don't silently lose their agent/goal positions.
    def __init__(self, **kwargs):
        super().__init__(agent_pos=kwargs.get("agent_pos"),
                         goal_pos=kwargs.get("goal_pos"))
class NoRoomsDetEnvGoalFixed1(NoRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(17,17))
class SmallNoRoomsDetEnvGoalFixed1(SmallNoRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=(1, 1), goal_pos=(7,6))
class NoRoomsDetEnvGoalFixed2(NoRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(1,17))
class NoRoomsDetEnvGoalFixed3(NoRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(17,1))
class FourRoomsDetEnvGoalFixed0(FourRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=None)
class FourRoomsDetEnvGoalFixed1(FourRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(17,17))
class NineRoomsDetEnvGoalFixed0(NineRoomsDetEnv):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=None, goal_pos=None)
class NineRoomsDetEnvGoalFixed0v2(NineRoomsDetEnv_v2):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=None, goal_pos=None)
class NineRoomsDetEnvGoalFixed1(NineRoomsDetEnv):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=(1, 1), goal_pos=(5,17))
class FourRoomsDetEnvGoalFixed2(FourRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(1,17))
class NineRoomsDetEnvGoalFixed2(NineRoomsDetEnv):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=(1, 1), goal_pos=(11,17))
class FourRoomsDetEnvGoalFixed3(FourRoomsDetEnv):
def __init__(self, **kwargs):
super().__init__(agent_pos=None, goal_pos=(17,1))
class NineRoomsDetEnvGoalFixed3(NineRoomsDetEnv):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=(1, 1), goal_pos=(17,17))
class NineRoomsDetEnvGoalFixed4(NineRoomsDetEnv):
def __init__(self, goal_pos=None):
super().__init__(agent_pos=(1, 1), goal_pos=(6,3))
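# Usage sketch (assumes gym is installed; importing this module runs the
# register() calls below, after which any of the listed ids can be built):
#     import gym
#     import gym_minigrid  # noqa: F401 -- triggers the registrations
#     env = gym.make('MiniGrid-FourRoomsDet-v1')
#     obs = env.reset()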
register(
id='MiniGrid-FourRooms-v0',
entry_point='gym_minigrid.envs:FourRoomsEnv'
)
register(
id='MiniGrid-FourRoomsDet-v0',
entry_point='gym_minigrid.envs:FourRoomsDetEnvGoalFixed0'
)
register(
id='MiniGrid-FourRoomsDet-v1',
entry_point='gym_minigrid.envs:FourRoomsDetEnvGoalFixed1'
)
register(
id='MiniGrid-NineRoomsDetv2-v0',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed0v2'
)
register(
id='MiniGrid-NineRoomsDet-v0',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed0'
)
register(
id='MiniGrid-NineRoomsDet-v1',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed1'
)
register(
id='MiniGrid-FourRoomsDet-v2',
entry_point='gym_minigrid.envs:FourRoomsDetEnvGoalFixed2'
)
register(
id='MiniGrid-NineRoomsDet-v2',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed2'
)
register(
id='MiniGrid-FourRoomsDet-v3',
entry_point='gym_minigrid.envs:FourRoomsDetEnvGoalFixed3'
)
register(
id='MiniGrid-NineRoomsDet-v3',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed3'
)
register(
id='MiniGrid-NineRoomsDet-v4',
entry_point='gym_minigrid.envs:NineRoomsDetEnvGoalFixed4'
)
register(
id='MiniGrid-NoRoomsDet-v0',
entry_point='gym_minigrid.envs:NoRoomsDetEnv'
)
register(
id='MiniGrid-NoRoomsDet-v1',
entry_point='gym_minigrid.envs:NoRoomsDetEnvGoalFixed1'
)
register(
id='MiniGrid-NoRoomsDet-v2',
entry_point='gym_minigrid.envs:NoRoomsDetEnvGoalFixed2'
)
register(
id='MiniGrid-NoRoomsDet-v3',
entry_point='gym_minigrid.envs:NoRoomsDetEnvGoalFixed3'
)
register(
id='MiniGrid-SmallNoRoomsDet-v1',
entry_point='gym_minigrid.envs:SmallNoRoomsDetEnvGoalFixed1'
) | 31.494643 | 114 | 0.612236 | 2,414 | 17,637 | 4.241094 | 0.064209 | 0.114085 | 0.082731 | 0.092303 | 0.844306 | 0.816956 | 0.759328 | 0.72309 | 0.717132 | 0.714104 | 0 | 0.038659 | 0.269604 | 17,637 | 560 | 115 | 31.494643 | 0.756094 | 0.212168 | 0 | 0.6625 | 0 | 0 | 0.082362 | 0.076207 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103125 | false | 0 | 0.009375 | 0 | 0.196875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36d85c0675eb744761b94faa280363e12c3c82e4 | 155 | py | Python | pyvalidator/is_lowercase.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | 15 | 2021-11-01T14:14:56.000Z | 2022-03-17T11:52:29.000Z | pyvalidator/is_lowercase.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | 1 | 2022-03-16T13:39:16.000Z | 2022-03-17T09:16:00.000Z | pyvalidator/is_lowercase.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | null | null | null | from .utils.assert_string import assert_string
def is_lowercase(input: str) -> bool:
    input = assert_string(input)
    return input == input.lower()
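# e.g. is_lowercase("abc") -> True, is_lowercase("aBc") -> False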
| 19.375 | 46 | 0.722581 | 21 | 155 | 5.142857 | 0.619048 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174194 | 155 | 7 | 47 | 22.142857 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
7fc56db4ce72c3709a59a9dd6aff3cb40353bd05 | 3,469 | py | Python | datajob_tests/datajob_cli_tests/test_datajob_deploy.py | LorenzoCevolani/datajob | dbb0775c63df2cabcbff77b0df2015eac429a126 | [
"Apache-2.0"
] | 90 | 2021-01-04T20:08:20.000Z | 2022-03-14T11:20:24.000Z | datajob_tests/datajob_cli_tests/test_datajob_deploy.py | LorenzoCevolani/datajob | dbb0775c63df2cabcbff77b0df2015eac429a126 | [
"Apache-2.0"
] | 93 | 2020-12-12T22:10:33.000Z | 2021-11-21T16:12:24.000Z | datajob_tests/datajob_cli_tests/test_datajob_deploy.py | LorenzoCevolani/datajob | dbb0775c63df2cabcbff77b0df2015eac429a126 | [
"Apache-2.0"
] | 13 | 2020-12-12T22:11:01.000Z | 2021-09-22T14:37:09.000Z | import pathlib
import unittest
from unittest.mock import patch
from typer.testing import CliRunner
from datajob import datajob
current_dir = str(pathlib.Path(__file__).absolute().parent)
class TestDatajobDeploy(unittest.TestCase):
    @classmethod
    def setUpClass(cls) -> None:
        cls.runner = CliRunner()

    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_cli_runs_successfully(self, m_call_cdk):
        result = self.runner.invoke(
            datajob.app,
            ["deploy", "--config", "some_config.py", "--stage", "some-stage"],
        )
        self.assertEqual(result.exit_code, 0)

    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_cli_runs_with_unknown_args_successfully(self, m_call_cdk):
        result = self.runner.invoke(
            datajob.app,
            [
                "deploy",
                "--config",
                "some_config.py",
                "--stage",
                "some-stage",
                "--unknown-arg",
                "unkown-value",
            ],
        )
        self.assertEqual(result.exit_code, 0)

    @patch("datajob.package.wheel.create_wheel")
    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_cli_runs_with_project_root_successfully(
        self, m_call_cdk, m_create_wheel
    ):
        result = self.runner.invoke(
            datajob.app,
            [
                "deploy",
                "--config",
                "some_config.py",
                "--stage",
                "some-stage",
                "--package",
                "poetry",
            ],
        )
        self.assertEqual(result.exit_code, 0)
        self.assertEqual(m_create_wheel.call_count, 1)

    @patch("datajob.package.wheel._poetry_wheel")
    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_with_package_poetry(self, m_call_cdk, m_create_wheel):
        result = self.runner.invoke(
            datajob.app,
            [
                "deploy",
                "--config",
                "some_config.py",
                "--stage",
                "some-stage",
                "--package",
                "poetry",
            ],
        )
        self.assertEqual(result.exit_code, 0)
        self.assertEqual(m_create_wheel.call_count, 1)

    @patch("datajob.package.wheel._setuppy_wheel")
    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_with_package_setuppy(self, m_call_cdk, m_create_wheel):
        result = self.runner.invoke(
            datajob.app,
            [
                "deploy",
                "--config",
                "some_config.py",
                "--stage",
                "some-stage",
                "--package",
                "setuppy",
            ],
        )
        self.assertEqual(result.exit_code, 0)
        self.assertEqual(m_create_wheel.call_count, 1)

    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_cli_runs_with_stage_successfully(self, m_call_cdk):
        result = self.runner.invoke(
            datajob.app,
            ["deploy", "--config", "some_config.py", "--stage", "some-stage"],
        )
        self.assertEqual(result.exit_code, 0)

    @patch("datajob.datajob.call_cdk")
    def test_datajob_deploy_cli_runs_with_no_stage_successfully(self, m_call_cdk):
        result = self.runner.invoke(
            datajob.app, ["deploy", "--config", "some_config.py"]
        )
        self.assertEqual(result.exit_code, 0)
| 31.252252 | 86 | 0.556356 | 359 | 3,469 | 5.08078 | 0.172702 | 0.053728 | 0.072917 | 0.088268 | 0.816886 | 0.810307 | 0.79386 | 0.79386 | 0.770833 | 0.770833 | 0 | 0.004259 | 0.323148 | 3,469 | 110 | 87 | 31.536364 | 0.772572 | 0 | 0 | 0.622449 | 0 | 0 | 0.185068 | 0.078697 | 0 | 0 | 0 | 0 | 0.102041 | 1 | 0.081633 | false | 0 | 0.05102 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7fca7f7f06e05bf6e72ef93f9080943a0a732de3 | 258 | py | Python | brainreg_napari/plugins.py | neuromusic/brainreg-napari | 051524a6cb7065de88312ddde2e215e00021e322 | [
"MIT"
] | 4 | 2021-07-13T19:39:00.000Z | 2022-02-06T17:07:53.000Z | brainreg_napari/plugins.py | neuromusic/brainreg-napari | 051524a6cb7065de88312ddde2e215e00021e322 | [
"MIT"
] | 11 | 2021-07-13T17:41:43.000Z | 2022-02-01T14:55:25.000Z | brainreg_napari/plugins.py | neuromusic/brainreg-napari | 051524a6cb7065de88312ddde2e215e00021e322 | [
"MIT"
] | 2 | 2021-12-20T22:01:35.000Z | 2022-03-11T14:28:57.000Z | from napari_plugin_engine import napari_hook_implementation
from brainreg_napari.register import brainreg_register
@napari_hook_implementation
def napari_experimental_provide_dock_widget():
    return [(brainreg_register, {"name": "Atlas Registration"})]
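# napari discovers this hook through napari_plugin_engine and exposes the
# returned widget as "Atlas Registration" in its dock-widget menu.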
| 28.666667 | 64 | 0.844961 | 30 | 258 | 6.833333 | 0.6 | 0.097561 | 0.234146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089147 | 258 | 8 | 65 | 32.25 | 0.87234 | 0 | 0 | 0 | 0 | 0 | 0.085271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
3d1ea09c5e39bcdeefe940ebe80e076a5df4613c | 150 | py | Python | hhcms/settings/test.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | null | null | null | hhcms/settings/test.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | null | null | null | hhcms/settings/test.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | 1 | 2018-07-15T05:33:34.000Z | 2018-07-15T05:33:34.000Z | # PROJECT : hhcms
# TIME : 18-4-15 6:50 PM
# AUTHOR : Younger Shen
# CELL : 13811754531
# WECHAT : 13811754531
from hhcms.settings.development import * | 25 | 40 | 0.726667 | 20 | 150 | 5.45 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24 | 0.166667 | 150 | 6 | 40 | 25 | 0.632 | 0.66 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9fb6ad7b8d2a57bed12118bc5b7a0761bb6fa9a | 20,438 | py | Python | RCD_dim_pulsar/feature_info.py | dipangwvu/rare_case_detection | 9325c29c57143ae00bca0618204f04fc3b111b94 | [
"MIT"
] | null | null | null | RCD_dim_pulsar/feature_info.py | dipangwvu/rare_case_detection | 9325c29c57143ae00bca0618204f04fc3b111b94 | [
"MIT"
] | null | null | null | RCD_dim_pulsar/feature_info.py | dipangwvu/rare_case_detection | 9325c29c57143ae00bca0618204f04fc3b111b94 | [
"MIT"
] | null | null | null | from scipy.stats import median_absolute_deviation
import numpy as np
import pandas as pd
import statistics
feature_list = ['SPEG_rank', 'group_rank', 'group_max_SNR', 'group_median_SNR', 'peak_SNR',
'centered_DM', 'clipped_SPEG', 'SNR_sym_index', 'DM_sym_index', 'peak_score',
'bright_recur_times', 'recur_times', 'size_ratio', 'cluster_density', 'DM_range',
'time_range', 'pulse_width', 'time_ratio', 'n_SPEGs_zero_DM', 'n_brighter_SPEGs_zero_DM'
]
data_types = {'SPEG_rank': 'interval',
'group_rank': 'interval',
'group_max_SNR': 'interval',
'group_median_SNR': 'interval',
'peak_SNR': 'interval',
'centered_DM': 'interval',
'clipped_SPEG': 'categorical',
'SNR_sym_index': 'interval',
'DM_sym_index': 'interval',
'peak_score': 'ordinal',
'bright_recur_times': 'interval',
'recur_times': 'interval',
'size_ratio': 'interval',
'cluster_density': 'interval',
'DM_range': 'interval',
'time_range': 'interval',
'pulse_width': 'interval',
'time_ratio': 'interval',
'n_SPEGs_zero_DM': 'interval',
'n_brighter_SPEGs_zero_DM': 'interval'
}
# bins based on quantiles (quantile=True)
bins = {'SPEG_rank': [0, 10, 20, 30, 40, 50, 60, 80, 100, 120, 140, 160, 180, 200, 225, 250, 275, 300, 350, 400,
500, 750, 1000, 1500, 2200],
'group_rank': [0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 13, 17, 21, 25, 30, 50, 345],
'group_max_SNR': 20,
'group_median_SNR':20,
'peak_SNR':20,
'centered_DM':25,
'SNR_sym_index': 20,
'DM_sym_index': 25,
'bright_recur_times': [0, 1, 2, 3, 4, 5, 10, 20, 30, 50, 70, 90, 110, 140, 170, 200, 240, 280, 320, 370,
420, 500, 800, 1200, 1613],
'recur_times': [0, 1, 2, 3, 5, 7, 10, 20, 40, 60, 100, 140, 180, 220, 260, 300, 350, 400, 450, 500, 550,
600, 700, 800, 1500, 2553],
'size_ratio': 20,
'cluster_density': 20,
'DM_range': 20,
'time_range': 20,
'pulse_width': 20,
'time_ratio': 20,
'n_SPEGs_zero_DM': 10,
'n_brighter_SPEGs_zero_DM': 10
}
# bins = {'mileage':5, 'driver_age':[18,25,35,45,55,65,125]}
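# Sketch of how a `bins` entry is meant to be consumed (hypothetical DataFrame
# `df` with a matching column; pd.cut accepts either an explicit edge list or
# an integer bin count, matching the two value styles used above):
#     counts = pd.cut(df['SPEG_rank'], bins=bins['SPEG_rank']).value_counts()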
def get_sim_value_SPEGs(feature=None, df=None, target_SPEG=None, candidate_SPEG=None):
# print("cur feature: ", feature)
target_value = getattr(target_SPEG, feature)
candidate_value = getattr(candidate_SPEG, feature)
cur_values = df[feature]
if feature in ['SPEG_rank', 'group_max_SNR', 'group_median_SNR', 'peak_SNR', 'centered_DM',
'SNR_sym_index', 'DM_sym_index', 'cluster_density', 'DM_range',
'time_range', 'pulse_width', 'time_ratio']:
# log transform
cur_values = np.log(cur_values)
cur_MAD = median_absolute_deviation(cur_values)
cur_MAD = max(cur_MAD, 0.000001)
cur_sim_value = max(0, 1 - abs(np.log(target_value) - np.log(candidate_value)) / (3 * 1.483 * cur_MAD))
# print("target_value:", target_value)
# print("candidate_value:", candidate_value)
# print("cur_MAD:", cur_MAD)
elif feature in ['group_rank', 'size_ratio']:
# log transform
cur_values = np.log(cur_values)
# min_max
cur_sim_value = max(0, 1 - abs(np.log(target_value) - np.log(candidate_value)) / (max(cur_values) - min(cur_values)))
# print("target_value:", target_value)
# print("candidate_value:", candidate_value)
elif feature in ['bright_recur_times', 'recur_times']:
# log transform
cur_values = np.log(cur_values)
cur_MAD = median_absolute_deviation(cur_values)
cur_MAD = max(cur_MAD, 0.000001)
# print("cur_MAD:", cur_MAD)
# do not multiply by 3
# milder penalty for over shooting
delta = 1.0 * (candidate_value > target_value)
alpha = 0.5
cur_sim_value = max(0, 1 - abs(np.log(target_value) - np.log(candidate_value)) / (3 * 1.483 * cur_MAD) +
alpha * delta * abs(np.log(target_value) - np.log(candidate_value)) / (3 * 1.483 * cur_MAD))
elif feature == 'clipped_SPEG':
cur_sim_value = 1 - abs(target_value - candidate_value)
elif feature == 'peak_score':
cur_sim_value = 1 - abs(target_value - candidate_value) / (6 - 2)
elif feature in ['n_SPEGs_zero_DM', 'n_brighter_SPEGs_zero_DM']:
max_proxy = max(1, np.quantile(cur_values, 0.95))
# print("max_proxy: ", max_proxy)
cur_sim_value = max(0, 1 - abs(target_value - candidate_value) / max_proxy)
return cur_sim_value
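# Worked sketch of the MAD-scaled branch above (illustrative numbers): if the
# log-values of a feature have MAD 0.2, a target peak_SNR of 20 against a
# candidate of 25 scores max(0, 1 - |ln 20 - ln 25| / (3 * 1.483 * 0.2)) ~ 0.75.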
def get_outlyingness(feature=None, df=None, target_value=None):
target_value_count = 0
# cur_values = df[feature]
if feature in ['SPEG_rank', 'group_rank']:
# find the one that is closest to the current value
df_sort = df.iloc[(df[feature] - target_value).abs().argsort()[:1]]
# print(df[feature])
cur_proxy_value = df_sort[feature].tolist()
print("cur_proxy_value: ", cur_proxy_value)
cur_values_count = df[feature].value_counts()
cur_mode_count = cur_values_count.max()
print("cur_mode: ", cur_mode_count)
target_value_count = float(cur_values_count[cur_proxy_value])
print("target_value_count: ", target_value_count)
cur_outlyingness = np.log(cur_mode_count / target_value_count)
elif feature in ['group_max_SNR', 'group_median_SNR', 'peak_SNR', 'centered_DM', 'SNR_sym_index', 'DM_sym_index',
'bright_recur_times', 'recur_times', 'DM_range', 'time_range', 'pulse_width', 'time_ratio']:
cur_values = df[feature]
# log transform
cur_values = np.log(cur_values)
cur_median = statistics.median(cur_values)
cur_MAD = median_absolute_deviation(cur_values)
cur_z_score = abs(np.log(target_value) - cur_median) / (1.483 * cur_MAD)
cur_outlyingness = cur_z_score
elif feature in ['peak_score']:
df_tmp = df.copy()
bin_results = pd.cut(df_tmp[feature], bins=[1, 3, 4, 5, 6]).value_counts()
print("df_cut----------")
print(bin_results)
cur_mode_count = bin_results.max()
print("cur_mode: ", cur_mode_count)
print("target_value: ", target_value)
for cur_bin in bin_results.index:
# print(each_bin)
if target_value in cur_bin:
target_value_count = bin_results.at[cur_bin]
print("found: ", target_value_count)
break
cur_outlyingness = (cur_mode_count / target_value_count)
elif feature in ['size_ratio']:
df_tmp = df.copy()
bin_results = pd.cut(np.log(df_tmp[feature]), bins=[-0.1, 0.1, 0.2, 0.3, 2]).value_counts()
print("df_cut----------")
print(bin_results)
cur_mode_count = bin_results.max()
print("target_value: ", target_value)
for cur_bin in bin_results.index:
if np.log(target_value) in cur_bin:
target_value_count = bin_results.at[cur_bin]
print("found: ", target_value_count)
break
# TODO: log or not
if target_value_count < 1:
target_value_count = 1
cur_outlyingness = (cur_mode_count / target_value_count)
elif feature in ['cluster_density']:
df_tmp = df.copy()
bin_results = pd.cut(df_tmp[feature], bins=np.linspace(0, 0.035, 8)).value_counts()
print("df_cut----------")
print(bin_results)
cur_mode_count = bin_results.max()
# print("cur_mode: ", cur_mode_count)
for cur_bin in bin_results.index:
# print(each_bin)
if target_value in cur_bin:
target_value_count = bin_results.at[cur_bin]
print("found: ", target_value_count)
break
print("searching: ", target_value)
# TODO: log or not
if target_value_count < 1:
target_value_count = 1
cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
elif feature in ['clipped_SPEG']:
# find the one that is closest to the current value
df_sort = df.iloc[(df[feature] - target_value).abs().argsort()[:1]]
# print(df[feature])
cur_proxy_value = df_sort[feature].tolist()
print("cur_proxy_value: ", cur_proxy_value)
cur_values_count = df[feature].value_counts()
cur_mode_count = cur_values_count.max()
print("cur_mode: ", cur_mode_count)
target_value_count = float(cur_values_count[cur_proxy_value])
print("target_value_count: ", target_value_count)
cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
elif feature in ['n_SPEGs_zero_DM']:
df_tmp = df.copy()
bin_results = pd.cut(df_tmp[feature], bins=[-1, 1, 2, 8]).value_counts()
print("df_cut----------")
print(bin_results)
cur_mode_count = bin_results.max()
# print("cur_mode: ", cur_mode_count)
for cur_bin in bin_results.index:
# print(each_bin)
if target_value in cur_bin:
target_value_count = bin_results.at[cur_bin]
print("found: ", target_value_count)
break
# TODO: log or not
if target_value_count < 1:
target_value_count = 1
cur_outlyingness = np.sqrt((cur_mode_count / target_value_count))
elif feature in ['n_brighter_SPEGs_zero_DM']:
cur_outlyingness = 1
print("cur_outlyingness: ", cur_outlyingness)
# set minimumn values 1
cur_outlyingness = max(1, cur_outlyingness)
return cur_outlyingness
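# e.g. for 'peak_score': if the modal bin holds 900 SPEGs and the bin containing
# the target value holds 30, cur_outlyingness = 900 / 30 = 30 (illustrative).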
# def get_outlyingness2(feature=None, df=None, target_value=None):
# target_value_count = 0
# # cur_values = df[feature]
# if feature in ['SPEG_rank', 'group_rank']:
# # find the one theat is closest to current value
# df_sort = df.iloc[(df[feature] - target_value).abs().argsort()[:1]]
# # print(df[feature])
# cur_proxy_value = df_sort[feature].tolist()
# print("cur_proxy_value: ", cur_proxy_value)
#
# cur_values_count = df[feature].value_counts()
# cur_mode_count = cur_values_count.max()
# print("cur_mode: ", cur_mode_count)
# target_value_count = float(cur_values_count[cur_proxy_value])
# print("target_value_count: ", target_value_count)
#
# cur_outlyingness = np.log(cur_mode_count / target_value_count)
#
# # different bins
# elif feature in ['group_max_SNR', 'peak_SNR']:
# df_tmp = df.copy()
# bin_results = pd.cut(df_tmp[feature], bins=np.linspace(0, 80, 9)).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if target_value in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# # print("found: ", target_value_count)
# cur_outlyingness = (cur_mode_count / target_value_count)
# # print("cur_outlyingness: ", cur_outlyingness)
#
# elif feature in ['bright_recur_times', 'recur_times']:
# df_tmp = df.copy()
# bin_results = pd.cut(np.log(df_tmp[feature]), bins=np.linspace(0, 8, 9)).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if np.log(target_value) in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: np.sqrt or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
#
# # elif feature in ['recur_times']:
# # df_tmp = df.copy()
# # bin_results = pd.cut(np.log(df_tmp[feature]), bins=np.linspace(1, 8, 8)).value_counts()
# # print("df_cut----------")
# # print(bin_results)
# #
# # cur_mode_count = bin_results.max()
# # # print("cur_mode: ", cur_mode_count)
# # for cur_bin in bin_results.index:
# # # print(each_bin)
# # if np.log(target_value) in cur_bin:
# # target_value_count = bin_results.at[cur_bin]
# # print("found: ", target_value_count)
# # break
# # # TODO: log or not
# # if target_value_count < 1:
# # target_value_count = 1
# # cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
#
# elif feature in ['DM_range']:
# df_tmp = df.copy()
# bin_results = pd.cut(np.log(df_tmp[feature]), bins=np.linspace(0, 7, 8)).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if np.log(target_value) in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = cur_mode_count / target_value_count
#
# elif feature in ['time_range']:
# df_tmp = df.copy()
# bin_results = pd.cut(np.log(df_tmp[feature]), bins=np.linspace(-8, -1, 8)).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if np.log(target_value) in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = np.log(cur_mode_count / target_value_count)
#
# elif feature in ['pulse_width']:
# df_tmp = df.copy()
# bin_results = pd.cut(np.log(df_tmp[feature]), bins=10).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if np.log(target_value) in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = np.log(cur_mode_count / target_value_count)
#
# elif feature in ['peak_score']:
# df_tmp = df.copy()
# bin_results = pd.cut(df_tmp[feature], bins=[1, 3, 4, 5, 6]).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# print("cur_mode: ", cur_mode_count)
# print("target_value: ", target_value)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if target_value in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# cur_outlyingness = (cur_mode_count / target_value_count)
#
# elif feature in ['size_ratio']:
# df_tmp = df.copy()
# bin_results = pd.cut(np.log(df_tmp[feature]), bins=[-0.1, 0.1, 0.2, 0.3, 2]).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if np.log(target_value) in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = (cur_mode_count / target_value_count)
#
# elif feature in ['cluster_density']:
# df_tmp = df.copy()
# bin_results = pd.cut(df_tmp[feature], bins=np.linspace(0, 0.035, 8)).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if target_value in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
#
# elif feature in ['group_median_SNR']:
# df_tmp = df.copy()
# bin_results = pd.cut(df_tmp[feature], bins=np.linspace(5, 25, 11)).value_counts()
# # print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if target_value in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# cur_outlyingness = abs((cur_mode_count / target_value_count))
#
# # do not bin every feature
# elif feature in ['centered_DM', 'SNR_sym_index', 'DM_sym_index', 'time_ratio']:
# cur_values = df[feature]
# # log transform
# cur_values = np.log(cur_values)
# cur_median = statistics.median(cur_values)
# cur_MAD = median_absolute_deviation(cur_values)
# cur_z_score = abs(np.log(target_value) - cur_median) / (1.483 * cur_MAD)
# cur_outlyingness = cur_z_score
#
# elif feature in ['clipped_SPEG']:
# # find the one theat is closest to current value
# df_sort = df.iloc[(df[feature] - target_value).abs().argsort()[:1]]
# # print(df[feature])
# cur_proxy_value = df_sort[feature].tolist()
# print("cur_proxy_value: ", cur_proxy_value)
#
# cur_values_count = df[feature].value_counts()
# cur_mode_count = cur_values_count.max()
# print("cur_mode: ", cur_mode_count)
# target_value_count = float(cur_values_count[cur_proxy_value])
# print("target_value_count: ", target_value_count)
#
# cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
#
# elif feature in ['n_SPEGs_zero_DM']:
# df_tmp = df.copy()
# bin_results = pd.cut(df_tmp[feature], bins=[-1, 1, 2, 8]).value_counts()
# print("df_cut----------")
# print(bin_results)
#
# cur_mode_count = bin_results.max()
# # print("cur_mode: ", cur_mode_count)
# for cur_bin in bin_results.index:
# # print(each_bin)
# if target_value in cur_bin:
# target_value_count = bin_results.at[cur_bin]
# print("found: ", target_value_count)
# break
# # TODO: log or not
# if target_value_count < 1:
# target_value_count = 1
# cur_outlyingness = np.sqrt(cur_mode_count / target_value_count)
#
# elif feature in ['n_brighter_SPEGs_zero_DM']:
# cur_outlyingness = 1
#
# print("cur_outlyingness: ", cur_outlyingness)
# # set minimumn values 1
# cur_outlyingness = max(1, cur_outlyingness)
# return cur_outlyingness
#
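
# ---------------------------------------------------------------------------
# Editor's sketch, not part of the original script: the robust z-score used in
# the 'centered_DM' branch above, written as a standalone helper. It assumes
# scipy's median_abs_deviation (the current name for the deprecated
# median_absolute_deviation called above); the inputs are illustrative only.
import numpy as np
from scipy.stats import median_abs_deviation


def robust_outlyingness(values, target):
    """abs(log(target) - median) / (1.483 * MAD), floored at 1 as above."""
    logged = np.log(np.asarray(values, dtype=float))
    med = np.median(logged)
    mad = median_abs_deviation(logged)
    z = abs(np.log(target) - med) / (1.483 * mad)
    return max(1.0, z)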
| 41.37247 | 125 | 0.585429 | 2,662 | 20,438 | 4.146882 | 0.072878 | 0.127548 | 0.1232 | 0.051363 | 0.860857 | 0.843192 | 0.831054 | 0.828155 | 0.812936 | 0.796268 | 0 | 0.029146 | 0.284862 | 20,438 | 493 | 126 | 41.456389 | 0.726122 | 0.526177 | 0 | 0.428571 | 0 | 0 | 0.173787 | 0.012849 | 0 | 0 | 0 | 0.002028 | 0 | 1 | 0.011429 | false | 0 | 0.022857 | 0 | 0.045714 | 0.131429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
180a0b7645b237f80946c3f315fe516b290355f7 | 41 | py | Python | quickunit/vcs/__init__.py | dcramer/quickunit | f72b038aaead2c6f2c6013a94a1823724f59a205 | [
"Apache-2.0"
] | 7 | 2015-02-17T21:31:27.000Z | 2019-08-24T10:32:23.000Z | quickunit/vcs/__init__.py | dcramer/quickunit | f72b038aaead2c6f2c6013a94a1823724f59a205 | [
"Apache-2.0"
] | null | null | null | quickunit/vcs/__init__.py | dcramer/quickunit | f72b038aaead2c6f2c6013a94a1823724f59a205 | [
"Apache-2.0"
] | null | null | null | from quickunit.vcs.base import * # NOQA
| 20.5 | 40 | 0.731707 | 6 | 41 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 41 | 1 | 41 | 41 | 0.882353 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
184ba027bb8fc13fc121a97b297563839e9d1663 | 10,400 | py | Python | deepaffects/apis/featurize_api.py | s16h/deepaffects-python | 3be2bea30921964fc73eac81cb8fb05180203925 | [
"MIT"
] | 13 | 2017-12-15T20:47:48.000Z | 2021-08-06T05:42:34.000Z | deepaffects/apis/featurize_api.py | s16h/deepaffects-python | 3be2bea30921964fc73eac81cb8fb05180203925 | [
"MIT"
] | 23 | 2018-07-21T15:59:31.000Z | 2020-05-05T07:01:52.000Z | deepaffects/apis/featurize_api.py | s16h/deepaffects-python | 3be2bea30921964fc73eac81cb8fb05180203925 | [
"MIT"
] | 8 | 2018-02-08T14:17:46.000Z | 2019-10-15T08:01:50.000Z | # coding: utf-8
"""
DeepAffects
OpenAPI spec version: v1
"""
from __future__ import absolute_import
# python 2 and python 3 compatibility library
from six import iteritems
from ..api_client import ApiClient
from ..configuration import Configuration
class FeaturizeApi(object):
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def async_featurize_audio(self, body, webhook, **kwargs):
"""
Extract paralinguistic feature from an audio file asynchronously.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.async_featurize_audio(body, webhook, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Audio body: Audio object that needs to be featurized. (required)
:param str webhook: The webhook url where result from async resource is posted (required)
:param str request_id: Unique identifier for the request
:return: AsyncResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.async_featurize_audio_with_http_info(body, webhook, **kwargs)
else:
(data) = self.async_featurize_audio_with_http_info(body, webhook, **kwargs)
return data
def async_featurize_audio_with_http_info(self, body, webhook, **kwargs):
"""
featurize an audio file
Extract paralinguistic feature from an audio file.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.async_featurize_audio_with_http_info(body, webhook, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Audio body: Audio object that needs to be featurized. (required)
:param str webhook: The webhook url where result from async resource is posted (required)
:param str request_id: Unique identifier for the request
:return: AsyncResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'webhook', 'request_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method async_featurize_audio" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `async_featurize_audio`")
# verify the required parameter 'webhook' is set
if ('webhook' not in params) or (params['webhook'] is None):
raise ValueError("Missing the required parameter `webhook` when calling `async_featurize_audio`")
collection_formats = {}
resource_path = '/audio/generic/api/v1/async/featurize'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'webhook' in params:
query_params['webhook'] = params['webhook']
if 'request_id' in params:
query_params['request_id'] = params['request_id']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['UserSecurity']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AsyncResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def sync_featurize_audio(self, body, **kwargs):
"""
Extract paralinguistic feature from an audio file synchronously.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sync_featurize_audio(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Audio body: Audio object that needs to be featurized. (required)
:return: AudioFeatures
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.sync_featurize_audio_with_http_info(body, **kwargs)
else:
(data) = self.sync_featurize_audio_with_http_info(body, **kwargs)
return data
def sync_featurize_audio_with_http_info(self, body, **kwargs):
"""
Extract paralinguistic feature from an audio file synchronously.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.sync_featurize_audio_with_http_info(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Audio body: Audio object that needs to be featurized. (required)
:return: AudioFeatures
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method sync_featurize_audio" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `sync_featurize_audio`")
collection_formats = {}
resource_path = '/audio/generic/api/v1/sync/featurize'.replace('{format}', 'json')
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['UserSecurity']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AudioFeatures',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
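

# ---------------------------------------------------------------------------
# Editor's sketch, not part of the generated client: minimal synchronous use.
# It assumes the sibling `Audio` model and credentials already configured on
# `Configuration`; the field names/values passed to Audio are illustrative
# assumptions about that model, not verified against it.
def _example_sync_featurize():
    from deepaffects.models import Audio  # assumed companion model

    api = FeaturizeApi()
    audio = Audio(encoding='wav', language_code='en-US',
                  content='<base64-encoded audio>')  # illustrative fields
    return api.sync_featurize_audio(audio)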
| 41.434263 | 109 | 0.581346 | 1,060 | 10,400 | 5.484906 | 0.141509 | 0.05504 | 0.020124 | 0.024768 | 0.880461 | 0.862917 | 0.861369 | 0.841589 | 0.806674 | 0.792226 | 0 | 0.000873 | 0.339327 | 10,400 | 250 | 110 | 41.6 | 0.845292 | 0.319231 | 0 | 0.68 | 0 | 0 | 0.165078 | 0.045189 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.032 | 0 | 0.128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
185c8ce21a37a75e25f2ac1010775211bc1e2619 | 100 | py | Python | src/datamodules/tokenizer.py | thechuong98/Question-Answering | cdefaa70611dcb4d02b6ca4e2e810bd746451478 | [
"MIT"
] | null | null | null | src/datamodules/tokenizer.py | thechuong98/Question-Answering | cdefaa70611dcb4d02b6ca4e2e810bd746451478 | [
"MIT"
] | null | null | null | src/datamodules/tokenizer.py | thechuong98/Question-Answering | cdefaa70611dcb4d02b6ca4e2e810bd746451478 | [
"MIT"
] | null | null | null | from tokenizers.processors import TemplateProcessing
from tokenizers import ByteLevelBPETokenizer
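

# ---------------------------------------------------------------------------
# Editor's sketch, not part of this module: the usual pairing of the two
# imports above, per the public `tokenizers` API. The corpus, vocab size and
# special tokens are illustrative assumptions.
def _example_build_tokenizer():
    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train_from_iterator(
        ["a tiny illustrative corpus"],
        vocab_size=300,
        special_tokens=["<s>", "</s>"],
    )
    # Wrap each encoded sequence as "<s> $A </s>"; `_tokenizer` is the
    # underlying `tokenizers.Tokenizer`, which owns the post-processor slot.
    tokenizer._tokenizer.post_processor = TemplateProcessing(
        single="<s> $A </s>",
        special_tokens=[
            ("<s>", tokenizer.token_to_id("<s>")),
            ("</s>", tokenizer.token_to_id("</s>")),
        ],
    )
    return tokenizer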
| 20 | 52 | 0.89 | 9 | 100 | 9.888889 | 0.666667 | 0.314607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 100 | 4 | 53 | 25 | 0.988889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a130c8b51c15275cc935811fb4edcafffb88cdbf | 113 | py | Python | musicript/__init__.py | Mygod/Musicript | 7e642fc206a959dd218d5d309a1d167e582a51d9 | [
"Apache-2.0"
] | 1 | 2019-12-17T15:12:21.000Z | 2019-12-17T15:12:21.000Z | musicript/__init__.py | Mygod/Musicript | 7e642fc206a959dd218d5d309a1d167e582a51d9 | [
"Apache-2.0"
] | null | null | null | musicript/__init__.py | Mygod/Musicript | 7e642fc206a959dd218d5d309a1d167e582a51d9 | [
"Apache-2.0"
] | null | null | null | from .core import Musicript, Instrument, bpm
from .recursiveyielder import track_worker
from .track import Track
| 28.25 | 44 | 0.831858 | 15 | 113 | 6.2 | 0.6 | 0.236559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123894 | 113 | 3 | 45 | 37.666667 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a1c693c9252ea9c7e7772e33bd1b56a2a854093b | 26 | py | Python | napari/layers/_shapes_layer/__init__.py | donovanr/napari | 580b5eab8cc40af53aef780a65adb9216d968a32 | [
"BSD-3-Clause"
] | null | null | null | napari/layers/_shapes_layer/__init__.py | donovanr/napari | 580b5eab8cc40af53aef780a65adb9216d968a32 | [
"BSD-3-Clause"
] | 1 | 2019-05-24T17:01:51.000Z | 2019-05-24T18:06:22.000Z | napari/layers/_shapes_layer/__init__.py | AllenCellModeling/napari | 3566383e6310d02e8673b564b6f63411fa176708 | [
"BSD-3-Clause"
] | null | null | null | from .model import Shapes
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a1d915b47b84d5ec35639b4a1c8adcd06ce53f1a | 10,539 | py | Python | tests/test_stream_consumers.py | alisaifee/coredis | e72f5d7c665b53e6a1d41e1a7fb9e400858a8b19 | [
"MIT"
] | 9 | 2022-01-07T07:42:08.000Z | 2022-03-21T15:54:09.000Z | tests/test_stream_consumers.py | alisaifee/coredis | e72f5d7c665b53e6a1d41e1a7fb9e400858a8b19 | [
"MIT"
] | 30 | 2022-01-15T23:33:36.000Z | 2022-03-30T22:39:53.000Z | tests/test_stream_consumers.py | alisaifee/coredis | e72f5d7c665b53e6a1d41e1a7fb9e400858a8b19 | [
"MIT"
] | 3 | 2022-01-13T06:11:13.000Z | 2022-02-21T11:19:33.000Z | from __future__ import annotations

import asyncio
import threading

import pytest

from coredis.exceptions import StreamConsumerInitializationError
from coredis.stream import Consumer, GroupConsumer
from tests.conftest import targets


@targets(
    "redis_basic",
    "redis_basic_raw",
    "redis_basic_resp3",
    "redis_basic_raw_resp3",
    "redis_cluster",
    "redis_cluster_raw",
    "keydb",
)
@pytest.mark.asyncio()
class TestStreamConsumers:
    async def test_single_consumer(self, client, _s):
        consumer = await Consumer(client, ["a", "b"])
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(20)
            if (e := await consumer.get_entry())
        ]
        assert list(range(10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(10, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_single_consumer_start_from_latest(self, client, _s):
        [await client.xadd("a", {"id": i}) for i in range(5)]
        [await client.xadd("b", {"id": i}) for i in range(10, 15)]
        consumer = await Consumer(client, ["a", "b"])
        [await client.xadd("a", {"id": i}) for i in range(5, 10)]
        [await client.xadd("b", {"id": i}) for i in range(15, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(20)
            if (e := await consumer.get_entry())
        ]
        assert list(range(5, 10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(15, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_single_consumer_start_from_beginning(self, client, _s):
        [await client.xadd("a", {"id": i}) for i in range(5)]
        [await client.xadd("b", {"id": i}) for i in range(10, 15)]
        consumer = await Consumer(client, ["a", "b"], a={"identifier": "0-0"})
        [await client.xadd("a", {"id": i}) for i in range(5, 10)]
        [await client.xadd("b", {"id": i}) for i in range(15, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(20)
            if (e := await consumer.get_entry())
        ]
        assert list(range(0, 10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(15, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_single_group_consumer(self, client, _s):
        with pytest.raises(StreamConsumerInitializationError):
            await GroupConsumer(
                client, ["a", "b"], "group-a", "consumer-a", auto_create=False
            )
        await client.xgroup_create("a", "group-a", "$", mkstream=True)
        await client.xgroup_create("b", "group-a", "$", mkstream=True)
        consumer = await GroupConsumer(
            client, ["a", "b"], "group-a", "consumer-a", auto_create=False
        )
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(20)
            if (e := await consumer.get_entry())
        ]
        assert list(range(10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(10, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_single_group_consumer_auto_create_group_stream(self, client, _s):
        consumer = await GroupConsumer(
            client, ["a", "b"], "group-a", "consumer-a", auto_create=True
        )
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(20)
            if (e := await consumer.get_entry())
        ]
        assert list(range(10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(10, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_multiple_group_consumer_auto_create_group_stream(
        self, client, cloner, _s
    ):
        client_2 = await cloner(client)
        consumer_1 = await GroupConsumer(
            client, ["a", "b"], "group-a", "consumer-1", auto_create=True
        )
        consumer_2 = await GroupConsumer(
            client_2, ["a", "b"], "group-a", "consumer-2", auto_create=True
        )
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10, 20)]
        consumed = {}
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(10)
            if (e := await consumer_1.get_entry())
        ]
        [
            consumed.setdefault(e[0], []).append(e[1])
            for _ in range(10)
            if (e := await consumer_2.get_entry())
        ]
        assert list(range(10)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("a")]
        ]
        assert list(range(10, 20)) == [
            int(entry.field_values[_s("id")]) for entry in consumed[_s("b")]
        ]

    async def test_group_consumer_start_from_pending_list(self, client, _s):
        consumer = await GroupConsumer(
            client, ["a", "b"], "group-a", "consumer-1", auto_create=True
        )
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10)]
        [await consumer.get_entry() for _ in range(10)]
        consumer = await GroupConsumer(
            client,
            ["a", "b"],
            "group-a",
            "consumer-1",
            start_from_backlog=True,
            auto_create=True,
        )
        [await client.xadd("a", {"id": i}) for i in range(10, 15)]
        [await client.xadd("b", {"id": i}) for i in range(10, 15)]
        consumed = {}
        for i in range(30):
            stream, entry = await consumer.get_entry()
            await client.xack(stream, "group-a", [entry.identifier])
            consumed.setdefault(stream, []).append(int(entry.field_values[_s("id")]))
        assert list(range(15)) == consumed[_s("a")]
        assert list(range(15)) == consumed[_s("b")]
        assert not consumer.state[_s("a")].get("pending")
        assert not consumer.state[_s("b")].get("pending")
        assert (None, None) == await consumer.get_entry()
        assert (None, None) == await consumer.get_entry()
        await client.xadd("a", {"id": "a1"})
        await client.xadd("b", {"id": "b1"})
        assert {_s("a1"), _s("b1")} == {
            k[1].field_values[_s("id")]
            for _ in range(2)
            if (k := await consumer.get_entry())
        }

    async def test_single_consumer_buffered(self, client, _s):
        consumer = await Consumer(client, ["a"], buffer_size=10)
        expected = []
        for i in range(10):
            await client.xadd("a", {"id": i})
            expected.append(i)
        assert expected == [
            int(e[1].field_values[_s("id")])
            for _ in range(10)
            if (e := await consumer.get_entry())
        ]

    async def test_group_consumer_buffered(self, client, _s):
        consumer = await GroupConsumer(
            client, ["a"], "group-a", "consumer-a", buffer_size=10, auto_create=True
        )
        expected = []
        for i in range(10):
            await client.xadd("a", {"id": i})
            expected.append(i)
        assert expected == [
            int(e[1].field_values[_s("id")])
            for _ in range(10)
            if (e := await consumer.get_entry())
        ]

    async def test_single_blocking_consumer(self, client, _s):
        consumer = await Consumer(client, ["a"], timeout=1000)

        async def _inner():
            await asyncio.sleep(0.2)
            await client.xadd("a", {"id": 1})

        th = threading.Thread(
            target=asyncio.run_coroutine_threadsafe,
            args=(_inner(), asyncio.get_running_loop()),
        )
        th.start()
        _, entry = await consumer.get_entry()
        th.join()
        assert entry.field_values[_s("id")] == _s(1)

    async def test_group_blocking_consumer(self, client, _s):
        consumer = await GroupConsumer(
            client, ["a"], "group-a", "consumer-a", auto_create=True, timeout=1000
        )

        async def _inner():
            await asyncio.sleep(0.2)
            await client.xadd("a", {"id": 1})

        th = threading.Thread(
            target=asyncio.run_coroutine_threadsafe,
            args=(_inner(), asyncio.get_running_loop()),
        )
        th.start()
        _, entry = await consumer.get_entry()
        th.join()
        assert entry.field_values[_s("id")] == _s(1)

    async def test_single_non_blocking_iterator(self, client, _s):
        consumer = await Consumer(client, ["a", "b"])
        consumed = {}
        [await client.xadd("a", {"id": i}) for i in range(10)]
        [await client.xadd("b", {"id": i}) for i in range(10)]
        async for stream, entry in consumer:
            consumed.setdefault(stream, []).append(int(entry.field_values[_s("id")]))
        assert consumed[_s("a")] == list(range(10))
        assert consumed[_s("b")] == list(range(10))

    async def test_single_blocking_iterator(self, client, _s):
        consumer = await Consumer(client, ["a"], timeout=1000)

        async def _inner():
            await asyncio.sleep(0.2)
            await client.xadd("a", {"id": 1})

        th = threading.Thread(
            target=asyncio.run_coroutine_threadsafe,
            args=(_inner(), asyncio.get_running_loop()),
        )
        th.start()
        consumed = {}
        async for stream, entry in consumer:
            consumed.setdefault(stream, []).append(entry)
        th.join()
        assert len(consumed[_s("a")]) == 1
        assert _s(1) == consumed[_s("a")][0].field_values[_s("id")]
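

# ---------------------------------------------------------------------------
# Editor's sketch, not one of the test cases: the basic consumer pattern the
# tests above exercise, as a standalone coroutine. The connection parameters
# are illustrative assumptions.
async def _example_consume():
    import coredis

    client = coredis.Redis(host="localhost", port=6379)
    await client.xadd("a", {"id": 1})
    # Consumer is awaited once to initialise its stream cursors (as in the
    # tests above), then drained entry-by-entry or via async iteration.
    consumer = await Consumer(client, ["a"], timeout=1000)
    stream, entry = await consumer.get_entry()
    print(stream, entry.field_values)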
| 36.978947 | 85 | 0.539615 | 1,332 | 10,539 | 4.108108 | 0.08033 | 0.046053 | 0.079496 | 0.050256 | 0.827668 | 0.800987 | 0.789474 | 0.758955 | 0.725877 | 0.72076 | 0 | 0.027253 | 0.296707 | 10,539 | 284 | 86 | 37.109155 | 0.711009 | 0 | 0 | 0.537549 | 0 | 0 | 0.046114 | 0.001993 | 0 | 0 | 0 | 0 | 0.106719 | 1 | 0 | false | 0 | 0.027668 | 0 | 0.031621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
62a3db1dcb47e334e8fd7faa3d54c611bee6e488 | 123 | py | Python | functional_tests/pages/__init__.py | XeryusTC/projman | 3db118d51a9fc362153593f5a862187bdaf0a73c | [
"MIT"
] | null | null | null | functional_tests/pages/__init__.py | XeryusTC/projman | 3db118d51a9fc362153593f5a862187bdaf0a73c | [
"MIT"
] | 3 | 2015-12-08T17:14:31.000Z | 2016-01-29T18:46:59.000Z | functional_tests/pages/__init__.py | XeryusTC/projman | 3db118d51a9fc362153593f5a862187bdaf0a73c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .accounts import *
from .landingpage import *
from .projects import *
from .settings import *
| 20.5 | 26 | 0.691057 | 15 | 123 | 5.666667 | 0.6 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.170732 | 123 | 5 | 27 | 24.6 | 0.823529 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7ebf7a4ce182be2e7585c254ebb91d88803a756 | 46 | py | Python | only_otters/qmltools/__init__.py | RohanJnr/code-jam-5 | 40e4552b57b09ed3d81cbd47533c2483f8de3bc4 | [
"MIT"
] | 2 | 2019-07-03T18:08:24.000Z | 2019-07-03T18:27:18.000Z | only_otters/qmltools/__init__.py | RohanJnr/code-jam-5 | 40e4552b57b09ed3d81cbd47533c2483f8de3bc4 | [
"MIT"
] | 5 | 2019-07-01T15:46:30.000Z | 2019-07-07T23:22:52.000Z | only_otters/qmltools/__init__.py | RohanJnr/code-jam-5 | 40e4552b57b09ed3d81cbd47533c2483f8de3bc4 | [
"MIT"
] | 1 | 2019-07-08T14:21:50.000Z | 2019-07-08T14:21:50.000Z | from .qmltools import QmlWidget # noqa: F401
| 23 | 45 | 0.76087 | 6 | 46 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0.173913 | 46 | 1 | 46 | 46 | 0.842105 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c533ca04c45124e27165239fd29cc60dd10c5918 | 25 | py | Python | modules/unit_tests/modules/__init__.py | nursix/DRKCM | 09328289ff721c416494398aa751ff99906327cb | [
"MIT"
] | 3 | 2022-01-26T08:07:54.000Z | 2022-03-21T21:53:52.000Z | modules/unit_tests/modules/__init__.py | nursix/eden-asp | e49f46cb6488918f8d5a163dcd5a900cd686978c | [
"MIT"
] | null | null | null | modules/unit_tests/modules/__init__.py | nursix/eden-asp | e49f46cb6488918f8d5a163dcd5a900cd686978c | [
"MIT"
] | 1 | 2017-10-03T13:03:47.000Z | 2017-10-03T13:03:47.000Z | from .s3layouts import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.16 | 25 | 1 | 25 | 25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c55e60b1f8f3e96ac36203a66502875044f0b899 | 89 | py | Python | tests/test_models.py | FlorisHoogenboom/BoxRec | c9cc5d149318f916facdf57d7dbe94e797d81582 | [
"MIT"
] | 5 | 2018-04-20T11:47:43.000Z | 2021-05-04T18:54:16.000Z | tests/test_models.py | FlorisHoogenboom/BoxRec | c9cc5d149318f916facdf57d7dbe94e797d81582 | [
"MIT"
] | 1 | 2018-03-21T08:44:25.000Z | 2018-03-22T12:08:17.000Z | tests/test_models.py | FlorisHoogenboom/BoxRec | c9cc5d149318f916facdf57d7dbe94e797d81582 | [
"MIT"
] | 6 | 2018-03-16T14:05:55.000Z | 2018-03-16T14:08:41.000Z | import unittest
from boxrec import models


class TestFight(unittest.TestCase):
    pass
| 12.714286 | 35 | 0.786517 | 11 | 89 | 6.363636 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168539 | 89 | 6 | 36 | 14.833333 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3d83718448841f54bdca9f5ce236852a4f4e712e | 87,473 | py | Python | pynetdicom/dimse_primitives.py | sc-clocke/sc-pydicom | 899900e7be6457e2812898a33bc93241142bd60f | [
"MIT"
] | 1 | 2020-08-03T02:11:27.000Z | 2020-08-03T02:11:27.000Z | pynetdicom/dimse_primitives.py | fserrano1493/pynetdicomtest | 93ae2fccf7ca86f53a6eadbef6895220bdebd4d4 | [
"MIT"
] | null | null | null | pynetdicom/dimse_primitives.py | fserrano1493/pynetdicomtest | 93ae2fccf7ca86f53a6eadbef6895220bdebd4d4 | [
"MIT"
] | null | null | null | """
Define the DIMSE-C and DIMSE-N service parameter primitives.

Notes:
  * The class member names must match their corresponding DICOM element keyword
    in order for the DIMSE messages/primitives to be created correctly.
"""
import codecs
try:
    from collections.abc import MutableSequence
except ImportError:
    from collections import MutableSequence
from io import BytesIO
import logging

from pydicom.tag import Tag
from pydicom.uid import UID

from pynetdicom import _config
from pynetdicom.utils import validate_ae_title, validate_uid

LOGGER = logging.getLogger('pynetdicom.dimse_primitives')

# pylint: disable=invalid-name
# pylint: disable=attribute-defined-outside-init
# pylint: disable=too-many-instance-attributes
# pylint: disable=anomalous-backslash-in-string


class DIMSEPrimitive(object):
    """Base class for the DIMSE primitives."""
    STATUS_OPTIONAL_KEYWORDS = ()
    REQUEST_KEYWORDS = ()
    RESPONSE_KEYWORDS = ('MessageIDBeingRespondedTo', 'Status')

    @property
    def AffectedSOPClassUID(self):
        """Return the *Affected SOP Class UID* as :class:`~pydicom.uid.UID`."""
        return self._affected_sop_class_uid

    @AffectedSOPClassUID.setter
    def AffectedSOPClassUID(self, value):
        """Set the *Affected SOP Class UID*.

        Parameters
        ----------
        value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Class UID* parameter.
        """
        if isinstance(value, UID):
            pass
        elif isinstance(value, str):
            value = UID(value)
        elif isinstance(value, bytes):
            value = UID(value.decode('ascii'))
        elif value is None:
            pass
        else:
            raise TypeError("Affected SOP Class UID must be a "
                            "pydicom.uid.UID, str or bytes")

        if value and not validate_uid(value):
            LOGGER.error("Affected SOP Class UID is an invalid UID")
            raise ValueError("Affected SOP Class UID is an invalid UID")

        if value and not value.is_valid:
            LOGGER.warning(
                "The Affected SOP Class UID '{}' is non-conformant"
                .format(value)
            )

        if value:
            self._affected_sop_class_uid = value
        else:
            self._affected_sop_class_uid = None

    @property
    def _AffectedSOPInstanceUID(self):
        """Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
        """
        return self._affected_sop_instance_uid

    @_AffectedSOPInstanceUID.setter
    def _AffectedSOPInstanceUID(self, value):
        """Set the *Affected SOP Instance UID*.

        Parameters
        ----------
        value : pydicom.uid.UID, bytes or str
            The value for the Affected SOP Class UID
        """
        if isinstance(value, UID):
            pass
        elif isinstance(value, str):
            value = UID(value)
        elif isinstance(value, bytes):
            value = UID(value.decode('ascii'))
        elif value is None:
            pass
        else:
            raise TypeError("Affected SOP Instance UID must be a "
                            "pydicom.uid.UID, str or bytes")

        if value and not validate_uid(value):
            LOGGER.error("Affected SOP Instance UID is an invalid UID")
            raise ValueError("Affected SOP Instance UID is an invalid UID")

        if value and not value.is_valid:
            LOGGER.warning(
                "The Affected SOP Instance UID '{}' is non-conformant"
                .format(value)
            )

        if value:
            self._affected_sop_instance_uid = value
        else:
            self._affected_sop_instance_uid = None

    @property
    def _dataset_variant(self):
        """Return the Dataset-like parameter value.

        Used for EventInformation, EventReply, AttributeList,
        ActionInformation, ActionReply, DataSet, Identifier and
        ModificationList dataset-like parameter values.

        Returns
        -------
        BytesIO or None
        """
        return self._dataset

    @_dataset_variant.setter
    def _dataset_variant(self, value):
        """Set the Dataset-like parameter.

        Used for EventInformation, EventReply, AttributeList,
        ActionInformation, ActionReply, DataSet, Identifier and
        ModificationList dataset-like parameter values.

        Parameters
        ----------
        value : tuple
            The (dataset, variant name) to set, where dataset is either None
            or BytesIO and variant name is str.
        """
        if value[0] is None:
            self._dataset = value[0]
        elif isinstance(value[0], BytesIO):
            self._dataset = value[0]
        else:
            raise TypeError(
                "'{}' parameter must be a BytesIO object".format(value[1])
            )

    @property
    def is_valid_request(self):
        """Return ``True`` if the request is valid, ``False`` otherwise."""
        for keyword in self.REQUEST_KEYWORDS:
            if getattr(self, keyword) is None:
                return False
        return True

    @property
    def is_valid_response(self):
        """Return ``True`` if the response is valid, ``False`` otherwise."""
        for keyword in self.RESPONSE_KEYWORDS:
            if getattr(self, keyword) is None:
                return False
        return True

    @property
    def MessageID(self):
        """Return the *Message ID* value as :class:`int`."""
        return self._message_id

    @MessageID.setter
    def MessageID(self, value):
        """Set the *Message ID*.

        Parameters
        ----------
        int
            The value to use for the *Message ID* parameter.
        """
        if isinstance(value, int):
            if 0 <= value < 2**16:
                self._message_id = value
            else:
                raise ValueError("Message ID must be between 0 and 65535, "
                                 "inclusive")
        elif value is None:
            self._message_id = value
        else:
            raise TypeError("Message ID must be an int")

    @property
    def MessageIDBeingRespondedTo(self):
        """Return the *Message ID Being Responded To* as :class:`int`."""
        return self._message_id_being_responded_to

    @MessageIDBeingRespondedTo.setter
    def MessageIDBeingRespondedTo(self, value):
        """Set the *Message ID Being Responded To*.

        Parameters
        ----------
        int
            The value to use for the *Message ID Being Responded To* parameter.
        """
        if isinstance(value, int):
            if 0 <= value < 2**16:
                self._message_id_being_responded_to = value
            else:
                raise ValueError("Message ID Being Responded To must be "
                                 "between 0 and 65535, inclusive")
        elif value is None:
            self._message_id_being_responded_to = value
        else:
            raise TypeError("Message ID Being Responded To must be an int")

    @property
    def _NumberOfCompletedSuboperations(self):
        """Return the *Number of Completed Suboperations*."""
        return self._number_of_completed_suboperations

    @_NumberOfCompletedSuboperations.setter
    def _NumberOfCompletedSuboperations(self, value):
        """Set the *Number of Completed Suboperations*."""
        if isinstance(value, int):
            if value >= 0:
                self._number_of_completed_suboperations = value
            else:
                raise ValueError("Number of Completed Suboperations must be "
                                 "greater than or equal to 0")
        elif value is None:
            self._number_of_completed_suboperations = value
        else:
            raise TypeError("Number of Completed Suboperations must be an int")

    @property
    def _NumberOfFailedSuboperations(self):
        """Return the *Number of Failed Suboperations*."""
        return self._number_of_failed_suboperations

    @_NumberOfFailedSuboperations.setter
    def _NumberOfFailedSuboperations(self, value):
        """Set the *Number of Failed Suboperations*."""
        if isinstance(value, int):
            if value >= 0:
                self._number_of_failed_suboperations = value
            else:
                raise ValueError("Number of Failed Suboperations must be "
                                 "greater than or equal to 0")
        elif value is None:
            self._number_of_failed_suboperations = value
        else:
            raise TypeError("Number of Failed Suboperations must be an int")

    @property
    def _NumberOfRemainingSuboperations(self):
        """Return the *Number of Remaining Suboperations*."""
        return self._number_of_remaining_suboperations

    @_NumberOfRemainingSuboperations.setter
    def _NumberOfRemainingSuboperations(self, value):
        """Set the *Number of Remaining Suboperations*."""
        if isinstance(value, int):
            if value >= 0:
                self._number_of_remaining_suboperations = value
            else:
                raise ValueError("Number of Remaining Suboperations must be "
                                 "greater than or equal to 0")
        elif value is None:
            self._number_of_remaining_suboperations = value
        else:
            raise TypeError("Number of Remaining Suboperations must be an int")

    @property
    def _NumberOfWarningSuboperations(self):
        """Return the *Number of Warning Suboperations*."""
        return self._number_of_warning_suboperations

    @_NumberOfWarningSuboperations.setter
    def _NumberOfWarningSuboperations(self, value):
        """Set the *Number of Warning Suboperations*."""
        if isinstance(value, int):
            if value >= 0:
                self._number_of_warning_suboperations = value
            else:
                raise ValueError("Number of Warning Suboperations must be "
                                 "greater than or equal to 0")
        elif value is None:
            self._number_of_warning_suboperations = value
        else:
            raise TypeError("Number of Warning Suboperations must be an int")

    @property
    def _Priority(self):
        """Return the *Priority*."""
        return self._priority

    @_Priority.setter
    def _Priority(self, value):
        """Set the *Priority*."""
        if value in [0, 1, 2]:
            self._priority = value
        else:
            LOGGER.warning("Attempted to set Priority parameter to "
                           "an invalid value")
            raise ValueError("Priority must be 0, 1, or 2")

    @property
    def _RequestedSOPClassUID(self):
        """Return the *Requested SOP Class UID*."""
        return self._requested_sop_class_uid

    @_RequestedSOPClassUID.setter
    def _RequestedSOPClassUID(self, value):
        """Set the *Requested SOP Class UID*.

        Parameters
        ----------
        value : pydicom.uid.UID, bytes or str
            The value for the Requested SOP Class UID
        """
        if isinstance(value, UID):
            pass
        elif isinstance(value, str):
            value = UID(value)
        elif isinstance(value, bytes):
            value = UID(value.decode('ascii'))
        elif value is None:
            pass
        else:
            raise TypeError("Requested SOP Class UID must be a "
                            "pydicom.uid.UID, str or bytes")

        if value and not validate_uid(value):
            LOGGER.error("Requested SOP Class UID is an invalid UID")
            raise ValueError("Requested SOP Class UID is an invalid UID")

        if value and not value.is_valid:
            LOGGER.warning(
                "The Requested SOP Class UID '{}' is non-conformant"
                .format(value)
            )

        if value:
            self._requested_sop_class_uid = value
        else:
            self._requested_sop_class_uid = None

    @property
    def _RequestedSOPInstanceUID(self):
        """Return the *Requested SOP Instance UID*."""
        return self._requested_sop_instance_uid

    @_RequestedSOPInstanceUID.setter
    def _RequestedSOPInstanceUID(self, value):
        """Set the *Requested SOP Instance UID*.

        Parameters
        ----------
        value : pydicom.uid.UID, bytes or str
            The value for the Requested SOP Instance UID
        """
        if isinstance(value, UID):
            pass
        elif isinstance(value, str):
            value = UID(value)
        elif isinstance(value, bytes):
            value = UID(value.decode('ascii'))
        elif value is None:
            pass
        else:
            raise TypeError("Requested SOP Instance UID must be a "
                            "pydicom.uid.UID, str or bytes")

        if value and not validate_uid(value):
            LOGGER.error("Requested SOP Instance UID is an invalid UID")
            raise ValueError("Requested SOP Instance UID is an invalid UID")

        if value and not value.is_valid:
            LOGGER.warning(
                "The Requested SOP Instance UID '{}' is non-conformant"
                .format(value)
            )

        if value:
            self._requested_sop_instance_uid = value
        else:
            self._requested_sop_instance_uid = None

    @property
    def Status(self):
        """Return the *Status* as :class:`int`."""
        return self._status

    @Status.setter
    def Status(self, value):
        """Set the *Status*

        Parameters
        ----------
        int
            The value to use for the *Status* parameter.
        """
        if isinstance(value, int) or value is None:
            self._status = value
        else:
            raise TypeError("DIMSE primitive's 'Status' must be an int")

    @property
    def msg_type(self):
        """Return the DIMSE message type as :class:`str`."""
        return self.__class__.__name__.replace('_', '-')


# DIMSE-C Service Primitives

class C_STORE(DIMSEPrimitive):
    """Represents a C-STORE primitive.

    +------------------------------------------+---------+----------+
    | Parameter                                | Req/ind | Rsp/conf |
    +==========================================+=========+==========+
    | Message ID                               | M       | U        |
    +------------------------------------------+---------+----------+
    | Message ID Being Responded To            | \-      | M        |
    +------------------------------------------+---------+----------+
    | Affected SOP Class UID                   | M       | U(=)     |
    +------------------------------------------+---------+----------+
    | Affected SOP Instance UID                | M       | U(=)     |
    +------------------------------------------+---------+----------+
    | Priority                                 | M       | \-       |
    +------------------------------------------+---------+----------+
    | Move Originator Application Entity Title | U       | \-       |
    +------------------------------------------+---------+----------+
    | Move Originator Message ID               | U       | \-       |
    +------------------------------------------+---------+----------+
    | Data Set                                 | M       | \-       |
    +------------------------------------------+---------+----------+
    | Status                                   | \-      | M        |
    +------------------------------------------+---------+----------+
    | Offending Element                        | \-      | C        |
    +------------------------------------------+---------+----------+
    | Error Comment                            | \-      | C        |
    +------------------------------------------+---------+----------+

    | (=) - The value of the parameter is equal to the value of the parameter
      in the column to the left
    | C - The parameter is conditional.
    | M - Mandatory
    | MF - Mandatory with a fixed value
    | U - The use of this parameter is a DIMSE service user option
    | UF - User option with a fixed value

    Attributes
    ----------
    MessageID : int
        Identifies the operation and is used to distinguish this
        operation from other notifications or operations that may be in
        progress. No two identical values for the Message ID shall be used for
        outstanding operations.
    MessageIDBeingRespondedTo : int
        The Message ID of the operation request/indication to which this
        response/confirmation applies.
    AffectedSOPClassUID : pydicom.uid.UID, bytes or str
        For the request/indication this specifies the SOP Class for
        storage. If included in the response/confirmation, it shall be equal
        to the value in the request/indication
    AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
        For the request/indication this specifies the SOP Instance
        for storage. If included in the response/confirmation, it shall be
        equal to the value in the request/indication
    Priority : int
        The priority of the C-STORE operation. It shall be one of the
        following:

        * 0: Medium
        * 1: High
        * 2: Low (Default)
    MoveOriginatorApplicationEntityTitle : bytes
        The DICOM AE Title of the AE that invoked the C-MOVE operation
        from which this C-STORE sub-operation is being performed
    MoveOriginatorMessageID : int
        The Message ID of the C-MOVE request/indication primitive from
        which this C-STORE sub-operation is being performed
    DataSet : io.BytesIO
        A DICOM dataset containing the attributes of the Composite
        SOP Instance to be stored.
    Status : int
        The error or success notification of the operation.
    OffendingElement : list of int or None
        An optional status related field containing a list of the
        elements in which an error was detected.
    ErrorComment : str or None
        An optional status related field containing a text description
        of the error detected. 64 characters maximum.
    """
    STATUS_OPTIONAL_KEYWORDS = ('OffendingElement', 'ErrorComment', )
    REQUEST_KEYWORDS = (
        'MessageID', 'AffectedSOPClassUID', 'AffectedSOPInstanceUID',
        'Priority', 'DataSet'
    )

    def __init__(self):
        # Variable names need to match the corresponding DICOM Element keywords
        # in order for the DIMSE Message classes to be built correctly.
        # Changes to the variable names can be made provided the DIMSEMessage()
        # class' message_to_primitive() and primitive_to_message() methods
        # are also changed
        self.MessageID = None
        self.MessageIDBeingRespondedTo = None
        self.AffectedSOPClassUID = None
        self.AffectedSOPInstanceUID = None
        self.Priority = 0x02
        self.MoveOriginatorApplicationEntityTitle = None
        self.MoveOriginatorMessageID = None
        self.DataSet = None
        self.Status = None

        # Optional Command Set elements used with specific Status values
        # For Warning statuses 0xB000, 0xB006, 0xB007
        # For Failure statuses 0xCxxx, 0xA9xx,
        self.OffendingElement = None
        # For Warning statuses 0xB000, 0xB006, 0xB007
        # For Failure statuses 0xCxxx, 0xA9xx, 0xA7xx, 0x0122, 0x0124
        self.ErrorComment = None
        # For Failure statuses 0x0117
        # self.AffectedSOPInstanceUID

    @property
    def AffectedSOPInstanceUID(self):
        """Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
        """
        return self._AffectedSOPInstanceUID

    @AffectedSOPInstanceUID.setter
    def AffectedSOPInstanceUID(self, value):
        """Set the *Affected SOP Instance UID*.

        Parameters
        ----------
        value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
        """
        self._AffectedSOPInstanceUID = value

    @property
    def DataSet(self):
        """Return the *Data Set* as :class:`io.BytesIO`."""
        return self._dataset_variant

    @DataSet.setter
    def DataSet(self, value):
        """Set the *Data Set*.

        Parameters
        ----------
        io.BytesIO
            The value to use for the *Data Set* parameter.
        """
        self._dataset_variant = (value, 'DataSet')

    @property
    def MoveOriginatorApplicationEntityTitle(self):
        """Return the *Move Originator Application Entity Title* as
        :class:`bytes`.
        """
        return self._move_originator_application_entity_title

    @MoveOriginatorApplicationEntityTitle.setter
    def MoveOriginatorApplicationEntityTitle(self, value):
        """Set the *Move Originator Application Entity Title*.

        Parameters
        ----------
        bytes or str
            The value to use for the *Move Originator AE Title* parameter.
            The parameter value will be truncated to 16 bytes and invalid
            values ignored.
        """
        if isinstance(value, str):
            value = codecs.encode(value, 'ascii')

        if value:
            try:
                self._move_originator_application_entity_title = (
                    validate_ae_title(value, _config.USE_SHORT_DIMSE_AET)
                )
            except ValueError:
                LOGGER.error(
                    "C-STORE request primitive contains an invalid "
                    "'Move Originator AE Title'"
                )
                self._move_originator_application_entity_title = None
        else:
            self._move_originator_application_entity_title = None

    @property
    def MoveOriginatorMessageID(self):
        """Return the *Move Originator Message ID* as :class:`int`."""
        return self._move_originator_message_id

    @MoveOriginatorMessageID.setter
    def MoveOriginatorMessageID(self, value):
        """Set the *Move Originator Message ID*.

        Parameters
        ----------
        int
            The value to use for the *Move Originator Message ID* parameter.
        """
        # Fix for peers sending a value consisting of nulls
        if isinstance(value, int):
            if 0 <= value < 2**16:
                self._move_originator_message_id = value
            else:
                raise ValueError("Move Originator Message ID To must be "
                                 "between 0 and 65535, inclusive")
        elif value is None:
            self._move_originator_message_id = value
        else:
            raise TypeError("Move Originator Message ID To must be an int")

    @property
    def Priority(self):
        """Return the *Priority* as :class:`int`."""
        return self._Priority

    @Priority.setter
    def Priority(self, value):
        """Set the *Priority*.

        Parameters
        ----------
        int
            The value to use for the *Priority* parameter.
        """
        self._Priority = value
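

# ---------------------------------------------------------------------------
# Editor's sketch, not part of pynetdicom itself: assembling a C-STORE request
# primitive by hand. The UIDs and dataset bytes below are illustrative
# assumptions; in normal use the association machinery populates these fields.
def _example_c_store_request():
    req = C_STORE()
    req.MessageID = 1
    req.AffectedSOPClassUID = '1.2.840.10008.5.1.4.1.1.2'  # CT Image Storage
    req.AffectedSOPInstanceUID = '1.2.3.4'
    req.Priority = 0x02  # Low (the default)
    req.DataSet = BytesIO(b'<encoded dataset bytes>')  # placeholder content
    assert req.is_valid_request  # all REQUEST_KEYWORDS are now non-None
    return req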


class C_FIND(DIMSEPrimitive):
    """Represents a C-FIND primitive.

    +-------------------------------+---------+----------+
    | Parameter                     | Req/ind | Rsp/conf |
    +===============================+=========+==========+
    | Message ID                    | M       | U        |
    +-------------------------------+---------+----------+
    | Message ID Being Responded To | \-      | M        |
    +-------------------------------+---------+----------+
    | Affected SOP Class UID        | M       | U(=)     |
    +-------------------------------+---------+----------+
    | Priority                      | M       | \-       |
    +-------------------------------+---------+----------+
    | Identifier                    | M       | C        |
    +-------------------------------+---------+----------+
    | Status                        | \-      | M        |
    +-------------------------------+---------+----------+
    | Offending Element             | \-      | C        |
    +-------------------------------+---------+----------+
    | Error Comment                 | \-      | C        |
    +-------------------------------+---------+----------+

    | (=) - The value of the parameter is equal to the value of the parameter
      in the column to the left
    | C - The parameter is conditional.
    | M - Mandatory
    | MF - Mandatory with a fixed value
    | U - The use of this parameter is a DIMSE service user option
    | UF - User option with a fixed value

    Attributes
    ----------
    MessageID : int
        Identifies the operation and is used to distinguish this
        operation from other notifications or operations that may be in
        progress. No two identical values for the Message ID shall be used for
        outstanding operations.
    MessageIDBeingRespondedTo : int
        The Message ID of the operation request/indication to which
        this response/confirmation applies.
    AffectedSOPClassUID : pydicom.uid.UID, bytes or str
        For the request/indication this specifies the SOP Class
        for storage. If included in the response/confirmation, it shall be
        equal to the value in the request/indication
    Priority : int
        The priority of the C-STORE operation. It shall be one of the
        following:

        * 0: Medium
        * 1: High
        * 2: Low (Default)
    Identifier : io.BytesIO
        A DICOM dataset of attributes to be matched against the values of the
        attributes in the instances of the composite objects known to the
        performing DIMSE service-user.
    Status : int
        The error or success notification of the operation.
    OffendingElement : list of int or None
        An optional status related field containing a list of the
        elements in which an error was detected.
    ErrorComment : str or None
        An optional status related field containing a text
        description of the error detected. 64 characters maximum.
    """
    STATUS_OPTIONAL_KEYWORDS = ('OffendingElement', 'ErrorComment', )
    REQUEST_KEYWORDS = (
        'MessageID', 'AffectedSOPClassUID', 'Priority', 'Identifier'
    )

    def __init__(self):
        # Variable names need to match the corresponding DICOM Element keywords
        # in order for the DIMSE Message classes to be built correctly.
        # Changes to the variable names can be made provided the DIMSEMessage()
        # class' message_to_primitive() and primitive_to_message() methods
        # are also changed
        self.MessageID = None
        self.MessageIDBeingRespondedTo = None
        self.AffectedSOPClassUID = None
        self.Priority = 0x02
        self.Identifier = None
        self.Status = None

        # Optional Command Set elements used with specific Status values
        # For Failure statuses 0xA900, 0xCxxx
        self.OffendingElement = None
        # For Failure statuses 0xA900, 0xA700, 0x0122, 0xCxxx
        self.ErrorComment = None

    @property
    def Identifier(self):
        """Return the *Identifier* as :class:`io.BytesIO`."""
        return self._dataset_variant

    @Identifier.setter
    def Identifier(self, value):
        """Set the *Identifier*.

        Parameters
        ----------
        io.BytesIO
            The value to use for the *Identifier* parameter.
        """
        self._dataset_variant = (value, 'Identifier')

    @property
    def Priority(self):
        """Return the *Priority* as :class:`int`."""
        return self._Priority

    @Priority.setter
    def Priority(self, value):
        """Set the *Priority*.

        Parameters
        ----------
        int
            The value to use for the *Priority* parameter.
        """
        self._Priority = value
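

# ---------------------------------------------------------------------------
# Editor's sketch, not part of pynetdicom itself: a C-FIND request whose
# Identifier is an already-encoded dataset. Encoding the pydicom Dataset to
# bytes (per the negotiated transfer syntax) is assumed to happen elsewhere.
def _example_c_find_request(encoded_identifier):
    req = C_FIND()
    req.MessageID = 2
    # Study Root Query/Retrieve Information Model - FIND
    req.AffectedSOPClassUID = '1.2.840.10008.5.1.4.1.2.2.1'
    req.Priority = 0x02
    req.Identifier = BytesIO(encoded_identifier)
    assert req.is_valid_request
    return req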


class C_GET(DIMSEPrimitive):
    """Represents a C-GET primitive.

    +-------------------------------+---------+----------+
    | Parameter                     | Req/ind | Rsp/conf |
    +===============================+=========+==========+
    | Message ID                    | M       | U        |
    +-------------------------------+---------+----------+
    | Message ID Being Responded To | \-      | M        |
    +-------------------------------+---------+----------+
    | Affected SOP Class UID        | M       | U(=)     |
    +-------------------------------+---------+----------+
    | Priority                      | M       | \-       |
    +-------------------------------+---------+----------+
    | Identifier                    | M       | U        |
    +-------------------------------+---------+----------+
    | Status                        | \-      | M        |
    +-------------------------------+---------+----------+
    | Number of Remaining Sub-ops   | \-      | C        |
    +-------------------------------+---------+----------+
    | Number of Completed Sub-ops   | \-      | C        |
    +-------------------------------+---------+----------+
    | Number of Failed Sub-ops      | \-      | C        |
    +-------------------------------+---------+----------+
    | Number of Warning Sub-ops     | \-      | C        |
    +-------------------------------+---------+----------+
    | Offending Element             | \-      | C        |
    +-------------------------------+---------+----------+
    | Error Comment                 | \-      | C        |
    +-------------------------------+---------+----------+

    | (=) - The value of the parameter is equal to the value of the parameter
      in the column to the left
    | C - The parameter is conditional.
    | M - Mandatory
    | MF - Mandatory with a fixed value
    | U - The use of this parameter is a DIMSE service user option
    | UF - User option with a fixed value

    Attributes
    ----------
    MessageID : int
        Identifies the operation and is used to distinguish this
        operation from other notifications or operations that may be in
        progress. No two identical values for the Message ID shall be used for
        outstanding operations.
    MessageIDBeingRespondedTo : int
        The Message ID of the operation request/indication to which
        this response/confirmation applies.
    AffectedSOPClassUID : pydicom.uid.UID, bytes or str
        For the request/indication this specifies the SOP Class
        for storage. If included in the response/confirmation, it shall be
        equal to the value in the request/indication
    Priority : int
        The priority of the C-STORE operation. It shall be one of the
        following:

        * 0: Medium
        * 1: High
        * 2: Low (Default)
    Identifier : io.BytesIO
        A DICOM dataset of attributes to be matched against the values of the
        attributes in the instances of the composite objects known to the
        performing DIMSE service-user.
    Status : int
        The error or success notification of the operation.
    NumberOfRemainingSuboperations : int
        The number of remaining C-STORE sub-operations to be invoked
        by this C-GET operation. It may be included in any response and shall
        be included if the status is Pending
    NumberOfCompletedSuboperations : int
        The number of C-STORE sub-operations that have completed
        successfully. It may be included in any response and shall be included
        if the status is Pending
    NumberOfFailedSuboperations : int
        The number of C-STORE sub-operations that have failed. It may
        be included in any response and shall be included if the status is
        Pending
    NumberOfWarningSuboperations : int
        The number of C-STORE operations that generated Warning
        responses. It may be included in any response and shall be included if
        the status is Pending
    OffendingElement : list of int or None
        An optional status related field containing a list of the
        elements in which an error was detected.
    ErrorComment : str or None
        An optional status related field containing a text
        description of the error detected. 64 characters maximum.
    """
    STATUS_OPTIONAL_KEYWORDS = (
        'ErrorComment', 'OffendingElement', 'NumberOfRemainingSuboperations',
        'NumberOfCompletedSuboperations', 'NumberOfFailedSuboperations',
        'NumberOfWarningSuboperations'
    )
    REQUEST_KEYWORDS = (
        'MessageID', 'AffectedSOPClassUID', 'Priority', 'Identifier'
    )

    def __init__(self):
        # Variable names need to match the corresponding DICOM Element keywords
        # in order for the DIMSE Message classes to be built correctly.
        # Changes to the variable names can be made provided the DIMSEMessage()
        # class' message_to_primitive() and primitive_to_message() methods
        # are also changed
        self.MessageID = None
        self.MessageIDBeingRespondedTo = None
        self.AffectedSOPClassUID = None
        self.Priority = 0x02
        self.Identifier = None
        self.Status = None
        self.NumberOfRemainingSuboperations = None
        self.NumberOfCompletedSuboperations = None
        self.NumberOfFailedSuboperations = None
        self.NumberOfWarningSuboperations = None

        # For Failure statuses 0xA701, 0xA900
        self.ErrorComment = None
        self.OffendingElement = None
        # For 0xA702, 0xFE00, 0xB000, 0x0000
        # self.NumberOfRemainingSuboperations
        # self.NumberOfCompletedSuboperations
        # self.NumberOfFailedSuboperations
        # self.NumberOfWarningSuboperations

    @property
    def Identifier(self):
        """Return the *Identifier* as :class:`io.BytesIO`."""
        return self._dataset_variant

    @Identifier.setter
    def Identifier(self, value):
        """Set the *Identifier*.

        Parameters
        ----------
        io.BytesIO
            The value to use for the *Identifier* parameter.
        """
        self._dataset_variant = (value, 'Identifier')

    @property
    def NumberOfCompletedSuboperations(self):
        """Return the *Number of Completed Suboperations* as :class:`int`."""
        return self._NumberOfCompletedSuboperations

    @NumberOfCompletedSuboperations.setter
    def NumberOfCompletedSuboperations(self, value):
        """Set the *Number of Completed Suboperations*.

        Parameters
        ----------
        int
            The value to use for the *Number of Completed Suboperations*
            parameter.
        """
        self._NumberOfCompletedSuboperations = value

    @property
    def NumberOfFailedSuboperations(self):
        """Return the *Number of Failed Suboperations* as :class:`int`."""
        return self._NumberOfFailedSuboperations

    @NumberOfFailedSuboperations.setter
    def NumberOfFailedSuboperations(self, value):
        """Set the *Number of Failed Suboperations*.

        Parameters
        ----------
        int
            The value to use for the *Number of Failed Suboperations*
            parameter.
        """
        self._NumberOfFailedSuboperations = value

    @property
    def NumberOfRemainingSuboperations(self):
        """Return the *Number of Remaining Suboperations* as :class:`int`."""
        return self._NumberOfRemainingSuboperations

    @NumberOfRemainingSuboperations.setter
    def NumberOfRemainingSuboperations(self, value):
        """Set the *Number of Remaining Suboperations*.

        Parameters
        ----------
        int
            The value to use for the *Number of Remaining Suboperations*
            parameter.
        """
        self._NumberOfRemainingSuboperations = value

    @property
    def NumberOfWarningSuboperations(self):
        """Return the *Number of Warning Suboperations* as :class:`int`."""
        return self._NumberOfWarningSuboperations

    @NumberOfWarningSuboperations.setter
    def NumberOfWarningSuboperations(self, value):
        """Set the *Number of Warning Suboperations*.

        Parameters
        ----------
        int
            The value to use for the *Number of Warning Suboperations*
            parameter.
        """
        self._NumberOfWarningSuboperations = value

    @property
    def Priority(self):
        """Return the *Priority* as :class:`int`."""
        return self._Priority

    @Priority.setter
    def Priority(self, value):
        """Set the *Priority*.

        Parameters
----------
int
The value to use for the *Priority* parameter.
"""
self._Priority = value
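# Illustrative sketch only (not part of the original module): building a C-GET
# request primitive. The ``encoded`` bytes are a hypothetical stand-in for a
# DICOM-encoded Identifier dataset produced elsewhere; as the property
# docstrings above note, *Identifier* is stored as :class:`io.BytesIO`.
#
#     from io import BytesIO
#
#     req = C_GET()
#     req.MessageID = 1
#     # Study Root Query/Retrieve Information Model - GET
#     req.AffectedSOPClassUID = '1.2.840.10008.5.1.4.1.2.2.3'
#     req.Priority = 0x02  # Low (the default)
#     req.Identifier = BytesIO(encoded)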
class C_MOVE(DIMSEPrimitive):
"""Represents a C-MOVE primitive.
+-------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+===============================+=========+==========+
| Message ID | M | U |
+-------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+-------------------------------+---------+----------+
| Affected SOP Class UID | M | U(=) |
+-------------------------------+---------+----------+
| Priority | M | \- |
+-------------------------------+---------+----------+
| Move Destination | M | \- |
+-------------------------------+---------+----------+
| Identifier | M | U |
+-------------------------------+---------+----------+
| Status | \- | M |
+-------------------------------+---------+----------+
| Number of Remaining Sub-ops | \- | C |
+-------------------------------+---------+----------+
| Number of Completed Sub-ops | \- | C |
+-------------------------------+---------+----------+
| Number of Failed Sub-ops | \- | C |
+-------------------------------+---------+----------+
| Number of Warning Sub-ops | \- | C |
+-------------------------------+---------+----------+
| Offending Element | \- | C |
+-------------------------------+---------+----------+
| Error Comment | \- | C |
+-------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which
this response/confirmation applies.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Class
for storage. If included in the response/confirmation, it shall be
equal to the value in the request/indication
Priority : int
The priority of the C-STORE operation. It shall be one of the
following:
* 0: Medium
* 1: High
* 2: Low (Default)
MoveDestination : bytes or str
Specifies the DICOM AE Title of the destination DICOM AE to
which the C-STORE sub-operations are being performed.
Identifier : io.BytesIO
A DICOM dataset of attributes to be matched against the values of the
attributes in the instances of the composite objects known to the
performing DIMSE service-user.
Status : int
The error or success notification of the operation.
NumberOfRemainingSuboperations : int
The number of remaining C-STORE sub-operations to be invoked
by this C-MOVE operation. It may be included in any response and shall
be included if the status is Pending
NumberOfCompletedSuboperations : int
The number of C-STORE sub-operations that have completed
successfully. It may be included in any response and shall be included
if the status is Pending
NumberOfFailedSuboperations : int
The number of C-STORE sub-operations that have failed. It may
be included in any response and shall be included if the status is
Pending
NumberOfWarningSuboperations : int
The number of C-STORE operations that generated Warning
responses. It may be included in any response and shall be included if
the status is Pending
OffendingElement : list of int or None
An optional status related field containing a list of the
elements in which an error was detected.
ErrorComment : str or None
An optional status related field containing a text
description of the error detected. 64 characters maximum.
"""
STATUS_OPTIONAL_KEYWORDS = (
'ErrorComment', 'OffendingElement', 'NumberOfRemainingSuboperations',
'NumberOfCompletedSuboperations', 'NumberOfFailedSuboperations',
'NumberOfWarningSuboperations'
)
REQUEST_KEYWORDS = (
'MessageID', 'AffectedSOPClassUID', 'Priority', 'Identifier',
'MoveDestination'
)
def __init__(self):
# Variable names need to match the corresponding DICOM Element keywords
# in order for the DIMSE Message classes to be built correctly.
# Changes to the variable names can be made provided the DIMSEMessage()
# class' message_to_primitive() and primitive_to_message() methods
# are also changed
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.AffectedSOPClassUID = None
self.Priority = 0x02
self.MoveDestination = None
self.Identifier = None
self.Status = None
self.NumberOfRemainingSuboperations = None
self.NumberOfCompletedSuboperations = None
self.NumberOfFailedSuboperations = None
self.NumberOfWarningSuboperations = None
        # Optional Command Set elements used with specific Status values
# For Failure statuses 0xA900
self.OffendingElement = None
        # For Failure statuses 0xA801, 0xA701, 0xA702, 0x0122, 0xA900, 0xCxxx,
        # 0x0124
self.ErrorComment = None
@property
def Identifier(self):
"""Return the *Identifier* as :class:`io.BytesIO`."""
return self._dataset_variant
@Identifier.setter
def Identifier(self, value):
"""Set the *Identifier*.
Parameters
----------
io.BytesIO
The value to use for the *Identifier* parameter.
"""
self._dataset_variant = (value, 'Identifier')
@property
def MoveDestination(self):
"""Return the *Move Destination* as bytes."""
return self._move_destination
@MoveDestination.setter
def MoveDestination(self, value):
"""Set the *Move Destination*.
Parameters
----------
bytes or str
            The value to use for the *Move Destination* parameter. Cannot
            be an empty string; values longer than 16 characters are truncated.
"""
if isinstance(value, str):
value = codecs.encode(value, 'ascii')
if value is not None:
self._move_destination = validate_ae_title(
value, _config.USE_SHORT_DIMSE_AET
)
else:
self._move_destination = None
@property
def NumberOfCompletedSuboperations(self):
"""Return the *Number of Completed Suboperations* as :class:`int`."""
return self._NumberOfCompletedSuboperations
@NumberOfCompletedSuboperations.setter
def NumberOfCompletedSuboperations(self, value):
"""Set the *Number of Completed Suboperations*.
Parameters
----------
int
The value to use for the *Number of Completed Suboperations*
parameter.
"""
self._NumberOfCompletedSuboperations = value
@property
def NumberOfFailedSuboperations(self):
"""Return the *Number of Failed Suboperations* as :class:`int`."""
return self._NumberOfFailedSuboperations
@NumberOfFailedSuboperations.setter
def NumberOfFailedSuboperations(self, value):
"""Set the *Number of Failed Suboperations*.
Parameters
----------
int
The value to use for the *Number of Failed Suboperations*
parameter.
"""
self._NumberOfFailedSuboperations = value
@property
def NumberOfRemainingSuboperations(self):
"""Return the *Number of Remaining Suboperations* as :class:`int`."""
return self._NumberOfRemainingSuboperations
@NumberOfRemainingSuboperations.setter
def NumberOfRemainingSuboperations(self, value):
"""Set the *Number of Remaining Suboperations*.
Parameters
----------
int
The value to use for the *Number of Remaining Suboperations*
parameter.
"""
self._NumberOfRemainingSuboperations = value
@property
def NumberOfWarningSuboperations(self):
"""Return the *Number of Warning Suboperations* as :class:`int`."""
return self._NumberOfWarningSuboperations
@NumberOfWarningSuboperations.setter
def NumberOfWarningSuboperations(self, value):
"""Set the *Number of Warning Suboperations*.
Parameters
----------
int
The value to use for the *Number of Warning Suboperations*
parameter.
"""
self._NumberOfWarningSuboperations = value
@property
def Priority(self):
"""Return the *Priority* as :class:`int`."""
return self._Priority
@Priority.setter
def Priority(self, value):
"""Set the *Priority*.
Parameters
----------
int
The value to use for the *Priority* parameter.
"""
self._Priority = value
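# Illustrative sketch only: what the *Move Destination* handling above amounts
# to. ``validate_ae_title`` is defined elsewhere in the package and is not
# shown here, so this hypothetical helper is an assumption about its behaviour
# (ASCII, non-empty, at most 16 bytes), not a copy of it.
#
#     def _normalise_ae_title(value: bytes) -> bytes:
#         stripped = value.strip()
#         if not stripped:
#             raise ValueError('AE title must not be empty')
#         # Pad/trim to the 16 bytes allowed for a DICOM AE title
#         return stripped[:16].ljust(16, b' ')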
class C_ECHO(DIMSEPrimitive):
"""Represents a C-ECHO primitive.
+-------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+===============================+=========+==========+
| Message ID | M | U |
+-------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+-------------------------------+---------+----------+
| Affected SOP Class UID | M | U(=) |
+-------------------------------+---------+----------+
| Status | \- | M |
+-------------------------------+---------+----------+
| Error Comment | \- | C |
+-------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int or None
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int or None
The Message ID of the operation request/indication to which this
response/confirmation applies.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str or None
For the request/indication this specifies the SOP Class for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
Status : int or None
The error or success notification of the operation.
ErrorComment : str or None
An optional status related field containing a text description
of the error detected. 64 characters maximum.
"""
STATUS_OPTIONAL_KEYWORDS = ('ErrorComment', )
REQUEST_KEYWORDS = ('MessageID', 'AffectedSOPClassUID')
def __init__(self):
# Variable names need to match the corresponding DICOM Element keywords
# in order for the DIMSE Message classes to be built correctly.
# Changes to the variable names can be made provided the DIMSEMessage()
# class' message_to_primitive() and primitive_to_message() methods
# are also changed
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.AffectedSOPClassUID = None
self.Status = None
# (Optional) for Failure status 0x0122
self.ErrorComment = None
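# Illustrative sketch only: a minimal C-ECHO request primitive, using the
# standard Verification SOP Class UID.
#
#     req = C_ECHO()
#     req.MessageID = 1
#     req.AffectedSOPClassUID = '1.2.840.10008.1.1'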
class C_CANCEL(object):
"""Represents a C-CANCEL primitive.
+-------------------------------+---------+
| Parameter | Req/ind |
+===============================+=========+
| Message ID Being Responded To | M |
+-------------------------------+---------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
References
----------
* DICOM Standard, Part 7, :dcm:`Section 9.3.2.3<part07/sect_9.3.2.3.html>`
"""
def __init__(self):
"""Initialise the C_CANCEL"""
# Variable names need to match the corresponding DICOM Element keywords
# in order for the DIMSE Message classes to be built correctly.
# Changes to the variable names can be made provided the DIMSEMessage()
# class' message_to_primitive() and primitive_to_message() methods
# are also changed
self.MessageIDBeingRespondedTo = None
@property
def MessageIDBeingRespondedTo(self):
"""Return the *Message ID Being Responded To* as an :class:`int`."""
return self._message_id_being_responded_to
@MessageIDBeingRespondedTo.setter
def MessageIDBeingRespondedTo(self, value):
"""Set the *Message ID Being Responded To*.
Parameters
----------
int
The value to use for the *Message ID Being Responded To* parameter.
"""
if isinstance(value, int):
if 0 <= value < 2**16:
self._message_id_being_responded_to = value
else:
raise ValueError("Message ID Being Responded To must be "
"between 0 and 65535, inclusive")
elif value is None:
self._message_id_being_responded_to = value
else:
raise TypeError("Message ID Being Responded To must be an int")
# DIMSE-N Service Primitives
class N_EVENT_REPORT(DIMSEPrimitive):
"""Represents a N-EVENT-REPORT primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | M | U(=) |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | M | U(=) |
+------------------------------------------+---------+----------+
| Event Type ID | M | C(=) |
+------------------------------------------+---------+----------+
| Event Information | U | \- |
+------------------------------------------+---------+----------+
| Event Reply | \- | C |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Class for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Instance
for storage. If included in the response/confirmation, it shall be
equal to the value in the request/indication
EventTypeID : int
        The type of event being reported; its value depends on the Service
        Class specification. Shall be included if Event Reply is included.
EventInformation : io.BytesIO
Contains information the invoking DIMSE user is able to supply about
the event. An encoded DICOM dataset containing additional Service
Class specific information related to the operation.
EventReply : io.BytesIO
Contains the optional reply to the event report. An encoded DICOM
dataset containing additional Service Class specific information.
Status : int
The error or success notification of the operation.
"""
# Optional status element keywords other than 'Status'
STATUS_OPTIONAL_KEYWORDS = (
'AffectedSOPClassUID', 'AffectedSOPInstanceUID', 'EventTypeID',
'ErrorComment', 'ErrorID' # EventInformation
)
REQUEST_KEYWORDS = (
'MessageID', 'AffectedSOPClassUID', 'EventTypeID',
'AffectedSOPInstanceUID'
)
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.EventTypeID = None
self.EventInformation = None
self.EventReply = None
self.Status = None
# Optional status elements
self.ErrorComment = None
self.ErrorID = None
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def EventInformation(self):
"""Return the *Event Information* as :class:`io.BytesIO`."""
return self._dataset_variant
@EventInformation.setter
def EventInformation(self, value):
"""Set the *Event Information*.
Parameters
----------
io.BytesIO
The value to use for the *Event Information* parameter.
"""
self._dataset_variant = (value, 'EventInformation')
@property
def EventReply(self):
"""Return the *Event Reply* as :class:`io.BytesIO`."""
return self._dataset_variant
@EventReply.setter
def EventReply(self, value):
"""Set the *Event Reply*.
Parameters
----------
io.BytesIO
The value to use for the *Event Reply* parameter.
"""
self._dataset_variant = (value, 'EventReply')
@property
def EventTypeID(self):
"""Return the *Event Type ID* as :class:`int`."""
return self._event_type_id
@EventTypeID.setter
def EventTypeID(self, value):
"""Set the *Event Type ID*.
Parameters
----------
int
The value to use for the *Event Type ID* parameter.
"""
if isinstance(value, int) or value is None:
self._event_type_id = value
else:
raise TypeError("'N_EVENT_REPORT.EventTypeID' must be an int.")
class N_GET(DIMSEPrimitive):
"""Represents an N-GET primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Requested SOP Class UID | M | \- |
+------------------------------------------+---------+----------+
| Requested SOP Instance UID | M | \- |
+------------------------------------------+---------+----------+
| Attribute Identifier List | U | \- |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | \- | U |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | \- | U |
+------------------------------------------+---------+----------+
| Attribute List | \- | C |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
RequestedSOPClassUID : pydicom.uid.UID, bytes or str
The UID of the SOP Class for which attribute values are to be
retrieved.
RequestedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance for which attribute values are to be retrieved.
AttributeIdentifierList : list of pydicom.tag.Tag
A list of attribute tags to be sent to the peer.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
The SOP Class UID of the SOP Instance for which the attributes were
retrieved.
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance UID of the SOP Instance for which the attributes were
retrieved.
AttributeList : pydicom.dataset.Dataset
A DICOM dataset containing elements matching those supplied in
Attribute Identifier List.
Status : int
The error or success notification of the operation.
"""
STATUS_OPTIONAL_KEYWORDS = ('ErrorComment', 'ErrorID', )
REQUEST_KEYWORDS = (
'MessageID', 'RequestedSOPClassUID', 'RequestedSOPInstanceUID'
)
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.RequestedSOPClassUID = None
self.RequestedSOPInstanceUID = None
self.AttributeIdentifierList = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.AttributeList = None
self.Status = None
# (Optional) elements for specific status values
self.ErrorComment = None
self.ErrorID = None
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def AttributeIdentifierList(self):
"""Return the *Attribute Identifier List* as a :class:`list` of
:class:`~pydicom.tag.BaseTag`.
"""
return self._attribute_identifier_list
@AttributeIdentifierList.setter
def AttributeIdentifierList(self, value):
"""Set the *Attribute Identifier List*.
Parameters
----------
list of pydicom.tag.BaseTag
The value to use for the *Attribute Identifier List* parameter.
A list of pydicom :class:`pydicom.tag.BaseTag` instances or any
values acceptable for creating them.
"""
if value is None:
self._attribute_identifier_list = None
return
# Singleton tags get put in a list
if not isinstance(value, (list, MutableSequence)):
value = [value]
# Empty list -> None
if not value:
self._attribute_identifier_list = None
return
try:
# Convert each item in list to pydicom Tag
self._attribute_identifier_list = [Tag(tag) for tag in value]
except (TypeError, ValueError):
raise ValueError(
"Attribute Identifier List must be a list of pydicom Tags"
)
@property
def AttributeList(self):
"""Return the *Attribute List* as :class:`io.BytesIO`."""
return self._dataset_variant
@AttributeList.setter
def AttributeList(self, value):
"""Set the *Attribute List*.
Parameters
----------
io.BytesIO
The value to use for the *Attribute List* parameter.
"""
self._dataset_variant = (value, 'AttributeList')
@property
def RequestedSOPClassUID(self):
"""Return the *Requested SOP Class UID* as :class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPClassUID
@RequestedSOPClassUID.setter
def RequestedSOPClassUID(self, value):
"""Set the *Requested SOP Class UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Class UID* parameter.
"""
self._RequestedSOPClassUID = value
@property
def RequestedSOPInstanceUID(self):
"""Return the *Requested SOP Instance UID* as
:class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPInstanceUID
@RequestedSOPInstanceUID.setter
def RequestedSOPInstanceUID(self, value):
"""Set the *Requested SOP Instance UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Instance UID* parameter.
"""
self._RequestedSOPInstanceUID = value
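# Illustrative sketch only: the AttributeIdentifierList setter above accepts a
# single tag or a list of anything pydicom's Tag() can convert, such as ints
# or (group, element) tuples. The UIDs here are hypothetical placeholders.
#
#     req = N_GET()
#     req.MessageID = 1
#     req.RequestedSOPClassUID = '1.2.3'
#     req.RequestedSOPInstanceUID = '1.2.3.4'
#     req.AttributeIdentifierList = [0x00100010, (0x0010, 0x0020)]
#     # -> [Tag(0x0010, 0x0010), Tag(0x0010, 0x0020)]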
class N_SET(DIMSEPrimitive):
"""Represents a N-SET primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Requested SOP Class UID | M | \- |
+------------------------------------------+---------+----------+
| Requested SOP Instance UID | M | \- |
+------------------------------------------+---------+----------+
| Modification List | M | \- |
+------------------------------------------+---------+----------+
| Attribute List | \- | U |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | \- | U |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | \- | U |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
RequestedSOPClassUID : pydicom.uid.UID, bytes or str
The UID of the SOP Class for which attribute values are to be
modified.
RequestedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance for which attribute values are to be modified.
ModificationList : io.BytesIO
A DICOM dataset containing the attributes and values that are to be
used to modify the SOP Instance.
AttributeList : io.BytesIO
A DICOM dataset containing the attributes and values that were used to
modify the SOP Instance.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
The SOP Class UID of the modified SOP Instance.
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance UID of the modified SOP Instance.
Status : int
The error or success notification of the operation.
"""
STATUS_OPTIONAL_KEYWORDS = (
'ErrorComment', 'ErrorID', 'AttributeIdentifierList'
)
REQUEST_KEYWORDS = (
'MessageID', 'RequestedSOPClassUID', 'RequestedSOPInstanceUID',
'ModificationList'
)
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.RequestedSOPClassUID = None
self.RequestedSOPInstanceUID = None
self.ModificationList = None
self.AttributeList = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.Status = None
# Optional
self.ErrorComment = None
self.ErrorID = None
self.AttributeIdentifierList = None
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def AttributeList(self):
"""Return the *Attribute List* as :class:`io.BytesIO`."""
return self._dataset_variant
@AttributeList.setter
def AttributeList(self, value):
"""Set the *Attribute List*.
Parameters
----------
io.BytesIO
The value to use for the *Attribute List* parameter.
"""
self._dataset_variant = (value, 'AttributeList')
@property
def ModificationList(self):
"""Return the *Modification List* as :class:`io.BytesIO`."""
return self._dataset_variant
@ModificationList.setter
def ModificationList(self, value):
"""Set the *Modification List*.
Parameters
----------
io.BytesIO
The value to use for the *Modification List* parameter.
"""
self._dataset_variant = (value, 'ModificationList')
@property
def RequestedSOPClassUID(self):
"""Return the *Requested SOP Class UID* as :class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPClassUID
@RequestedSOPClassUID.setter
def RequestedSOPClassUID(self, value):
"""Set the *Requested SOP Class UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Class UID* parameter.
"""
self._RequestedSOPClassUID = value
@property
def RequestedSOPInstanceUID(self):
"""Return the *Requested SOP Instance UID* as
:class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPInstanceUID
@RequestedSOPInstanceUID.setter
def RequestedSOPInstanceUID(self, value):
"""Set the *Requested SOP Instance UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Instance UID* parameter.
"""
self._RequestedSOPInstanceUID = value
class N_ACTION(DIMSEPrimitive):
"""Represents a N-ACTION primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Requested SOP Class UID | M | \- |
+------------------------------------------+---------+----------+
| Requested SOP Instance UID | M | \- |
+------------------------------------------+---------+----------+
| Action Type ID | M | C(=) |
+------------------------------------------+---------+----------+
| Action Information | U | \- |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | \- | U |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | \- | U |
+------------------------------------------+---------+----------+
| Action Reply | \- | C |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
RequestedSOPClassUID : pydicom.uid.UID, bytes or str
The SOP Class for which the action is to be performed.
RequestedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance for which the action is to be performed.
ActionTypeID : int
The type of action that is to be performed.
ActionInformation : io.BytesIO
Extra information required to perform the action.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Class for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Instance for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
ActionReply : io.BytesIO
The reply to the action.
Status : int
The error or success notification of the operation.
"""
STATUS_OPTIONAL_KEYWORDS = (
'ErrorComment', 'ErrorID', 'AttributeIdentifierList'
)
REQUEST_KEYWORDS = (
'MessageID', 'RequestedSOPClassUID', 'RequestedSOPInstanceUID',
'ActionTypeID'
)
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.RequestedSOPClassUID = None
self.RequestedSOPInstanceUID = None
self.ActionTypeID = None
self.ActionInformation = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.ActionReply = None
self.Status = None
# Optional status elements
self.ErrorComment = None
self.ErrorID = None
@property
def ActionInformation(self):
"""Return the *Action Information* as :class:`io.BytesIO`."""
return self._dataset_variant
@ActionInformation.setter
def ActionInformation(self, value):
"""Set the *Action Information*.
Parameters
----------
io.BytesIO
The value to use for the *Action Information* parameter.
"""
self._dataset_variant = (value, 'ActionInformation')
@property
def ActionReply(self):
"""Return the *Action Reply* as :class:`io.BytesIO`."""
return self._dataset_variant
@ActionReply.setter
def ActionReply(self, value):
"""Set the *Action Reply*.
Parameters
----------
io.BytesIO
The value to use for the *Action Reply* parameter.
"""
self._dataset_variant = (value, 'ActionReply')
@property
def ActionTypeID(self):
"""Return the *Action Type ID* as :class:`int`."""
return self._action_type_id
@ActionTypeID.setter
def ActionTypeID(self, value):
"""Set the *Action Type ID*.
Parameters
----------
int
The value to use for the *Action Type ID* parameter.
"""
if isinstance(value, int) or value is None:
self._action_type_id = value
else:
raise TypeError("'N_ACTION.ActionTypeID' must be an int.")
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def RequestedSOPClassUID(self):
"""Return the *Requested SOP Class UID* as :class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPClassUID
@RequestedSOPClassUID.setter
def RequestedSOPClassUID(self, value):
"""Set the *Requested SOP Class UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Class UID* parameter.
"""
self._RequestedSOPClassUID = value
@property
def RequestedSOPInstanceUID(self):
"""Return the *Requested SOP Instance UID* as
:class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPInstanceUID
@RequestedSOPInstanceUID.setter
def RequestedSOPInstanceUID(self, value):
"""Set the *Requested SOP Instance UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Instance UID* parameter.
"""
self._RequestedSOPInstanceUID = value
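# Illustrative sketch only: an N-ACTION request. ActionTypeID must be an int,
# as enforced by the setter above; the UIDs are hypothetical placeholders.
#
#     req = N_ACTION()
#     req.MessageID = 1
#     req.RequestedSOPClassUID = '1.2.3'
#     req.RequestedSOPInstanceUID = '1.2.3.4'
#     req.ActionTypeID = 1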
class N_CREATE(DIMSEPrimitive):
"""Represents a N-CREATE primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | M | U(=) |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | U | C |
+------------------------------------------+---------+----------+
| Attribute List | U | U |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Class for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Instance for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
AttributeList : io.BytesIO
A set of attributes and values that are to be assigned to the new
SOP Instance.
Status : int
        The error or success notification of the operation.
"""
STATUS_OPTIONAL_KEYWORDS = ('ErrorComment', 'ErrorID', )
REQUEST_KEYWORDS = ('MessageID', 'AffectedSOPClassUID')
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.AttributeList = None
self.Status = None
# Optional elements
self.ErrorComment = None
self.ErrorID = None
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def AttributeList(self):
"""Return the *Attribute List* as :class:`io.BytesIO`."""
return self._dataset_variant
@AttributeList.setter
def AttributeList(self, value):
"""Set the *Attribute List*.
Parameters
----------
io.BytesIO
The value to use for the *Attribute List* parameter.
"""
self._dataset_variant = (value, 'AttributeList')
class N_DELETE(DIMSEPrimitive):
"""Represents a N-DELETE primitive.
+------------------------------------------+---------+----------+
| Parameter | Req/ind | Rsp/conf |
+==========================================+=========+==========+
| Message ID | M | \- |
+------------------------------------------+---------+----------+
| Message ID Being Responded To | \- | M |
+------------------------------------------+---------+----------+
| Requested SOP Class UID | M | \- |
+------------------------------------------+---------+----------+
| Requested SOP Instance UID | M | \- |
+------------------------------------------+---------+----------+
| Affected SOP Class UID | \- | U |
+------------------------------------------+---------+----------+
| Affected SOP Instance UID | \- | U |
+------------------------------------------+---------+----------+
| Status | \- | M |
+------------------------------------------+---------+----------+
| (=) - The value of the parameter is equal to the value of the parameter
in the column to the left
| C - The parameter is conditional.
| M - Mandatory
| MF - Mandatory with a fixed value
| U - The use of this parameter is a DIMSE service user option
| UF - User option with a fixed value
Attributes
----------
MessageID : int
Identifies the operation and is used to distinguish this
operation from other notifications or operations that may be in
progress. No two identical values for the Message ID shall be used for
outstanding operations.
MessageIDBeingRespondedTo : int
The Message ID of the operation request/indication to which this
response/confirmation applies.
RequestedSOPClassUID : pydicom.uid.UID, bytes or str
The UID of the SOP Class to be deleted.
RequestedSOPInstanceUID : pydicom.uid.UID, bytes or str
The SOP Instance to be deleted.
AffectedSOPClassUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Class for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
AffectedSOPInstanceUID : pydicom.uid.UID, bytes or str
For the request/indication this specifies the SOP Instance for
storage. If included in the response/confirmation, it shall be equal
to the value in the request/indication
Status : int
The error or success notification of the operation.
"""
STATUS_OPTIONAL_KEYWORDS = ('ErrorComment', 'ErrorID', )
REQUEST_KEYWORDS = (
'MessageID', 'RequestedSOPClassUID', 'RequestedSOPInstanceUID'
)
def __init__(self):
self.MessageID = None
self.MessageIDBeingRespondedTo = None
self.RequestedSOPClassUID = None
self.RequestedSOPInstanceUID = None
self.AffectedSOPClassUID = None
self.AffectedSOPInstanceUID = None
self.Status = None
# Optional
self.ErrorComment = None
self.ErrorID = None
@property
def AffectedSOPInstanceUID(self):
"""Return the *Affected SOP Instance UID* as :class:`~pydicom.uid.UID`.
"""
return self._AffectedSOPInstanceUID
@AffectedSOPInstanceUID.setter
def AffectedSOPInstanceUID(self, value):
"""Set the *Affected SOP Instance UID*.
Parameters
----------
value : pydicom.uid.UID, bytes or str
            The value to use for the *Affected SOP Instance UID* parameter.
"""
self._AffectedSOPInstanceUID = value
@property
def RequestedSOPClassUID(self):
"""Return the *Requested SOP Class UID* as :class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPClassUID
@RequestedSOPClassUID.setter
def RequestedSOPClassUID(self, value):
"""Set the *Requested SOP Class UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Class UID* parameter.
"""
self._RequestedSOPClassUID = value
@property
def RequestedSOPInstanceUID(self):
"""Return the *Requested SOP Instance UID* as
:class:`~pydicom.uid.UID`.
"""
return self._RequestedSOPInstanceUID
@RequestedSOPInstanceUID.setter
def RequestedSOPInstanceUID(self, value):
"""Set the *Requested SOP Instance UID*.
Parameters
----------
pydicom.uid.UID, bytes or str
The value to use for the *Requested SOP Instance UID* parameter.
"""
self._RequestedSOPInstanceUID = value
| 37.8671 | 79 | 0.546226 | 8,408 | 87,473 | 5.623097 | 0.044481 | 0.016244 | 0.018148 | 0.018719 | 0.85869 | 0.842213 | 0.816642 | 0.803105 | 0.785 | 0.772267 | 0 | 0.003822 | 0.300138 | 87,473 | 2,309 | 80 | 37.883499 | 0.768471 | 0.570565 | 0 | 0.693467 | 0 | 0 | 0.108048 | 0.017035 | 0 | 0 | 0.000508 | 0 | 0 | 1 | 0.167085 | false | 0.01005 | 0.012563 | 0 | 0.310302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: tests/test_edit.py
# Repo: rosalindfranklininstitute/maptools (license: Apache-2.0)
def test_edit():
pass
# File: iriusrisk-python-client-lib/iriusrisk_python_client_lib/models/__init__.py
# Repo: iriusrisk/iriusrisk-python-client-lib (license: Apache-2.0)
# coding: utf-8
# flake8: noqa
"""
IriusRisk API
Products API # noqa: E501
OpenAPI spec version: 1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import models into model package
from iriusrisk_python_client_lib.models.architecture_diagram import ArchitectureDiagram
from iriusrisk_python_client_lib.models.assign_groups_product_request_body import AssignGroupsProductRequestBody
from iriusrisk_python_client_lib.models.assign_user_group_request_body import AssignUserGroupRequestBody
from iriusrisk_python_client_lib.models.assign_users_product_request_body import AssignUsersProductRequestBody
from iriusrisk_python_client_lib.models.associate_countermeasure_threat_library_request_body import AssociateCountermeasureThreatLibraryRequestBody
from iriusrisk_python_client_lib.models.associate_countermeasure_weakness_library_request_body import AssociateCountermeasureWeaknessLibraryRequestBody
from iriusrisk_python_client_lib.models.associate_weakness_threat_library_request_body import AssociateWeaknessThreatLibraryRequestBody
from iriusrisk_python_client_lib.models.category_component import CategoryComponent
from iriusrisk_python_client_lib.models.component import Component
from iriusrisk_python_client_lib.models.component_asset import ComponentAsset
from iriusrisk_python_client_lib.models.component_control import ComponentControl
from iriusrisk_python_client_lib.models.component_definition import ComponentDefinition
from iriusrisk_python_client_lib.models.component_definition_risk_patterns import ComponentDefinitionRiskPatterns
from iriusrisk_python_client_lib.models.component_trust_zone import ComponentTrustZone
from iriusrisk_python_client_lib.models.component_use_case import ComponentUseCase
from iriusrisk_python_client_lib.models.component_use_case_short import ComponentUseCaseShort
from iriusrisk_python_client_lib.models.component_use_case_threat_short import ComponentUseCaseThreatShort
from iriusrisk_python_client_lib.models.component_weakness import ComponentWeakness
from iriusrisk_python_client_lib.models.control_command import ControlCommand
from iriusrisk_python_client_lib.models.control_command_standards import ControlCommandStandards
from iriusrisk_python_client_lib.models.create_group_request_body import CreateGroupRequestBody
from iriusrisk_python_client_lib.models.create_library_request_body import CreateLibraryRequestBody
from iriusrisk_python_client_lib.models.create_product import CreateProduct
from iriusrisk_python_client_lib.models.create_risk_pattern_request_body import CreateRiskPatternRequestBody
from iriusrisk_python_client_lib.models.create_role_request_body import CreateRoleRequestBody
from iriusrisk_python_client_lib.models.create_threat_library_request_body import CreateThreatLibraryRequestBody
from iriusrisk_python_client_lib.models.create_use_case_library_request_body import CreateUseCaseLibraryRequestBody
from iriusrisk_python_client_lib.models.create_user_request_body import CreateUserRequestBody
from iriusrisk_python_client_lib.models.create_weakness_library_request_body import CreateWeaknessLibraryRequestBody
from iriusrisk_python_client_lib.models.data_flow import DataFlow
from iriusrisk_python_client_lib.models.data_flow_assets import DataFlowAssets
from iriusrisk_python_client_lib.models.error import Error
from iriusrisk_python_client_lib.models.group import Group
from iriusrisk_python_client_lib.models.implementation import Implementation
from iriusrisk_python_client_lib.models.inline_response200 import InlineResponse200
from iriusrisk_python_client_lib.models.inline_response2001 import InlineResponse2001
from iriusrisk_python_client_lib.models.inline_response201 import InlineResponse201
from iriusrisk_python_client_lib.models.inline_response2011 import InlineResponse2011
from iriusrisk_python_client_lib.models.librarieslibrary_refriskpatternsrisk_pattern_refusecasesuse_case_refthreats_risk_rating import LibrarieslibraryRefriskpatternsriskPatternRefusecasesuseCaseRefthreatsRiskRating
from iriusrisk_python_client_lib.models.library import Library
from iriusrisk_python_client_lib.models.library_control import LibraryControl
from iriusrisk_python_client_lib.models.library_threat import LibraryThreat
from iriusrisk_python_client_lib.models.library_use_case import LibraryUseCase
from iriusrisk_python_client_lib.models.library_weakness import LibraryWeakness
from iriusrisk_python_client_lib.models.message import Message
from iriusrisk_python_client_lib.models.product import Product
from iriusrisk_python_client_lib.models.product_access_type import ProductAccessType
from iriusrisk_python_client_lib.models.product_asset import ProductAsset
from iriusrisk_python_client_lib.models.product_asset_classification import ProductAssetClassification
from iriusrisk_python_client_lib.models.product_setting import ProductSetting
from iriusrisk_python_client_lib.models.product_short import ProductShort
from iriusrisk_python_client_lib.models.product_short_groups import ProductShortGroups
from iriusrisk_python_client_lib.models.product_short_users import ProductShortUsers
from iriusrisk_python_client_lib.models.product_trust_zone import ProductTrustZone
from iriusrisk_python_client_lib.models.productsrefcomponentscomponent_reftestscwe_control import ProductsrefcomponentscomponentReftestscweControl
from iriusrisk_python_client_lib.models.productsrefcomponentscomponent_reftestscwe_source import ProductsrefcomponentscomponentReftestscweSource
from iriusrisk_python_client_lib.models.question import Question
from iriusrisk_python_client_lib.models.reference import Reference
from iriusrisk_python_client_lib.models.risk_count import RiskCount
from iriusrisk_python_client_lib.models.risk_pattern import RiskPattern
from iriusrisk_python_client_lib.models.risk_rating import RiskRating
from iriusrisk_python_client_lib.models.risk_summary import RiskSummary
from iriusrisk_python_client_lib.models.standard import Standard
from iriusrisk_python_client_lib.models.supported_standard import SupportedStandard
from iriusrisk_python_client_lib.models.test import Test
from iriusrisk_python_client_lib.models.test_command import TestCommand
from iriusrisk_python_client_lib.models.test_source import TestSource
from iriusrisk_python_client_lib.models.threat import Threat
from iriusrisk_python_client_lib.models.threat_control import ThreatControl
from iriusrisk_python_client_lib.models.threat_name_and_ref import ThreatNameAndRef
from iriusrisk_python_client_lib.models.threat_short import ThreatShort
from iriusrisk_python_client_lib.models.threat_weakness import ThreatWeakness
from iriusrisk_python_client_lib.models.udt import Udt
from iriusrisk_python_client_lib.models.unassign_groups_product_request_body import UnassignGroupsProductRequestBody
from iriusrisk_python_client_lib.models.unassign_users_product_request_body import UnassignUsersProductRequestBody
from iriusrisk_python_client_lib.models.unassing_users_group_request_body import UnassingUsersGroupRequestBody
from iriusrisk_python_client_lib.models.update_group_request_body import UpdateGroupRequestBody
from iriusrisk_python_client_lib.models.update_product import UpdateProduct
from iriusrisk_python_client_lib.models.update_status_countermeasure_request_body import UpdateStatusCountermeasureRequestBody
from iriusrisk_python_client_lib.models.update_status_test_request_body import UpdateStatusTestRequestBody
from iriusrisk_python_client_lib.models.user import User
from iriusrisk_python_client_lib.models.user_detailed import UserDetailed
from iriusrisk_python_client_lib.models.weakness_name_and_ref import WeaknessNameAndRef
| 76.19802 | 215 | 0.924246 | 939 | 7,696 | 7.13738 | 0.183174 | 0.160997 | 0.235303 | 0.309609 | 0.55849 | 0.517905 | 0.446882 | 0.152492 | 0.022381 | 0 | 0 | 0.004655 | 0.050936 | 7,696 | 100 | 216 | 76.96 | 0.912924 | 0.025078 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: main.py
# Repo: mcclurec/tvnamer (license: Unlicense)
import tvnamer.main
if __name__ == '__main__':
    tvnamer.main.main()
# File: spektral/geometric/stat.py
# Repo: dbusbridge/spektral (license: MIT)
import numpy as np
from spektral.geometric.manifold import spherical_clip, exp_map
# Uniform ######################################################################
def spherical_uniform(size, dim=3, r=1.):
"""
Samples points from a uniform distribution on a spherical manifold.
Uniform sampling on the sphere can be achieved by sampling from a Gaussian
in the ambient space of the CCM, and then projecting the samples onto the
sphere.
:param size: number of points to sample;
:param dim: dimension of the ambient space;
:param r: positive float, the radius of the CCM;
:return: np.array of shape (size, dim).
"""
samples = np.random.normal(0, 1, (size, dim))
samples = spherical_clip(samples, r=r)
return samples
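# Usage sketch, assuming spherical_clip projects each row onto the sphere of
# radius r:
#     pts = spherical_uniform(100, dim=3, r=1.)
#     pts.shape   # -> (100, 3); each row lies at distance r from the origin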
def hyperbolic_uniform(size, dim=3, r=-1., low=-1., high=1., projection='upper'):
"""
Samples points from a uniform distribution on a hyperbolic manifold. Uniform
sampling on a hyperbolic CCM can be achieved by sampling from a uniform
distribution in the ambient space of the CCM, and then projecting the
samples onto the CCM.
:param size: number of points to sample;
:param dim: dimension of the ambient space;
:param r: negative float, the radius of the CCM;
:param low: lower bound of the uniform distribution from which to sample;
:param high: upper bound of the uniform distribution from which to sample;
:param projection: 'upper', 'lower', or 'both'. Whether to project points
always on the upper or lower branch of the hyperboloid, or on both based
on the sign of the last coordinate.
:return: np.array of shape (size, dim).
"""
samples = np.random.uniform(low, high, (size, dim))
if projection == 'both':
sign = np.sign(samples[..., -1:])
elif projection == 'upper':
sign = 1
elif projection == 'lower':
sign = -1
else:
raise NotImplementedError('Possible projection modes: \'both\', '
'\'upper\', \'lower\'.')
samples[..., -1:] = sign * np.sqrt((samples[..., :-1] ** 2).sum(-1, keepdims=True) + r ** 2)
return samples
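# Usage sketch: the reconstructed last coordinate satisfies the hyperboloid
# constraint x_d**2 - sum(x_i**2) == r**2 by construction (up to floating
# point), e.g.
#     pts = hyperbolic_uniform(10, dim=3, r=-1., projection='upper')
#     pts[:, -1] ** 2 - (pts[:, :-1] ** 2).sum(-1)   # -> r ** 2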
def _ccm_uniform(size, dim=3, r=0., low=-1., high=1., projection='upper'):
"""
Samples points from a uniform distribution on a constant-curvature manifold.
If `r=0`, then points are sampled from a uniform distribution in the ambient
space.
:param size: number of points to sample;
:param dim: dimension of the ambient space;
:param r: float, the radius of the CCM;
:param low: lower bound of the uniform distribution from which to sample;
:param high: upper bound of the uniform distribution from which to sample;
:param projection: 'upper', 'lower', or 'both'. Whether to project points
always on the upper or lower branch of the hyperboloid, or on both based
on the sign of the last coordinate.
:return: np.array of shape (size, dim).
"""
if r < 0.:
return hyperbolic_uniform(size, dim=dim, r=r, low=low, high=high,
projection=projection)
elif r > 0.:
return spherical_uniform(size, dim=dim, r=r)
else:
return np.random.uniform(low, high, (size, dim))
def ccm_uniform(size, dim=3, r=0., low=-1., high=1., projection='upper'):
"""
Samples points from a uniform distribution on a constant-curvature manifold.
If `r=0`, then points are sampled from a uniform distribution in the ambient
space.
If a list of radii is passed instead of a single scalar, then the sampling
is repeated for each value in the list and the results are concatenated
along the last axis (e.g., see [Grattarola et al. (2018)](https://arxiv.org/abs/1805.06299)).
:param size: number of points to sample;
:param dim: dimension of the ambient space;
:param r: floats or list of floats, radii of the CCMs;
:param low: lower bound of the uniform distribution from which to sample;
:param high: upper bound of the uniform distribution from which to sample;
:param projection: 'upper', 'lower', or 'both'. Whether to project points
always on the upper or lower branch of the hyperboloid, or on both based
on the sign of the last coordinate.
:return: if `r` is a scalar, np.array of shape (size, dim). If `r` is a
list, np.array of shape (size, len(r) * dim).
"""
    if isinstance(r, (int, float)):
        r = [r]
    elif not isinstance(r, (list, tuple)):
        raise TypeError('Radius must be either a single value, a list '
                        'of values, or a tuple.')
to_ret = []
for r_ in r:
to_ret.append(_ccm_uniform(size, dim=dim, r=r_, low=low, high=high,
projection=projection))
return np.concatenate(to_ret, -1)
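# Usage sketch (hypothetical radii): a list of radii concatenates one
# ambient-space block of `dim` coordinates per CCM along the last axis:
#     pts = ccm_uniform(50, dim=3, r=[1., -1.])   # -> shape (50, 6)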
# Normal #######################################################################
def spherical_normal(size, tangent_point, r, dim=3, loc=0., scale=1.):
"""
Samples points from a normal distribution on a spherical manifold.
Normal sampling on the sphere works by sampling from a Gaussian on the
tangent plane, and then projecting the sampled points onto the sphere using
the Riemannian exponential map.
:param size: number of points to sample;
:param tangent_point: np.array, origin of the tangent plane on the CCM
(extrinsic coordinates);
:param dim: dimension of the ambient space;
:param r: positive float, the radius of the CCM;
:param loc: mean of the Gaussian on the tangent plane;
:param scale: standard deviation of the Gaussian on the tangent plane;
:return: np.array of shape (size, dim).
"""
samples = np.random.normal(loc=loc, scale=scale, size=(size, dim - 1))
samples = exp_map(samples, r, tangent_point)
return samples
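# Usage sketch, assuming exp_map(points, r, tangent_point) lifts (dim - 1)
# tangent-plane coordinates onto the sphere of radius r:
#     tp = np.array([0., 0., 1.])
#     pts = spherical_normal(100, tp, 1., dim=3, scale=0.1)   # -> shape (100, 3)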
def hyperbolic_normal(size, tangent_point, r, dim=3, loc=0., scale=1.):
"""
Samples points from a normal distribution on a hyperbolic manifold.
Normal sampling on a hyperbolic CCM works by sampling from a Gaussian on the
tangent plane, and then projecting the sampled points onto the CCM using
the Riemannian exponential map.
:param size: number of points to sample;
:param tangent_point: np.array, origin of the tangent plane on the CCM
(extrinsic coordinates);
    :param r: negative float, the radius of the CCM;
:param dim: dimension of the ambient space;
:param loc: mean of the Gaussian on the tangent plane;
:param scale: standard deviation of the Gaussian on the tangent plane;
:return: np.array of shape (size, dim).
"""
samples = np.random.normal(loc=loc, scale=scale, size=(size, dim - 1))
return exp_map(samples, r, tangent_point)
def _ccm_normal(size, dim=3, r=0., tangent_point=None, loc=0., scale=1.):
"""
Samples points from a Gaussian distribution on a constant-curvature manifold.
If `r=0`, then points are sampled from a Gaussian distribution in the
ambient space.
:param size: number of points to sample;
:param tangent_point: np.array, origin of the tangent plane on the CCM
(extrinsic coordinates); if 'None', defaults to `[0., ..., 0., r]`.
:param r: float, the radius of the CCM;
:param dim: dimension of the ambient space;
:param loc: mean of the Gaussian on the tangent plane;
:param scale: standard deviation of the Gaussian on the tangent plane;
:return: np.array of shape (size, dim).
"""
if tangent_point is None:
tangent_point = np.zeros((dim, ))
tangent_point[-1] = np.abs(r)
if r < 0.:
return hyperbolic_normal(size, tangent_point, r, dim=dim, loc=loc, scale=scale)
elif r > 0.:
return spherical_normal(size, tangent_point, r, dim=dim, loc=loc, scale=scale)
else:
return np.random.normal(loc, scale, (size, dim))
def ccm_normal(size, dim=3, r=0., tangent_point=None, loc=0., scale=1.):
"""
Samples points from a Gaussian distribution on a constant-curvature manifold.
If `r=0`, then points are sampled from a Gaussian distribution in the
ambient space.
If a list of radii is passed instead of a single scalar, then the sampling
is repeated for each value in the list and the results are concatenated
along the last axis (e.g., see [Grattarola et al. (2018)](https://arxiv.org/abs/1805.06299)).
:param size: number of points to sample;
:param tangent_point: np.array, origin of the tangent plane on the CCM
(extrinsic coordinates); if 'None', defaults to `[0., ..., 0., r]`.
:param r: floats or list of floats, radii of the CCMs;
:param dim: dimension of the ambient space;
:param loc: mean of the Gaussian on the tangent plane;
:param scale: standard deviation of the Gaussian on the tangent plane;
:return: if `r` is a scalar, np.array of shape (size, dim). If `r` is a
list, np.array of shape (size, len(r) * dim).
"""
    if isinstance(r, (int, float)):
        r = [r]
    elif not isinstance(r, (list, tuple)):
        raise TypeError('Radius must be either a single value, a list '
                        'of values, or a tuple.')
if tangent_point is None:
tangent_point = [None] * len(r)
elif isinstance(tangent_point, np.ndarray):
tangent_point = [tangent_point]
elif isinstance(tangent_point, list) or isinstance(tangent_point, tuple):
pass
else:
        raise TypeError('tangent_point must be either a single point or a '
                        'list of points.')
if len(r) != len(tangent_point):
raise ValueError('r and tangent_point must have the same length')
to_ret = []
for r_, tp_ in zip(r, tangent_point):
to_ret.append(_ccm_normal(size, dim=dim, r=r_, tangent_point=tp_,
loc=loc, scale=scale))
return np.concatenate(to_ret, -1)
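# Usage sketch (hypothetical values): tangent points default to
# [0., ..., 0., abs(r)] for each CCM, and a list of radii concatenates
# blocks exactly as in ccm_uniform:
#     pts = ccm_normal(50, dim=3, r=[1., -1.], scale=0.1)   # -> shape (50, 6)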
# Generic ######################################################################
def get_ccm_distribution(name):
"""
:param name: 'uniform' or 'normal', name of the distribution.
:return: the callable function for sampling on a generic CCM;
"""
if name == 'uniform':
return ccm_uniform
elif name == 'normal':
return ccm_normal
else:
raise ValueError('Possible distributions: \'uniform\', \'normal\'')
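# Usage sketch: dispatch by distribution name, then sample as usual:
#     sampler = get_ccm_distribution('normal')
#     pts = sampler(10, dim=3, r=1.)   # same as ccm_normal(10, dim=3, r=1.)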
| 44.630901 | 97 | 0.648139 | 1,517 | 10,399 | 4.399473 | 0.108108 | 0.032215 | 0.031465 | 0.029967 | 0.826491 | 0.793977 | 0.770752 | 0.740635 | 0.730297 | 0.730297 | 0 | 0.010194 | 0.235888 | 10,399 | 232 | 98 | 44.823276 | 0.829726 | 0.553707 | 0 | 0.386364 | 0 | 0 | 0.088453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102273 | false | 0.011364 | 0.022727 | 0 | 0.284091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9aaec905984a37e2cebf99cd05ccb688d2a47669 | 303 | py | Python | plugins/cybereason/icon_cybereason/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | null | null | null | plugins/cybereason/icon_cybereason/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | null | null | null | plugins/cybereason/icon_cybereason/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | null | null | null | # GENERATED BY KOMAND SDK - DO NOT EDIT
from .delete_registry_key.action import DeleteRegistryKey
from .isolate_machine.action import IsolateMachine
from .quarantine_file.action import QuarantineFile
from .remediate_items.action import RemediateItems
from .search_for_files.action import SearchForFiles
| 43.285714 | 57 | 0.864686 | 39 | 303 | 6.538462 | 0.692308 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09571 | 303 | 6 | 58 | 50.5 | 0.930657 | 0.122112 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ad65661eba8bc561e0fc534137df8017e577ada | 39 | py | Python | atlas/foundations_rest_api/src/test/helpers/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 296 | 2020-03-16T19:55:00.000Z | 2022-01-10T19:46:05.000Z | atlas/foundations_rest_api/src/test/helpers/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 57 | 2020-03-17T11:15:57.000Z | 2021-07-10T14:42:27.000Z | atlas/foundations_rest_api/src/test/helpers/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 38 | 2020-03-17T21:06:05.000Z | 2022-02-08T03:19:34.000Z |
from foundations_spec.helpers import * | 19.5 | 38 | 0.846154 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 2 | 38 | 19.5 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b13df8fe91d2688be6d1d374e76ffec63f64e523 | 45 | py | Python | 13/test/utils/__init__.py | leisurexi/python-study | e2de3d66d5decb4403acd2df6a3d9cba307f018a | [
"Apache-2.0"
] | 1 | 2021-01-23T14:59:02.000Z | 2021-01-23T14:59:02.000Z | 13/test/utils/__init__.py | leisurexi/python-study | e2de3d66d5decb4403acd2df6a3d9cba307f018a | [
"Apache-2.0"
] | null | null | null | 13/test/utils/__init__.py | leisurexi/python-study | e2de3d66d5decb4403acd2df6a3d9cba307f018a | [
"Apache-2.0"
] | null | null | null | # author: leisurexi
# date: 2021-01-16 22:15
| 15 | 24 | 0.688889 | 8 | 45 | 3.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 0.155556 | 45 | 2 | 25 | 22.5 | 0.5 | 0.888889 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b19fdd71099b15ab2b4fe10d8d09bf9a10b44627 | 406 | py | Python | plantcv/plantcv/roi/__init__.py | JamesChooWK/plantcv | 5ade9d2861c1824997b934c09062d6050ac180c5 | [
"MIT"
] | null | null | null | plantcv/plantcv/roi/__init__.py | JamesChooWK/plantcv | 5ade9d2861c1824997b934c09062d6050ac180c5 | [
"MIT"
] | null | null | null | plantcv/plantcv/roi/__init__.py | JamesChooWK/plantcv | 5ade9d2861c1824997b934c09062d6050ac180c5 | [
"MIT"
] | null | null | null | from plantcv.plantcv.roi.roi_methods import circle
from plantcv.plantcv.roi.roi_methods import ellipse
from plantcv.plantcv.roi.roi_methods import from_binary_image
from plantcv.plantcv.roi.roi_methods import rectangle
from plantcv.plantcv.roi.roi_methods import multi
from plantcv.plantcv.roi.roi_methods import custom
__all__ = ["circle", "ellipse", "from_binary_image", "rectangle", "multi", "custom"]
| 45.111111 | 84 | 0.825123 | 59 | 406 | 5.440678 | 0.220339 | 0.205607 | 0.336449 | 0.392523 | 0.691589 | 0.691589 | 0.691589 | 0 | 0 | 0 | 0 | 0 | 0.081281 | 406 | 8 | 85 | 50.75 | 0.86059 | 0 | 0 | 0 | 0 | 0 | 0.123153 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
494752b28a1b1512dea6193691da33a2075b124e | 242,358 | py | Python | obsolete/pipeline_proj007.py | 861934367/cgat | 77fdc2f819320110ed56b5b61968468f73dfc5cb | [
"BSD-2-Clause",
"BSD-3-Clause"
] | 87 | 2015-01-01T03:48:19.000Z | 2021-11-23T16:23:24.000Z | obsolete/pipeline_proj007.py | 861934367/cgat | 77fdc2f819320110ed56b5b61968468f73dfc5cb | [
"BSD-2-Clause",
"BSD-3-Clause"
] | 189 | 2015-01-06T15:53:11.000Z | 2019-05-31T13:19:45.000Z | obsolete/pipeline_proj007.py | CGATOxford/cgat | 326aad4694bdfae8ddc194171bb5d73911243947 | [
"BSD-2-Clause",
"BSD-3-Clause"
] | 56 | 2015-01-13T02:18:50.000Z | 2022-01-05T10:00:59.000Z | ################################################################################
#
# MRC FGU Computational Genomics Group
#
# $Id: pipeline_proj007.py 2900 2011-05-24 14:38:00Z david $
#
# Copyright (C) 2012 David Sims
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#################################################################################
"""
===================
Project007 pipeline
===================
:Author: David Sims
:Release: $Id: pipeline_proj007.py 2900 2011-05-24 14:38:00Z david $
:Date: |today|
:Tags: Python
The project007 pipeline annotates intervals generated by the CAPseq pipeline
Usage
=====
See :ref:`PipelineSettingUp` and :ref:`PipelineRunning` for general information on how to use CGAT pipelines.
Configuration
-------------
The pipeline requires a configured :file:`pipeline_capseq.ini` file. The pipeline looks for a configuration file in several places:
1. The default configuration in the :term:`code directory`.
2. A shared configuration file :file:`../pipeline.ini`.
3. A local configuration :file:`pipeline.ini`.
The order is as above. Thus, a local configuration setting will
override a shared configuration setting and a default configuration
setting.
Configuration files follow the ini format (see the python
`ConfigParser <http://docs.python.org/library/configparser.html>` documentation).
The configuration file is organized by section and the variables are documented within
the file. In order to get a local configuration file in the current directory, type::
   python <codedir>/pipeline_proj007.py config
The sphinxreport report requires a :file:`conf.py` and :file:`sphinxreport.ini` file
(see :ref:`PipelineDocumenation`). To start with, use the files supplied with the
:ref:`Example` data.
Input
-----
Reads
++++++
Input are :file:`.fastq.gz`-formatted files. The files should be
labeled in the following way::
sample-condition-replicate.fastq.gz
Note that neither ``sample``, ``condition`` or ``replicate`` should contain
``_`` (underscore) and ``.`` (dot) characters as these are used by the pipeline
to delineate tasks.
Requirements
------------
The pipeline requires the information from the following pipelines:
:doc:`pipeline_annotations`
set the configuration variables:
:py:data:`annotations_database`
:py:data:`annotations_dir`
On top of the default CGAT setup, the pipeline requires the following software to be in the
path:
+--------------------+-------------------+------------------------------------------------+
|*Program* |*Version* |*Purpose* |
+--------------------+-------------------+------------------------------------------------+
|BEDTools | |interval comparison |
+--------------------+-------------------+------------------------------------------------+
Pipeline Output
===============
The results of the computation are all stored in an sqlite relational
database :file:`csvdb`.
Code
====
"""
import sys
import tempfile
import optparse
import shutil
import itertools
import csv
import math
import random
import re
import glob
import os
import collections
import gzip
import sqlite3
import pysam
import CGAT.IndexedFasta as IndexedFasta
import CGAT.IndexedGenome as IndexedGenome
import CGAT.FastaIterator as FastaIterator
import CGAT.Genomics as Genomics
import CGAT.IOTools as IOTools
import CGAT.MAST as MAST
import CGAT.GTF as GTF
import CGAT.GFF as GFF
import CGAT.Bed as Bed
import cStringIO
import numpy
import CGAT.Masker as Masker
import fileinput
#import CGAT.gff2annotator
import CGAT.Experiment as E
#import CGAT.logging as L
import CGATPipelines.PipelinePeakcalling as PIntervals
import CGATPipelines.PipelineTracks as PipelineTracks
import CGATPipelines.PipelineMapping as PipelineMapping
import CGATPipelines.PipelineGO as PipelineGO
from ruffus import *
from rpy2.robjects import r as R
import rpy2.robjects as ro
USECLUSTER = True
###################################################
###################################################
###################################################
## Pipeline configuration
###################################################
import CGAT.Pipeline as P
P.getParameters( ["pipeline_proj007.ini", ] )
PARAMS = P.PARAMS
PARAMS_ANNOTATIONS = [0,] #P.peekParameters( PARAMS["geneset_dir"],"pipeline_annotations.py" )
###################################################################
###################################################################
###################################################################
## Helper functions mapping tracks to conditions, etc
###################################################################
# load all tracks - exclude input/control tracks
Sample = PipelineTracks.Sample3
TRACKS = PipelineTracks.Tracks( Sample ).loadFromDirectory(
[ x.replace("../","") for x in glob.glob( "../*.export.txt.gz" ) if PARAMS["tracks_control"] not in x ],
"(\S+).export.txt.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[ x.replace("../","") for x in glob.glob( "../*.sra" ) if PARAMS["tracks_control"] not in x ],
"(\S+).sra" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[x.replace("../","") for x in glob.glob( "../*.fastq.gz" ) if PARAMS["tracks_control"] not in x],
"(\S+).fastq.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[x.replace("../","") for x in glob.glob( "../*.fastq.1.gz" ) if PARAMS["tracks_control"] not in x],
"(\S+).fastq.1.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
    [ x.replace("../","") for x in glob.glob( "../*.csfasta.gz" ) if PARAMS["tracks_control"] not in x],
"(\S+).csfasta.gz" )
for X in TRACKS:
print "TRACK=", X, "\n"
TRACKS_CONTROL = PipelineTracks.Tracks( Sample ).loadFromDirectory(
[ x.replace("../","") for x in glob.glob( "../*.export.txt.gz" ) if PARAMS["tracks_control"] in x ],
"(\S+).export.txt.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[ x.replace("../","") for x in glob.glob( "../*.sra" ) if PARAMS["tracks_control"] in x ],
"(\S+).sra" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[x.replace("../","") for x in glob.glob( "../*.fastq.gz" ) if PARAMS["tracks_control"] in x],
"(\S+).fastq.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
[x.replace("../","") for x in glob.glob( "../*.fastq.1.gz" ) if PARAMS["tracks_control"] in x],
"(\S+).fastq.1.gz" ) +\
PipelineTracks.Tracks( PipelineTracks.Sample3 ).loadFromDirectory(
    [ x.replace("../","") for x in glob.glob( "../*.csfasta.gz" ) if PARAMS["tracks_control"] in x],
"(\S+).csfasta.gz" )
for X in TRACKS_CONTROL:
print "TRACK_CONTROL=", X, "\n"
def getControl( track ):
'''return appropriate control for a track'''
n = track.clone()
n.condition = PARAMS["tracks_control"]
return n
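# Illustrative note (hypothetical track name): for a track such as
# "liver-cap-R1", getControl() returns a clone with its condition replaced by
# PARAMS["tracks_control"], i.e. the matched input/control track.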
###################################################################
###################################################################
###################################################################
# aggregate per experiment
EXPERIMENTS = PipelineTracks.Aggregate( TRACKS, labels = ("condition", "tissue") )
# aggregate per condition
CONDITIONS = PipelineTracks.Aggregate( TRACKS, labels = ("condition",) )
# aggregate per tissue
TISSUES = PipelineTracks.Aggregate( TRACKS, labels = ("tissue",) )
# compound targets : all experiments
TRACKS_MASTER = EXPERIMENTS.keys() + CONDITIONS.keys()
# compound targets : correlation between tracks
TRACKS_CORRELATION = TRACKS_MASTER + list(TRACKS)
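# Illustrative note: replicates of one experiment share an aggregate key
# ending in "-agg"; this is how EXPERIMENTS is indexed below via
# expt_track = track + "-agg".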
print "Expts=", EXPERIMENTS, "\n"
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section1: Annotate CAPseq intervals using gene/transcript set
########################################################################################################################
########################################################################################################################
########################################################################################################################
############################################################
############################################################
## Section 1a: measure overlap with gene/transcript TSS, protein-coding genes, non-coding genes, flanks and intergenic regions
@transform( "../replicated_intervals/*.replicated.bed", regex(r"../replicated_intervals/(\S+).replicated.bed"), r"\1.replicated.bed" )
def copyCapseqReplicatedBedFiles( infile, outfile ):
    '''Copy replicated bed files generated by the CAPseq pipeline to the geneset-specific output directory'''
statement = '''cp %(infile)s .'''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".geneset_overlap" )
def annotateCapseqGenesetOverlap( infile, outfile ):
'''classify intervals according to their base pair overlap with respect to different genomic features (genes, TSS, upstream/downstream flanks) '''
to_cluster = True
feature_list = P.asList( PARAMS["geneset_feature_list"] )
outfiles = ""
first = True
for feature in feature_list:
feature_name = P.snip( os.path.basename( feature ), ".gtf" ).replace(".","_")
outfiles += " %(outfile)s.%(feature_name)s " % locals()
if first:
cut_command = "cut -f1,4,5,6,8 "
first = False
else:
cut_command = "cut -f4,5,6 "
statement = """
cat %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=overlap
--counter=length
--log=%(outfile)s.log
--filename-gff=%(geneset_dir)s/%(feature)s
--genome-file=%(genome_dir)s/%(genome)s
| %(cut_command)s
| sed s/nover/%(feature_name)s_nover/g
| sed s/pover/%(feature_name)s_pover/g
| sed s/min/length/
> %(outfile)s.%(feature_name)s"""
P.run()
# Paste output together
statement = '''paste %(outfiles)s > %(outfile)s'''
P.run()
############################################################
@transform( annotateCapseqGenesetOverlap, suffix(".geneset_overlap"), ".geneset_overlap.load" )
def loadCapseqGenesetOverlap( infile, outfile ):
'''load interval annotations: genome architecture '''
geneset_name = PARAMS["geneset_name"]
track= P.snip( os.path.basename(infile), ".geneset_overlap").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_overlap
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".genes_capseq_overlap" )
def annotateGenesetCapseqOverlap( infile, outfile ):
'''classify intervals according to their base pair overlap with respect to different genomic features (genes, TSS, upstream/downstream flanks) '''
to_cluster = True
genes = PARAMS["geneset_genes"]
track = P.snip( os.path.basename(infile), ".bed")
statement = """
cat %(infile)s | python %(scriptsdir)s/bed2gff.py --as-gtf > %(track)s.gtf;
cat %(geneset_dir)s/%(genes)s
| python %(scriptsdir)s/gtf2table.py
--counter=overlap
--counter=length
--log=%(outfile)s.log
--filename-gff=%(track)s.gtf
--genome-file=%(genome_dir)s/%(genome)s
| sed s/nover/capseq_nover/g
| sed s/pover/capseq_pover/g
| sed s/min/length/
> %(outfile)s"""
P.run()
############################################################
@transform( annotateGenesetCapseqOverlap, suffix(".genes_capseq_overlap"), ".genes_capseq_overlap.load" )
def loadGenesetCapseqOverlap( infile, outfile ):
'''load interval annotations: genome architecture '''
geneset_name = PARAMS["geneset_name"]
track= P.snip( os.path.basename(infile), ".genes_capseq_overlap").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_genes_capseq_overlap
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
############################################################
## Section 1b: Count overlap of CAPseq intervals with gene/transcript TSSs
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".transcript_tss.overlap.count" )
def getCapseqTranscriptTSSOverlapCount( infile, outfile ):
    '''Establish overlap between CAPseq intervals and transcript TSS intervals'''
tss = os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_transcript_tss"] )
to_cluster = True
statement = """echo "CAPseq intervals overlapping 1 or more TSS" > %(outfile)s; intersectBed -a %(infile)s -b %(tss)s -u | wc -l >> %(outfile)s;
echo "CAPseq intervals not overlapping any TSS" >> %(outfile)s; intersectBed -a %(infile)s -b %(tss)s -v | wc -l >> %(outfile)s;
echo "TSSs overlapped by 1 or more CAPseq interval" >> %(outfile)s; intersectBed -a %(tss)s -b %(infile)s -u | wc -l >> %(outfile)s;
echo "TSSs not overlapped by any CAPseq intervals" >> %(outfile)s; intersectBed -a %(tss)s -b %(infile)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; """
P.run()
############################################################
@transform( getCapseqTranscriptTSSOverlapCount, suffix(".transcript_tss.overlap.count"), ".transcript_tss.overlap.count.load")
def loadCapseqTranscriptTSSOverlapCount(infile, outfile):
'''Load transcript TSS Capseq overlap into database'''
header = "track,intervals"
track = P.snip( os.path.basename( infile), ".transcript_tss.overlap.count" )
geneset_name = PARAMS["geneset_name"]
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_transcript_tss_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".gene_tss.overlap.count" )
def getCapseqGeneTSSOverlapCount( infile, outfile ):
    '''Establish overlap between CAPseq intervals and gene TSS intervals'''
tss = os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_gene_tss"] )
to_cluster = True
statement = """echo "CAPseq intervals overlapping 1 or more TSS" > %(outfile)s; intersectBed -a %(infile)s -b %(tss)s -u | wc -l >> %(outfile)s;
echo "CAPseq intervals not overlapping any TSS" >> %(outfile)s; intersectBed -a %(infile)s -b %(tss)s -v | wc -l >> %(outfile)s;
echo "TSSs overlapped by 1 or more CAPseq interval" >> %(outfile)s; intersectBed -a %(tss)s -b %(infile)s -u | wc -l >> %(outfile)s;
echo "TSSs not overlapped by any CAPseq intervals" >> %(outfile)s; intersectBed -a %(tss)s -b %(infile)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; """
P.run()
############################################################
@transform( getCapseqGeneTSSOverlapCount, suffix(".gene_tss.overlap.count"), ".gene_tss.overlap.count.load")
def loadCapseqGeneTSSOverlapCount(infile, outfile):
'''Load gene TSS Capseq overlap into database'''
header = "track,intervals"
track = P.snip( os.path.basename( infile), ".gene_tss.overlap.count" )
geneset_name = PARAMS["geneset_name"]
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_gene_tss_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
############################################################
## Section 1c: Annotate CAPseq interval TTS/TTS distance
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".transcript.tss.distance" )
def annotateCapseqTranscriptTSSDistance( infile, outfile ):
'''Compute distance from CAPseq intervals to nearest transcript TSS'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_transcript_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( annotateCapseqTranscriptTSSDistance, suffix( ".transcript.tss.distance"), ".transcript.tss.distance.load" )
def loadCapseqTranscriptTSSDistance( infile, outfile ):
'''Load CAPseq interval annotations: distance to transcript transcription start sites '''
track= P.snip( os.path.basename(infile), ".transcript.tss.distance").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_transcript_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCapseqTranscriptTSSDistance, suffix(".transcript.tss.distance.load"), ".transcript.tss.distance.export" )
def exportCapseqTSSTranscriptList( infile, outfile ):
'''Export list of transcripts closest to CAPseq intervals '''
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_%(geneset_name)s_transcript_tss_distance
WHERE closest_id is not null ''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\ttranscript_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCapseqTSSTranscriptList, suffix( ".transcript.tss.distance.export"), ".transcript.tss.distance.export.load" )
def loadCapseqTSSTranscriptList( infile, outfile ):
    '''Load CAPseq interval to transcript mapping'''
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_interval_transcript_mapping
--index=transcript_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".gene.tss.distance" )
def annotateCapseqGeneTSSDistance( infile, outfile ):
'''Compute distance from CAPseq intervals to nearest gene TSS (single TSS per gene)'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_gene_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( annotateCapseqGeneTSSDistance, suffix( ".gene.tss.distance"), ".gene.tss.distance.load" )
def loadCapseqGeneTSSDistance( infile, outfile ):
'''load CAPseq interval annotations: distance to gene transcription start sites '''
track= P.snip( os.path.basename(infile), ".gene.tss.distance").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_gene_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCapseqGeneTSSDistance, suffix(".gene.tss.distance.load"), ".gene.tss.distance.export" )
def exportCapseqTSSGeneList( infile, outfile ):
    '''Export list of genes closest to CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".gene.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_%(geneset_name)s_gene_tss_distance
WHERE closest_id is not null''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\tgene_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCapseqTSSGeneList, suffix( ".gene.tss.distance.export"), ".gene.tss.distance.export.load" )
def loadCapseqTSSGeneList( infile, outfile ):
    '''Load CAPseq interval to gene mapping'''
track = P.snip( os.path.basename( infile ), ".gene.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_interval_gene_mapping
--index=gene_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
## Export bed files for CAPseq intervals overlapping TSS intervals
@transform( loadCapseqTranscriptTSSDistance, suffix( ".transcript.tss.distance.load"), ".transcript.tss.bed" )
def exportCapseqTSSBed( infile, outfile ):
'''export bed file of all CAPseq intervals within 1kb of annotated transcript TSS '''
track= P.snip( os.path.basename(infile), ".transcript.tss.distance.load").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''SELECT i.contig, i.start, i.end, i.interval_id
FROM %(track)s_intervals i, %(track)s_%(geneset_name)s_transcript_tss_distance t
WHERE i.interval_id=t.gene_id
AND t.closest_dist < 1000
ORDER by contig, start''' % locals()
cc.execute( statement )
outs = open( outfile, "w")
for result in cc:
contig, start, stop, interval_id = result
outs.write( "%s\t%i\t%i\t%i\n" % (contig, start, stop, interval_id) )
cc.close()
outs.close()
############################################################
@transform( loadCapseqTranscriptTSSDistance, suffix( ".transcript.tss.distance.load"), ".intergenic.bed" )
def exportCapseqIntergenicBed( infile, outfile ):
'''export bed file of all CAPseq intervals not within 1kb of annotated transcript TSS '''
track= P.snip( os.path.basename(infile), ".transcript.tss.distance.load").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''SELECT i.contig, i.start, i.end, i.interval_id
FROM %(track)s_intervals i, %(track)s_%(geneset_name)s_transcript_tss_distance t
WHERE i.interval_id=t.gene_id
AND t.closest_dist >= 1000
ORDER by contig, start''' % locals()
cc.execute( statement )
outs = open( outfile, "w")
for result in cc:
contig, start, stop, interval_id = result
outs.write( "%s\t%i\t%i\t%i\n" % (contig, start, stop, interval_id) )
cc.close()
outs.close()
############################################################
@transform(copyCapseqReplicatedBedFiles, suffix(".bed"), ".noncoding.tss.distance" )
def getCapseqNoncodingTSSDistance( infile, outfile ):
'''Calculate distance of CAPseq peaks to nearest non-coding transcript TSS'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_noncoding_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( getCapseqNoncodingTSSDistance, suffix( ".noncoding.tss.distance"), ".noncoding.tss.distance.load" )
def loadCapseqNoncodingTSSDistance( infile, outfile ):
'''Load interval annotations: distance to non-coding transcription start sites '''
track= P.snip( os.path.basename(infile), ".noncoding.tss.distance").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_noncoding_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCapseqNoncodingTSSDistance, suffix(".noncoding.tss.distance.load"), ".noncoding.tss.distance.export" )
def exportCapseqNoncodingTSSGeneList( infile, outfile ):
    '''Export list of non-coding transcripts closest to CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".noncoding.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_%(geneset_name)s_noncoding_tss_distance
WHERE closest_id is not null''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\tgene_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCapseqNoncodingTSSGeneList, suffix( ".noncoding.tss.distance.export"), ".noncoding.tss.distance.export.load" )
def loadCapseqNoncodingTSSGeneList( infile, outfile ):
    '''Load CAPseq interval to non-coding transcript mapping'''
track = P.snip( os.path.basename( infile ), ".noncoding.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_interval_noncoding_mapping
--index=gene_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
############################################################
## External lncRNA datasets
@files( PARAMS["geneset_lncrna_tss"], "lncrna.load" )
def loadlncRNAs( infile, outfile ):
'''Load external lncRNA dataset into db '''
header="contig,start,end,id,strand"
statement = """zcat %(infile)s
| awk 'OFS="\\t" {print $1,$2,$3,$4,$6}'
| python ~/src/csv2db.py
--database=%(database)s
--header=%(header)s
--table=lncrna_bed
--index=contig,start
> %(outfile)s; """
P.run()
############################################################
@transform(copyCapseqReplicatedBedFiles, suffix(".bed"), ".lncrna.tss.distance" )
def getCapseqlncRNATSSDistance( infile, outfile ):
'''Calculate distance of CAPseq peaks to nearest lncRNA transcript TSS'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_lncrna_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( getCapseqlncRNATSSDistance, suffix( ".lncrna.tss.distance"), ".lncrna.tss.distance.load" )
def loadCapseqlncRNATSSDistance( infile, outfile ):
'''Load interval annotations: distance to lncRNA transcription start sites '''
track= P.snip( os.path.basename(infile), ".lncrna.tss.distance").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_lncrna_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCapseqlncRNATSSDistance, suffix(".lncrna.tss.distance.load"), ".lncrna.tss.distance.export" )
def exportCapseqlncRNATSSGeneList( infile, outfile ):
    '''Export list of lncRNA transcripts closest to CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".lncrna.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_lncrna_tss_distance
WHERE closest_id is not null''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\tgene_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCapseqlncRNATSSGeneList, suffix( ".lncrna.tss.distance.export"), ".lncrna.tss.distance.export.load" )
def loadCapseqlncRNATSSGeneList( infile, outfile ):
    '''Load CAPseq interval to lncRNA transcript mapping'''
track = P.snip( os.path.basename( infile ), ".lncrna.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_interval_lncrna_mapping
--index=gene_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
############################################################
## External RNAseq transcripts
@files( PARAMS["geneset_rnaseq_tss"], "rnaseq.load" )
def loadRNAseq( infile, outfile ):
'''Load external RNAseq dataset into db '''
header="contig,start,end,id,strand"
statement = """zcat %(infile)s
| awk 'OFS="\\t" {print $1,$2,$3,$4,$6}'
| python ~/src/csv2db.py
--database=%(database)s
--header=%(header)s
--table=rnaseq_bed
--index=contig,start
> %(outfile)s; """
P.run()
############################################################
@transform(copyCapseqReplicatedBedFiles, suffix(".bed"), ".rnaseq.tss.distance" )
def getCapseqRNAseqTSSDistance( infile, outfile ):
    '''Calculate distance of CAPseq peaks to nearest RNAseq transcript TSS'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_rnaseq_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( getCapseqRNAseqTSSDistance, suffix( ".rnaseq.tss.distance"), ".rnaseq.tss.distance.load" )
def loadCapseqRNAseqTSSDistance( infile, outfile ):
    '''Load interval annotations: distance to RNAseq transcript transcription start sites'''
track= P.snip( os.path.basename(infile), ".rnaseq.tss.distance").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_rnaseq_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCapseqRNAseqTSSDistance, suffix(".rnaseq.tss.distance.load"), ".rnaseq.tss.distance.export" )
def exportCapseqRNAseqTSSGeneList( infile, outfile ):
    '''Export list of RNAseq transcripts closest to CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".rnaseq.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_rnaseq_tss_distance
WHERE closest_id is not null''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\tgene_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCapseqRNAseqTSSGeneList, suffix( ".rnaseq.tss.distance.export"), ".rnaseq.tss.distance.export.load" )
def loadCapseqRNAseqTSSGeneList( infile, outfile ):
    '''Load CAPseq interval to RNAseq transcript mapping'''
track = P.snip( os.path.basename( infile ), ".rnaseq.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_interval_rnaseq_mapping
--index=gene_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
############################################################
## Section 1d: Calculate pileup of CAPseq reads over TSS/TTS
@follows( mkdir("tss-profile") )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.transcript.tss-profile.all.png" )
def getReplicatedTranscriptTSSProfile(infile, outfile):
'''Build TSS profile from BAM files'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
ofp = "tss-profile/" + track + ".replicated.transcript.tss-profile.all"
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
statement = '''python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tss_file)s
--output-filename-pattern=%(ofp)s
--reporter=transcript
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
## TSSs associated (within 1kb) with a CAPseq interval only
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.transcript.tss-profile.capseq.png" )
def getReplicatedTranscriptTSSProfileCapseq(infile,outfile):
    '''Build TSS profile from BAM files for TSSs overlapping a CAPseq interval'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
ofp = "tss-profile/" + track + ".replicated.transcript.tss-profile.capseq"
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"])
tmpfile = P.getTempFile()
tmpfilename = tmpfile.name
statement = '''intersectBed -a %(tss_file)s -b %(infile)s -u | cut -f4 > %(tmpfilename)s;
zcat %(gene_file)s | python %(scriptsdir)s/gtf2gtf.py --filter=transcript --apply=%(tmpfilename)s | gzip > %(tmpfilename)s.gtf.gz;
python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tmpfilename)s.gtf.gz
--output-filename-pattern=%(ofp)s
--reporter=transcript
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
## TSSs NOT associated (within 1kb) with a CAPseq interval only
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.transcript.tss-profile.nocapseq.png" )
def getReplicatedTranscriptTSSProfileNoCapseq(infile,outfile):
    '''Build TSS profile from BAM files for TSSs not overlapping any CAPseq interval'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
ofp = "tss-profile/" + track + ".replicated.transcript.tss-profile.nocapseq"
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"])
tmpfile = P.getTempFile()
tmpfilename = tmpfile.name
statement = '''intersectBed -a %(tss_file)s -b %(infile)s -v | cut -f4 > %(tmpfilename)s;
zcat %(gene_file)s | python %(scriptsdir)s/gtf2gtf.py --filter=transcript --apply=%(tmpfilename)s | gzip > %(tmpfilename)s.gtf.gz;
python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tmpfilename)s.gtf.gz
--output-filename-pattern=%(ofp)s
--reporter=transcript
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
## Per gene
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.gene.tss-profile.all.png" )
def getReplicatedGeneTSSProfile(infile, outfile):
    '''Build gene-level TSS profile from BAM files'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
ofp = "tss-profile/" + track + ".replicated.gene.tss-profile.all"
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
statement = '''python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tss_file)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.gene.tss-profile.capseq.png" )
def getReplicatedGeneTSSProfileCapseq(infile,outfile):
    '''Build gene-level TSS profile from BAM files for TSSs overlapping a CAPseq interval'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
ofp = "tss-profile/" + track + ".replicated.gene.tss-profile.capseq"
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"])
tmpfile = P.getTempFile()
tmpfilename = tmpfile.name
statement = '''intersectBed -a %(tss_file)s -b %(infile)s -u | cut -f4 > %(tmpfilename)s;
zcat %(gene_file)s | python %(scriptsdir)s/gtf2gtf.py --filter=transcript --apply=%(tmpfilename)s | gzip > %(tmpfilename)s.gtf.gz;
python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tmpfilename)s.gtf.gz
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"tss-profile/\1.replicated.gene.tss-profile.nocapseq.png" )
def getReplicatedGeneTSSProfileNoCapseq(infile, outfile):
'''Build gene TSS profile from BAM files for TSSs not overlapping CAPseq intervals'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
ofp = "tss-profile/" + track + ".replicated.gene.tss-profile.nocapseq"
# setup files
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"])
tmpfile = P.getTempFile()
tmpfilename = tmpfile.name
statement = '''intersectBed -a %(tss_file)s -b %(infile)s -v | cut -f4 > %(tmpfilename)s;
zcat %(gene_file)s | python %(scriptsdir)s/gtf2gtf.py --filter=transcript --apply=%(tmpfilename)s | gzip > %(tmpfilename)s.gtf.gz;
python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tmpfilename)s.gtf.gz
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=tssprofile
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
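############################################################
## Note: the capseq/nocapseq task pairs above partition TSSs with
## intersectBed: "-u" keeps TSSs overlapping at least one replicated
## interval, "-v" keeps the complement. A minimal sketch of the two
## commands (hypothetical helper, not called by any pipeline task):
def _tssSplitCommands( tss_file, interval_file, prefix ):
    '''return shell commands writing overlapping and non-overlapping TSS ids.'''
    capseq = "intersectBed -a %s -b %s -u | cut -f4 > %s.capseq.ids" % (tss_file, interval_file, prefix)
    nocapseq = "intersectBed -a %s -b %s -v | cut -f4 > %s.nocapseq.ids" % (tss_file, interval_file, prefix)
    return capseq, nocapseq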
############################################################
############################################################
## Section 1d: Calculate pileup of CAPseq reads over genes
@follows( mkdir("gene-profile") )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"gene-profile/\1.replicated.transcript-profile.all.png" )
def getReplicatedTranscriptProfile(infile, outfile):
'''Build transcript profile from BAM files'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
ofp = "gene-profile/" + track + ".replicated.transcript-profile.all"
# setup files
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
statement = '''python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
%(shifts)s
--gtffile=%(tss_file)s
--output-filename-pattern=%(ofp)s
--reporter=transcript
--method=geneprofile
--normalization=total-sum
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"gene-profile/\1.replicated.gene-profile.all.png" )
def getReplicatedGeneProfile(infile, outfile):
'''Build gene profile from BAM files'''
to_cluster = USECLUSTER
track = P.snip( os.path.basename(infile), ".replicated.bed" )
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
tss_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
ofp = "gene-profile/" + track + ".replicated.gene-profile.all"
# setup files
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t.asFile()
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( fn )
fn = "../macs/with_input/%s.macs" % t.asFile()
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
bamfiles = " ".join( ("--bamfile=%s" % x) for x in samfiles )
shifts = " ".join( ("--shift=%s" % y) for y in offsets )
statement = '''python %(scriptsdir)s/bam2geneprofile.py
%(bamfiles)s
--gtffile=%(tss_file)s
--output-filename-pattern=%(ofp)s
%(shifts)s
--reporter=gene
--method=geneprofile
--normalization=total-sum
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
############################################################
## Section 1f: Export lists of genes with TSS-associated CAPseq intervals
@transform( loadCapseqTranscriptTSSDistance, suffix(".transcript.tss.distance.load"), ".transcript.tss_distance_1kb.genelist")
def exportCapseqTranscriptTSSDistanceTranscriptList( infile, outfile):
'''Export list of genes where one or more transcript TSS is within 1kb of a replicated CAPseq interval'''
max_gene_dist = 1000
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.load" ).replace("-","_").replace(".","_")
# Extract data from db
cc = dbhandle.cursor()
query = '''SELECT closest_id FROM %(track)s_%(geneset_name)s_transcript_tss_distance
WHERE closest_dist < %(max_gene_dist)s ORDER BY closest_dist;''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
ids = str(result[0])
genes = ids.split(",")
for g in genes:
outs.write( "%s\n" % g )
cc.close()
outs.close()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".transcript.tss_overlap_1kb.genelist")
def exportCapseqTranscriptTSSOverlapTranscriptList( infile, outfile):
'''Export list of genes where one or more extended transcript TSS overlaps a replicated CAPseq interval. Alternative method to above.'''
# Currently outputs transcript list
transcript_tss_bed = PARAMS["geneset_transcript_tss_extended"]
geneset_dir = PARAMS["geneset_dir"]
statement = '''intersectBed -a %(geneset_dir)s/%(transcript_tss_bed)s -b %(infile)s -u | cut -f4 | sort -u > %(outfile)s'''
P.run()
############################################################
############################################################
## Compare CAPseq intervals with genomic features using GAT
@follows( mkdir("gat") )
@files(PARAMS["samtools_genome"]+".fai", "gat/"+PARAMS["genome"]+".bed.gz")
def buildGATWorkspace(infile, outfile ):
'''Build genomic workspace file for GAT '''
statement = '''cat %(infile)s | awk 'OFS="\\t" {print $1,0,$2,"workspace"}' | gzip > %(outfile)s '''
P.run()
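############################################################
## A samtools .fai index is tab-separated with the contig name in
## column 1 and its length in column 2, so the awk call above emits
## one workspace interval spanning each contig. Pure-Python
## equivalent, for illustration only:
def _workspaceFromFai( fai_filename ):
    '''yield (contig, 0, length, "workspace") tuples from a .fai index.'''
    for line in open( fai_filename ):
        fields = line.split("\t")
        yield fields[0], 0, int(fields[1]), "workspace"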
############################################################
@follows(buildGATWorkspace)
@merge( copyCapseqReplicatedBedFiles, "gat/genomic_features_gat.tsv" )
def runGenomicFeaturesGAT(infiles, outfile):
'''Run genome association tester on bed files '''
to_cluster = True
# Segment files
segfiles = ""
for x in infiles:
track = P.snip(os.path.basename(x), ".replicated.bed")
statement = """cat %(x)s | awk 'OFS="\\t" {print $1,$2,$3,"%(track)s"}' > gat/%(track)s.bed; """
P.run()
segfiles += " --segment-file=gat/%s.bed " % track
# Annotation files
annofiles = ""
anno_list = P.asList(PARAMS["geneset_feature_list"])
anno_dir = PARAMS["geneset_dir"]
for y in anno_list:
annotrack = P.snip(os.path.basename(y), ".gtf")
statement = """cat %(anno_dir)s/%(y)s | python %(scriptsdir)s/gff2bed.py --name='feature' --is-gtf | sed s/exon/%(annotrack)s/g > gat/%(annotrack)s.bed; """
P.run()
annofiles += " --annotation-file=gat/%s.bed " % annotrack
# Run GAT
statement = """gatrun.py %(segfiles)s %(annofiles)s --workspace=gat/%(genome)s.bed.gz --num-samples=1000 --force --nbuckets=120000 > %(outfile)s"""
P.run()
############################################################
@transform( runGenomicFeaturesGAT, suffix(".tsv"), ".tsv.load" )
def loadGenomicFeaturesGAT(infile, outfile):
'''Load genome association tester results into database '''
statement = """cat %(infile)s | grep -v "^#" | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=gat_genomic_features_results
> %(outfile)s"""
P.run()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 2: Annotate CAPseq interval nucleotide composition
########################################################################################################################
########################################################################################################################
########################################################################################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".capseq.composition" )
def annotateCapseqComposition( infile, outfile ):
'''Establish the nucleotide composition of intervals'''
to_cluster = True
statement = """cat %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
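############################################################
## Downstream export tasks rely on the pGC, pCpG and CpG_ObsExp
## columns produced by the composition-cpg counter. A minimal sketch
## of the conventional CpG observed/expected statistic
## (Gardiner-Garden & Frommer), assuming the counter follows this
## standard definition (illustrative helper, not called here):
def _cpgObsExp( sequence ):
    '''observed/expected CpG ratio: (nCpG * N) / (nC * nG).'''
    s = sequence.upper()
    n, nc, ng = len(s), s.count("C"), s.count("G")
    ncpg = s.count("CG")
    if nc == 0 or ng == 0: return 0.0
    return float(ncpg * n) / (nc * ng)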
############################################################
@transform( annotateCapseqComposition, suffix( ".composition"), ".composition.load" )
def loadCapseqComposition( infile, outfile ):
'''Load the nucleotide composition of intervals'''
track= P.snip( os.path.basename(infile), ".composition").replace(".cleaned","").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_composition
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".control.composition" )
def annotateControlComposition( infile, outfile ):
'''Establish the nucleotide composition of control intervals'''
to_cluster = True
track= P.snip( os.path.basename(infile), ".bed")
dirname= os.path.dirname(infile)
statement = """cat %(infile)s | python %(scriptsdir)s/bed2bed.py -m shift -g %(genome_dir)s/%(genome)s --offset=-10000 -S %(track)s.control.bed;
cat %(track)s.control.bed
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateControlComposition, suffix( ".control.composition"), ".control.composition.load" )
def loadControlComposition( infile, outfile ):
'''Load the nucleotide composition of intervals'''
track= P.snip( os.path.basename(infile), ".control.composition").replace(".cleaned","").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_composition_control
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".flanking5.composition" )
def annotateFlankingCompositionLeft( infile, outfile ):
'''Establish the nucleotide composition of intervals immediately upstream'''
to_cluster = True
track= P.snip( os.path.basename(infile), ".bed")
dirname= os.path.dirname(infile)
flank_size = PARAMS["geneset_flank_size"]
# Exclude flanking intervals of 100bp or less
statement = """flankBed -i %(infile)s -l %(flank_size)s -r 0 -g %(samtools_genome)s.fai
| python %(scriptsdir)s/bed2bed.py --method=filter-genome --genome-file=%(genome_dir)s/%(genome)s -L %(track)s.flanking5.log
| awk 'OFS="\\t" {if ($3-$2>100) print $1,$2,$3,$4}' > %(track)s.flanking5.bed;
cat %(track)s.flanking5.bed
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateFlankingCompositionLeft, suffix( ".flanking5.composition"), ".flanking5.composition.load" )
def loadFlankingCompositionLeft( infile, outfile ):
'''Load the nucleotide composition of regions flanking intervals'''
track= P.snip( os.path.basename(infile), ".flanking5.composition").replace(".cleaned","").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_composition_flanking5
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".flanking3.composition" )
def annotateFlankingCompositionRight( infile, outfile ):
'''Establish the nucleotide composition of intervals immediately downstream'''
to_cluster = True
track= P.snip( os.path.basename(infile), ".bed")
dirname= os.path.dirname(infile)
flank_size = PARAMS["geneset_flank_size"]
# Exclude flanking intervals of 100bp or less
statement = """flankBed -i %(infile)s -l 0 -r 1000 -g %(samtools_genome)s.fai
| python %(scriptsdir)s/bed2bed.py --method=filter-genome --genome-file=%(genome_dir)s/%(genome)s -L %(track)s.flanking3.log
| awk 'OFS="\\t" {if ($3-$2>100) print $1,$2,$3,$4}' > %(track)s.flanking3.bed;
cat %(track)s.flanking3.bed
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateFlankingCompositionRight, suffix( ".flanking3.composition"), ".flanking3.composition.load" )
def loadFlankingCompositionRight( infile, outfile ):
'''Load the nucleotide composition of regions flanking intervals'''
track= P.snip( os.path.basename(infile), ".flanking3.composition").replace(".cleaned","").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_composition_flanking3
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
############################################################
@transform( loadCapseqComposition, suffix(".replicated.capseq.composition.load"), ".replicated.gc.export" )
def exportCapseqGCProfiles( infile, outfile ):
'''Export file of GC content '''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.capseq.composition.load" ).replace("-","_").replace(".","_")
# Extract data from db
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pGC, cc.pGC, c3.pGC, c5.pGC
FROM %(track)s_replicated_capseq_composition c
left join %(track)s_replicated_composition_control cc on c.gene_id=cc.gene_id
left join %(track)s_replicated_composition_flanking3 c3 on c.gene_id=c3.gene_id
left join %(track)s_replicated_composition_flanking5 c5 on c.gene_id=c5.gene_id;''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadCapseqComposition, suffix(".replicated.capseq.composition.load"), ".replicated.cpg.export" )
def exportCapseqCpGObsExp( infile, outfile ):
'''Export file of CpG observed/expected ratios '''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.capseq.composition.load" ).replace("-","_").replace(".","_")
# Extract data from db
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.CpG_ObsExp, cc.CpG_ObsExp, c3.CpG_ObsExp, c5.CpG_ObsExp
FROM %(track)s_replicated_capseq_composition c
left join %(track)s_replicated_composition_control cc on c.gene_id=cc.gene_id
left join %(track)s_replicated_composition_flanking3 c3 on c.gene_id=c3.gene_id
left join %(track)s_replicated_composition_flanking5 c5 on c.gene_id=c5.gene_id;''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadCapseqComposition, suffix(".replicated.capseq.composition.load"), ".replicated.cpg_density.export" )
def exportCapseqCpGDensity( infile, outfile ):
'''Export file of CpG density '''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.capseq.composition.load" ).replace("-","_").replace(".","_")
# Extract data from db
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pCpG, cc.pCpG, c3.pCpG, c5.pCpG
FROM %(track)s_replicated_capseq_composition c
left join %(track)s_replicated_composition_control cc on c.gene_id=cc.gene_id
left join %(track)s_replicated_composition_flanking3 c3 on c.gene_id=c3.gene_id
left join %(track)s_replicated_composition_flanking5 c5 on c.gene_id=c5.gene_id;''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
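############################################################
## The three export tasks above share one pattern: execute a SELECT
## joining the interval, control and flanking composition tables,
## then dump the cursor as tab-separated text. The inline loop is
## equivalent to this hypothetical helper (sketch only; the tasks
## keep the loop inline):
def _cursorToTsv( cursor, outfile ):
    '''write all rows of an executed cursor to outfile as TSV.'''
    outs = open( outfile, "w" )
    for row in cursor:
        outs.write( "\t".join( map( str, row ) ) + "\n" )
    outs.close()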
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 3: Compare CAPseq intervals with external datasets
########################################################################################################################
########################################################################################################################
########################################################################################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".cgi_overlap")
def getCapseqCGIOverlapCount(infile, outfile):
'''Count intervals overlapping predicted CGIs for each dataset'''
CGI = P.asList(PARAMS["bed_cgi"])
if os.path.exists(outfile):
statement = '''rm %(outfile)s'''
P.run()
for dataset in CGI:
dataset_name = P.snip( os.path.basename( dataset ), ".bed")
statement = '''echo %(dataset_name)s >> %(outfile)s; intersectBed -a %(infile)s -b %(dataset)s -u | wc -l >> %(outfile)s; '''
P.run()
statement = '''sed -i '{N;s/\\n/\\t/}' %(outfile)s; '''
P.run()
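############################################################
## The sed expression '{N;s/\n/\t/}' joins each name/count line pair
## written by the loop above into a single two-column row.
## Pure-Python equivalent of that pairing, for illustration:
def _pairLines( lines ):
    '''["a", "1", "b", "2"] -> [("a", "1"), ("b", "2")]'''
    it = iter( lines )
    return zip( it, it )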
############################################################
@transform( getCapseqCGIOverlapCount, suffix(".cgi_overlap"), ".cgi_overlap.load")
def loadCapseqCGIOverlapCount(infile, outfile):
'''Load intervals overlapping CGI into database '''
track = P.snip( os.path.basename( infile ), ".cgi_overlap" ).replace(".","_").replace("-","_")
header = "track,overlap"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_cgi_venn
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".cgi_cap.bed")
def getCGIAndCapseqIntervals(infile, outfile):
'''Identify CAPseq intervals overlapping predicted CGIs'''
CGI = PARAMS["bed_ucsc_cgi"]
dataset_name = P.snip( os.path.basename( CGI ), ".bed")
statement = '''intersectBed -a %(infile)s -b %(CGI)s -u > %(outfile)s; '''
P.run()
############################################################
@transform( getCGIAndCapseqIntervals, suffix(".cgi_cap.bed"), ".cgi_cap.bed.load")
def loadCGIAndCapseqIntervals(infile, outfile):
'''Load intervals overlapping CGI into database '''
track = P.snip( os.path.basename( infile ), ".cgi_cap.bed" ).replace(".","_").replace("-","_")
header = "contig,start,stop,interval_id"
statement = '''cat %(infile)s | awk 'OFS="\\t" {print $1,$2,$3,$4}' | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_predicted_cgi_and_cap
--index=contig,start
--index=interval_id
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".cap_only.bed")
def getCapseqSpecificIntervals(infile, outfile):
'''Identify CAPseq intervals not overlapping predicted CGIs'''
CGI = PARAMS["bed_ucsc_cgi"]
dataset_name = P.snip( os.path.basename( CGI ), ".bed")
statement = '''intersectBed -a %(infile)s -b %(CGI)s -v > %(outfile)s; '''
P.run()
############################################################
@transform( getCapseqSpecificIntervals, suffix(".cap_only.bed"), ".cap_only.bed.load")
def loadCapseqSpecificIntervals(infile, outfile):
'''Load intervals not overlapping CGI into database '''
track = P.snip( os.path.basename( infile ), ".cap_only.bed" ).replace(".","_").replace("-","_")
header = "contig,start,stop,interval_id"
statement = '''cat %(infile)s | awk 'OFS="\\t" {print $1,$2,$3,$4}' | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_cap_not_predicted_cgi
--index=contig,start
--index=interval_id
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".cgi_only.bed")
def getPredictedCGIIntervals(infile, outfile):
'''identify predicted CGI intervals not overlapping CAPseq intervals for each dataset'''
CGI = PARAMS["bed_ucsc_cgi"]
statement = '''cat %(CGI)s | awk 'OFS="\\t" {print $1,$2,$3,$4NR}' | intersectBed -a stdin -b %(infile)s -v > %(outfile)s; '''
P.run()
############################################################
@transform( getPredictedCGIIntervals, suffix(".cgi_only.bed"), ".cgi_only.bed.load")
def loadPredictedCGIIntervals(infile, outfile):
'''Load predicted CGI intervals not overlapping CAPseq intervals into database '''
track = P.snip( os.path.basename( infile ), ".cgi_only.bed" ).replace(".replicated","")
table = P.snip( os.path.basename( infile ), ".cgi_only.bed" ).replace(".","_").replace("-","_")
expt_track = track + "-agg"
replicates = EXPERIMENTS[expt_track]
# Write header to output file
tmpfile = tempfile.NamedTemporaryFile(delete=False)
headers = ( "contig","start","stop","interval_id","nPeaks","PeakCenter","Length","AvgVal","PeakVal","nProbes" )
tmpfile.write( "\t".join(headers) + "\n" )
contig,start,end,interval_id,npeaks,peakcenter,length,avgval,peakval,nprobes = "",0,0,0,0,0,0,0,0,0
# setup files
samfiles, offsets = [], []
for t in replicates:
fn = "../bam/%s.norm.bam" % t
assert os.path.exists( fn ), "could not find bamfile %s for track %s" % ( fn, str(t))
samfiles.append( pysam.Samfile( fn, "rb" ) )
fn = "../macs/with_input/%s.macs" % t
if os.path.exists( fn ):
offsets.append( PIntervals.getPeakShiftFromMacs( fn ) )
# Loop over input Bed file and calculate stats for merged intervals
c = E.Counter()
for line in open(infile, "r"):
c.input += 1
contig, start, end, interval_id = line[:-1].split()[:4]
start, end = int(start), int(end)
#interval_id = c.input
npeaks, peakcenter, length, avgval, peakval, nprobes = PIntervals.countPeaks( contig, start, end, samfiles, offsets )
if nprobes == 0:
c.skipped_reads += 1
c.output += 1
tmpfile.write( "\t".join( map( str, (contig,start,end,interval_id,npeaks,peakcenter,length,avgval,peakval,nprobes) )) + "\n" )
tmpfile.close()
tmpfilename = tmpfile.name
tablename = "%s_predicted_cgi_not_cap" % table
statement = '''python %(scriptsdir)s/csv2db.py %(csv2db_options)s
--database=%(database)s
--index=contig,start
--table=%(tablename)s
--allow-empty
< %(tmpfilename)s > %(outfile)s '''
P.run()
os.unlink( tmpfile.name )
#E.info( "%s" % str(c) )
############################################################
## Load external bed file stats
@merge( "external_bed/*.bed", "external_interval_sets.stats" )
def getExternalBedStats(infiles, outfile):
'''Calculate statistics for external bed files '''
chromatin = P.asList(PARAMS["bed_chromatin"])
capseq = P.asList(PARAMS["bed_capseq"])
chipseq = P.asList(PARAMS["bed_chipseq"])
CGI = P.asList(PARAMS["bed_cgi"])
extBed = chromatin + capseq + chipseq + CGI
if os.path.exists(outfile):
statement = '''rm %(outfile)s'''
P.run()
for f in extBed:
if len(f) > 0:
track = P.snip( os.path.basename(f),".bed" )
statement = """echo '%(track)s' >> %(outfile)s; cat %(f)s | wc -l >> %(outfile)s; """
P.run()
statement = '''sed -i '{N;s/\\n/\\t/}' %(outfile)s; '''
P.run()
############################################################
@transform( getExternalBedStats, suffix(".stats"), ".stats.load" )
def loadExternalBedStats(infile, outfile):
'''Load statistics for external bed files into database '''
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--header=bed,intervals
--table=external_interval_sets
> %(outfile)s"""
P.run()
############################################################
## Compare CAPseq intervals with chromatin marks
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".chromatin")
def getChromatinMarkOverlap(infile, outfile):
'''Count intervals overlapping chromatin mark intervals for each dataset'''
chromatin = P.asList(PARAMS["bed_chromatin"])
if os.path.exists(outfile):
statement = '''rm %(outfile)s'''
P.run()
if len(chromatin[0]) > 0:
for mark in chromatin:
dataset_name = P.snip( os.path.basename( mark ), ".bed")
statement = '''echo %(dataset_name)s >> %(outfile)s; intersectBed -a %(infile)s -b %(mark)s -u | wc -l >> %(outfile)s; '''
P.run()
statement = '''sed -i '{N;s/\\n/\\t/}' %(outfile)s; '''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
@transform( getChromatinMarkOverlap, suffix(".chromatin"), ".chromatin.load")
def loadChromatinMarkIntervals(infile, outfile):
'''Load intervals overlapping chromatin marks into database '''
track = P.snip( os.path.basename( infile ), ".chromatin" ).replace(".","_").replace("-","_")
header = "track,overlap"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_chromatin
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
@transform(copyCapseqReplicatedBedFiles, suffix(".bed"), ".h3k4me1.bed" )
def getH3K4Me1Overlap( infile, outfile ):
'''Identify CAPseq peaks overlapping H3K4me1 peaks (enhancer mark)'''
to_cluster = True
annotation_file = PARAMS["bed_h3k4me1"]
if len(annotation_file) > 0:
statement = """intersectBed -a %(infile)s -b %(annotation_file)s -u > %(outfile)s"""
P.run()
############################################################
@transform( getH3K4Me1Overlap, suffix( ".h3k4me1.bed"), ".h3k4me1.bed.load" )
def loadH3K4Me1Overlap( infile, outfile ):
'''Load interval annotations: H3K4me1 overlap '''
track= P.snip( os.path.basename(infile), ".h3k4me1.bed").replace(".","_").replace("-","_")
header = "contig,start,end,interval_id"
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--header=%(header)s
--table=%(track)s_h3k4me1_intervals
--index=interval_id
--index=contig,start
> %(outfile)s; """
P.run()
############################################################
## Compare CAPseq intervals with ChIP-seq intervals
@transform(copyCapseqReplicatedBedFiles, suffix(".bed"), ".chipseq")
def getChipseqOverlap(infile, outfile):
'''Count intervals overlapping chipseq intervals for each dataset'''
chipseq = P.asList(PARAMS["bed_chipseq"])
if os.path.exists(outfile):
statement = '''rm %(outfile)s'''
P.run()
if len(chipseq[0]) > 0:
for tf in chipseq:
dataset_name = P.snip( os.path.basename( tf ), ".bed")
statement = '''echo %(dataset_name)s >> %(outfile)s; intersectBed -a %(infile)s -b %(tf)s -u | wc -l >> %(outfile)s; '''
P.run()
statement = '''sed -i '{N;s/\\n/\\t/}' %(outfile)s; '''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
@transform( getChipseqOverlap, suffix(".chipseq"), ".chipseq.load")
def loadChipseqIntervals(infile, outfile):
'''Load intervals overlapping chipseq into database '''
track = P.snip( os.path.basename( infile ), ".chipseq" ).replace(".","_").replace("-","_")
header = "track,overlap"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_chipseq
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
## Compare CAPseq intervals with external CAPseq intervals
@transform( copyCapseqReplicatedBedFiles, suffix(".bed"), ".capseq")
def getCapseqOverlap(infile, outfile):
'''Count intervals overlapping external capseq intervals for each dataset'''
capseq = P.asList(PARAMS["bed_capseq"])
if os.path.exists(outfile):
statement = '''rm %(outfile)s'''
P.run()
if len(capseq[0]) > 0:
for x in capseq:
dataset_name = P.snip( os.path.basename( x ), ".bed")
statement = '''echo %(dataset_name)s >> %(outfile)s; intersectBed -a %(infile)s -b %(x)s -u | wc -l >> %(outfile)s; '''
P.run()
statement = '''sed -i '{N;s/\\n/\\t/}' %(outfile)s; '''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
@transform( getCapseqOverlap, suffix(".capseq"), ".capseq.load")
def loadCapseqIntervals(infile, outfile):
'''Load intervals overlapping capseq into database '''
track = P.snip( os.path.basename( infile ), ".capseq" ).replace(".","_").replace("-","_")
header = "track,overlap"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_capseq
--header=%(header)s
--allow-empty
> %(outfile)s '''
P.run()
############################################################
############################################################
## Compare intervals to external bed files using GAT
@follows( buildGATWorkspace )
@merge( copyCapseqReplicatedBedFiles, "gat/external_dataset_gat.tsv" )
def runExternalDatasetGAT(infiles, outfile):
'''Run genome association tester on bed files '''
to_cluster = True
segfiles = ""
for x in infiles:
track = P.snip(os.path.basename(x), ".bed")
statement = """cat %(x)s | awk 'OFS="\\t" {print $1,$2,$3,"%(track)s"}' > gat/%(track)s.bed; """
P.run()
segfiles += " --segment-file=gat/%s.bed " % track
# External datasets
chromatin = P.asList(PARAMS["bed_chromatin"])
capseq = P.asList(PARAMS["bed_capseq"])
chipseq = P.asList(PARAMS["bed_chipseq"])
CGI = P.asList(PARAMS["bed_cgi"])
extBed = chromatin + capseq + chipseq + CGI
annofiles = " ".join( [ "--annotation-file=%s" % x for x in extBed ] )
statement = """gatrun.py %(segfiles)s %(annofiles)s --workspace=gat/%(genome)s.bed.gz --num-samples=1000 --nbuckets=120000 --force > %(outfile)s"""
P.run()
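############################################################
## GAT derives significance empirically from the sampled overlaps
## (--num-samples=1000). A minimal sketch of the standard one-sided
## empirical p-value; GAT's own computation may differ in detail:
def _empiricalPvalue( observed, samples ):
    '''(1 + #samples >= observed) / (1 + #samples)'''
    hits = sum( 1 for s in samples if s >= observed )
    return float( 1 + hits ) / ( 1 + len( samples ) )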
############################################################
@transform( runExternalDatasetGAT, suffix(".tsv"), ".tsv.load" )
def loadExternalDatasetGAT(infile, outfile):
'''Load genome association tester results into database '''
statement = """cat %(infile)s | grep -v "^#" | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=external_dataset_gat_results
> %(outfile)s"""
P.run()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 4: Annotate predicted CGI Intervals
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows( mkdir("cgi") )
@files( PARAMS["bed_ucsc_cgi"], "cgi/ucsc.bed.load")
def loadUCSCPredictedCGIIntervals(infile, outfile):
'''load CGI intervals'''
header = "contig,start,stop,id"
statement = '''cat %(infile)s | awk 'OFS="\\t" {print $1,$2,$3,$4NR}' | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=cgi_intervals
--index=contig,start
--index=id
--header=%(header)s
> %(outfile)s '''
P.run()
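############################################################
## The awk expression $4NR concatenates column 4 with the line
## number NR, giving every CGI record a unique identifier since the
## name column of the source bed is not guaranteed unique.
## Illustrative equivalent:
def _uniquifyNames( names ):
    '''["CpG", "CpG"] -> ["CpG1", "CpG2"]'''
    return [ "%s%i" % (name, i) for i, name in enumerate( names, 1 ) ]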
############################################################
############################################################
## CGI nucleotide composition
@follows( loadUCSCPredictedCGIIntervals )
@files( PARAMS["bed_ucsc_cgi"], "cgi/cgi.composition" )
def annotateCGIComposition( infile, outfile ):
'''Establish the nucleotide composition of CGI intervals'''
to_cluster = True
# Give each row a unique identifier
statement = """cat %(infile)s
| awk 'OFS="\\t" {print $1,$2,$3,$4NR}'
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateCGIComposition, suffix( ".composition"), ".composition.load" )
def loadCGIComposition( infile, outfile ):
'''Load the nucleotide composition of CGI intervals'''
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=cgi_comp
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@follows( loadUCSCPredictedCGIIntervals )
@files( PARAMS["bed_ucsc_cgi"], "cgi/cgi.transcript.tss.distance" )
def getCGITSSDistance( infile, outfile ):
'''Calculate distance of predicted CGIs to the nearest transcript TSS'''
to_cluster = False
annotation_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"] )
statement = """cat < %(infile)s
| awk 'OFS="\\t" {print $1,$2,$3,$4NR}'
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( getCGITSSDistance, suffix( ".transcript.tss.distance"), ".transcript.tss.distance.load" )
def loadCGITSSDistance( infile, outfile ):
'''Load CGI interval annotations: distance to transcript transcription start sites '''
track= P.snip( os.path.basename(infile), ".transcript.tss.distance").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_transcript_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadCGITSSDistance, suffix(".transcript.tss.distance.load"), ".transcript.tss.distance.export" )
def exportCGITSSTranscriptList( infile, outfile ):
'''Export list of transcripts closest to predicted CGI intervals '''
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct gene_id, closest_id FROM %(track)s_%(geneset_name)s_transcript_tss_distance
WHERE closest_id is not null ''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\ttranscript_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportCGITSSTranscriptList, suffix( ".transcript.tss.distance.export"), ".transcript.tss.distance.export.load" )
def loadCGITSSTranscriptList( infile, outfile ):
'''Load predicted CGI interval to transcript mapping into database '''
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.export" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_interval_transcript_mapping
--index=transcript_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
@transform( loadCGIComposition, suffix("cgi.composition.load"), "cgi.gc.export" )
def exportCGIGCProfiles( infile, outfile ):
'''Export file of GC content '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pGC FROM cgi_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadCGIComposition, suffix("cgi.composition.load"), "cgi.cpg_density.export" )
def exportCGICpGDensity( infile, outfile ):
'''Export file of CpG density '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pCpG FROM cgi_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadCGIComposition, suffix("cgi.composition.load"), "cgi.cpg.export" )
def exportCGICpGObsExp( infile, outfile ):
'''Export file of CpG Observed / expected ratio '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.CpG_ObsExp FROM cgi_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
############################################################
## Compare predicted CGI intervals from UCSC with TSS annotations
@files( ( os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_transcript_tss"] ), PARAMS["bed_ucsc_cgi"]), "cgi/cgi.transcript_tss.overlap.count" )
def getCGITranscriptTSSOverlapCount( infiles, outfile ):
'''Establish overlap between UCSC predicted CGIs and protein-coding transcript TSS intervals'''
tss, cgi = infiles
to_cluster = True
statement = '''echo "Predicted CGIs overlapping 1 or more TSS" > %(outfile)s; intersectBed -a %(cgi)s -b %(tss)s -u | wc -l >> %(outfile)s;
echo "Predicted CGIs not overlapping any TSS" >> %(outfile)s; intersectBed -a %(cgi)s -b %(tss)s -v | wc -l >> %(outfile)s;
echo "TSS overlapped by 1 or more CGI" >> %(outfile)s; intersectBed -a %(tss)s -b %(cgi)s -u | wc -l >> %(outfile)s;
echo "TSS not overlapped by any predicted CGI" >> %(outfile)s; intersectBed -a %(tss)s -b %(cgi)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; '''
P.run()
############################################################
@transform( getCGITranscriptTSSOverlapCount, regex(r"cgi/cgi.transcript_tss.overlap.count"), r"cgi/cgi.transcript_tss.overlap.count.load")
def loadCGITranscriptTSSOverlapCount(infile, outfile):
'''Load UCSC predicted CGI overlap with protein-coding transcript TSSs into database'''
header = "track,intervals"
geneset_name = PARAMS["geneset_name"]
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=cgi_%(geneset_name)s_transcript_tss_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@files( ( os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_gene_tss"] ), PARAMS["bed_ucsc_cgi"]), "cgi/cgi.gene_tss.overlap.count" )
def getCGIGeneTSSOverlapCount( infiles, outfile ):
'''Establish overlap between UCSC predicted CGIs and protein-coding gene TSS intervals'''
tss, cgi = infiles
to_cluster = True
statement = """echo "Predicted CGIs overlapping 1 or more TSS" > %(outfile)s; intersectBed -a %(cgi)s -b %(tss)s -u | wc -l >> %(outfile)s;
echo "Predicted CGIs not overlapping any TSS" >> %(outfile)s; intersectBed -a %(cgi)s -b %(tss)s -v | wc -l >> %(outfile)s;
echo "TSS overlapped by 1 or more CGI" >> %(outfile)s; intersectBed -a %(tss)s -b %(cgi)s -u | wc -l >> %(outfile)s;
echo "TSS not overlapped by any predicted CGI" >> %(outfile)s; intersectBed -a %(tss)s -b %(cgi)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; """
P.run()
############################################################
@transform( getCGIGeneTSSOverlapCount, regex(r"cgi/cgi.gene_tss.overlap.count"), r"cgi/cgi.gene_tss.overlap.count.load")
def loadCGIGeneTSSOverlapCount(infile, outfile):
'''Load UCSC predicted CGI overlap with protein-coding gene TSSs into database'''
header = "track,intervals"
geneset_name = PARAMS["geneset_name"]
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=cgi_%(geneset_name)s_gene_tss_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
############################################################
## Record overlap of intervals with protein-coding gene/transcript models
@files( PARAMS["bed_ucsc_cgi"], "cgi/cgi.geneset.overlap" )
def annotateCGIGenesetOverlap( infile, outfile ):
'''classify predicted CGI intervals according to their base pair overlap
with respect to different genomic features (genes, TSS, upstream/downstream flanks) '''
to_cluster = True
feature_list = P.asList( PARAMS["geneset_feature_list"] )
outfiles = ""
first = True
for feature in feature_list:
feature_name = P.snip( os.path.basename( feature ), ".gtf" ).replace(".","_")
outfiles += " %(outfile)s.%(feature_name)s " % locals()
if first:
cut_command = "cut -f1,4,5,6,8 "
first = False
else:
cut_command = "cut -f4,5,6 "
statement = """
cat %(infile)s
| awk 'OFS="\\t" {print $1,$2,$3,$4NR}'
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=overlap
--counter=length
--log=%(outfile)s.log
--filename-gff=%(geneset_dir)s/%(feature)s
--genome-file=%(genome_dir)s/%(genome)s
| %(cut_command)s
| sed s/nover/%(feature_name)s_nover/g
| sed s/pover/%(feature_name)s_pover/g
| sed s/min/length/
> %(outfile)s.%(feature_name)s"""
P.run()
# Paste output together
statement = '''paste %(outfiles)s > %(outfile)s'''
P.run()
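############################################################
## Assembly logic above: the first per-feature table keeps the
## identifier and length columns (cut -f1,4,5,6,8), subsequent
## tables keep only their overlap columns (cut -f4,5,6), and paste
## aligns them row by row into one wide table. This assumes
## gtf2table emits rows in the same order for every feature file,
## which holds here because each run reads the same interval stream.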
############################################################
@transform( annotateCGIGenesetOverlap, suffix(".geneset.overlap"), ".geneset.overlap.load" )
def loadCGIGenesetOverlap( infile, outfile ):
'''load interval annotations: genome architecture '''
track= P.snip( os.path.basename(infile), ".geneset.overlap").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_%(geneset_name)s_overlap
--index=gene_id
> %(outfile)s; """
P.run()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 5: Annotate nucleotide composition of protein-coding / non-coding TSSs
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows( mkdir("tss") )
@files( os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_transcript_tss"] ), "tss/tss.transcript.composition" )
def annotateTranscriptTSSComposition( infile, outfile ):
'''Establish the nucleotide composition of tss intervals'''
to_cluster = True
tss_extend = PARAMS["geneset_tss_extend"]
statement = """zcat %(infile)s
| slopBed -i stdin -g %(samtools_genome)s.fai -b %(tss_extend)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
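############################################################
## slopBed -b widens each TSS interval by tss_extend bases in both
## directions, clipping at the contig bounds given in the .fai file.
## Window arithmetic for a single interval, as an illustrative sketch:
def _extendInterval( start, end, extend, contig_length ):
    '''half-open [start, end) widened by extend on both sides, clipped to the contig.'''
    return max( 0, start - extend ), min( contig_length, end + extend )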
############################################################
@transform( annotateTranscriptTSSComposition, suffix( ".composition"), ".composition.load" )
def loadTranscriptTSSComposition( infile, outfile ):
'''Load the nucleotide composition of tss intervals'''
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=tss_transcript_comp
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@files( os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_tss"] ), "tss/tss.gene.composition" )
def annotateGeneTSSComposition( infile, outfile ):
'''Establish the nucleotide composition of tss intervals'''
to_cluster = True
tss_extend = PARAMS["geneset_tss_extend"]
statement = """zcat %(infile)s
| slopBed -i stdin -g %(samtools_genome)s.fai -b %(tss_extend)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateGeneTSSComposition, suffix( ".composition"), ".composition.load" )
def loadGeneTSSComposition( infile, outfile ):
'''Load the nucleotide composition of tss intervals'''
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=tss_gene_comp
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@files( os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_tss_interval"] ), "tss/tss.gene.interval.composition" )
def annotateGeneTSSIntervalComposition( infile, outfile ):
'''Establish the nucleotide composition of tss intervals'''
to_cluster = True
statement = """zcat %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateGeneTSSIntervalComposition, suffix( ".composition"), ".composition.load" )
def loadGeneTSSIntervalComposition( infile, outfile ):
'''Load the nucleotide composition of tss intervals'''
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=tss_gene_interval_comp
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@transform( loadTranscriptTSSComposition, suffix("tss.transcript.composition.load"), "tss.transcript.gc.export" )
def exportTranscriptTSSGCProfiles( infile, outfile ):
'''Export file of GC content '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pGC FROM tss_transcript_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadGeneTSSComposition, suffix("tss.gene.composition.load"), "tss.gene.gc.export" )
def exportGeneTSSGCProfiles( infile, outfile ):
'''Export file of GC content '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pGC FROM tss_gene_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadTranscriptTSSComposition, suffix("tss.transcript.composition.load"), "tss.transcript.cpg.export" )
def exportTranscriptTSSCpGObsExp( infile, outfile ):
'''Export file of CpG observed / expected '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.CpG_ObsExp FROM tss_transcript_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadGeneTSSComposition, suffix("tss.gene.composition.load"), "tss.gene.cpg.export" )
def exportGeneTSSCpGObsExp( infile, outfile ):
'''Export file of CpG observed / expected '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.CpG_ObsExp FROM tss_gene_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadTranscriptTSSComposition, suffix("tss.transcript.composition.load"), "tss.transcript.cpg_density.export" )
def exportTranscriptTSSCpGDensity( infile, outfile ):
'''Export file of CpG density '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pCpG FROM tss_transcript_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( loadGeneTSSComposition, suffix("tss.gene.composition.load"), "tss.gene.cpg_density.export" )
def exportGeneTSSCpGDensity( infile, outfile ):
'''Export file of CpG density '''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
query = '''SELECT c.gene_id, c.pCpG FROM tss_gene_comp c;'''
cc.execute( query )
E.info( query )
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 6: Identify and annotate long and short CAPseq intervals
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows( loadCapseqTranscriptTSSDistance, loadCapseqGenesetOverlap, mkdir("long_intervals") )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"long_intervals/\1.long.genelist" )
def getLongIntervalGeneList( infile, outfile ):
'''Generate list of protein-coding genes closest to the 500 longest intervals (>3kb)'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.bed" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct b.gene_id
FROM (SELECT distinct s.closest_id, i.interval_id, i.contig, i.start, i.end, i.length, i.avgval, i.fold, o.genes_pover1, o.genes_pover2
FROM %(track)s_replicated_intervals i, %(track)s_replicated_%(geneset_name)s_transcript_tss_distance s,
%(track)s_replicated_%(geneset_name)s_overlap o
WHERE i.interval_id=s.gene_id
AND o.gene_id=i.interval_id
AND i.length > 3000
AND o.genes_pover2 > 0
ORDER BY i.length desc
LIMIT 1000) a,
(SELECT "%%" || transcript_id || "%%" as pattern, t.gene_id, t.gene_biotype
FROM annotations.transcript_info t
WHERE t.gene_biotype='protein_coding') b
WHERE a.closest_id like b.pattern
ORDER BY a.length desc
LIMIT 500''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
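############################################################
## closest_id stores a comma-separated list of transcript ids, so
## the query builds LIKE patterns ("%" || transcript_id || "%") to
## test membership inside SQL. Exact Python equivalent of that test
## (the LIKE version can over-match when one transcript id is a
## substring of another):
def _closestContains( closest_id, transcript_id ):
    return transcript_id in closest_id.split( "," )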
############################################################
@follows( loadCapseqTranscriptTSSDistance, loadCapseqGenesetOverlap, mkdir("long_intervals") )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"long_intervals/\1.gene_overlap.genelist" )
def getGeneOverlapGeneList( infile, outfile ):
'''Generate list of protein-coding genes substantially overlapped (>80%) by long (>3kb) intervals'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.bed" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct b.gene_id
FROM (SELECT distinct s.closest_id, i.interval_id, i.contig, i.start, i.end, i.length, i.avgval, i.fold, o.genes_pover1, o.genes_pover2
FROM %(track)s_replicated_intervals i, %(track)s_replicated_%(geneset_name)s_transcript_tss_distance s,
%(track)s_replicated_%(geneset_name)s_overlap o
WHERE i.interval_id=s.gene_id
AND o.gene_id=i.interval_id
AND i.length > 3000
AND o.genes_pover2 > 80
ORDER BY i.length desc
LIMIT 1000) a,
(SELECT "%%" || transcript_id || "%%" as pattern, t.gene_id, t.gene_biotype
FROM annotations.transcript_info t
WHERE t.gene_biotype='protein_coding') b
WHERE a.closest_id like b.pattern
ORDER BY a.length desc
LIMIT 500''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@follows( loadCapseqTranscriptTSSDistance, loadCapseqGenesetOverlap, mkdir("long_intervals") )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"long_intervals/\1.short.genelist" )
def getShortIntervalGeneList( infile, outfile ):
'''Generate list of protein-coding genes closest to 500 random intervals of typical size (<2kb)'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.bed" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct b.gene_id
FROM (SELECT distinct s.closest_id, i.interval_id, i.contig, i.start, i.end, i.length, i.avgval, i.fold, o.genes_pover1, o.genes_pover2
FROM %(track)s_replicated_intervals i, %(track)s_replicated_%(geneset_name)s_transcript_tss_distance s,
%(track)s_replicated_%(geneset_name)s_overlap o
WHERE i.interval_id=s.gene_id
AND o.gene_id=i.interval_id
AND i.length < 2000
AND o.genes_pover2 > 0) a,
(SELECT "%%" || transcript_id || "%%" as pattern, t.gene_id, t.gene_biotype
FROM annotations.transcript_info t
WHERE t.gene_biotype='protein_coding') b
WHERE a.closest_id like b.pattern
ORDER BY RANDOM()
LIMIT 500''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( (getLongIntervalGeneList, getShortIntervalGeneList, getGeneOverlapGeneList), suffix(".genelist"), ".gtf.gz" )
def getLongIntervalGeneGTF( infile, outfile ):
'''Filter GTF file using the gene lists generated above (long, short and gene-overlap)'''
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
statement = '''zcat %(gene_file)s
| python %(scriptsdir)s/gtf2gtf.py --filter=gene --apply=%(infile)s --log=%(outfile)s.log
| python %(scriptsdir)s/gtf2gtf.py --join-exons --log=%(outfile)s.log
| sed s/\\\\ttranscript\\\\t/\\\\texon\\\\t/g
| gzip > %(outfile)s; '''
P.run()
############################################################
############################################################
## CAPseq profile over long and short capseq interval genes
@follows(getLongIntervalGeneGTF)
@transform( "../merged_bams/*.merge.bam", regex(r"../merged_bams/(\S+).merge.bam"), r"long_intervals/\1.long_interval_genes.capseq_profile.log" )
def longIntervalGeneCAPseqProfile(infile, outfile):
    '''plot CAPseq profiles over genes with long CAPseq intervals'''
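    # bam2geneprofile is invoked once but with three --normalize-profile
    # options (area, counts, none); each normalisation is written to its own
    # output matrix under the %(ofp)s filename pattern.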
track = P.snip( os.path.basename(infile), ".merge.bam" )
ofp = P.snip( outfile, ".log" )
capseq = "long_intervals/"+track+".long.gtf.gz"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(infile)s
--gtffile=%(capseq)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
@follows(getLongIntervalGeneGTF)
@transform( "../merged_bams/*.merge.bam", regex(r"../merged_bams/(\S+).merge.bam"), r"long_intervals/\1.gene_overlap.capseq_profile.log" )
def geneOverlapCAPseqProfile(infile, outfile):
    '''plot CAPseq profiles over genes overlapped by CAPseq intervals'''
track = P.snip( os.path.basename(infile), ".merge.bam" )
ofp = P.snip( outfile, ".log" )
capseq = "long_intervals/"+track+".gene_overlap.gtf.gz"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(infile)s
--gtffile=%(capseq)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
@follows(getLongIntervalGeneGTF)
@transform( "../merged_bams/*.merge.bam", regex(r"../merged_bams/(\S+).merge.bam"), r"long_intervals/\1.short_interval_genes.capseq_profile.log" )
def shortIntervalGeneCAPseqProfile(infile, outfile):
    '''plot CAPseq profiles over genes with short (<2kb) CAPseq intervals'''
track = P.snip( os.path.basename(infile), ".merge.bam" )
ofp = P.snip( outfile, ".log" )
capseq = "long_intervals/"+track+".short.gtf.gz"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(infile)s
--gtffile=%(capseq)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
############################################################
## GO analysis
@follows( mkdir("long_intervals/go") )
@transform( getLongIntervalGeneList, suffix(".long.genelist"), ".long.go" )
def runGOLongGeneLists( infile, outfile ):
    '''Run GO term enrichment analysis on long-interval gene lists'''
    statement = """cat %(infile)s | sed '1i gene_id' > %(infile)s.header"""
P.run()
track = os.path.basename(P.snip(infile,".long.genelist"))
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "long_intervals/go/%s" % track,
fg_file = infile+".header",
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full_obo"] ),
minimum_counts = PARAMS["go_minimum_counts"] )
############################################################
@follows( runGOLongGeneLists, mkdir("long_intervals/goslim") )
@transform( getLongIntervalGeneList, suffix(".long.genelist"), ".long.goslim" )
def runGOSlimLongGeneLists( infile, outfile ):
    '''Run GOslim enrichment analysis on long-interval gene lists'''
track = os.path.basename(P.snip(infile,".long.genelist"))
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "long_intervals/goslim/%s" % track,
fg_file = infile+".header",
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim_obo"]),
minimum_counts = PARAMS["go_minimum_counts"] )
############################################################
@transform( runGOLongGeneLists, suffix( ".long.go"), ".long.go.load" )
def loadLongGeneGo( infile, outfile ):
    '''Load GO results for long-interval genes into database'''
track = os.path.basename(P.snip(infile,".long.go"))
go_categories = ["biol_process","cell_location","mol_function"]
for category in go_categories:
results_file = "long_intervals/go/%(track)s/foreground.%(category)s.withgenes" % locals()
statement = """cat %(results_file)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_long_go_%(category)s
--index=fdr
--index=goid
> %(outfile)s; """
P.run()
############################################################
@transform( runGOSlimLongGeneLists, suffix( ".long.goslim"), ".long.goslim.load" )
def loadLongGeneGoslim( infile, outfile ):
    '''Load GOslim results for long-interval genes into database'''
track = os.path.basename(P.snip(infile,".long.goslim"))
go_categories = ["biol_process","cell_location"]
for category in go_categories:
results_file = "long_intervals/goslim/%(track)s/foreground.%(category)s.withgenes" % locals()
statement = """cat %(results_file)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_long_goslim_%(category)s
--index=fdr
--index=goid
> %(outfile)s; """
P.run()
############################################################
############################################################
## Analyse long and short CAPseq intervals
@transform( getLongIntervalGeneGTF, suffix(".gtf.gz"), ".chromatin_profile.log" )
def longIntervalGeneChromatinProfile(infile, outfile):
'''plot chromatin mark profiles over tissue-specific CAPseq intervals'''
chromatin = P.asList(PARAMS["bigwig_chromatin"])
track = P.snip( os.path.basename(infile), ".gtf.gz" )
if len(chromatin[0]) > 0:
for bw in chromatin:
chromatin_track = P.snip( os.path.basename(bw), ".bam" )
ofp = "long_intervals/" + track + ".genes." + chromatin_track + ".profile"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(bw)s
--gtffile=%(infile)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
@transform( getLongIntervalGeneGTF, suffix(".gtf.gz"), ".chromatin_profile.log" )
def shortIntervalGeneChromatinProfile(infile, outfile):
'''plot chromatin mark profiles over genes with normal length CAPseq intervals'''
chromatin = P.asList(PARAMS["bigwig_chromatin"])
track = P.snip( os.path.basename(infile), ".gtf.gz" )
if len(chromatin[0]) > 0:
for bw in chromatin:
chromatin_track = P.snip( os.path.basename(bw), ".bam" )
ofp = "long_intervals/" + track + ".genes." + chromatin_track + ".profile"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(bw)s
--gtffile=%(infile)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
## Intersection of H3K27Me3 intervals and long interval genes
@transform( getLongIntervalGeneGTF, suffix(".gtf.gz"), ".H3K27Me3.log" )
def longGeneChromatinIntersection(infile, outfile):
    '''calculate intersection of chromatin marks and genes with long CAPseq intervals'''
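    # The statement below computes a three-way Venn: merge both interval
    # sets, then count merged intervals overlapping both inputs (shared),
    # only the CAPseq track (track_only) or only the chromatin track
    # (chromatin_track_only). The final sed joins each label/value line
    # pair into a single tab-separated row for downstream parsing.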
chromatin = P.asList(PARAMS["bed_h3k27me3"])
track = P.snip(os.path.basename(infile), ".gtf.gz")
for bed in chromatin:
chromatin_track = P.snip(os.path.basename(bed), ".bed")
statement = '''zcat %(infile)s | python %(scriptsdir)s/gff2bed.py --is-gtf > long_intervals/%(track)s.bed;
cat long_intervals/%(track)s.bed %(bed)s | awk 'OFS="\\t" {print $1,$2,$3}'
| mergeBed -i stdin | awk 'OFS="\\t" {print $1,$2,$3,"merged"NR}' > long_intervals/%(track)s_%(chromatin_track)s.merged.bed;
echo "Track" > long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "%(track)s" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "Chromatin_track" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "%(chromatin_track)s" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "Total_merged_intervals" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
cat long_intervals/%(track)s_%(chromatin_track)s.merged.bed | wc -l >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "track_and_chromatin_track" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
intersectBed -a long_intervals/%(track)s_%(chromatin_track)s.merged.bed -b long_intervals/%(track)s.bed -u | intersectBed -a stdin -b %(bed)s -u > long_intervals/%(track)s_%(chromatin_track)s.shared.bed;
cat long_intervals/%(track)s_%(chromatin_track)s.shared.bed | wc -l >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "chromatin_track_only" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
intersectBed -a long_intervals/%(track)s_%(chromatin_track)s.merged.bed -b long_intervals/%(track)s.bed -v > long_intervals/%(track)s_%(chromatin_track)s.%(chromatin_track)s.unique.bed;
cat long_intervals/%(track)s_%(chromatin_track)s.%(chromatin_track)s.unique.bed | wc -l >> long_intervals/%(track)s_%(chromatin_track)s.counts;
echo "track_only" >> long_intervals/%(track)s_%(chromatin_track)s.counts;
intersectBed -a long_intervals/%(track)s_%(chromatin_track)s.merged.bed -b %(bed)s -v > long_intervals/%(track)s_%(chromatin_track)s.%(track)s.unique.bed;
cat long_intervals/%(track)s_%(chromatin_track)s.%(track)s.unique.bed | wc -l >> long_intervals/%(track)s_%(chromatin_track)s.counts;
sed -i '{N;s/\\n/\\t/g}' long_intervals/%(track)s_%(chromatin_track)s.counts;
touch %(outfile)s '''
P.run()
############################################################
@follows( longGeneChromatinIntersection )
@merge( "long_intervals/*.counts", "long_intervals/h3k27me3.stats" )
def longGeneChromatinIntersectionStats(infiles, outfile):
    '''Combine per-track H3K27Me3 intersection counts into a single stats table'''
first = True
header = ""
outs = open(outfile, "w")
for infile in infiles:
f = open(infile, "r")
names = []
values = []
for line in f:
name, value = line.split("\t")
names.append(name.strip())
values.append(value.strip())
header = "\t".join(names)+"\n"
outline = "\t".join(values)+"\n"
if first:
outs.write(header)
first = False
outs.write(outline)
f.close()
outs.close()
############################################################
@transform( longGeneChromatinIntersectionStats, suffix(".stats"), ".stats.load" )
def loadLongGeneChromatinIntersection(infile, outfile):
    '''Load H3K27Me3 intersection stats into database'''
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=long_intervals_h3k27me3_venn
> %(outfile)s"""
P.run()
############################################################
## Compare intervals to external bed files using GAT
@follows( buildGATWorkspace, mkdir("long_intervals/gat/") )
@merge( getLongIntervalGeneGTF, "long_intervals/gat/long_intervals_gat.tsv" )
def runLongGenesGAT(infiles, outfile):
'''Run genome association tester on bed files '''
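    # GAT compares the observed overlap between each segment file and each
    # annotation file with the overlap expected from randomly placed
    # segments within the workspace (1000 samples here), giving fold
    # enrichment estimates and empirical p-values.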
to_cluster = True
segfiles = ""
for x in infiles:
track = P.snip(os.path.basename(x), ".gtf.gz")
statement = """zcat %(x)s | awk 'OFS="\\t" {print $1,$4-1,$5-1,"%(track)s"}' > long_intervals/gat/%(track)s.bed; """
P.run()
segfiles += " --segment-file=long_intervals/gat/%s.bed " % track
# External datasets
annofiles = ""
chromatin = P.asList(PARAMS["bed_h3k27me3"])
for y in chromatin:
track = P.snip(os.path.basename(y), ".bed")
statement = """cat %(y)s | awk 'OFS="\\t" {print $1,$2,$3,"%(track)s"}' > long_intervals/gat/%(track)s.bed; """
P.run()
annofiles += "--annotation-file=long_intervals/gat/%s.bed " % track
statement = """gatrun.py %(segfiles)s %(annofiles)s --workspace=gat/%(genome)s.bed.gz --num-samples=1000 --nbuckets=500000 --bucket-size=10 --force > %(outfile)s"""
P.run()
############################################################
@transform( runLongGenesGAT, suffix(".tsv"), ".tsv.load" )
def loadLongGenesGAT(infile, outfile):
'''Load genome association tester results into database '''
statement = """cat %(infile)s | grep -v "^#" | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=long_intervals_gat_results
> %(outfile)s"""
P.run()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 6b: Identify genes overlapped >90% by CAPseq intervals
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows( mkdir("overlapped_genes") )
@transform( loadGenesetCapseqOverlap, regex(r"(\S+).replicated.genes_capseq_overlap.load"), r"overlapped_genes/\1.overlapped_genes.genelist" )
def getGenesetCapseqOverlapList( infile, outfile ):
    '''Generate text file of all genes overlapped >90% by CAPseq intervals'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.genes_capseq_overlap.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''select distinct o.gene_id
from %(track)s_replicated_%(geneset_name)s_genes_capseq_overlap o, annotations.transcript_info i
where capseq_pover1>90
and o.gene_id=i.gene_id
and o.length > 1000
order by length desc''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
########################################################################################################################
## Added 24-07-2013 for Hannah's thesis
@follows( mkdir("overlapped_genes") )
@transform( loadGenesetCapseqOverlap, regex(r"(\S+).replicated.genes_capseq_overlap.load"), r"overlapped_genes/\1.overlapped_genes_50.genelist" )
def getGenesetCapseqOverlapList50( infile, outfile ):
    '''Generate text file of all genes overlapped >50% by CAPseq intervals'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.genes_capseq_overlap.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''select distinct o.gene_id
from %(track)s_replicated_%(geneset_name)s_genes_capseq_overlap o, annotations.transcript_info i
where capseq_pover1>50
and o.gene_id=i.gene_id
and o.length > 1000
order by length desc''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@follows( mkdir("overlapped_genes") )
@transform( loadGenesetCapseqOverlap, regex(r"(\S+).replicated.genes_capseq_overlap.load"), r"overlapped_genes/\1.overlapped_genes.control.genelist" )
def getGenesetCapseqOverlapControlList( infile, outfile ):
    '''Generate control list of genes overlapped >0% but <10% by CAPseq intervals'''
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
track = P.snip( os.path.basename( infile ), ".replicated.genes_capseq_overlap.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''select distinct o.gene_id
from %(track)s_replicated_%(geneset_name)s_genes_capseq_overlap o, annotations.transcript_info i
where capseq_pover1 >0
and capseq_pover1 < 10
and o.gene_id=i.gene_id
and o.length < 15000
and o.length > 1000
order by length desc''' % locals()
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( (getGenesetCapseqOverlapList, getGenesetCapseqOverlapControlList), suffix(".genelist"), ".gtf.gz" )
def getOverlappedGeneGTF( infile, outfile ):
    '''Filter GTF file using the overlapped and control gene lists '''
gene_file = os.path.join( PARAMS["geneset_dir"], PARAMS["geneset_gene_profile"])
statement = '''zcat %(gene_file)s
| python %(scriptsdir)s/gtf2gtf.py --filter=gene --apply=%(infile)s --log=%(outfile)s.log
| python %(scriptsdir)s/gtf2gtf.py --join-exons --log=%(outfile)s.log
| sed s/\\\\ttranscript\\\\t/\\\\texon\\\\t/g
| gzip > %(outfile)s; '''
P.run()
############################################################
## CAPseq profile over overlapped genes
@follows(getOverlappedGeneGTF)
@transform( "../merged_bams/*.merge.bam", regex(r"../merged_bams/(\S+).merge.bam"), r"overlapped_genes/\1.overlapped_genes.capseq_profile.log" )
def overlappedGeneCAPseqProfile(infile, outfile):
    '''plot CAPseq profiles over overlapped genes'''
track = P.snip( os.path.basename(infile), ".merge.bam" )
ofp = P.snip( outfile, ".log" )
capseq = "overlapped_genes/"+track+".overlapped_genes.gtf.gz"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(infile)s
--gtffile=%(capseq)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none
                   --scale-flank-length=1'''
P.run()
############################################################
@follows(getOverlappedGeneGTF)
@transform( "../merged_bams/*.merge.bam", regex(r"../merged_bams/(\S+).merge.bam"), r"overlapped_genes/\1.overlapped_genes.control.capseq_profile.log" )
def controlGeneCAPseqProfile(infile, outfile):
    '''plot CAPseq profiles over control genes'''
track = P.snip( os.path.basename(infile), ".merge.bam" )
ofp = P.snip( outfile, ".log" )
capseq = "overlapped_genes/"+track+".overlapped_genes.control.gtf.gz"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(infile)s
--gtffile=%(capseq)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none
                   --scale-flank-length=1'''
P.run()
############################################################
## Chromatin profile over overlapped genes
@transform( getOverlappedGeneGTF, suffix(".gtf.gz"), ".chromatin_profile.log" )
def overlappedGeneChromatinProfile(infile, outfile):
'''plot chromatin mark profiles over CAPSeq overlapped genes'''
chromatin = P.asList(PARAMS["bigwig_chromatin"])
track = P.snip( os.path.basename(infile), ".gtf.gz" )
if len(chromatin[0]) > 0:
for bw in chromatin:
chromatin_track = P.snip( os.path.basename(bw), ".bam" )
ofp = "overlapped_genes/" + track + "." + chromatin_track + ".profile"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(bw)s
--gtffile=%(infile)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none
                   --scale-flank-length=1'''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
## Chromatin profile over overlapped genes
@transform( getOverlappedGeneGTF, suffix(".gtf.gz"), ".chromatin_profile.wide.log" )
def overlappedGeneChromatinProfileWide(infile, outfile):
'''plot chromatin mark profiles over CAPSeq overlapped genes'''
chromatin = P.asList(PARAMS["bigwig_chromatin"])
track = P.snip( os.path.basename(infile), ".gtf.gz" )
if len(chromatin[0]) > 0:
for bw in chromatin:
chromatin_track = P.snip( os.path.basename(bw), ".bam" )
ofp = "overlapped_genes/" + track + "." + chromatin_track + ".profile.wide"
statement = '''python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(bw)s
--gtffile=%(infile)s
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=geneprofile
--log=%(outfile)s
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none
                   --scale-flank-length=3'''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
## GO analysis
@follows( mkdir("overlapped_genes/go") )
@transform( getGenesetCapseqOverlapList, suffix(".overlapped_genes.genelist"), ".overlapped_genes.go" )
def runGOOverlappedGeneLists( infile, outfile ):
    '''Run GO term enrichment analysis on overlapped gene lists'''
    statement = """cat %(infile)s | sed '1i gene_id' > %(infile)s.header"""
P.run()
track = os.path.basename(P.snip(infile,".overlapped_genes.genelist"))
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "overlapped_genes/go/%s" % track,
fg_file = infile+".header",
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full_obo"] ),
minimum_counts = PARAMS["go_minimum_counts"] )
############################################################
@follows( runGOOverlappedGeneLists, mkdir("overlapped_genes/goslim") )
@transform( getGenesetCapseqOverlapList, suffix(".overlapped_genes.genelist"), ".overlapped_genes.goslim" )
def runGOSlimOverlappedGeneLists( infile, outfile ):
    '''Run GOslim enrichment analysis on overlapped gene lists'''
track = os.path.basename(P.snip(infile,".overlapped_genes.genelist"))
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "overlapped_genes/goslim/%s" % track,
fg_file = infile+".header",
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim_obo"]),
minimum_counts = PARAMS["go_minimum_counts"] )
############################################################
@transform( runGOOverlappedGeneLists, suffix( ".overlapped_genes.go"), ".overlapped_genes.go.load" )
def loadOverlappedGeneGo( infile, outfile ):
'''Load GO results for overlapped genes into database'''
track = os.path.basename(P.snip(infile,".overlapped_genes.go"))
go_categories = ["biol_process","cell_location","mol_function"]
for category in go_categories:
results_file = "overlapped_genes/go/%(track)s/foreground.%(category)s.withgenes" % locals()
statement = """cat %(results_file)s | python ~/src/csv2db.py
--database=%(database)s
--table=%(track)s_overlapped_genes_go_%(category)s
--index=fdr
--index=goid
> %(outfile)s; """
P.run()
############################################################
@transform( runGOSlimOverlappedGeneLists, suffix( ".overlapped_genes.goslim"), ".overlapped_genes.goslim.load" )
def loadOverlappedGeneGoslim( infile, outfile ):
'''Load GO results for overlapped genes into database'''
track = os.path.basename(P.snip(infile,".overlapped_genes.goslim"))
go_categories = ["biol_process","cell_location"]
for category in go_categories:
results_file = "overlapped_genes/goslim/%(track)s/foreground.%(category)s.withgenes" % locals()
statement = """cat %(results_file)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_overlapped_genes_goslim_%(category)s
--index=fdr
--index=goid
> %(outfile)s; """
P.run()
############################################################
@follows(runGOOverlappedGeneLists)
@collate( "overlapped_genes/go/*/*.results", regex(r"overlapped_genes/go/(.*)/(.*)\.(.*).results"), r"overlapped_genes/go/\1/\3.revigo" )
def clusterGOResults( infiles, outfile ):
    '''Use revigo to cluster GO terms'''
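    # Terms whose semantic similarity exceeds --max-similarity (0.5 here)
    # are collapsed onto a representative term, REVIGO-style, so each
    # cluster of redundant GO terms is reported once.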
infiles = " ".join(infiles)
filename_go = os.path.join( PARAMS["geneset_dir"], PARAMS["go_full"])
filename_obo = os.path.join( PARAMS["geneset_dir"], PARAMS["go_full_obo_xml"])
to_cluster = True
track = P.snip( outfile, ".revigo" )
statement = '''cat %(infiles)s
| python %(scriptsdir)s/revigo.py
--filename-go=%(filename_go)s
--filename-ontology=%(filename_obo)s
--output-filename-pattern=%(track)s.%%s
--ontology=all
--max-similarity=0.5
--reverse-palette
--force
-v 2
> %(outfile)s'''
P.run()
############################################################
## Intersection of H3K27Me3 intervals and overlapped genes
@transform( getOverlappedGeneGTF, suffix(".gtf.gz"), ".H3K27Me3.log" )
def overlappedGeneChromatinIntersection(infile, outfile):
'''calculate intersection of chromatin marks and CAPSeq overlapped genes'''
chromatin = P.asList(PARAMS["bed_h3k27me3"])
track = P.snip(os.path.basename(infile), ".gtf.gz")
for bed in chromatin:
chromatin_track = P.snip(os.path.basename(bed), ".bed")
statement = '''zcat %(infile)s | python %(scriptsdir)s/gff2bed.py --is-gtf > overlapped_genes/%(track)s.bed;
cat overlapped_genes/%(track)s.bed %(bed)s | awk 'OFS="\\t" {print $1,$2,$3}'
| mergeBed -i stdin | awk 'OFS="\\t" {print $1,$2,$3,"merged"NR}' > overlapped_genes/%(track)s_%(chromatin_track)s.merged.bed;
echo "Track" > overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "%(track)s" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "Chromatin_track" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "%(chromatin_track)s" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "Total_merged_intervals" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
cat overlapped_genes/%(track)s_%(chromatin_track)s.merged.bed | wc -l >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "track_and_chromatin_track" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
intersectBed -a overlapped_genes/%(track)s_%(chromatin_track)s.merged.bed -b overlapped_genes/%(track)s.bed -u | intersectBed -a stdin -b %(bed)s -u > overlapped_genes/%(track)s_%(chromatin_track)s.shared.bed;
cat overlapped_genes/%(track)s_%(chromatin_track)s.shared.bed | wc -l >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "chromatin_track_only" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
intersectBed -a overlapped_genes/%(track)s_%(chromatin_track)s.merged.bed -b overlapped_genes/%(track)s.bed -v > overlapped_genes/%(track)s_%(chromatin_track)s.%(chromatin_track)s.unique.bed;
cat overlapped_genes/%(track)s_%(chromatin_track)s.%(chromatin_track)s.unique.bed | wc -l >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
echo "track_only" >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
intersectBed -a overlapped_genes/%(track)s_%(chromatin_track)s.merged.bed -b %(bed)s -v > overlapped_genes/%(track)s_%(chromatin_track)s.%(track)s.unique.bed;
cat overlapped_genes/%(track)s_%(chromatin_track)s.%(track)s.unique.bed | wc -l >> overlapped_genes/%(track)s_%(chromatin_track)s.counts;
sed -i '{N;s/\\n/\\t/g}' overlapped_genes/%(track)s_%(chromatin_track)s.counts;
touch %(outfile)s '''
P.run()
############################################################
@follows( overlappedGeneChromatinIntersection )
@merge( "overlapped_genes/*.counts", "overlapped_genes/h3k27me3.stats" )
def overlappedGeneChromatinIntersectionStats(infiles, outfile):
    '''Combine per-track H3K27Me3 intersection counts into a single stats table'''
first = True
header = ""
outs = open(outfile, "w")
for infile in infiles:
f = open(infile, "r")
names = []
values = []
for line in f:
name, value = line.split("\t")
names.append(name.strip())
values.append(value.strip())
header = "\t".join(names)+"\n"
outline = "\t".join(values)+"\n"
if first:
outs.write(header)
first = False
outs.write(outline)
f.close()
outs.close()
############################################################
@transform( overlappedGeneChromatinIntersectionStats, suffix(".stats"), ".stats.load" )
def loadOverlappedGeneChromatinIntersection(infile, outfile):
    '''Load H3K27Me3 intersection stats into database'''
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=overlapped_genes_h3k27me3_venn
> %(outfile)s"""
P.run()
############################################################
@transform( overlappedGeneChromatinIntersectionStats, suffix(".stats"), ".stats.contingency_table" )
def OverlappedGeneChromatinIntersectionContingencyTable(infile, outfile):
    '''Build 2x2 contingency table of H3K27Me3 overlap for broad vs canonical intervals'''
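    # The resulting 2x2 table has rows "Canonical" (control gene set) and
    # "Broad" (overlapped gene set) and columns H3K27Me3 / no_H3K27Me3,
    # ready for the Fisher's exact test in the next task.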
chromatin_track = PARAMS["plots_h3k27_track"]
track = PARAMS["plots_fig4_tissue"]
species = PARAMS["species"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
tracks = []
# Extract data from db
cc = dbhandle.cursor()
query = ''' SELECT "Canonical" as interval_type, track_and_chromatin_track as H3K27Me3, track_only as no_H3K27Me3
FROM overlapped_genes_h3k27me3_venn
WHERE chromatin_track="%(chromatin_track)s"
AND track="%(species)s_%(track)s-cap.overlapped_genes.control"
UNION
SELECT "Broad" as interval_type, track_and_chromatin_track as H3K27Me3, track_only as no_H3K27Me3
FROM overlapped_genes_h3k27me3_venn
WHERE chromatin_track="%(chromatin_track)s"
AND track="%(species)s_%(track)s-cap.overlapped_genes"''' % locals()
E.info( query )
cc.execute( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_type\tH3K27Me3\tno_H3K27Me3\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform( OverlappedGeneChromatinIntersectionContingencyTable, regex(r"overlapped_genes/(\S+).stats.contingency_table"), r"overlapped_genes/"+PARAMS["species"]+r"_\1.stats.fisher.test.tsv" )
def OverlappedGeneChromatinIntersectionFisherTest(infile, outfile):
    '''Run Fisher's exact test on the H3K27Me3 contingency table'''
R('''x <- read.table(file='%(infile)s', header=TRUE, stringsAsFactors=FALSE, row.names=1)''' % locals() )
R('''res <- fisher.test(x)''')
R('''fisher_result <- data.frame(res[3]$estimate,res[1]$p.value,res[2]$conf.int[1],res[2]$conf.int[2])''')
R('''colnames(fisher_result) <- c("odds.ratio","p.value","conf.int.low","conf.int.high") ''')
R('''write.table(fisher_result, file="%(outfile)s", sep="\t", row.names=F) ''' % locals() )
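# For reference, the same 2x2 test can be run without rpy2 (illustrative
# sketch only; assumes SciPy is available, which this pipeline does not
# otherwise require):
def _fisherTest2x2( table ):
    '''Fisher's exact test on a 2x2 list-of-lists, e.g.
    [[broad_h3k27me3, broad_no], [canonical_h3k27me3, canonical_no]].
    Returns (odds_ratio, p_value); not used by the pipeline as written.'''
    from scipy.stats import fisher_exact
    return fisher_exact( table )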
############################################################
## Compare intervals to external bed files using GAT
@follows( buildGATWorkspace, mkdir("overlapped_genes/gat/") )
@merge( getOverlappedGeneGTF, "overlapped_genes/gat/overlapped_genes_gat.tsv" )
def runOverlappedGenesGAT(infiles, outfile):
'''Run genome association tester on bed files '''
to_cluster = True
segfiles = ""
for x in infiles:
track = P.snip(os.path.basename(x), ".gtf.gz")
statement = """zcat %(x)s | awk 'OFS="\\t" {print $1,$4-1,$5-1,"%(track)s"}' > overlapped_genes/gat/%(track)s.bed; """
P.run()
segfiles += " --segment-file=overlapped_genes/gat/%s.bed " % track
# External datasets
annofiles = ""
chromatin = P.asList(PARAMS["bed_h3k27me3"])
for y in chromatin:
track = P.snip(os.path.basename(y), ".bed")
statement = """cat %(y)s | awk 'OFS="\\t" {print $1,$2,$3,"%(track)s"}' > overlapped_genes/gat/%(track)s.bed; """
P.run()
annofiles += "--annotation-file=overlapped_genes/gat/%s.bed " % track
statement = """gatrun.py %(segfiles)s %(annofiles)s --workspace=gat/%(genome)s.bed.gz --num-samples=1000 --nbuckets=500000 --bucket-size=10 --force > %(outfile)s"""
P.run()
############################################################
@transform( runOverlappedGenesGAT, suffix(".tsv"), ".tsv.load" )
def loadOverlappedGenesGAT(infile, outfile):
'''Load genome association tester results into database '''
statement = """cat %(infile)s | grep -v "^#" | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=overlapped_genes_gat_results
> %(outfile)s"""
P.run()
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 7: Identify and annotate liver and testes tissue specific intervals
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows(copyCapseqReplicatedBedFiles, mkdir("liver_vs_testes") )
@files( (PARAMS["compare_liver_pattern"]+".replicated.bed", PARAMS["compare_testes_pattern"]+".replicated.bed"), "liver_vs_testes/liver.testes.venn" )
def liverTestesVenn(infiles, outfile):
'''identify interval overlap between liver and testes. Merge intervals first.'''
liver, testes = infiles
liver_name = P.snip( os.path.basename(liver), ".bed" )
testes_name = P.snip( os.path.basename(testes), ".bed" )
to_cluster = True
statement = '''cat %(liver)s %(testes)s | mergeBed -i stdin | awk 'OFS="\\t" {print $1,$2,$3,"merged"NR}' > liver_vs_testes/liver.testes.merge.bed;
echo "Total merged intervals" > %(outfile)s;
cat liver_vs_testes/liver.testes.merge.bed | wc -l >> %(outfile)s;
echo "Liver & testes" >> %(outfile)s;
intersectBed -a liver_vs_testes/liver.testes.merge.bed -b %(liver)s -u | intersectBed -a stdin -b %(testes)s -u > liver_vs_testes/liver.testes.shared.bed;
cat liver_vs_testes/liver.testes.shared.bed | wc -l >> %(outfile)s;
echo "Testes only" >> %(outfile)s;
intersectBed -a liver_vs_testes/liver.testes.merge.bed -b %(liver)s -v > liver_vs_testes/%(testes_name)s.liver.testes.unique.bed;
cat liver_vs_testes/%(testes_name)s.liver.testes.unique.bed | wc -l >> %(outfile)s;
echo "Liver only" >> %(outfile)s;
intersectBed -a liver_vs_testes/liver.testes.merge.bed -b %(testes)s -v > liver_vs_testes/%(liver_name)s.liver.testes.unique.bed;
cat liver_vs_testes/%(liver_name)s.liver.testes.unique.bed | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; '''
P.run()
############################################################
@follows(copyCapseqReplicatedBedFiles, mkdir("liver_vs_testes") )
@files( (PARAMS["compare_liver_pattern"]+".replicated.bed", PARAMS["compare_testes_pattern"]+".replicated.bed"), ("liver_vs_testes/liver_nmi.liver.testes.shared.bed", "liver_vs_testes/testes_nmi.liver.testes.shared.bed", "liver_vs_testes/liver_nmi.liver.testes.uniq.bed", "liver_vs_testes/testes_nmi.liver.testes.uniq.bed") )
def liverTestesCompare(infiles, outfiles):
    '''identify shared and unique intervals between liver and testes (no merging)'''
liver, testes = infiles
to_cluster = False
statement = '''intersectBed -a %(liver)s -b %(testes)s -u > liver_vs_testes/liver_nmi.liver.testes.shared.bed;
intersectBed -a %(testes)s -b %(liver)s -u > liver_vs_testes/testes_nmi.liver.testes.shared.bed;
intersectBed -a %(liver)s -b %(testes)s -v > liver_vs_testes/liver_nmi.liver.testes.uniq.bed;
intersectBed -a %(testes)s -b %(liver)s -v > liver_vs_testes/testes_nmi.liver.testes.uniq.bed; ''' % locals()
P.run()
############################################################
@transform( liverTestesVenn, suffix(".venn"), ".venn.load" )
def loadLiverTestesVenn(infile, outfile):
'''Load liver testes venn overlap into database '''
header = "category,intervals"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows(copyCapseqReplicatedBedFiles, exportCapseqIntergenicBed)
@files( (PARAMS["compare_liver_pattern"]+".replicated.intergenic.bed", PARAMS["compare_testes_pattern"]+".replicated.intergenic.bed"), "liver_vs_testes/liver.testes.intergenic.venn" )
def liverTestesIntergenicVenn(infiles, outfile):
'''identify interval overlap between liver and testes for non-TSS associated intervals. Merge intervals first.'''
liver, testes = infiles
to_cluster = True
statement = '''cat %(liver)s %(testes)s | mergeBed -i stdin > liver_vs_testes/liver.testes.intergenic.merge.bed;
echo "Total merged intervals" > %(outfile)s; cat liver_vs_testes/liver.testes.intergenic.merge.bed | wc -l >> %(outfile)s;
echo "Liver & testes" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.intergenic.merge.bed -b %(liver)s -u | intersectBed -a stdin -b %(testes)s -u | wc -l >> %(outfile)s;
echo "Testes only" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.intergenic.merge.bed -b %(liver)s -v | wc -l >> %(outfile)s;
echo "Liver only" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.intergenic.merge.bed -b %(testes)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; '''
P.run()
############################################################
@transform( liverTestesIntergenicVenn, suffix(".venn"), ".venn.load" )
def loadLiverTestesIntergenicVenn(infile, outfile):
'''Load liver testes venn overlap into database '''
header = "category,intervals"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_intergenic_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows(liverTestesVenn)
@files( "liver_vs_testes/liver.testes.shared.bed", "liver_vs_testes/liver.testes.shared.bed.load" )
def loadLiverTestesShared(infile, outfile):
'''Load liver testes shared intervals into database '''
header = "contig,start,end,interval_id"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_shared_intervals
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows(liverTestesVenn)
@transform( "liver_vs_testes/*.liver.testes.unique.bed", suffix(".liver.testes.unique.bed"), ".liver.testes.unique.bed.load" )
def loadLiverTestesUnique(infile, outfile):
'''Load liver testes unique intervals into database '''
header = "contig,start,end,interval_id"
track = P.snip(os.path.basename(infile), ".liver.testes.unique.bed")
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=%(track)s_liver_testes_unique_intervals
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows(liverTestesVenn)
@files( "liver_vs_testes/liver.testes.merge.bed", "liver_vs_testes/liver.testes.merge.bed.load" )
def loadLiverTestesMerge(infile, outfile):
    '''Load liver testes merged intervals into database '''
header = "contig,start,end,interval_id"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_merged_intervals
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows(loadLiverTestesShared, loadLiverTestesUnique, loadLiverTestesMerge)
@merge( "liver_vs_testes/*.liver.testes.unique.bed", "liver_vs_testes/liver.testes.merge.sort.bed")
def exportLiverTestesMergeWithSort( infiles, outfile):
    '''Query database to produce a bed file sortable by liver/testes overlap category and then by interval length'''
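    # The sort key built below prefixes each interval with "sh_" (shared)
    # or "a<i>_" (unique to track i) followed by its length zero-padded to
    # six digits via substr('000000' || length, -6, 6), e.g. 1234 ->
    # "sh_001234", so a lexical sort groups intervals by category and then
    # by length.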
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
tracks = []
for infile in infiles:
t = P.snip(os.path.basename(infile), ".liver.testes.unique.bed").replace("-","_").replace(".","_")
tracks.append(t)
# Extract data from db
cc = dbhandle.cursor()
query = '''SELECT m.contig, m.start, m.end, m.interval_id,
"sh_" || substr('000000' || (m.end-m.start), -6, 6) as sort
FROM liver_testes_merged_intervals m, liver_testes_shared_intervals s
WHERE m.interval_id=s.interval_id ''' % locals()
for i, t in enumerate(tracks):
query += '''UNION
SELECT m.contig, m.start, m.end, m.interval_id,
"a%(i)s_" || substr('000000' || (m.end-m.start), -6, 6) as sort
FROM liver_testes_merged_intervals m, %(t)s_liver_testes_unique_intervals u%(i)s
WHERE m.interval_id=u%(i)s.interval_id ''' % locals()
    E.info( query )
cc.execute( query )
# Write to file
outs = open( outfile, "w")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
############################################################
## Analyse liver/testes tissue specific intervals
@follows( liverTestesVenn )
@files( "liver_vs_testes/liver.testes.merge.bed", "liver_vs_testes/liver.testes.merge.geneset_overlap" )
def annotateLiverTestesMergedGenesetOverlap( infile, outfile ):
'''classify intervals according to their base pair overlap with respect to different genomic features (genes, TSS, upstream/downstream flanks) '''
to_cluster = True
feature_list = P.asList( PARAMS["geneset_feature_list"] )
outfiles = ""
first = True
for feature in feature_list:
feature_name = P.snip( os.path.basename( feature ), ".gtf" ).replace(".","_")
outdir = os.path.dirname( outfile )
outfiles += " %(outfile)s.%(feature_name)s " % locals()
if first:
cut_command = "cut -f1,4,5,6,8 "
first = False
else:
cut_command = "cut -f4,5,6 "
statement = """
cat %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=overlap
--counter=length
--log=%(outfile)s.log
--filename-gff=%(geneset_dir)s/%(feature)s
--genome-file=%(genome_dir)s/%(genome)s
| %(cut_command)s
| sed s/nover/%(feature_name)s_nover/g
| sed s/pover/%(feature_name)s_pover/g
| sed s/min/length/
> %(outfile)s.%(feature_name)s"""
P.run()
# Paste output together
statement = '''paste %(outfiles)s > %(outfile)s'''
P.run()
############################################################
@transform( annotateLiverTestesMergedGenesetOverlap, suffix(".geneset_overlap"), ".geneset_overlap.load" )
def loadLiverTestesMergedGenesetOverlap( infile, outfile ):
'''load interval annotations: genome architecture '''
geneset_name = PARAMS["geneset_name"]
track= P.snip( os.path.basename(infile), ".geneset_overlap").replace(".","_").replace("-","_")
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=liver_testes_merged_%(geneset_name)s_overlap
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@follows( liverTestesVenn )
@files( "liver_vs_testes/liver.testes.merge.bed", "liver_vs_testes/liver.testes.merge.transcript.tss.distance" )
def annotateLiverTestesMergedTranscriptTSSDistance( infile, outfile ):
'''Compute distance from CAPseq intervals to nearest transcript TSS'''
to_cluster = True
annotation_file = os.path.join( PARAMS["geneset_dir"],PARAMS["geneset_transcript_tss"] )
statement = """cat < %(infile)s
| python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=distance-tss
--log=%(outfile)s.log
--filename-gff=%(annotation_file)s
--filename-format="bed"
> %(outfile)s"""
P.run()
############################################################
@transform( annotateLiverTestesMergedTranscriptTSSDistance, suffix( ".transcript.tss.distance"), ".transcript.tss.distance.load" )
def loadLiverTestesMergedTranscriptTSSDistance( infile, outfile ):
'''Load CAPseq interval annotations: distance to transcript transcription start sites '''
track= P.snip( os.path.basename(infile), ".transcript.tss.distance").replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=liver_testes_merged_%(geneset_name)s_transcript_tss_distance
--index=gene_id
--index=closest_id
--index=id5
--index=id3
> %(outfile)s; """
P.run()
############################################################
@transform( loadLiverTestesMergedTranscriptTSSDistance, suffix(".transcript.tss.distance.load"), ".transcript.tss.distance.export" )
def exportLiverTestesTSSTranscriptList( infile, outfile ):
    '''Export mapping of merged liver/testes intervals to transcripts with a TSS within 1kb'''
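    # closest_id may hold a comma-separated list of equidistant transcripts;
    # the write loop below expands each interval/transcript pair onto its
    # own row.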
track = P.snip( os.path.basename( infile ), ".transcript.tss.distance.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct t.gene_id, t.closest_id
FROM liver_testes_merged_intervals i,
liver_testes_merged_%(geneset_name)s_transcript_tss_distance t
WHERE i.interval_id=t.gene_id
AND t.closest_dist < 1000 ''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("interval_id\ttranscript_id\n")
for result in cc:
pre = ""
interval_id,transcripts = result
transcript_list = transcripts.split(",")
for t in transcript_list:
outs.write("%s\t%s\n" % (interval_id, str(t)) )
cc.close()
outs.close()
############################################################
@transform( exportLiverTestesTSSTranscriptList, suffix( ".transcript.tss.distance.export"), ".transcript.tss.distance.export.load" )
def loadLiverTestesTSSTranscriptList( infile, outfile ):
    '''Load interval-to-transcript mapping into database'''
geneset_name = PARAMS["geneset_name"]
statement = """cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_merged_%(geneset_name)s_interval_transcript_mapping
--index=gene_id
--index=interval_id
> %(outfile)s; """
P.run()
############################################################
@follows( liverTestesVenn )
@files( "liver_vs_testes/liver.testes.merge.bed", "liver_vs_testes/liver.testes.merge.composition" )
def annotateLiverTesteMergedComposition( infile, outfile ):
    '''Establish the nucleotide composition of merged liver/testes intervals'''
to_cluster = True
statement = """cat %(infile)s | python %(scriptsdir)s/bed2gff.py --as-gtf
| python %(scriptsdir)s/gtf2table.py
--counter=composition-cpg
--log=%(outfile)s.log
--genome-file=%(genome_dir)s/%(genome)s
> %(outfile)s; """
P.run()
############################################################
@transform( annotateLiverTesteMergedComposition, suffix( ".composition"), ".composition.load" )
def loadLiverTesteMergedComposition( infile, outfile ):
    '''Load the nucleotide composition of merged liver/testes intervals'''
statement = """cat %(infile)s | python ~/src/csv2db.py
--database=%(database)s
--table=liver_testes_merged_composition
--index=gene_id
> %(outfile)s; """
P.run()
############################################################
@follows(copyCapseqReplicatedBedFiles, exportCapseqTSSBed)
@files( (PARAMS["compare_liver_pattern"]+".replicated.transcript.tss.bed", PARAMS["compare_testes_pattern"]+".replicated.transcript.tss.bed"), "liver_vs_testes/liver.testes.transcript.tss.venn" )
def liverTestesTSSVenn(infiles, outfile):
'''identify interval overlap between liver and testes for TSS associated intervals. Merge intervals first.'''
liver, testes = infiles
to_cluster = True
statement = '''cat %(liver)s %(testes)s | mergeBed -i stdin > liver_vs_testes/liver.testes.tss.merge.bed;
echo "Total merged intervals" > %(outfile)s; cat liver_vs_testes/liver.testes.tss.merge.bed | wc -l >> %(outfile)s;
echo "Liver & testes" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.tss.merge.bed -b %(liver)s -u | intersectBed -a stdin -b %(testes)s -u | wc -l >> %(outfile)s;
echo "Testes only" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.tss.merge.bed -b %(liver)s -v | wc -l >> %(outfile)s;
echo "Liver only" >> %(outfile)s; intersectBed -a liver_vs_testes/liver.testes.tss.merge.bed -b %(testes)s -v | wc -l >> %(outfile)s;
sed -i '{N;s/\\n/\\t/g}' %(outfile)s; '''
P.run()
############################################################
@transform( liverTestesTSSVenn, suffix(".venn"), ".venn.load" )
def loadLiverTestesTSSVenn(infile, outfile):
'''Load liver testes venn overlap into database '''
header = "category,intervals"
statement = '''cat %(infile)s | python %(scriptsdir)s/csv2db.py
--database=%(database)s
--table=liver_testes_tss_venn
--header=%(header)s
> %(outfile)s '''
P.run()
############################################################
@follows( exportLiverTestesMergeWithSort )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"liver_vs_testes/\1.replicated.liver.testes.merge.reads.peakshape.gz" )
def getPeakShapeLiverTestesReads(infile, outfile):
    '''Compute read-centred peak-shape matrices over merged liver/testes intervals'''
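    # bam2peakshape counts reads in 10 bp bins within a 3000 bp window
    # around each interval; --centring-method=reads centres the window on
    # the point of maximum read density, whereas the companion task below
    # centres on the interval midpoint.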
track = P.snip( os.path.basename( infile ), ".replicated.bed" )
bedfile = "liver_vs_testes/liver.testes.merge.sort.bed"
bamfile = "../merged_bams/%s.merge.bam" % track
assert os.path.exists( bamfile ), "could not find bamfile %s for track %s" % ( bamfile, track )
statement = '''python %(scriptsdir)s/bam2peakshape.py %(bamfile)s %(bedfile)s
--output-filename-pattern=%(outfile)s.%%s
--sort=peak-width
--sort=peak-height
--sort=interval-width
--sort=interval-score
--window-size=3000
--bin-size=10
--normalization=sum
--centring-method=reads
--force
--log=%(outfile)s.log
| gzip > %(outfile)s '''
P.run()
############################################################
@follows( exportLiverTestesMergeWithSort )
@transform( copyCapseqReplicatedBedFiles, regex(r"(\S+).replicated.bed"), r"liver_vs_testes/\1.replicated.liver.testes.merge.centre.peakshape.gz" )
def getPeakShapeLiverTestesCentre(infile, outfile):
    '''Compute midpoint-centred peak-shape matrices over merged liver/testes intervals'''
track = P.snip( os.path.basename( infile ), ".replicated.bed" )
bedfile = "liver_vs_testes/liver.testes.merge.sort.bed"
bamfile = "../merged_bams/%s.merge.bam" % track
assert os.path.exists( bamfile ), "could not find bamfile %s for track %s" % ( bamfile, track )
statement = '''python %(scriptsdir)s/bam2peakshape.py %(bamfile)s %(bedfile)s
--output-filename-pattern=%(outfile)s.%%s
--sort=peak-width
--sort=peak-height
--sort=interval-width
--sort=interval-score
--window-size=3000
--bin-size=10
--normalization=sum
--centring-method=middle
--force
--log=%(outfile)s.log
| gzip > %(outfile)s '''
P.run()
############################################################
@follows( liverTestesVenn )
@files( "liver_vs_testes/*.liver.testes.unique.bed", "liver_vs_testes/liver.testes.chromatin.log" )
def liverTestesUniqueChromatinProfile(infiles, outfile):
'''plot chromatin mark profiles over tissue-specific CAPseq intervals'''
chromatin = P.asList(PARAMS["bigwig_chromatin"])
if len(chromatin[0]) > 0:
for infile in infiles:
track = P.snip( os.path.basename(infile), ".liver.testes.unique.bed" )
outtemp = P.getTempFile()
tmpfilename = outtemp.name
for bw in chromatin:
chromatin_track = P.snip( os.path.basename(bw), ".bam" )
ofp = "liver_vs_testes/" + track + "." + chromatin_track + ".profile"
statement = '''cat %(infile)s | python %(scriptsdir)s/bed2gff.py --as-gtf | gzip > %(tmpfilename)s.gtf.gz;
python %(scriptsdir)s/bam2geneprofile.py
--bamfile=%(bw)s
--gtffile=%(tmpfilename)s.gtf.gz
--output-filename-pattern=%(ofp)s
--reporter=gene
--method=intervalprofile
--log=%(outfile)s
--normalization=total-sum
--normalize-profile=area
--normalize-profile=counts
--normalize-profile=none'''
P.run()
else:
statement = '''touch %(outfile)s '''
P.run()
############################################################
############################################################
## Export gene lists
@follows(loadLiverTestesTSSTranscriptList)
@transform( loadLiverTestesUnique, suffix(".liver.testes.unique.bed.load"), ".liver.testes.unique.genelist" )
def exportLiverTestesSpecificCAPseqGenes( infile, outfile ):
'''Export liver vs testes tissue specific CAPseq genes '''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace("-","_").replace(".","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct a.gene_id
FROM %(track)s_liver_testes_unique_intervals u, annotations.transcript_info a,
liver_testes_merged_%(geneset_name)s_transcript_tss_distance t, liver_testes_merged_intervals i,
liver_testes_merged_%(geneset_name)s_interval_transcript_mapping m
WHERE i.interval_id=t.gene_id
AND i.contig=u.contig
AND i.start=u.start
AND t.closest_dist < 1000
AND a.gene_biotype='protein_coding'
AND m.interval_id=t.gene_id
AND a.transcript_id = m.transcript_id ''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@follows(loadLiverTestesTSSTranscriptList)
@transform(loadLiverTestesShared, suffix(".shared.bed.load"), ".shared.genelist")
def exportLiverTestesSharedCAPseqGenes( infile, outfile ):
'''Export list of genes with TSS associated CAPseq intervals in both liver and testes'''
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT distinct a.gene_id
FROM liver_testes_shared_intervals s, annotations.transcript_info a,
liver_testes_merged_%(geneset_name)s_transcript_tss_distance t, liver_testes_merged_intervals i,
liver_testes_merged_%(geneset_name)s_interval_transcript_mapping m
WHERE i.interval_id=t.gene_id
AND i.contig=s.contig
AND i.start=s.start
AND t.closest_dist < 1000
AND a.gene_biotype='protein_coding'
AND m.interval_id=t.gene_id
AND a.transcript_id = m.transcript_id''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@follows(liverTestesVenn)
@transform( "liver_vs_testes/*.replicated.liver.testes.unique.bed", suffix(".bed"), ".length" )
def exportLiverTestesUniqueLength( infile, outfile ):
'''Export length of CAPseq intervals'''
statement = '''cat %(infile)s | awk '{print $3-$2}' > %(outfile)s'''
P.run()
############################################################
@follows(liverTestesVenn)
@transform( "liver_vs_testes/liver.testes.shared.bed", suffix(".bed"), ".length" )
def exportLiverTestesSharedLength( infile, outfile ):
'''Export length of CAPseq intervals'''
statement = '''cat %(infile)s | awk '{print $3-$2}' > %(outfile)s'''
P.run()
############################################################
@merge(liverTestesCompare, "liver_testes_lengths.log")
def exportLiverTestesIntervalLengths( infiles, outfile ):
    '''Export lengths of shared and unique liver/testes CAPseq intervals'''
for bed in infiles:
out = bed.replace(".bed",".length")
statement = '''cat %(bed)s | awk '{print $3-$2}' > %(out)s 2>> %(outfile)s '''
P.run()
############################################################
@transform(loadLiverTestesUnique, suffix(".unique.bed.load"), ".shared.cpg_obsexp")
def exportLiverTestesSharedCpGObsExp( infile, outfile ):
    '''Export CpG observed/expected ratios for intervals shared between liver and testes'''
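    # The EXCEPT clause in the query below subtracts the tissue-unique
    # intervals from the full composition table, leaving values for the
    # shared intervals only.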
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
# Extract data from db
query = '''SELECT CpG_ObsExp
FROM %(track)s_capseq_composition
EXCEPT
SELECT CpG_ObsExp
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
###########################################################
@transform(loadLiverTestesUnique, suffix(".bed.load"), ".cpg_obsexp")
def exportLiverTestesUniqueCpGObsExp( infile, outfile ):
    '''Export CpG observed/expected ratios for tissue-unique intervals'''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
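# Joining on contig and start restricts the values to tissue-unique intervals.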
query = '''SELECT CpG_ObsExp
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform(loadLiverTestesUnique, suffix(".unique.bed.load"), ".shared.gc_content")
def exportLiverTestesSharedGC( infile, outfile ):
'''Export GC content of CAPseq intervals shared between liver and testes'''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
# Extract data from db
query = '''SELECT pGC
FROM %(track)s_capseq_composition
EXCEPT
SELECT pGC
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
###########################################################
@transform(loadLiverTestesUnique, suffix(".bed.load"), ".gc_content")
def exportLiverTestesUniqueGC( infile, outfile ):
'''Export GC content of tissue-unique CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT pGC
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
@transform(loadLiverTestesUnique, suffix(".unique.bed.load"), ".shared.cpg_density")
def exportLiverTestesSharedCpGDensity( infile, outfile ):
'''Export CpG density of CAPseq intervals shared between liver and testes'''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
# Extract data from db
query = '''SELECT pCpG
FROM %(track)s_capseq_composition
EXCEPT
SELECT pCpG
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
###########################################################
@transform(loadLiverTestesUnique, suffix(".bed.load"), ".cpg_density")
def exportLiverTestesUniqueCpGDensity( infile, outfile ):
'''Export CpG density of tissue-unique CAPseq intervals'''
track = P.snip( os.path.basename( infile ), ".liver.testes.unique.bed.load" ).replace(".","_").replace("-","_")
geneset_name = PARAMS["geneset_name"]
# Connect to DB
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = "ATTACH DATABASE '%s' AS annotations; " % (PARAMS["geneset_database"])
cc.execute(statement)
# Extract data from db
query = '''SELECT pCpG
FROM %(track)s_capseq_composition c, %(track)s_intervals i,
%(track)s_liver_testes_unique_intervals s
WHERE c.gene_id=i.interval_id
AND i.contig=s.contig
AND i.start=s.start''' % locals()
cc.execute( query )
E.info( query )
# Write to file
outs = open( outfile, "w")
#outs.write("gene_id\n")
for result in cc:
pre = ""
for r in result:
outs.write("%s%s" % (pre, str(r)) )
pre = "\t"
outs.write("\n")
cc.close()
outs.close()
############################################################
############################################################
## GO analysis
@follows( mkdir("liver_vs_testes/go") )
@transform( exportLiverTestesSpecificCAPseqGenes, suffix(".liver.testes.unique.genes.export"), ".liver.testes.unique.genes.go" )
def runGOOnGeneLists( infile, outfile ):
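'''Run GO term enrichment on the exported liver/testes-specific gene lists.
A bg_file of None presumably leaves the background set to PipelineGO's default.'''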
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "liver_vs_testes/go",
fg_file = infile,
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_full_obo"] ),
minimum_counts = PARAMS["go_minimum_counts"] )
############################################################
@transform( exportLiverTestesSpecificCAPseqGenes, suffix(".liver.testes.unique.genes.export"), ".liver.testes.unique.genes.goslim" )
def runGOSlimOnGeneLists( infile, outfile ):
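'''Run GOslim enrichment on the exported liver/testes-specific gene lists.
Note the outdir here is "go", not "liver_vs_testes/go" as in the full GO run.'''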
PipelineGO.runGOFromFiles( outfile = outfile,
outdir = "go",
fg_file = infile,
bg_file = None,
go_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim"] ),
ontology_file = os.path.join(PARAMS["geneset_dir"], PARAMS["go_slim_obo"]),
minimum_counts = PARAMS["go_minimum_counts"] )
########################################################################################################################
########################################################################################################################
########################################################################################################################
## Section 8: Plot paper figures in R
########################################################################################################################
########################################################################################################################
########################################################################################################################
@follows( mkdir("plots") )
@transform(getCapseqCGIOverlapCount, regex(r"(\S+).cgi_overlap"), r"plots/\1.nmi.cgi.venn.pdf")
def plotFigure1b( infile, outfile):
'''Figure 1b: Venn diagrams of CAPseq NMIs vs UCSC CGIs'''
track= P.snip( os.path.basename(infile), ".cgi_overlap").replace(".","_").replace("-","_")
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''SELECT overlap FROM %(track)s_cgi_venn where track like "%%ucsc%%"''' % locals()
print(statement)
cc.execute( statement )
overlap=int(cc.fetchone()[0])
statement = '''SELECT intervals FROM external_interval_sets where bed like "%%ucsc%%"''' % locals()
print(statement)
cc.execute( statement )
cgi=int(cc.fetchone()[0])
statement = '''SELECT count(*) FROM %(track)s_intervals ''' % locals()
print(statement)
cc.execute( statement )
nmi=int(cc.fetchone()[0])
offset = cgi - overlap
nmi2 = offset+int(nmi)
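# VennDiagram infers set sizes from membership, so build two integer ranges
# that overlap by exactly 'overlap' elements.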
R('''library(VennDiagram) ''')
R('''CGI <- seq(1,%(cgi)i)''' % locals() )
R('''NMI <- seq(%(offset)i,%(nmi2)i)''' % locals() )
R('''x <- list(CGI=CGI,NMI=NMI)''' )
R('''pdf(file='%(outfile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", fill=c("#EC1C24","#69BC45"), alpha=0.75, label.col=c("darkred", "white", "darkgreen"), cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
############################################################
@follows( mkdir("plots"), getReplicatedTranscriptTSSProfileCapseq )
@transform("tss-profile/*.replicated.transcript.tss-profile.capseq.counts.tsv.gz", regex(r"tss-profile/(\S+).replicated.transcript.tss-profile.capseq.counts.tsv.gz"), r"plots/\1.combined.tss-profile.pdf")
def plotFigure2b( infile, outfile):
'''Figure 2b: TSS profiles for CAPseq and non-CAPseq genes'''
capseq = infile
scriptsdir = PARAMS["scriptsdir"]
nocapseq = capseq.replace("capseq", "nocapseq")
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''combinedTSSPlot(capseqfile="%(capseq)s", nocapseqfile="%(nocapseq)s", outfile="%(outfile)s", ylimit=c(0,10), scale=1)''' % locals() )
############################################################
@follows( mkdir("plots"), getReplicatedTranscriptTSSProfileCapseq )
@transform("tss-profile/*.replicated.gene.tss-profile.capseq.counts.tsv.gz", regex(r"tss-profile/(\S+).replicated.gene.tss-profile.capseq.counts.tsv.gz"), r"plots/\1.combined.gene.tss-profile.pdf")
def plotFigure2bGene( infile, outfile):
'''Figure 2b: TSS profiles for CAPseq and non-CAPseq genes'''
capseq = infile
scriptsdir = PARAMS["scriptsdir"]
nocapseq = capseq.replace("capseq", "nocapseq")
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''combinedTSSPlot(capseqfile="%(capseq)s", nocapseqfile="%(nocapseq)s", outfile="%(outfile)s", ylimit=c(0,10), scale=1)''' % locals() )
############################################################
@follows( mkdir("plots") )
@transform(loadLiverTestesTSSVenn, regex(r"liver_vs_testes/(\S+).load"), r"plots/"+PARAMS["species"]+r"_\1.pdf")
def plotFigure3bTSSVenn( infile, outfile):
'''Figure 3b: Venn diagram of liver and testes TSS-associated CAPseq intervals'''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''SELECT * FROM liver_testes_tss_venn'''
print(statement)
cc.execute( statement )
total=int(cc.fetchone()[1])
liverAndTestes=int(cc.fetchone()[1])
testes=int(cc.fetchone()[1])
liver=int(cc.fetchone()[1])
cc.close()
liver_total = liver+liverAndTestes
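# Build two integer ranges whose overlap corresponds to the count of
# intervals shared between the two tissues.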
R('''library(VennDiagram) ''')
R('''liver <- seq(1,%(liver_total)i)''' % locals() )
R('''testes <- seq(%(liver)i,%(total)i)''' % locals() )
R('''x <- list(Liver=liver,Testes=testes)''' )
R('''pdf(file='%(outfile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", fill=c("#EC1C24","#69BC45"), alpha=0.75, label.col=c("darkred", "white", "darkgreen"), cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
############################################################
@follows( mkdir("plots") )
@transform(loadLiverTestesIntergenicVenn, regex(r"liver_vs_testes/(\S+).load"), r"plots/"+PARAMS["species"]+r"_\1.pdf")
def plotFigure3bIntergenicVenn( infile, outfile):
'''Figure 3b: Venn diagram of liver and testes intergenic CAPseq intervals'''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''SELECT * FROM liver_testes_intergenic_venn'''
print(statement)
cc.execute( statement )
total=int(cc.fetchone()[1])
liverAndTestes=int(cc.fetchone()[1])
testes=int(cc.fetchone()[1])
liver=int(cc.fetchone()[1])
cc.close()
liver_total = liver+liverAndTestes
R('''library(VennDiagram) ''')
R('''liver <- seq(1,%(liver_total)i)''' % locals() )
R('''testes <- seq(%(liver)i,%(total)i)''' % locals() )
R('''x <- list(Liver=liver,Testes=testes)''' )
R('''pdf(file='%(outfile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", fill=c("#EC1C24","#69BC45"), alpha=0.75, label.col=c("darkred", "white", "darkgreen"), cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
############################################################
@follows( mkdir("plots"),exportLiverTestesIntervalLengths )
@files(("liver_vs_testes/*nmi.liver.testes.shared.length","liver_vs_testes/*nmi.liver.testes.uniq.length"), "plots/"+PARAMS["species"]+".liver.testes.length.pdf")
def plotFigure3Length( infiles, outfile):
'''Figure 3 supplementary: length of liver and testes unique intervals compared to shared'''
liver_shared, testes_shared, liver_uniq, testes_uniq = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''sharesVsUniqueLengthPlot(liver_shared="%(liver_shared)s", liver_unique="%(liver_uniq)s", testes_shared="%(testes_shared)s", testes_unique="%(testes_uniq)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( mkdir("plots"),exportLiverTestesUniqueCpGObsExp,exportLiverTestesSharedCpGObsExp )
@files("liver_vs_testes/*.cpg_obsexp", "plots/"+PARAMS["species"]+".liver.testes.cpg_obsexp.pdf")
def plotFigure3CpGObsExp( infiles, outfile):
'''Figure 3 supplementary: CpG Obs/Exp of liver and testes unique intervals compared to shared'''
liver_shared, liver_uniq, testes_shared, testes_uniq = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''sharesVsUniqueCpgPlot(liver_shared="%(liver_shared)s", liver_unique="%(liver_uniq)s", testes_shared="%(testes_shared)s", testes_unique="%(testes_uniq)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( mkdir("plots"),exportLiverTestesUniqueGC,exportLiverTestesSharedGC )
@files("liver_vs_testes/*.gc_content", "plots/"+PARAMS["species"]+".liver.testes.gc_content.pdf")
def plotFigure3GC( infiles, outfile):
'''Figure 3 supplementary: GC content of liver and testes unique intervals compared to shared'''
liver_shared, liver_uniq, testes_shared, testes_uniq = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''sharesVsUniqueCpgPlot(liver_shared="%(liver_shared)s", liver_unique="%(liver_uniq)s", testes_shared="%(testes_shared)s", testes_unique="%(testes_uniq)s", outfile="%(outfile)s", xlabel="GC content", xlimit=c(0,1))''' % locals() )
############################################################
@follows( mkdir("plots"),exportLiverTestesUniqueCpGDensity,exportLiverTestesSharedCpGDensity )
@files("liver_vs_testes/*.cpg_density", "plots/"+PARAMS["species"]+".liver.testes.cpg_density.pdf")
def plotFigure3CpGDensity( infiles, outfile):
'''Figure 3 supplementary: CpG density of liver and testes unique intervals compared to shared'''
liver_shared, liver_uniq, testes_shared, testes_uniq = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''sharesVsUniqueCpgPlot(liver_shared="%(liver_shared)s", liver_unique="%(liver_uniq)s", testes_shared="%(testes_shared)s", testes_unique="%(testes_uniq)s", outfile="%(outfile)s", xlabel="CpG Density", xlimit=c(0,0.3))''' % locals() )
############################################################
@follows( liverTestesUniqueChromatinProfile, mkdir("plots") )
@merge("liver_vs_testes/"+PARAMS["species"]+"_testes*H3K4Me3-1*profile.area.tsv.gz", r"plots/"+PARAMS["species"]+r"_testes_unique_intervals_H3K4Me3_profile.pdf")
def plotFigure3cH3K4Me3Testes( infiles, outfile):
'''Figure 3c: Liver and testes H3K4Me3 reads over liver and testes unique intervals'''
if len(infiles) == 2:
inlist = "','".join(infiles)
inlist = "'"+inlist+"'"
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''infiles <- c(%(inlist)s) ''' % locals() )
R('''liverTestesChromatinPlot(infiles=infiles, outfile="%(outfile)s")''' % locals() )
############################################################
@follows( liverTestesUniqueChromatinProfile, mkdir("plots") )
@merge("liver_vs_testes/"+PARAMS["species"]+"_liver*H3K4Me3-1*profile.area.tsv.gz", r"plots/"+PARAMS["species"]+r"_liver_unique_intervals_H3K4Me3_profile.pdf")
def plotFigure3cH3K4Me3Liver( infiles, outfile):
'''Figure 3c: Liver and testes H3K4Me3 reads over liver and testes unique intervals'''
if len(infiles) == 2:
inlist = "','".join(infiles)
inlist = "'"+inlist+"'"
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''infiles <- c(%(inlist)s) ''' % locals() )
R('''liverTestesChromatinPlot(infiles=infiles, outfile="%(outfile)s")''' % locals() )
############################################################
@follows( exportLiverTestesSpecificCAPseqGenes, exportLiverTestesSharedCAPseqGenes, mkdir("plots") )
@merge("liver_vs_testes/*unique.genelist", r"plots/"+PARAMS["species"]+r"_liver_testes_unique_interval_dx_scatter_rpkm.pdf")
def plotFigure3dxScatterRPKM( infiles, outfile):
'''Figure 3: differential expression of genes with liver and testes specific NMIs'''
if len(infiles) == 2:
liver, testes = infiles
shared = "liver_vs_testes/liver.testes.shared.genelist"
scriptsdir = PARAMS["scriptsdir"]
rpkm = PARAMS["expression_rpkm"]
R('''source("%(scriptsdir)s/R/proj007/rpkm.R") ''' % locals() )
R('''plot_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(rpkm)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( exportLiverTestesSpecificCAPseqGenes, exportLiverTestesSharedCAPseqGenes, mkdir("plots") )
@merge("liver_vs_testes/*unique.genelist", r"plots/"+PARAMS["species"]+r"_liver_testes_unique_interval_dx_2fold_rpkm.pdf")
def plotFigure3TwoFolddxRPKM( infiles, outfile):
'''Figure 3: differential expression of genes with liver and testes specific NMIs'''
if len(infiles) == 2:
liver, testes = infiles
shared = "liver_vs_testes/liver.testes.shared.genelist"
scriptsdir = PARAMS["scriptsdir"]
rpkm = PARAMS["expression_rpkm"]
R('''source("%(scriptsdir)s/R/proj007/rpkm.R") ''' % locals() )
print('''foldchange_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(rpkm)s", outfile="%(outfile)s")''' % locals())
R('''foldchange_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(rpkm)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( exportLiverTestesSpecificCAPseqGenes, exportLiverTestesSharedCAPseqGenes, mkdir("plots") )
@merge("liver_vs_testes/*unique.genelist", r"plots/"+PARAMS["species"]+r"_liver_testes_unique_interval_dx_density_rpkm.pdf")
def plotFigure3dxRPKMDist( infiles, outfile):
'''Figure 3: differential expression of genes with liver and testes specific NMIs'''
if len(infiles) == 2:
liver, testes = infiles
shared = "liver_vs_testes/liver.testes.shared.genelist"
scriptsdir = PARAMS["scriptsdir"]
rpkm = PARAMS["expression_rpkm"]
R('''source("%(scriptsdir)s/R/proj007/rpkm.R") ''' % locals() )
R('''density_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(rpkm)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( exportLiverTestesSpecificCAPseqGenes, exportLiverTestesSharedCAPseqGenes, mkdir("plots") )
@merge("liver_vs_testes/*unique.genelist", r"plots/"+PARAMS["species"]+r"_liver_testes_unique_interval_dx_scatter_counts.pdf")
def plotFigure3dxScatterReadCounts( infiles, outfile):
'''Figure 3: differential expression of genes with liver and testes specific NMIs'''
if len(infiles) == 2:
liver, testes = infiles
shared = "liver_vs_testes/liver.testes.shared.genelist"
scriptsdir = PARAMS["scriptsdir"]
counts = PARAMS["expression_counts"]
R('''source("%(scriptsdir)s/R/proj007/rpkm.R") ''' % locals() )
R('''plot_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(counts)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( exportLiverTestesSpecificCAPseqGenes, exportLiverTestesSharedCAPseqGenes, mkdir("plots") )
@merge("liver_vs_testes/*unique.genelist", r"plots/"+PARAMS["species"]+r"_liver_testes_unique_interval_dx_boxplot_rpkm.pdf")
def plotFigure3dxBoxplotRPKM( infiles, outfile):
'''Figure 3: differential expression of genes with liver and testes specific NMIs'''
if len(infiles) == 2:
liver, testes = infiles
shared = "liver_vs_testes/liver.testes.shared.genelist"
scriptsdir = PARAMS["scriptsdir"]
rpkm = PARAMS["expression_rpkm"]
R('''source("%(scriptsdir)s/R/proj007/rpkm.R") ''' % locals() )
R('''boxplot_rpkm(liver="%(liver)s", testes="%(testes)s", shared="%(shared)s", rpkm="%(rpkm)s", outfile="%(outfile)s")''' % locals() )
############################################################
@follows( overlappedGeneCAPseqProfile, controlGeneCAPseqProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*.capseq_profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_capseq_profile.pdf")
def plotFigure4a( infiles, outfile):
'''Figure 4a: CAPseq profile over overlapped genes'''
overlapped,control = infiles
species = overlapped[0:2]
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="NMIs")''' % locals() )
############################################################
@follows( overlappedGeneCAPseqProfile, controlGeneCAPseqProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*.capseq_profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_capseq_profile_smoothed.pdf")
def plotFigure4aSmoothed( infiles, outfile):
'''Figure 4a: CAPseq profile over overlapped genes'''
overlapped,control = infiles
species = overlapped[0:2]
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesSmoothedProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="NMIs", smooth=0.5)''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"*H3K27Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K27Me3_profile.pdf")
def plotFigure4bK27( infiles, outfile):
'''Figure 4b: H3K27Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K27Me3")''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"*H3K27Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K27Me3_profile_smoothed.pdf")
def plotFigure4bK27Smoothed( infiles, outfile):
'''Figure 4b: H3K27Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesSmoothedProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K27Me3")''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfileWide, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"*H3K27Me3*profile.wide.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K27Me3_profile_wide_smoothed.pdf")
def plotFigure4bK27WideSmoothed( infiles, outfile):
'''Figure 4b: H3K27Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesSmoothedProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K27Me3", smooth=0.5)''' % locals() )
############################################################
@follows( longIntervalGeneChromatinProfile, mkdir("plots") )
@merge("long_intervals/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"*H3K27Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_long_genes_H3K27Me3_profile.pdf")
def plotFigure4bLongGenesK27( infiles, outfile):
'''Figure 4b: H3K27Me3 profile over overlapped genes'''
if len(infiles) > 0:
overlapped,longgenes,shortgenes = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesProfilePlot(overlapped="%(longgenes)s", control="%(shortgenes)s", outfile="%(outfile)s", ylabel="H3K27Me3")''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"-H3K4Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K4Me3_profile.pdf")
def plotFigure4bK4( infiles, outfile):
'''Figure 4b: H3K4Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K4Me3")''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"-H3K4Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K4Me3_profile_smoothed.pdf")
def plotFigure4bK4Smoothed( infiles, outfile):
'''Figure 4b: H3K4Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesSmoothedProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K4Me3", smooth=0.3)''' % locals() )
############################################################
@follows( overlappedGeneChromatinProfile, mkdir("plots") )
@merge("overlapped_genes/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"-H3K4Me3*profile.wide.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_overlapped_genes_H3K4Me3_profile_wide_smoothed.pdf")
def plotFigure4bK4WideSmoothed( infiles, outfile):
'''Figure 4b: H3K4Me3 profile over overlapped genes'''
if len(infiles) > 0:
control,overlapped = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesSmoothedProfilePlot(overlapped="%(overlapped)s", control="%(control)s", outfile="%(outfile)s", ylabel="H3K4Me3")''' % locals() )
############################################################
@follows( longIntervalGeneChromatinProfile, mkdir("plots") )
@merge("long_intervals/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"*_"+PARAMS["plots_fig4_tissue"]+"-H3K4Me3*profile.counts.tsv.gz", "plots/"+PARAMS["species"]+"_"+PARAMS["plots_fig4_tissue"]+"_long_genes_H3K4Me3_profile.pdf")
def plotFigure4bLongGenesK4( infiles, outfile):
'''Figure 4b: H3K4Me3 profile over overlapped genes'''
if len(infiles) > 0:
overlapped,longgenes,shortgenes = infiles
scriptsdir = PARAMS["scriptsdir"]
R('''source("%(scriptsdir)s/R/proj007/proj007.R") ''' % locals() )
R('''overlappedGenesProfilePlot(overlapped="%(longgenes)s", control="%(shortgenes)s", outfile="%(outfile)s", ylabel="H3K4Me3")''' % locals() )
############################################################
@follows( mkdir("plots") )
@merge(getGenesetCapseqOverlapList, "plots/"+PARAMS["species"]+"_overlapped_genes_tissue_venn.pdf")
def plotFigure4OverlappedGenesTissueVenn( infiles, outfile):
'''Figure 4: venn diagram of genes overlapped >90% in different tissues'''
inlist = "'"+"','".join(infiles)+"'"
print(inlist)
R('''library(VennDiagram) ''')
R('''inlist <- c(%(inlist)s)''' % locals() )
R('''x <- list()''' )
R('''listnames <- NULL''' )
R('''length(x) <- length(inlist)''')
R('''for ( i in 1:length(inlist)) { x[[i]] <- read.table(file=inlist[i], header=FALSE, stringsAsFactors=FALSE)[,1]; listnames <- c(listnames,inlist[i]); }''' % locals() )
R('''names(x) <- listnames''')
R('''pdf(file='%(outfile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", alpha=0.75, cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
# Convert pdf to png for web
outfile2 = outfile.replace("pdf","png")
statement = '''convert %(outfile)s %(outfile2)s'''
P.run()
############################################################
@follows( mkdir("plots") )
@merge(getLongIntervalGeneList, "plots/"+PARAMS["species"]+"_long_interval_genes_tissue_venn.pdf")
def plotFigure4LongGenesTissueVenn( infiles, outfile):
'''Figure 4: venn diagram of genes with long NMIs in different tissues'''
inlist = "'"+"','".join(infiles)+"'"
print(inlist)
R('''library(VennDiagram) ''')
R('''inlist <- c(%(inlist)s)''' % locals() )
R('''x <- list()''' )
R('''listnames <- NULL''' )
R('''length(x) <- length(inlist)''')
R('''for ( i in 1:length(inlist)) { x[[i]] <- read.table(file=inlist[i], header=FALSE, stringsAsFactors=FALSE)[,1]; listnames <- c(listnames,inlist[i]); }''' % locals() )
R('''names(x) <- listnames''')
R('''pdf(file='%(outfile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", alpha=0.75, cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
# Convert pdf to png for web
outfile2 = outfile.replace("pdf","png")
statement = '''convert %(outfile)s %(outfile2)s'''
P.run()
############################################################
@follows( mkdir("plots") )
@transform(loadOverlappedGeneChromatinIntersection, regex(r"(\S+).stats.load"), "plots/"+PARAMS["species"]+"_overlapped_genes_h3k27me3_venn.log")
def plotFigure4OverlappedGenesH3K27Me3Venn( infile, outfile):
'''Figure 4: venn diagrams of overlapped-gene NMIs intersecting H3K27Me3 intervals'''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''select track, chromatin_track, total_merged_intervals, track_and_chromatin_track, track_only, chromatin_track_only
from overlapped_genes_h3k27me3_venn'''
print(statement)
cc.execute( statement )
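# Draw one Venn diagram per (track, chromatin track) pair returned by the query.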
for venn in cc:
track, chromatin_track, total_merged_intervals, track_and_chromatin_track, track_only, chromatin_track_only = venn
track = track.replace("_",".").replace("-",".")
chromatin_track = chromatin_track.replace("_",".").replace("-",".")
track_total = int(track_and_chromatin_track)+int(track_only)
total_merged_intervals = int(total_merged_intervals)
track_only = int(track_only)
pdffile = "plots/"+track+"."+chromatin_track+".pdf"
R('''library(VennDiagram) ''')
R('''track <- seq(1,%(track_total)i)''' % locals() )
R('''chromatin_track <- seq(%(track_only)i,%(total_merged_intervals)i)''' % locals() )
R('''x <- list(%(track)s=track,%(chromatin_track)s=chromatin_track)''' % locals() )
R('''pdf(file='%(pdffile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", fill=c("#EC1C24","#69BC45"), alpha=0.75, label.col=c("darkred", "white", "darkgreen"), cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
# Convert pdf to png for web
pdffile2 = pdffile.replace("pdf","png")
statement = '''convert %(pdffile)s %(pdffile2)s'''
P.run()
statement = '''touch %(outfile)s'''
P.run()
############################################################
@follows( mkdir("plots") )
@transform(loadLongGeneChromatinIntersection, regex(r"(\S+).stats.load"), "plots/"+PARAMS["species"]+"_long_interval_genes_h3k27me3_venn.log")
def plotFigure4LongGenesH3K27Me3Venn( infile, outfile):
<
'''Figure 4: venn diagram of genes overlapped by NMIs >3kb in length with H3K27Me3 intervals'''
dbhandle = sqlite3.connect( PARAMS["database"] )
cc = dbhandle.cursor()
statement = '''select track, chromatin_track, total_merged_intervals, track_and_chromatin_track, track_only, chromatin_track_only
from long_intervals_h3k27me3_venn'''
print(statement)
cc.execute( statement )
for venn in cc:
track, chromatin_track, total_merged_intervals, track_and_chromatin_track, track_only, chromatin_track_only = venn
track = track.replace("_",".").replace("-",".")
chromatin_track = chromatin_track.replace("_",".").replace("-",".")
track_total = int(track_and_chromatin_track)+int(track_only)
total_merged_intervals = int(total_merged_intervals)
track_only = int(track_only)
pdffile = "plots/"+track+"."+chromatin_track+".pdf"
R('''library(VennDiagram) ''')
R('''track <- seq(1,%(track_total)i)''' % locals() )
R('''chromatin_track <- seq(%(track_only)i,%(total_merged_intervals)i)''' % locals() )
R('''x <- list(%(track)s=track,%(chromatin_track)s=chromatin_track)''' % locals() )
R('''pdf(file='%(pdffile)s', height=8, width=8, onefile=TRUE, family='Helvetica', paper='A4', pointsize=12)''' % locals() )
R('''venn <- venn.diagram( x, filename=NULL, col="#58595B", fill=c("#EC1C24","#69BC45"), alpha=0.75, label.col=c("darkred", "white", "darkgreen"), cex=2.0, fontfamily="Helvetica", fontface="bold")''' % locals() )
R('''grid.draw(venn)''')
R('''dev.off()''')
# Convert pdf to png for web
pdffile2 = pdffile.replace("pdf","png")
statement = '''convert %(pdffile)s %(pdffile2)s'''
P.run()
statement = '''touch %(outfile)s'''
P.run()
############################################################
############################################################
############################################################
## REPORTS
@follows( mkdir( "report" ) )
def build_report():
'''build report from scratch.'''
E.info( "starting documentation build process from scratch" )
P.run_report( clean = True )
############################################################
@follows( mkdir( "report" ) )
def update_report():
'''update report.'''
E.info( "updating documentation" )
P.run_report( clean = False )
############################################################
@files( "report.log", "publish.log")
def publish_report(infile, outfile):
'''Link bed, bam, wig and report files to the web directory.'''
publish_dir = PARAMS["publish_dir"]
species = PARAMS["genome"]
report_dir = os.path.join(publish_dir, species)
bam_dir = os.path.join(publish_dir, "bam")
bed_dir = os.path.join(publish_dir, "bed")
wig_dir = os.path.join(publish_dir, "wig")
tss_dir = os.path.join(publish_dir, "tss")
tss_dist_dir = os.path.join(publish_dir, "tss_distance")
gc_dir = os.path.join(publish_dir, "gc")
cgi_dir = os.path.join(publish_dir, "cpg")
cpg_density_dir = os.path.join(publish_dir, "cpg_density")
length_dir = os.path.join(publish_dir, "length")
long_interval_dir = os.path.join(publish_dir, "long_intervals")
liver_testes_dir = os.path.join(publish_dir, "liver_vs_testes")
fig_dir = os.path.join(publish_dir, "figures")
working_dir = os.getcwd()
capseq_dir = PARAMS["capseq_dir"]
# create directories if they do not exist
statement = '''[ -d %(report_dir)s ] || mkdir %(report_dir)s;
[ -d %(bam_dir)s ] || mkdir %(bam_dir)s;
[ -d %(bam_dir)s/merged ] || mkdir %(bam_dir)s/merged;
[ -d %(bed_dir)s ] || mkdir %(bed_dir)s;
[ -d %(bed_dir)s/no_input ] || mkdir %(bed_dir)s/no_input;
[ -d %(bed_dir)s/replicates ] || mkdir %(bed_dir)s/replicates;
[ -d %(bed_dir)s/tissue_specific ] || mkdir %(bed_dir)s/tissue_specific;
[ -d %(bed_dir)s/liver_vs_testes ] || mkdir %(bed_dir)s/liver_vs_testes;
[ -d %(wig_dir)s ] || mkdir %(wig_dir)s;
[ -d %(wig_dir)s/merged ] || mkdir %(wig_dir)s/merged;
[ -d %(tss_dir)s ] || mkdir %(tss_dir)s;
[ -d %(tss_dist_dir)s ] || mkdir %(tss_dist_dir)s;
[ -d %(gc_dir)s ] || mkdir %(gc_dir)s;
[ -d %(cgi_dir)s ] || mkdir %(cgi_dir)s;
[ -d %(cpg_density_dir)s ] || mkdir %(cpg_density_dir)s;
[ -d %(length_dir)s ] || mkdir %(length_dir)s;
[ -d %(long_interval_dir)s ] || mkdir %(long_interval_dir)s;
[ -d %(liver_testes_dir)s ] || mkdir %(liver_testes_dir)s;
[ -d %(fig_dir)s ] || mkdir %(fig_dir)s;
[ -d %(fig_dir)s/Fig1 ] || mkdir %(fig_dir)s/Fig1;
[ -d %(fig_dir)s/Fig2 ] || mkdir %(fig_dir)s/Fig2;
[ -d %(fig_dir)s/Fig3 ] || mkdir %(fig_dir)s/Fig3;
[ -d %(fig_dir)s/Fig4 ] || mkdir %(fig_dir)s/Fig4;'''
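# The report is copied outright (cp -rf); the data files below are symlinked
# (cp -sf: symbolic, forced) rather than duplicated.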
statement += '''cp -rf report/html/* %(report_dir)s > %(outfile)s; '''
statement += '''cp -sf %(capseq_dir)s/bam/*.norm.bam* %(bam_dir)s >> %(outfile)s;'''
statement += '''cp -sf %(capseq_dir)s/merged_bams/*.merge.bam* %(bam_dir)s/merged >> %(outfile)s;'''
statement += '''cp -sf %(capseq_dir)s/macs/with_input/*/*/*.wig.gz %(wig_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(capseq_dir)s/macs/merged/*/*/*.wig.gz %(wig_dir)s/merged >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/*.replicated.bed %(bed_dir)s >> %(outfile)s;'''
statement += '''cp -sf %(capseq_dir)s/intervals/*solo*.bed %(bed_dir)s/no_input >> %(outfile)s; '''
statement += '''cp -sf %(capseq_dir)s/intervals/*.merged.cleaned.bed %(bed_dir)s/replicates >> %(outfile)s; '''
statement += '''cp -sf %(capseq_dir)s/replicated_intervals/*.replicated.unique.bed %(bed_dir)s/tissue_specific >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss-profile/*.tss-profile*.counts.tsv.gz %(tss_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/*.replicated.gc.export %(gc_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.gene.gc.export %(gc_dir)s/%(species)s.tss.gene.gc.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.transcript.gc.export %(gc_dir)s/%(species)s.tss.transcript.gc.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/cgi/cgi.gc.export %(gc_dir)s/%(species)s.cgi.gc.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/*.replicated.cpg.export %(cgi_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.gene.cpg.export %(cgi_dir)s/%(species)s.tss.gene.cpg.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.transcript.cpg.export %(cgi_dir)s/%(species)s.tss.transcript.cpg.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/cgi/cgi.cpg.export %(cgi_dir)s/%(species)s.cgi.cpg.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/*.replicated.cpg_density.export %(cpg_density_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.gene.cpg_density.export %(cpg_density_dir)s/%(species)s.tss.gene.cpg_density.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/tss/tss.transcript.cpg_density.export %(cpg_density_dir)s/%(species)s.tss.transcript.cpg_density.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/cgi/cgi.cpg_density.export %(cpg_density_dir)s/%(species)s.cgi.cpg_density.export >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/*.gene.tss.distance %(tss_dist_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/long_intervals/*.capseq_profile.counts.tsv.gz %(long_interval_dir)s >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/liver_vs_testes/*.liver.testes.unique.bed %(bed_dir)s/liver_vs_testes >> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/liver_vs_testes/*.length %(length_dir)s >> %(outfile)s; '''
# Export plots
statement += '''cp -sf %(working_dir)s/plots/*.nmi.cgi.venn.pdf %(fig_dir)s/Fig1 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*.combined.*tss-profile.pdf %(fig_dir)s/Fig2 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*.liver.testes.length.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*liver.testes.intergenic.venn.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*liver.testes.transcript.tss.venn.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*cpg_obsexp.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*cpg_density.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*gc_content.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
if len(PARAMS["bigwig_chromatin"]) > 0:
statement += '''cp -sf %(working_dir)s/plots/*unique_intervals_H3K4Me3_profile.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
if len(PARAMS["expression_rpkm"]) > 0:
statement += '''cp -sf %(working_dir)s/plots/*dx*.pdf %(fig_dir)s/Fig3 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*overlapped_genes_capseq_profile*.pdf %(fig_dir)s/Fig4 2>> %(outfile)s; '''
if len(PARAMS["bigwig_chromatin"]) > 0:
statement += '''cp -sf %(working_dir)s/plots/*overlapped_genes_H3K27Me3_profile*.pdf %(fig_dir)s/Fig4 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*overlapped_genes_H3K4Me3_profile*.pdf %(fig_dir)s/Fig4 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/plots/*overlapped.genes*.pdf %(fig_dir)s/Fig4 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/overlapped_genes/*.fisher.test.tsv %(fig_dir)s/Fig4 2>> %(outfile)s; '''
# species-specific datasets - chromatin plots
if len(PARAMS["bigwig_chromatin"]) > 0:
statement += '''cp -sf %(working_dir)s/liver_vs_testes/*H3K4Me3*profile*.tsv.gz %(liver_testes_dir)s 2>> %(outfile)s; '''
statement += '''cp -sf %(working_dir)s/long_intervals/*.profile.counts.tsv.gz %(long_interval_dir)s 2>> %(outfile)s; '''
P.run()
############################################################
############################################################
############################################################
## Pipeline organisation
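# The targets below are empty aggregator tasks: @follows wires up the listed
# upstream tasks so whole sections of the pipeline can be run by name.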
@follows(annotateCapseqGenesetOverlap,
loadCapseqGenesetOverlap,
getCapseqGeneTSSOverlapCount,
loadCapseqGeneTSSOverlapCount,
annotateCapseqTranscriptTSSDistance,
loadCapseqTranscriptTSSDistance,
exportCapseqTSSTranscriptList,
loadCapseqTSSTranscriptList,
annotateCapseqGeneTSSDistance,
loadCapseqGeneTSSDistance,
exportCapseqTSSGeneList,
loadCapseqTSSGeneList,
exportCapseqTSSBed,
exportCapseqIntergenicBed,
getCapseqNoncodingTSSDistance,
loadCapseqNoncodingTSSDistance,
exportCapseqNoncodingTSSGeneList,
loadCapseqNoncodingTSSGeneList,
exportCapseqTranscriptTSSDistanceTranscriptList,
exportCapseqTranscriptTSSOverlapTranscriptList,
runGenomicFeaturesGAT,
loadGenomicFeaturesGAT )
def capseqGeneset():
'''Annotate CAPseq intervals using a geneset specified in the ini file'''
pass
@follows( loadlncRNAs,
getCapseqlncRNATSSDistance,
loadCapseqlncRNATSSDistance,
exportCapseqlncRNATSSGeneList,
loadCapseqlncRNATSSGeneList )
def capseqlincRNA():
'''Annotate CAPseq intervals using an external lincRNA bed file specified in the ini file'''
pass
@follows( loadRNAseq,
getCapseqRNAseqTSSDistance,
loadCapseqRNAseqTSSDistance,
exportCapseqRNAseqTSSGeneList,
loadCapseqRNAseqTSSGeneList )
def capseqRNAseq():
'''Annotate CAPseq intervals using an external RNAseq gtf specified in the ini file'''
pass
@follows(getReplicatedTranscriptTSSProfile,
getReplicatedTranscriptTSSProfileCapseq,
getReplicatedTranscriptTSSProfileNoCapseq,
getReplicatedGeneTSSProfile,
getReplicatedGeneTSSProfileCapseq,
getReplicatedGeneTSSProfileNoCapseq,
getReplicatedTranscriptProfile,
getReplicatedGeneProfile)
def capseqProfiles():
'''Calculate CAPseq profile over genes and TSSs using a geneset specified in the ini file'''
pass
# Section 2
@follows( annotateCapseqComposition,
loadCapseqComposition,
annotateControlComposition,
loadControlComposition,
annotateFlankingCompositionLeft,
loadFlankingCompositionLeft,
annotateFlankingCompositionRight,
loadFlankingCompositionRight,
exportCapseqGCProfiles,
exportCapseqCpGObsExp,
exportCapseqCpGDensity )
def capseqComposition():
'''Annotate nucleotide composition of CAPseq intervals and export to text files for plotting'''
pass
# Section 3
@follows( getCapseqCGIOverlapCount,
loadCapseqCGIOverlapCount,
getCGIAndCapseqIntervals,
loadCGIAndCapseqIntervals,
getCapseqSpecificIntervals,
loadCapseqSpecificIntervals,
getPredictedCGIIntervals,
loadPredictedCGIIntervals,
getExternalBedStats,
loadExternalBedStats,
getChromatinMarkOverlap,
loadChromatinMarkIntervals,
getChipseqOverlap,
loadChipseqIntervals,
getCapseqOverlap,
loadCapseqIntervals,
buildGATWorkspace,
runExternalDatasetGAT,
loadExternalDatasetGAT )
def compareExternal():
'''Compare intervals with external bed files'''
pass
# Section 4
@follows( loadUCSCPredictedCGIIntervals,
annotateCGIComposition,
loadCGIComposition,
getCGITranscriptTSSOverlapCount,
loadCGITranscriptTSSOverlapCount,
getCGIGeneTSSOverlapCount,
loadCGIGeneTSSOverlapCount,
annotateCGIGenesetOverlap,
loadCGIGenesetOverlap,
exportCGICpGDensity,
exportCGICpGObsExp,
exportCGIGCProfiles )
def predictedCGIs():
'''Annotate predicted CGI intervals'''
pass
# Section 5
@follows( annotateTranscriptTSSComposition,
loadTranscriptTSSComposition,
annotateGeneTSSComposition,
loadGeneTSSComposition,
annotateGeneTSSIntervalComposition,
loadGeneTSSIntervalComposition,
exportTranscriptTSSCpGDensity,
exportGeneTSSCpGDensity,
exportTranscriptTSSCpGObsExp,
exportGeneTSSCpGObsExp,
exportTranscriptTSSGCProfiles,
exportGeneTSSGCProfiles )
def genesetTSSComposition():
'''Annotate the nucleotide composition of the TSS of the supplied gene set'''
pass
# Section 6a
@follows( getLongIntervalGeneList,
getShortIntervalGeneList,
getLongIntervalGeneGTF,
longIntervalGeneCAPseqProfile,
shortIntervalGeneCAPseqProfile,
runGOLongGeneLists,
runGOSlimLongGeneLists,
loadLongGeneGo,
loadLongGeneGoslim )
def longIntervals():
'''Annotate long vs short CAPseq intervals'''
pass
# Section 6a - chromatin
@follows( longIntervalGeneChromatinProfile,
shortIntervalGeneChromatinProfile,
longGeneChromatinIntersection,
longGeneChromatinIntersectionStats,
loadLongGeneChromatinIntersection,
runLongGenesGAT,
loadLongGenesGAT )
def longIntervalsChromatin():
'''Annotate chromatin marks over genes with long vs short CAPseq intervals'''
pass
# Section 6b
@follows( annotateGenesetCapseqOverlap,
loadGenesetCapseqOverlap,
getGenesetCapseqOverlapList,
getGenesetCapseqOverlapControlList,
getOverlappedGeneGTF,
overlappedGeneCAPseqProfile,
controlGeneCAPseqProfile,
runGOOverlappedGeneLists,
runGOSlimOverlappedGeneLists,
loadOverlappedGeneGo,
loadOverlappedGeneGoslim,
clusterGOResults )
def overlappedGenes():
'''Annotate genes that are substantially overlapped by CAPseq intervals'''
pass
# Section 6b - chromatin
@follows( overlappedGeneChromatinProfile,
overlappedGeneChromatinProfileWide,
overlappedGeneChromatinIntersection,
overlappedGeneChromatinIntersectionStats,
loadOverlappedGeneChromatinIntersection,
runOverlappedGenesGAT,
loadOverlappedGenesGAT )
def overlappedGenesChromatin():
'''Annotate chromatin marks over genes overlapped by CAPseq intervals'''
pass
# Section 7
@follows( liverTestesVenn,
loadLiverTestesVenn,
liverTestesIntergenicVenn,
loadLiverTestesIntergenicVenn,
loadLiverTestesShared,
loadLiverTestesUnique,
loadLiverTestesMerge,
exportLiverTestesMergeWithSort,
annotateLiverTestesMergedGenesetOverlap,
loadLiverTestesMergedGenesetOverlap,
annotateLiverTestesMergedTranscriptTSSDistance,
loadLiverTestesMergedTranscriptTSSDistance,
exportLiverTestesTSSTranscriptList,
loadLiverTestesTSSTranscriptList,
annotateLiverTesteMergedComposition,
loadLiverTesteMergedComposition,
liverTestesTSSVenn,
loadLiverTestesTSSVenn,
getPeakShapeLiverTestesReads,
getPeakShapeLiverTestesCentre,
liverTestesUniqueChromatinProfile,
exportLiverTestesSpecificCAPseqGenes,
exportLiverTestesSharedCAPseqGenes,
exportLiverTestesUniqueLength,
exportLiverTestesSharedLength,
exportLiverTestesSharedCpGObsExp,
exportLiverTestesUniqueCpGObsExp,
exportLiverTestesSharedGC,
exportLiverTestesUniqueGC,
exportLiverTestesSharedCpGDensity,
exportLiverTestesUniqueCpGDensity)
def liverTestes():
'''Annotate liver vs testes specific CAPseq intervals'''
pass
# Section 8
@follows( plotFigure1b,
plotFigure2b,
plotFigure3bTSSVenn,
plotFigure3bIntergenicVenn,
plotFigure3Length,
plotFigure3CpGObsExp,
plotFigure3GC,
plotFigure3CpGDensity,
plotFigure3cH3K4Me3Testes,
plotFigure3cH3K4Me3Liver,
plotFigure4a,
plotFigure4aSmoothed,
plotFigure4OverlappedGenesTissueVenn )
def figures():
'''Plot paper figures in R'''
pass
# Section 8 - histone plots
@follows( plotFigure4bK27,
plotFigure4bK27Smoothed,
plotFigure4bK4,
plotFigure4bK4Smoothed,
plotFigure4OverlappedGenesH3K27Me3Venn )
def histoneFigures():
'''Plot paper figures in R'''
pass
# Section 8 - differential expression
@follows( plotFigure3dxScatterRPKM,
plotFigure3dxScatterReadCounts,
plotFigure3dxBoxplotRPKM )
def dxFigures():
'''Plot paper figures in R'''
pass
@follows( build_report, publish_report )
def fullReport():
'''Build and publish report'''
pass
@follows( capseqGeneset,
capseqProfiles,
capseqComposition,
compareExternal,
predictedCGIs,
genesetTSSComposition,
longIntervals,
liverTestes)
def full():
'''Run the full pipeline.'''
pass
if __name__== "__main__":
sys.exit( P.main(sys.argv) )
| 51.412389 | 325 | 0.554432 | 24,704 | 242,358 | 5.329218 | 0.04477 | 0.018898 | 0.005317 | 0.012032 | 0.783788 | 0.758547 | 0.740682 | 0.724754 | 0.707732 | 0.685066 | 0 | 0.008505 | 0.22088 | 242,358 | 4,713 | 326 | 51.423297 | 0.688713 | 0.024439 | 0 | 0.664646 | 0 | 0.053734 | 0.507743 | 0.207554 | 0.000551 | 0 | 0 | 0 | 0.003031 | 0 | null | null | 0.00496 | 0.010471 | null | null | 0.011022 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8c72c2fb10ad9263080ede446d08e5b5c22e650 | 132 | py | Python | app/core/admin.py | angel-tk/tk-recipe-backend | 1d7d47ad0b65c379637454ae9a509f3e79d897fc | ["MIT"] | null | null | null | app/core/admin.py | angel-tk/tk-recipe-backend | 1d7d47ad0b65c379637454ae9a509f3e79d897fc | ["MIT"] | null | null | null | app/core/admin.py | angel-tk/tk-recipe-backend | 1d7d47ad0b65c379637454ae9a509f3e79d897fc | ["MIT"] | null | null | null |
from django.contrib import admin
from core import models
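# Expose the recipe app's models in the Django admin site.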
admin.site.register(models.Ingredient)
admin.site.register(models.Recipe) | 22 | 38 | 0.833333 | 19 | 132 | 5.789474 | 0.578947 | 0.163636 | 0.309091 | 0.418182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 132 | 6 | 39 | 22 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7717473158d100a8959f2bd225fd8273309f68aa | 302 | py | Python | Lib/graph/__init__.py | jeamick/ares-visual | 3cf5068f874b3f6fe898968b2a7efa86fadca99d | ["MIT"] | null | null | null | Lib/graph/__init__.py | jeamick/ares-visual | 3cf5068f874b3f6fe898968b2a7efa86fadca99d | ["MIT"] | 2 | 2019-03-27T00:36:09.000Z | 2019-04-09T00:39:12.000Z | Lib/graph/__init__.py | jeamick/ares-visual | 3cf5068f874b3f6fe898968b2a7efa86fadca99d | ["MIT"] | null | null | null |
from . import AresHtmlGraphC3
from . import AresHtmlGraphNVD3
from . import AresHtmlGraphChartJs
from . import AresHtmlGraphVis
from . import AresHtmlGraphBillboard
#from . import AresHtmlGraphDC
#from . import AresHtmlGraphD3
from . import AresHtmlGraphPlotly
from . import AresHtmlGraphFabric | 33.555556 | 37 | 0.821192 | 27 | 302 | 9.185185 | 0.407407 | 0.362903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011583 | 0.142384 | 302 | 9 | 38 | 33.555556 | 0.945946 | 0.192053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6201a299b81817b1dd4c5b97ed262cf18e30349c | 39 | py | Python | Python/py_module_discover/simple.py | egustafson/sandbox | 9804e966347b33558b0497a04edb1a591d2d7773 | ["Apache-2.0"] | 2 | 2019-09-27T21:25:26.000Z | 2019-12-29T11:26:54.000Z | Python/py_module_discover/simple.py | egustafson/sandbox | 9804e966347b33558b0497a04edb1a591d2d7773 | ["Apache-2.0"] | 7 | 2020-08-11T17:32:14.000Z | 2020-08-11T17:32:39.000Z | Python/py_module_discover/simple.py | egustafson/sandbox | 9804e966347b33558b0497a04edb1a591d2d7773 | ["Apache-2.0"] | 2 | 2016-07-18T10:55:50.000Z | 2020-08-19T01:46:08.000Z |
print("Module 'simple.py' loaded.")
| 7.8 | 35 | 0.641026 | 5 | 39 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 4 | 36 | 9.75 | 0.757576 | 0 | 0 | 0 | 0 | 0 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
626460cab53d20b057ebb72a1efeba39832e241e | 699 | py | Python | hubspot/events/models/__init__.py | fakepop/hubspot-api-python | f04103a09f93f5c26c99991b25fa76801074f3d3 | ["Apache-2.0"] | null | null | null | hubspot/events/models/__init__.py | fakepop/hubspot-api-python | f04103a09f93f5c26c99991b25fa76801074f3d3 | ["Apache-2.0"] | null | null | null | hubspot/events/models/__init__.py | fakepop/hubspot-api-python | f04103a09f93f5c26c99991b25fa76801074f3d3 | ["Apache-2.0"] | null | null | null |
# coding: utf-8
# flake8: noqa
"""
HubSpot Events API
API for accessing CRM object events. # noqa: E501
The version of the OpenAPI document: v3
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
# import models into model package
from hubspot.events.models.collection_response_external_unified_event import (
CollectionResponseExternalUnifiedEvent,
)
from hubspot.events.models.error import Error
from hubspot.events.models.error_detail import ErrorDetail
from hubspot.events.models.external_unified_event import ExternalUnifiedEvent
from hubspot.events.models.next_page import NextPage
from hubspot.events.models.paging import Paging
| 27.96 | 78 | 0.805436 | 89 | 699 | 6.179775 | 0.516854 | 0.165455 | 0.185455 | 0.250909 | 0.101818 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009901 | 0.133047 | 699 | 24 | 79 | 29.125 | 0.89769 | 0.310443 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.777778 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
627972e6af591cf841c5d1eaa121621e03cbf719 | 6821 | py | Python | openmdao/recorders/case.py | ryanfarr01/blue | a9aac98c09cce0f7cadf26cf592e3d978bf4e3ff | ["Apache-2.0"] | null | null | null | openmdao/recorders/case.py | ryanfarr01/blue | a9aac98c09cce0f7cadf26cf592e3d978bf4e3ff | ["Apache-2.0"] | null | null | null | openmdao/recorders/case.py | ryanfarr01/blue | a9aac98c09cce0f7cadf26cf592e3d978bf4e3ff | ["Apache-2.0"] | null | null | null |
"""
A Case class.
"""
class Case(object):
"""
Case wraps the data from a single iteration of a recording to make it more easily accessible.
Parameters
----------
filename : str
The filename from which the Case was constructed.
counter : int
The global execution counter.
iteration_coordinate : str
The string that holds the full unique identifier for this iteration.
timestamp : float
Time of execution of the case.
success : str
Success flag for the case.
msg : str
Message associated with the case.
Attributes
----------
filename : str
The file from which the case was loaded.
counter : int
The global execution counter.
iteration_coordinate : str
The string that holds the full unique identifier for this iteration.
timestamp : float
Time of execution of the case.
success : str
Success flag for the case.
msg : str
Message associated with the case.
"""
def __init__(self, filename, counter, iteration_coordinate, timestamp, success, msg):
"""
Initialize.
"""
self.filename = filename
self.counter = counter
self.iteration_coordinate = iteration_coordinate
self.timestamp = timestamp
self.success = success
self.msg = msg
class DriverCase(Case):
"""
Wrap data from a single iteration of a Driver recording to make it more easily accessible.
Parameters
----------
filename : str
The filename from which the DriverCase was constructed.
counter : int
The global execution counter.
    iteration_coordinate : str
The string that holds the full unique identifier for the desired iteration.
timestamp : float
Time of execution of the case.
success : str
Success flag for the case.
msg : str
Message associated with the case.
desvars : array
Driver design variables to read in from the recording file.
responses : array
Driver responses to read in from the recording file.
objectives : array
Driver objectives to read in from the recording file.
constraints : array
Driver constraints to read in from the recording file.
Attributes
----------
desvars : array
Driver design variables that have been read in from the recording file.
responses : array
Driver responses that have been read in from the recording file.
objectives : array
Driver objectives that have been read in from the recording file.
constraints : array
Driver constraints that have been read in from the recording file.
"""
def __init__(self, filename, counter, iteration_coordinate, timestamp, success, msg, desvars,
responses, objectives, constraints):
"""
Initialize.
"""
super(DriverCase, self).__init__(filename, counter, iteration_coordinate,
timestamp, success, msg)
self.desvars = desvars[0] if desvars.dtype.names else None
self.responses = responses[0] if responses.dtype.names else None
self.objectives = objectives[0] if objectives.dtype.names else None
self.constraints = constraints[0] if constraints.dtype.names else None
class SystemCase(Case):
"""
Wraps data from a single iteration of a System recording to make it more accessible.
Parameters
----------
filename : str
The filename from which the SystemCase was constructed.
counter : int
The global execution counter.
    iteration_coordinate : str
        The string that holds the full unique identifier for the desired iteration.
    timestamp : float
        Time of execution of the case.
    success : str
        Success flag for the case.
    msg : str
        Message associated with the case.
inputs : array
System inputs to read in from the recording file.
outputs : array
System outputs to read in from the recording file.
residuals : array
System residuals to read in from the recording file.
Attributes
----------
inputs : array
System inputs that have been read in from the recording file.
outputs : array
System outputs that have been read in from the recording file.
residuals : array
System residuals that have been read in from the recording file.
"""
def __init__(self, filename, counter, iteration_coordinate, timestamp, success, msg, inputs,
outputs, residuals):
"""
Initialize.
"""
super(SystemCase, self).__init__(filename, counter, iteration_coordinate,
timestamp, success, msg)
self.inputs = inputs[0] if inputs.dtype.names else None
self.outputs = outputs[0] if outputs.dtype.names else None
self.residuals = residuals[0] if residuals.dtype.names else None
class SolverCase(Case):
"""
    Wraps data from a single iteration of a Solver recording to make it more accessible.
Parameters
----------
filename : str
        The filename from which the SolverCase was constructed.
counter : int
The global execution counter.
    iteration_coordinate : str
        The string that holds the full unique identifier for the desired iteration.
    timestamp : float
        Time of execution of the case.
    success : str
        Success flag for the case.
    msg : str
        Message associated with the case.
abs_err : array
Solver absolute error to read in from the recording file.
rel_err : array
Solver relative error to read in from the recording file.
outputs : array
Solver outputs to read in from the recording file.
residuals : array
Solver residuals to read in from the recording file.
Attributes
----------
abs_err : array
Solver absolute error that has been read in from the recording file.
rel_err : array
Solver relative error that has been read in from the recording file.
outputs : array
Solver outputs that have been read in from the recording file.
residuals : array
Solver residuals that have been read in from the recording file.
"""
def __init__(self, filename, counter, iteration_coordinate, timestamp, success, msg,
abs_err, rel_err, outputs, residuals):
"""
Initialize.
"""
super(SolverCase, self).__init__(filename, counter, iteration_coordinate, timestamp,
success, msg)
self.abs_err = abs_err
self.rel_err = rel_err
self.outputs = outputs[0] if outputs.dtype.names else None
self.residuals = residuals[0] if residuals.dtype.names else None
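# Illustrative self-check (a sketch with hypothetical demo values, not part of
# the recorder code): the recorders pass numpy structured arrays into these
# constructors, and ``arr.dtype.names`` is non-None only for structured
# dtypes, which is why the guards above unwrap the single recorded row or
# fall back to None.
if __name__ == '__main__':
    import numpy as np
    desvars = np.array([(1.0, 2.0)], dtype=[('x', float), ('y', float)])
    empty = np.zeros(0)  # plain dtype, so dtype.names is None
    case = DriverCase('cases.sql', 1, 'rank0:Driver|1', 0.0, 'Success', '',
                      desvars, empty, empty, empty)
    assert case.desvars['x'] == 1.0
    assert case.responses is None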
| 33.11165 | 97 | 0.643894 | 821 | 6,821 | 5.286236 | 0.108404 | 0.030415 | 0.050691 | 0.065899 | 0.85023 | 0.803917 | 0.792396 | 0.778571 | 0.768203 | 0.73871 | 0 | 0.001877 | 0.297171 | 6,821 | 205 | 98 | 33.273171 | 0.903421 | 0.607389 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
656dc72cdb2617ad8e41a2664de0393228d22e8c | 178 | py | Python | sdk/opendp/smartnoise_t/evaluation/params/_dataset_params.py | ObliviousAI/smartnoise-sdk | 6c5b9bdd16852a08ee01299193a1fac93def99cd | [
"MIT"
] | 63 | 2020-03-26T15:26:10.000Z | 2020-10-22T06:26:38.000Z | sdk/opendp/smartnoise_t/evaluation/params/_dataset_params.py | ObliviousAI/smartnoise-sdk | 6c5b9bdd16852a08ee01299193a1fac93def99cd | [
"MIT"
] | 87 | 2021-02-20T20:43:49.000Z | 2022-03-31T16:24:46.000Z | sdk/opendp/smartnoise_t/evaluation/params/_dataset_params.py | ObliviousAI/smartnoise-sdk | 6c5b9bdd16852a08ee01299193a1fac93def99cd | [
"MIT"
] | 17 | 2021-02-18T18:47:09.000Z | 2022-03-01T06:44:17.000Z | class DatasetParams:
"""
Defines the fields used to set dataset parameters
"""
def __init__(self, dataset_size=10000):
self.dataset_size = dataset_size
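# Minimal usage sketch (hypothetical values):
if __name__ == '__main__':
    params = DatasetParams(dataset_size=500)
    assert params.dataset_size == 500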
| 22.25 | 51 | 0.674157 | 21 | 178 | 5.380952 | 0.714286 | 0.292035 | 0.265487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037313 | 0.247191 | 178 | 7 | 52 | 25.428571 | 0.80597 | 0.275281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
65bba8ba912d056f9ef7086ab6ea7d75a0b28b98 | 153 | py | Python | baseApp/admin.py | vah-ini/crispy-ai | f1f743012cac508dd4ad13c886eab6352c9c5053 | [
"MIT"
] | 7 | 2019-05-07T17:31:57.000Z | 2021-07-06T15:08:14.000Z | baseApp/admin.py | vah-ini/crispy-ai | f1f743012cac508dd4ad13c886eab6352c9c5053 | [
"MIT"
] | 60 | 2019-05-04T08:52:37.000Z | 2022-03-11T23:53:25.000Z | baseApp/admin.py | vah-ini/crispy-ai | f1f743012cac508dd4ad13c886eab6352c9c5053 | [
"MIT"
] | 21 | 2019-04-12T14:31:54.000Z | 2019-09-29T09:51:20.000Z | from django.contrib import admin
#from .models import Courses,Live
# Register your models here.
#admin.site.register(Courses)
#admin.site.register(Live)
| 25.5 | 33 | 0.797386 | 22 | 153 | 5.545455 | 0.545455 | 0.147541 | 0.278689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098039 | 153 | 5 | 34 | 30.6 | 0.884058 | 0.732026 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65c0b366e4e323024ba348435a9028334eac8410 | 12,246 | py | Python | src/restLayer/app/SearchCounts.py | ucsd-ccbb/Oncolist | a3c7ecde6f665a665873e5aa7be5bc3778f5b17e | [
"MIT"
] | null | null | null | src/restLayer/app/SearchCounts.py | ucsd-ccbb/Oncolist | a3c7ecde6f665a665873e5aa7be5bc3778f5b17e | [
"MIT"
] | null | null | null | src/restLayer/app/SearchCounts.py | ucsd-ccbb/Oncolist | a3c7ecde6f665a665873e5aa7be5bc3778f5b17e | [
"MIT"
] | null | null | null | __author__ = 'aarongary'
from collections import Counter
from app import PubMed
from models.TermResolver import TermAnalyzer
from elasticsearch import Elasticsearch
from app import elastic_search_uri
#es = Elasticsearch(['http://ec2-52-24-205-32.us-west-2.compute.amazonaws.com:9200/'],send_get_body_as='POST') # Clustered Server
es = Elasticsearch([elastic_search_uri],send_get_body_as='POST',timeout=300) # Prod Clustered Server
#==================================
#==================================
# GENE SEARCH
#==================================
#==================================
def get_counts_gene(queryTerms, disease=[]):
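    # NOTE: the query below is commented out, so this function currently always returns 0.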
# network_info = {
# 'searchGroupTitle': 'Star Network',
# 'searchTab': 'GENES',
# 'network': 'node',
# 'matchField': 'node_list.node.name',
# 'matchCoreNode': 'node_name',
# 'cancerType': 'BRCA',
# 'queryTerms': queryTerms
# }
# gene_network_data = {
# 'searchGroupTitle': network_info['searchGroupTitle'],
# 'clusterNodeName': "",
# 'searchTab': network_info['searchTab'],
# 'items': [],
# 'geneSuperList': [],
# 'geneScoreRangeMax': '100',
# 'geneScoreRangeMin': '5',
# 'geneScoreRangeStep': '0.1'
# }
# queryTermArray = queryTerms.split(',')
# sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
# gene_network_data['geneSuperList'] = get_geneSuperList_gene(queryTermArray, sorted_query_list)
# network_info['queryTerms'] = network_info['queryTerms'].replace(",", "*")
# search_body = get_searchBody_count_gene(queryTermArray, network_info, disease, sorted_query_list, True)
# result = es.count(
# index = 'network',
# doc_type = 'node',
# body = search_body
# )
return 0
def get_searchBody_count_gene(queryTermArray, network_info, disease, sorted_query_list, isStarSearch):
should_match = []
for queryTerm in queryTermArray:
boost_value_append = get_boost_value_gene(sorted_query_list['results'], queryTerm)
        if isStarSearch:
should_match.append({"match": {"node_list.name":{"query": queryTerm,"boost": boost_value_append}}})
should_match.append( { 'match': {'node_name': queryTerm} })
else:
should_match.append({"match": {"x_node_list.name":{"query": queryTerm,"boost": boost_value_append}}})
returnBody = {
'query': {
'bool': {
'should': should_match
}
}
}
return returnBody
def get_geneSuperList_gene(queryTermArray, sorted_query_list):
returnValue = []
#sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
for queryTerm in queryTermArray:
#should_match.append( { 'match': {network_info['matchField']: queryTerm} })
boost_value_append = get_boost_value_gene(sorted_query_list['results'], queryTerm)
#should_match.append({"match": {"node_list.node.name":{"query": queryTerm,"boost": boost_value_append}}})
returnValue.append({'queryTerm': queryTerm, 'boostValue': boost_value_append})
return returnValue
def get_boost_value_gene(boostArray, idToCheck):
    for boostItem in boostArray:
        if boostItem['id'] == idToCheck:
            return boostItem['normalizedValue']
    return 0
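# Illustration (hypothetical data, shown as a comment rather than executed):
# get_searchBody_count_gene builds a bool/should body whose per-term boost is
# that term's normalized PubMed count, e.g.
#   sorted_list = {'results': [{'id': 'TP53', 'normalizedValue': 0.9}]}
#   body = get_searchBody_count_gene(['TP53'], {}, [], sorted_list, False)
#   # body == {'query': {'bool': {'should': [
#   #     {'match': {'x_node_list.name': {'query': 'TP53', 'boost': 0.9}}}]}}}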
#==================================
#==================================
# CLUSTERS SEARCH
#==================================
#==================================
def get_counts_cluster(queryTerms, disease=[]):
network_info = {
'searchGroupTitle': 'Cluster Network',
'searchTab': 'PATHWAYS',
'network': 'cluster',
'matchField': 'x_node_list.name',
'matchCoreNode': 'node_name',
'cancerType': 'BRCA',
'queryTerms': queryTerms
}
gene_network_data = {
'searchGroupTitle': network_info['searchGroupTitle'],
'clusterNodeName': "",
'searchTab': network_info['searchTab'],
'items': [],
'geneSuperList': [],
'geneScoreRangeMax': '100',
'geneScoreRangeMin': '5',
'geneScoreRangeStep': '0.1'
}
queryTermArray = network_info['queryTerms'].split(',')
sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
gene_network_data['geneSuperList'] = get_geneSuperList_cluster(queryTermArray, sorted_query_list)
network_info['queryTerms'] = network_info['queryTerms'].replace(",", "*")
search_body = get_searchBody_count_cluster(queryTermArray, network_info, disease, sorted_query_list, False)
result = es.count(
index = 'clusters',
doc_type = ['clusters_geo_oslom', 'clusters_tcga_oslom'],
body = search_body
)
return result['count']
def get_searchBody_count_cluster(queryTermArray, network_info, disease, sorted_query_list, isStarSearch):
should_match = []
for queryTerm in queryTermArray:
boost_value_append = get_boost_value_cluster(sorted_query_list['results'], queryTerm)
        if isStarSearch:
should_match.append({"match": {"node_list.name":{"query": queryTerm,"boost": boost_value_append}}})
should_match.append( { 'match': {'node_name': queryTerm} })
else:
should_match.append({"match": {"x_node_list.name":{"query": queryTerm,"boost": boost_value_append}}})
returnBody = {
'query': {
'bool': {
'should': should_match
}
}
}
return returnBody
def get_geneSuperList_cluster(queryTermArray, sorted_query_list):
returnValue = []
#sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
for queryTerm in queryTermArray:
#should_match.append( { 'match': {network_info['matchField']: queryTerm} })
boost_value_append = get_boost_value_cluster(sorted_query_list['results'], queryTerm)
#should_match.append({"match": {"node_list.node.name":{"query": queryTerm,"boost": boost_value_append}}})
returnValue.append({'queryTerm': queryTerm, 'boostValue': boost_value_append})
return returnValue
def get_boost_value_cluster(boostArray, idToCheck):
    for boostItem in boostArray:
        if boostItem['id'] == idToCheck:
            return boostItem['normalizedValue']
    return 0
#==================================
#==================================
# CONDITIONS SEARCH
#==================================
#==================================
def get_counts_condition(queryTerms, phenotypes=None):
should_match = []
must_match = []
queryTermArray = queryTerms.split(',')
for queryTerm in queryTermArray:
should_match.append({"match": {"node_list.name": queryTerm}})
    if phenotypes is not None:
phenotypeTermArray = phenotypes.split('~')
for phenotypeTerm in phenotypeTermArray:
must_match.append({"match": {"node_name": phenotypeTerm}})
search_body = {
'query': {
'bool': {
'must': must_match,
'should': should_match
}
}
}
else:
search_body = {
'query': {
'bool': {
'should': should_match
}
}
}
result = es.count(
index = 'conditions',
doc_type = 'conditions_clinvar',
body = search_body
)
return result['count']
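# Illustration (hypothetical values): with a phenotype supplied,
# get_counts_condition('BRCA1,TP53', 'asthma') builds a bool query that
# requires the phenotype and softly matches the gene terms:
#   {'query': {'bool': {'must':   [{'match': {'node_name': 'asthma'}}],
#                       'should': [{'match': {'node_list.name': 'BRCA1'}},
#                                  {'match': {'node_list.name': 'TP53'}}]}}}
# before counting matching documents in the 'conditions' index.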
#==================================
#==================================
# AUTHORS SEARCH
#==================================
#==================================
def get_counts_author(queryTerms):
should_match = []
queryTermArray = queryTerms.split(',')
for queryTerm in queryTermArray:
should_match.append({"match": {"node_list.name": queryTerm}})
search_body = {
'query': {
'filtered': {
'query': {
'bool': {
'must': [
{
'nested': {
'path': 'node_list',
'score_mode': 'sum',
'query': {
'function_score': {
'query': {
'bool': {
'should': should_match
}
},
'field_value_factor': {
'field': 'node_list.scores',
'factor': 1,
'modifier': 'none',
'missing': 1
},
'boost_mode': 'replace'
}
}
}
}
]
}
},
'filter': {
'or': {
'filters': [
{'terms': {
'network_name': [
'authors_pubmed'
]
}}
]
}
}
}
}
}
result = es.count(
index = 'authors',
doc_type = 'authors_pubmed',
body = search_body
)
return result['count']
#==================================
#==================================
# DRUGS SEARCH
#==================================
#==================================
def get_counts_drug(queryTerms, disease=[]):
network_info = {
'searchGroupTitle': 'Cluster Network',
'searchTab': 'DRUG',
'network': 'drug_network',
'matchField': 'x_node_list.name',
'matchCoreNode': 'node_name',
'cancerType': 'BRCA',
'queryTerms': queryTerms
}
gene_network_data = {
'searchGroupTitle': network_info['searchGroupTitle'],
'clusterNodeName': "",
'searchTab': network_info['searchTab'],
'items': [],
'geneSuperList': [],
'geneScoreRangeMax': '100',
'geneScoreRangeMin': '5',
'geneScoreRangeStep': '0.1'
}
queryTermArray = network_info['queryTerms'].split(',')
sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
gene_network_data['geneSuperList'] = get_geneSuperList_drug(queryTermArray, sorted_query_list)
network_info['queryTerms'] = network_info['queryTerms'].replace(",", "*")
should_match = []
    for queryTerm in queryTermArray:
        should_match.append({"match": {"node_list.name": queryTerm}})
search_body = {
'query': {
'bool': {
'should': should_match
}
}
}
result = es.count(
index = 'drugs',
doc_type = 'drugs_drugbank',
body = search_body
)
return result['count']
def get_geneSuperList_drug(queryTermArray, sorted_query_list):
returnValue = []
#sorted_query_list = PubMed.get_gene_pubmed_counts_normalized(network_info['queryTerms'], 1)
for queryTerm in queryTermArray:
#should_match.append( { 'match': {network_info['matchField']: queryTerm} })
boost_value_append = get_boost_value_drug(sorted_query_list['results'], queryTerm)
#should_match.append({"match": {"node_list.node.name":{"query": queryTerm,"boost": boost_value_append}}})
returnValue.append({'queryTerm': queryTerm, 'boostValue': boost_value_append})
return returnValue
def get_boost_value_drug(boostArray, idToCheck):
    for boostItem in boostArray:
        if boostItem['id'] == idToCheck:
            return boostItem['normalizedValue']
    return 0
| 33.642857 | 129 | 0.542381 | 1,023 | 12,246 | 6.197458 | 0.138807 | 0.05205 | 0.05205 | 0.05205 | 0.807729 | 0.787539 | 0.77776 | 0.770978 | 0.738644 | 0.738644 | 0 | 0.005473 | 0.283766 | 12,246 | 363 | 130 | 33.735537 | 0.717364 | 0.233382 | 0 | 0.577406 | 0 | 0 | 0.163699 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054393 | false | 0 | 0.020921 | 0.004184 | 0.142259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65ce3c76c5b3f6850a4a56be27e7625f2b916a6b | 100,693 | py | Python | tests/unit/python/foglamp/tasks/north/test_sending_process.py | kayanme/FogLAMP | 909b5adf558ea9c4e217d11de2a815ecdbf7bb6d | [
"Apache-2.0"
] | 1 | 2020-09-10T11:34:04.000Z | 2020-09-10T11:34:04.000Z | tests/unit/python/foglamp/tasks/north/test_sending_process.py | kayanme/FogLAMP | 909b5adf558ea9c4e217d11de2a815ecdbf7bb6d | [
"Apache-2.0"
] | 1 | 2017-09-06T14:05:21.000Z | 2017-09-06T14:05:21.000Z | tests/unit/python/foglamp/tasks/north/test_sending_process.py | kayanme/FogLAMP | 909b5adf558ea9c4e217d11de2a815ecdbf7bb6d | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
""" Unit tests for the North Sending Process """
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import asyncio
import logging
import sys
import time
import uuid
from unittest.mock import patch, MagicMock, ANY
import pytest
import foglamp.tasks.north.sending_process as sp_module
from foglamp.common.audit_logger import AuditLogger
from foglamp.common.storage_client.storage_client import StorageClientAsync, ReadingsStorageClientAsync
from foglamp.tasks.north.sending_process import SendingProcess
from foglamp.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient
__author__ = "Stefano Simonelli"
__copyright__ = "Copyright (c) 2018 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
pytestmark = pytest.mark.asyncio
STREAM_ID = 1
@asyncio.coroutine
def mock_coro(*args, **kwargs):
if len(args) > 0:
return args[0]
else:
return ""
async def mock_async_call():
""" mocks a generic async function """
return True
async def mock_audit_failure():
""" mocks audit.failure """
return True
@pytest.mark.asyncio
@pytest.fixture
def fixture_sp(event_loop):
"""" Configures the sending process instance for the tests """
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
SendingProcess._logger = MagicMock(spec=logging)
sp._stream_id = 1
sp._logger = MagicMock(spec=logging)
sp._audit = MagicMock(spec=AuditLogger)
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_fetch_data_run = True
sp._task_send_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
return sp
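# The nested patches in the fixture stub out every external dependency (each
# client's __init__ returns None), so the SendingProcess under test never
# touches the network or a database.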
@pytest.mark.parametrize(
"p_data, "
"expected_data",
[
("2018-05-28 16:56:55", "2018-05-28 16:56:55.000000+00"),
("2018-05-28 13:42:28.8", "2018-05-28 13:42:28.800000+00"),
("2018-05-28 13:42:28.84", "2018-05-28 13:42:28.840000+00"),
("2018-05-28 13:42:28.840000", "2018-05-28 13:42:28.840000+00"),
("2018-03-22 17:17:17.166347", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347+00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347+00:00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347+02:00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347+00:02", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347+02:02", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347-00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347-00:00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347-02:00", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347-00:02", "2018-03-22 17:17:17.166347+00"),
("2018-03-22 17:17:17.166347-02:02", "2018-03-22 17:17:17.166347+00"),
]
)
async def test_apply_date_format(p_data, expected_data):
assert expected_data == sp_module.apply_date_format(p_data)
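# Reading of the expectations above: apply_date_format pads the fractional
# seconds to six digits and replaces any explicit offset with "+00" while
# leaving the wall-clock digits unchanged -- the "+02:00" input normalizes to
# "...17:17:17.166347+00", not to 15:17.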
@pytest.mark.parametrize(
"p_parameter, "
"expected_param_mgt_name, "
"expected_param_mgt_port, "
"expected_param_mgt_address, "
"expected_stream_id, "
"expected_log_performance, "
"expected_log_debug_level , "
"expected_execution",
[
# Bad cases
(
["", "--name", "SEND_PR1"],
"", "", "", 1, False, 0,
"exception"
),
(
["", "--name", "SEND_PR1", "--port", "0001"],
"", "", "", 1, False, 0,
"exception"
),
(
["", "--name", "SEND_PR1", "--port", "0001", "--address", "127.0.0.0"],
"", "", "", 1, False, 0,
"exception"
),
# stream_id must be an integer
(
["", "--name", "SEND_PR1", "--port", "0001", "--address", "127.0.0.0", "--stream_id", "x"],
"", "", "", 1, False, 0,
"exception"
),
# Good cases
(
# p_parameter
["", "--name", "SEND_PR1", "--port", "0001", "--address", "127.0.0.0", "--stream_id", "1"],
# expected_param_mgt_name
"SEND_PR1",
# expected_param_mgt_port
"0001",
# expected_param_mgt_address
"127.0.0.0",
# expected_stream_id
1,
# expected_log_performance
False,
# expected_log_debug_level
0,
# expected_execution
"good"
),
(
# Case - --performance_log
# p_parameter
["", "--name", "SEND_PR1", "--port", "0001", "--address", "127.0.0.0", "--stream_id", "1",
"--performance_log", "1"],
# expected_param_mgt_name
"SEND_PR1",
# expected_param_mgt_port
"0001",
# expected_param_mgt_address
"127.0.0.0",
# expected_stream_id
1,
# expected_log_performance
True,
# expected_log_debug_level
0,
# expected_execution
"good"
),
(
# Case - --debug_level
# p_parameter
["", "--name", "SEND_PR1", "--port", "0001", "--address", "127.0.0.0", "--stream_id", "1",
"--performance_log", "1", "--debug_level", "3"],
# expected_param_mgt_name
"SEND_PR1",
# expected_param_mgt_port
"0001",
# expected_param_mgt_address
"127.0.0.0",
# expected_stream_id
1,
# expected_log_performance
True,
# expected_log_debug_level
3,
# expected_execution
"good"
),
]
)
async def test_handling_input_parameters(
p_parameter,
expected_param_mgt_name,
expected_param_mgt_port,
expected_param_mgt_address,
expected_stream_id,
expected_log_performance,
expected_log_debug_level,
expected_execution):
""" Tests the handing of input parameters of the Sending process """
sys.argv = p_parameter
sp_module._LOGGER = MagicMock(spec=logging)
if expected_execution == "good":
log_performance, log_debug_level = sp_module.handling_input_parameters()
# noinspection PyProtectedMember
assert not sp_module._LOGGER.error.called
assert log_performance == expected_log_performance
assert log_debug_level == expected_log_debug_level
# elif expected_execution == "exception":
#
# with pytest.raises(sp_module.InvalidCommandLineParameters):
# sp_module.handling_input_parameters()
#
# # noinspection PyProtectedMember
# assert sp_module._LOGGER.error.called
# noinspection PyUnresolvedReferences
@pytest.allure.feature("unit")
@pytest.allure.story("tasks", "north")
class TestSendingProcess:
"""Unit tests for the sending_process.py"""
@pytest.mark.parametrize(
"p_stream_id, "
"p_rows, "
"expected_stream_id_valid, "
"expected_execution",
[
# Good cases
(
# p_stream_id
1,
# p_rows
{
"rows":
[
{"active": "t"}
]
},
# expected_stream_id_valid = True, it is a valid stream id
True,
# expected_execution
"good"
),
(
# p_stream_id
1,
# p_rows
{
"rows":
[
{"active": "f"}
]
},
                # expected_stream_id_valid = False, it is not a valid stream id
False,
# expected_execution
"good"
),
# Bad cases
# 0 rows
(
# p_stream_id
1,
# p_rows
{
"rows":
[
]
},
                # expected_stream_id_valid = False, it is not a valid stream id
False,
# expected_execution
"exception"
),
# Multiple rows
(
# p_stream_id
1,
# p_rows
{
"rows":
[
{"active": "t"},
{"active": "f"}
]
},
                # expected_stream_id_valid = False, it is not a valid stream id
False,
# expected_execution
"exception"
),
]
)
@pytest.mark.skip(reason="Stream ID tests no longer valid")
async def test_is_stream_id_valid(self,
p_stream_id,
p_rows,
expected_stream_id_valid,
expected_execution,
event_loop):
""" Unit tests for - _is_stream_id_valid """
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
SendingProcess._logger = MagicMock(spec=logging)
sp._logger = MagicMock(spec=logging)
sp._storage_async = MagicMock(spec=StorageClientAsync)
if expected_execution == "good":
with patch.object(sp._storage_async, 'query_tbl', return_value=mock_coro(p_rows)):
generate_stream_id = await sp._is_stream_id_valid(p_stream_id)
# noinspection PyProtectedMember
assert not SendingProcess._logger.error.called
assert generate_stream_id == expected_stream_id_valid
elif expected_execution == "exception":
with patch.object(sp._storage_async, 'query_tbl', side_effect=ValueError):
with pytest.raises(ValueError):
await sp._is_stream_id_valid(p_stream_id)
# noinspection PyProtectedMember
assert SendingProcess._logger.error.called
@pytest.mark.parametrize("plugin_file, plugin_type, plugin_name, expected_result", [
("pi_server", "north", "PI Server North", True),
("pi_server", "north", "Empty North Plugin", False),
("pi_server", "south", "PI Server North", False)
])
async def test_is_north_valid(self, plugin_file, plugin_type, plugin_name, expected_result, event_loop):
"""Tests the possible cases of the function is_north_valid """
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._config['plugin'] = plugin_file
sp._plugin_load()
sp._plugin_info = sp._plugin.plugin_info()
sp._plugin_info['type'] = plugin_type
sp._plugin_info['name'] = plugin_name
assert sp._is_north_valid() == expected_result
@pytest.mark.asyncio
async def test_load_data_into_memory(self, event_loop):
""" Unit test for - test_load_data_into_memory"""
async def mock_coroutine():
"""" mock_coroutine """
return True
# Checks the Readings handling
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
# Tests - READINGS
sp._config['source'] = 'readings'
with patch.object(sp, '_load_data_into_memory_readings', return_value=mock_coroutine()) \
as mocked_load_data_into_memory_readings:
await sp._load_data_into_memory(5)
assert mocked_load_data_into_memory_readings.called
# Tests - STATISTICS
sp._config['source'] = 'statistics'
with patch.object(sp, '_load_data_into_memory_statistics', return_value=mock_coro(True)) \
as mocked_load_data_into_memory_statistics:
await sp._load_data_into_memory(5)
assert mocked_load_data_into_memory_statistics.called
# Tests - AUDIT
# sp._config['source'] = 'audit'
#
# with patch.object(sp, '_load_data_into_memory_audit', return_value=mock_coro(True)) \
# as mocked_load_data_into_memory_audit:
#
# await sp._load_data_into_memory(5)
# assert mocked_load_data_into_memory_audit.called
@pytest.mark.asyncio
@pytest.mark.parametrize(
"p_rows, "
"expected_rows, ",
[
# Case 1: Base case and Timezone added
(
# p_rows
{
"rows": [
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 11, "temperature": 38},
"user_ts": "16/04/2018 16:32:55"
},
]
},
# expected_rows,
# NOTE:
# Time generated with UTC timezone
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 11, "temperature": 38},
"user_ts": "16/04/2018 16:32:55.000000+00"
},
]
)
]
)
async def test_load_data_into_memory_readings(self,
event_loop,
p_rows,
expected_rows):
"""Test _load_data_into_memory handling and transformations for the readings """
async def mock_coroutine():
"""" mock_coroutine """
return p_rows
# Checks the Readings handling
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._config['source'] = 'readings'
sp._readings = MagicMock(spec=ReadingsStorageClientAsync)
                            # Checks the transformations, especially the addition of the UTC timezone
with patch.object(sp._readings, 'fetch', return_value=mock_coroutine()):
generated_rows = await sp._load_data_into_memory_readings(5)
assert len(generated_rows) == 1
assert generated_rows == expected_rows
@pytest.mark.parametrize(
"p_rows, "
"expected_rows, ",
[
# Case 1:
# NOTE:
# Time generated with UTC timezone
(
# p_rows
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 11, "temperature": 38},
"user_ts": "16/04/2018 16:32:55"
},
],
# expected_rows,
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 11, "temperature": 38},
"user_ts": "16/04/2018 16:32:55.000000+00"
},
]
),
# Case 2: "180.2" to float 180.2
(
# p_rows
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": "180.2"},
"user_ts": "16/04/2018 16:32:55"
},
],
# expected_rows,
# NOTE:
# Time generated with UTC timezone
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 180.2},
"user_ts": "16/04/2018 16:32:55.000000+00"
},
]
)
]
)
async def test_transform_in_memory_data_readings(self,
event_loop,
p_rows,
expected_rows):
""" Unit test for - _transform_in_memory_data_readings"""
# Checks the Readings handling
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
                            # Checks the transformations, especially the addition of the UTC timezone
generated_rows = sp._transform_in_memory_data_readings(p_rows)
assert len(generated_rows) == 1
assert generated_rows == expected_rows
@pytest.mark.parametrize(
"p_rows, ",
[
(
# reading - missing
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"user_ts": "16/04/2018 16:32:55"
}
]
),
(
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": '',
"user_ts": "16/04/2018 16:32:55"
}
]
),
(
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": '{"value"',
"user_ts": "16/04/2018 16:32:55"
}
]
),
(
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": '{"value":02}',
"user_ts": "16/04/2018 16:32:55"
}
]
),
(
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": 100,
"user_ts": "16/04/2018 16:32:55"
}
]
),
(
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": "none",
"user_ts": "16/04/2018 16:32:55"
}
]
),
]
)
async def test_transform_in_memory_data_readings_error(self, event_loop, p_rows):
""" Unit test for - _transform_in_memory_data_readings - tests error cases/handling """
SendingProcess._logger = MagicMock(spec=logging)
with patch.object(SendingProcess._logger, 'warning') as patched_logger:
SendingProcess._transform_in_memory_data_readings(p_rows)
assert patched_logger.called
@pytest.mark.parametrize(
"p_rows, "
"expected_rows, ",
[
# Case 1:
# fields mapping,
# key->asset_code
# Timezone added
# reading format handling
#
# Note :
# read_key is not handled
# Time generated with UTC timezone
(
# p_rows
{
"rows": [
{
"id": 1,
"key": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"value": 20,
"history_ts": "16/04/2018 20:00:00",
"ts": "16/04/2018 16:32:55",
},
]
},
# expected_rows,
[
{
"id": 1,
"asset_code": "test_asset_code",
"reading": {"value": 20},
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"user_ts": "16/04/2018 20:00:00.000000+00"
},
]
),
# Case 2: key is having spaces
(
# p_rows
{
"rows": [
{
"id": 1,
"key": " test_asset_code ",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"value": 21,
"history_ts": "16/04/2018 20:00:00",
"ts": "16/04/2018 16:32:55"
},
]
},
# expected_rows,
[
{
"id": 1,
"asset_code": "test_asset_code",
"reading": {"value": 21},
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"user_ts": "16/04/2018 20:00:00.000000+00"
},
]
)
]
)
async def test_load_data_into_memory_statistics(self,
event_loop,
p_rows,
expected_rows):
"""Test _load_data_into_memory handling and transformations for the statistics """
# Checks the Statistics handling
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._config['source'] = 'statistics'
sp._storage_async = MagicMock(spec=StorageClientAsync)
                            # Checks the Statistics transformations, especially the 'reading' field and the field naming/mapping
with patch.object(uuid, 'uuid4', return_value=uuid.UUID("ef6e1368-4182-11e8-842f-0ed5f89f718b")):
with patch.object(sp._storage_async, 'query_tbl_with_payload', return_value=mock_coro(p_rows)):
generated_rows = await sp._load_data_into_memory_statistics(5)
assert len(generated_rows) == 1
assert generated_rows == expected_rows
@pytest.mark.parametrize(
"p_rows, "
"expected_rows, ",
[
# Case 1:
# fields mapping,
# key->asset_code
# Timezone added
# reading format handling
#
# Note :
# read_key is not handled
# Time generated with UTC timezone
(
# p_rows
[
{
"id": 1,
"key": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"value": 20,
"history_ts": "16/04/2018 20:00:00",
"ts": "16/04/2018 16:32:55"
},
],
# expected_rows,
[
{
"id": 1,
"asset_code": "test_asset_code",
"reading": {"value": 20},
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"user_ts": "16/04/2018 20:00:00.000000+00"
},
]
),
# Case 2: key is having spaces
(
# p_rows
[
{
"id": 1,
"key": " test_asset_code ",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"value": 21,
"history_ts": "16/04/2018 20:00:00",
"ts": "16/04/2018 16:32:55"
},
],
# expected_rows,
[
{
"id": 1,
"asset_code": "test_asset_code",
"reading": {"value": 21},
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"user_ts": "16/04/2018 20:00:00.000000+00"
},
]
)
]
)
async def test_transform_in_memory_data_statistics(self,
event_loop,
p_rows,
expected_rows):
""" Unit test for - _transform_in_memory_data_statistics"""
# Checks the Statistics handling
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._storage_async = MagicMock(spec=StorageClientAsync)
with patch.object(uuid, 'uuid4', return_value=uuid.UUID("ef6e1368-4182-11e8-842f-0ed5f89f718b")):
with patch.object(sp._storage_async, 'query_tbl_with_payload', return_value=mock_coro()):
                                    # Checks the Statistics transformations, especially the 'reading' field and the field naming/mapping
generated_rows = sp._transform_in_memory_data_statistics(p_rows)
assert len(generated_rows) == 1
assert generated_rows == expected_rows
async def test_last_object_id_read(self, event_loop):
"""Tests the possible cases for the function last_object_id_read """
async def mock_query_tbl_row_1():
"""Mocks the query_tbl function of the StorageClientAsync object - good case"""
rows = {"rows": [{"last_object": 10}]}
return rows
async def mock_query_tbl_row_0():
"""Mocks the query_tbl function of the StorageClientAsync object - base case"""
rows = {"rows": []}
return rows
async def mock_query_tbl_row_2():
"""Mocks the query_tbl function of the StorageClientAsync object - base case"""
rows = {"rows": [{"last_object": 10}, {"last_object": 11}]}
return rows
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._storage_async = MagicMock(spec=StorageClientAsync)
sp._stream_id = 1
# Good Case
with patch.object(sp._storage_async, 'query_tbl', return_value=mock_query_tbl_row_1()) as sp_mocked:
position = await sp._last_object_id_read()
sp_mocked.assert_called_once_with('streams', 'id=1')
assert position == 10
# Bad cases
sp._logger.error = MagicMock()
with patch.object(sp._storage_async, 'query_tbl', return_value=mock_query_tbl_row_0()):
# noinspection PyBroadException
try:
await sp._last_object_id_read()
except Exception:
pass
sp._logger.error.assert_called_once_with(sp_module._MESSAGES_LIST["e000019"])
sp._logger.error = MagicMock()
with patch.object(sp._storage_async, 'query_tbl', return_value=mock_query_tbl_row_2()):
# noinspection PyBroadException
try:
await sp._last_object_id_read()
except Exception:
pass
sp._logger.error.assert_called_once_with(sp_module._MESSAGES_LIST["e000019"])
@pytest.mark.asyncio
@pytest.mark.parametrize(
"p_duration, "
"p_sleep_interval, "
"p_signal_received, " # simulates the termination signal
"expected_time, "
"tolerance ",
[
# p_duration - p_sleep_interval - p_signal_received - expected_time - tolerance
(10, 1, False, 10, 5),
(60, 1, True, 0, 5),
]
)
async def test_send_data_good(
self,
event_loop,
p_duration,
p_sleep_interval,
p_signal_received,
expected_time,
tolerance):
""" Unit tests - send_data """
        async def mock_task():
            """ Dummy async task """
            return True
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
                            # Properly configures the SendingProcess
sp._config = {
'duration': p_duration,
'sleepInterval': p_sleep_interval,
'memory_buffer_size': 1000
}
# Simulates the reception of the termination signal
if p_signal_received:
SendingProcess._stop_execution = True
else:
SendingProcess._stop_execution = False
                            # Force immediate termination of the tasks
sp._task_fetch_data_run = False
sp._task_send_data_run = False
                            # Track the start time
start_time = time.time()
with patch.object(sp, '_last_object_id_read', return_value=0):
await sp.send_data()
# It considers a reasonable tolerance
elapsed_seconds = time.time() - start_time
assert expected_time <= elapsed_seconds <= (expected_time + tolerance)
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information retrieve from the storage layer
"p_num_element_to_fetch, "
"p_buffer_size, " # size of the in memory buffer
"expected_buffer ", # THEN, expected in memory buffer loaded by the _task_fetch_data function
[
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 10, "temperature": 101},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 20, "temperature": 201},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 40, "temperature": 401},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 5,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 50, "temperature": 501},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 6,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 60, "temperature": 601},
"user_ts": "16/04/2018 16:32:55"
},
]
],
# p_num_element_to_fetch
3,
# p_buffer_size
3,
# expected_buffer - 2 dimensions list
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 10, "temperature": 101},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 20, "temperature": 201},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 40, "temperature": 401},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 5,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 50, "temperature": 501},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 6,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 60, "temperature": 601},
"user_ts": "16/04/2018 16:32:55"
},
]
]
)
]
)
async def test_task_fetch_data_fill_buffer(
self,
event_loop,
p_rows,
p_buffer_size,
p_num_element_to_fetch,
expected_buffer):
""" Unit tests - _task_fetch_data - fill the memory buffer
Checks if the fetch task/function properly fills the in memory buffer
in relation to defined set of inputs
"""
async def retrieve_rows(idx):
""" mock rows retrieval from the storage layer """
return p_rows[idx]
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
                            # Properly configures the SendingProcess
sp._config = {
'memory_buffer_size': p_buffer_size
}
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_fetch_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Prepares the in memory buffer for the fetch/send operations
sp._memory_buffer = [None for x in range(sp._config['memory_buffer_size'])]
# WHEN
with patch.object(sp, '_last_object_id_read', return_value=mock_coro(0)):
with patch.object(sp, '_load_data_into_memory',
side_effect=[asyncio.ensure_future(retrieve_rows(x)) for x in range(0, p_num_element_to_fetch)]):
task_id = asyncio.ensure_future(sp._task_fetch_data())
                                    # Lets _task_fetch_data run for a while
await asyncio.sleep(3)
# Tear down
sp._task_fetch_data_run = False
sp._task_fetch_data_sem.release()
sp._task_send_data_sem.release()
await task_id
# THEN
assert sp._memory_buffer == expected_buffer
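                            # As exercised here, fetch/send behave as a bounded
                            # producer/consumer pair: _task_fetch_data fills free (None)
                            # slots of _memory_buffer and blocks on _task_send_data_sem
                            # when the buffer is full, which is why the tear-down releases
                            # the semaphores before awaiting the task.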
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information retrieve from the storage layer
"p_num_element_to_fetch, "
"p_buffer_size, " # size of the in memory buffer
"expected_buffer ", # THEN, expected in memory buffer loaded by the _task_fetch_data function
[
(
# p_rows
[
# Step 1
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 10, "temperature": 101},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 20, "temperature": 201},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 40, "temperature": 401},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 5,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 50, "temperature": 501},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 6,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 60, "temperature": 601},
"user_ts": "16/04/2018 16:32:55"
},
],
# Step 2
[
{
"id": 10,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 100, "temperature": 1001},
"user_ts": "16/04/2018 16:32:55"
},
]
],
# p_num_element_to_fetch
4,
# p_buffer_size
3,
# expected_buffer - 2 dimensions list
[
# Loaded at first step 2
[
{
"id": 10,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 100, "temperature": 1001},
"user_ts": "16/04/2018 16:32:55"
},
],
# Loaded at first step 1
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 40, "temperature": 401},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 5,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 50, "temperature": 501},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 6,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 60, "temperature": 601},
"user_ts": "16/04/2018 16:32:55"
},
]
]
)
]
)
@pytest.mark.asyncio
async def test_task_fetch_data_cycle_buffer(
self,
event_loop,
p_rows,
p_num_element_to_fetch,
p_buffer_size,
expected_buffer):
""" Unit tests - _task_fetch_data - add a new element after filling the memory buffer"""
async def retrieve_rows(idx):
""" mock rows retrieval from the storage layer - used for the first fill """
return p_rows[idx]
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
                            # Properly configures the SendingProcess
sp._config = {
'memory_buffer_size': p_buffer_size
}
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_fetch_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Prepares the in memory buffer for the fetch/send operations
sp._memory_buffer = [None for x in range(sp._config['memory_buffer_size'])]
# WHEN
# Starts the fetch 'task'
with patch.object(sp, '_last_object_id_read', return_value=mock_coro(0)):
with patch.object(sp, '_load_data_into_memory',
side_effect=[asyncio.ensure_future(retrieve_rows(x)) for x in range(0, p_num_element_to_fetch)]):
task_id = asyncio.ensure_future(sp._task_fetch_data())
                                    # Lets _task_fetch_data run for a while, filling the in-memory buffer
                                    await asyncio.sleep(3)
                                    # Simulates the send operation, so another block is loaded
                                    sp._memory_buffer[0] = None
                                    # Lets the fetch task restart
                                    sp._task_send_data_sem.release()
                                    # Lets _task_fetch_data run for a while
                                    await asyncio.sleep(3)
# Tear down
sp._task_fetch_data_run = False
sp._task_send_data_sem.release()
await task_id
# THEN
assert sp._memory_buffer == expected_buffer
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information retrieve from the storage layer
"p_num_element_to_fetch, "
"p_buffer_size, " # size of the in memory buffer
"expected_buffer ", # THEN, expected in memory buffer loaded by the _task_fetch_data function
[
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 10, "temperature": 101},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 20, "temperature": 201},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
},
]
],
# p_num_element_to_fetch
2,
# p_buffer_size
3,
# expected_buffer - 2 dimensions list
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 10, "temperature": 101},
"user_ts": "16/04/2018 16:32:55"
},
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 20, "temperature": 201},
"user_ts": "16/04/2018 16:32:55"
},
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {"humidity": 30, "temperature": 301},
"user_ts": "16/04/2018 16:32:55"
},
],
None
]
)
]
)
@pytest.mark.asyncio
async def test_task_fetch_data_error(
self,
event_loop,
p_rows,
p_num_element_to_fetch,
p_buffer_size,
expected_buffer):
""" Unit tests - _task_fetch_data - simulates and error while fetching """
async def mock_retrieve_rows(idx):
""" mock rows retrieval from the storage layer - used for the first fill """
return p_rows[idx]
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
sp._audit = MagicMock(spec=AuditLogger)
SendingProcess._logger = MagicMock(spec=logging)
                            # Properly configures the SendingProcess
sp._config = {
'memory_buffer_size': p_buffer_size
}
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_fetch_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Prepares the in memory buffer for the fetch/send operations
sp._memory_buffer = [None for x in range(sp._config['memory_buffer_size'])]
# WHEN - Starts the fetch 'task'
with patch.object(sp, '_last_object_id_read', return_value=mock_coro(0)):
with patch.object(SendingProcess._logger, 'error') as patched_logger:
with patch.object(sp._audit, 'failure', return_value=mock_audit_failure()) as patched_audit:
with patch.object(sp, '_load_data_into_memory',
side_effect=[asyncio.ensure_future(mock_retrieve_rows(x)) for x in range(0, p_num_element_to_fetch)]):
                                            # pytest.raises masks the "cannot reuse already awaited coroutine" RuntimeError
                                            with pytest.raises(RuntimeError):
                                                task_id = asyncio.ensure_future(sp._task_fetch_data())
                                                # Lets _task_fetch_data run for a while
                                                await asyncio.sleep(3)
# Tear down
sp._task_fetch_data_run = False
sp._task_send_data_sem.release()
await task_id
                                        # THEN - Checks that log and audit are called in case of an error and that the in-memory buffer is as expected
assert patched_logger.called
assert patched_audit.called
patched_audit.assert_called_with(SendingProcess._AUDIT_CODE, ANY)
assert sp._memory_buffer == expected_buffer
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information retrieve from the storage layer
"p_num_element_to_fetch, "
"p_buffer_size, " # size of the in memory buffer
"p_jqfilter, " # JQ filter to apply
"expected_buffer ", # THEN, expected in memory buffer loaded by the _task_fetch_data function
[
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 11, "temperature": 38
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 20, "temperature": 201
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 3,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 30, "temperature": 301
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_num_element_to_fetch
3,
# p_buffer_size
3,
# p_jqfilter
"(.[]|.reading|.addedField)=512",
# expected_buffer - 2 dimensions list
[
[
{
'read_key': 'ef6e1368-4182-11e8-842f-0ed5f89f718b',
'id': 1,
'reading': {
'humidity': 11,
'temperature': 38,
'addedField': 512
},
'asset_code': 'test_asset_code',
'user_ts': '16/04/2018 16:32:55'
}
],
[
{
'read_key': 'ef6e1368-4182-11e8-842f-0ed5f89f718b',
'id': 2,
'reading': {
'humidity': 20,
'temperature': 201,
'addedField': 512
},
'asset_code': 'test_asset_code',
'user_ts': '16/04/2018 16:32:55'
}
],
[
{
'read_key': 'ef6e1368-4182-11e8-842f-0ed5f89f718b',
'id': 3,
'reading': {
'humidity': 30,
'temperature': 301,
'addedField': 512
},
'asset_code': 'test_asset_code',
'user_ts': '16/04/2018 16:32:55'
}
],
]
)
]
)
@pytest.mark.asyncio
async def test_task_fetch_data_jqfilter(
self,
event_loop,
p_rows,
p_num_element_to_fetch,
p_buffer_size,
p_jqfilter,
expected_buffer):
""" Unit tests - _task_fetch_data - tests JQFilter functionalities """
async def mock_retrieve_rows(idx):
""" mock rows retrieval from the storage layer"""
return p_rows[idx]
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
SendingProcess._logger = MagicMock(spec=logging)
sp._audit = MagicMock(spec=AuditLogger)
# Properly configures the SendingProcess, enabling the JQ filter
sp._config = {
'memory_buffer_size': p_buffer_size
}
sp._config_from_manager = {
"applyFilter": {"value": "TRUE"},
"filterRule": {"value": p_jqfilter}
}
sp._task_fetch_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Prepares the in memory buffer for the fetch/send operations
sp._memory_buffer = [None for x in range(sp._config['memory_buffer_size'])]
# WHEN - Starts the fetch 'task'
with patch.object(sp, '_last_object_id_read', return_value=mock_coro(0)):
with patch.object(sp, '_load_data_into_memory',
side_effect=[asyncio.ensure_future(mock_retrieve_rows(x)) for x in range(0, p_num_element_to_fetch)]):
task_id = asyncio.ensure_future(sp._task_fetch_data())
# Lets _task_fetch_data run for a while
await asyncio.sleep(3)
# Tear down
sp._task_fetch_data_run = False
sp._task_send_data_sem.release()
await task_id
assert sp._memory_buffer == expected_buffer
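# For illustration, the same transformation can be reproduced standalone; a
# minimal sketch assuming the 'pyjq' bindings (whether JQFilter uses pyjq
# internally is an assumption, not something this test confirms):
#
#     import pyjq
#     rows = [{"reading": {"humidity": 11, "temperature": 38}}]
#     pyjq.first('(.[]|.reading|.addedField)=512', rows)
#     # -> [{'reading': {'humidity': 11, 'temperature': 38, 'addedField': 512}}]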
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information available in the in memory buffer
"p_buffer_size, " # size of the in memory buffer
"p_send_result, " # Values returned by the _plugin.plugin_send
"expected_num_sent, " # THEN, expected elements sent
"expected_buffer ", # expected in memory buffer after the _task_send_data operations
[
# Case 1
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 11, "temperature": 38
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_buffer_size
3,
# p_send_result
[
{
"data_sent": True,
"new_last_object_id": 1,
"num_sent": 1,
}
],
# expected_num_sent
1,
# expected_buffer - 2-dimensional list
[
None,
None,
None
]
),
# Case 2 - fills the buffer
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 11, "temperature": 38
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 20, "temperature": 201
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 30, "temperature": 301
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_buffer_size
3,
# p_send_result
[
{
"data_sent": True,
"new_last_object_id": 1,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 2,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 4,
"num_sent": 1,
},
],
# expected_num_sent
3,
# expected_buffer - 2-dimensional list
[
None,
None,
None
]
),
]
)
@pytest.mark.asyncio
async def test_task_send_data_fill_buffer(
self,
event_loop,
p_rows,
p_buffer_size,
p_send_result,
expected_num_sent,
expected_buffer):
""" Unit tests - _task_send_data - send data without errors """
async def mock_send_rows(x):
""" mock the results of the sending operation """
return p_send_result[x]["data_sent"], p_send_result[x]["new_last_object_id"], p_send_result[x]["num_sent"]
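# plugin_send is expected to return the tuple (data_sent, new_last_object_id, num_sent), which this mock reproduces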
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
SendingProcess._logger = MagicMock(spec=logging)
sp._audit = MagicMock(spec=AuditLogger)
sp._stream_id = 1
sp._tracked_assets = []
# Properly configures the SendingProcess (the JQ filter stays disabled here)
sp._config = {
'memory_buffer_size': p_buffer_size,
'plugin': 'pi_server'
}
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_send_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Allocates the in memory buffer
sp._memory_buffer = [None for x in range(p_buffer_size)]
# Fills the buffer
for x in range(len(p_rows)):
sp._memory_buffer[x] = p_rows[x]
# WHEN - Starts the send 'task'
with patch.object(sp, '_update_position_reached', return_value=mock_async_call()) \
as patched_update_position_reached:
with patch.object(sp._plugin, 'plugin_send',
side_effect=[asyncio.ensure_future(mock_send_rows(x)) for x in range(0, len(p_send_result))]):
with patch.object(sp._core_microservice_management_client, 'create_asset_tracker_event'):
task_id = asyncio.ensure_future(sp._task_send_data())
# Lets _task_send_data run for a while
await asyncio.sleep(3)
# Tear down
sp._task_send_data_run = False
sp._task_fetch_data_sem.release()
await task_id
expected_new_last_object_id = p_send_result[-1]["new_last_object_id"]
assert sp._memory_buffer == expected_buffer
patched_update_position_reached.assert_called_with(expected_new_last_object_id, expected_num_sent)
@pytest.mark.parametrize(
"p_rows_step1, " # information available in the in memory buffer
"p_rows_step2, " # information available in the in memory buffer
"p_buffer_size, " # size of the in memory buffer
"p_send_result, " # Values returned by the _plugin.plugin_send
"expected_num_sent_step1, " # expected elements sent
"expected_num_sent_step2, " # expected elements sent
"expected_buffer ", # expected in memory buffer after the _task_send_data operations
[
# Case 1
(
# p_rows_step1
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 11, "temperature": 38
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 20, "temperature": 201
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 30, "temperature": 301
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_rows_step2
[
[
{
"id": 5,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 50, "temperature": 501
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_buffer_size
3,
# p_send_result
[
{
"data_sent": True,
"new_last_object_id": 1,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 2,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 4,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 5,
"num_sent": 1,
},
],
# expected_num_sent_step1
3,
# expected_num_sent_step2
1,
# expected_buffer - 2-dimensional list
[
None,
None,
None
]
),
]
)
@pytest.mark.asyncio
async def test_task_send_data_cycle_buffer(
self,
event_loop,
p_rows_step1,
p_rows_step2,
p_buffer_size,
p_send_result,
expected_num_sent_step1,
expected_num_sent_step2,
expected_buffer):
""" Unit tests - _task_send_data - send data filling the buffer and adding new elements """
async def mock_send_rows(x):
""" mock the results of the sending operation """
return p_send_result[x]["data_sent"], p_send_result[x]["new_last_object_id"], p_send_result[x]["num_sent"]
# GIVEN
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._logger = MagicMock(spec=logging)
SendingProcess._logger = MagicMock(spec=logging)
sp._audit = MagicMock(spec=AuditLogger)
sp._stream_id = 1
sp._tracked_assets = []
# Properly configures the SendingProcess (the JQ filter stays disabled here)
sp._config = {
'memory_buffer_size': p_buffer_size,
'plugin': 'pi_server'
}
sp._config_from_manager = {
'applyFilter': {'value': "FALSE"}
}
sp._task_send_data_run = True
sp._task_fetch_data_sem = asyncio.Semaphore(0)
sp._task_send_data_sem = asyncio.Semaphore(0)
# Allocates the in memory buffer
sp._memory_buffer = [None for x in range(p_buffer_size)]
# Fills the buffer - step 1
for x in range(len(p_rows_step1)):
sp._memory_buffer[x] = p_rows_step1[x]
# WHEN - Starts the send 'task'
# 2 calls to _update_position_reached will be executed
with patch.object(sp,
'_update_position_reached',
side_effect=[asyncio.ensure_future(mock_async_call()) for x in range(2)]
) as patched_update_position_reached:
with patch.object(
sp._plugin,
'plugin_send',
side_effect=[asyncio.ensure_future(mock_send_rows(x)) for x in range(0, len(p_send_result))]):
with patch.object(sp._core_microservice_management_client, 'create_asset_tracker_event'):
task_id = asyncio.ensure_future(sp._task_send_data())
# Lets _task_send_data run for a while
await asyncio.sleep(3)
# THEN - Step 1
expected_new_last_object_id = p_rows_step1[-1][0]["id"]
assert sp._memory_buffer == expected_buffer
patched_update_position_reached.assert_called_with(
expected_new_last_object_id,
expected_num_sent_step1)
# Fills the buffer - step 2
for x in range(len(p_rows_step2)):
sp._memory_buffer[x] = p_rows_step2[x]
# Lets the send task handle step 2
sp._task_fetch_data_sem.release()
await asyncio.sleep(3)
# Tear down
sp._task_send_data_run = False
sp._task_fetch_data_sem.release()
await task_id
# THEN - Step 2
expected_new_last_object_id = p_rows_step2[-1][0]["id"]
assert sp._memory_buffer == expected_buffer
patched_update_position_reached.assert_called_with(expected_new_last_object_id, expected_num_sent_step2)
@pytest.mark.parametrize(
"p_rows, " # GIVEN, information available in the in memory buffer
"p_buffer_size, " # size of the in memory buffer
"p_send_result, " # Values returned by the _plugin.plugin_send
"expected_num_sent, " # THEN, expected elements sent
"expected_buffer ", # expected in memory buffer after the _task_send_data operations
[
(
# p_rows
[
[
{
"id": 1,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 11, "temperature": 38
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 2,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 20, "temperature": 201
},
"user_ts": "16/04/2018 16:32:55"
}
],
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 30, "temperature": 301
},
"user_ts": "16/04/2018 16:32:55"
}
]
],
# p_buffer_size
3,
# p_send_result - only two elements, to force an error when calling the plugin_send function
[
{
"data_sent": True,
"new_last_object_id": 1,
"num_sent": 1,
},
{
"data_sent": True,
"new_last_object_id": 2,
"num_sent": 1,
}
],
# expected_num_sent
3,
# expected_buffer - the third element was not sent because of the error
[
None,
None,
[
{
"id": 4,
"asset_code": "test_asset_code",
"read_key": "ef6e1368-4182-11e8-842f-0ed5f89f718b",
"reading": {
"humidity": 30, "temperature": 301
},
"user_ts": "16/04/2018 16:32:55"
}
]
]
),
]
)
@pytest.mark.asyncio
async def test_task_send_data_error(
self,
event_loop,
p_rows,
p_buffer_size,
p_send_result,
expected_num_sent,
expected_buffer,
fixture_sp):
""" Unit tests - _task_send_data - simulates an error while sending,
to force the error the list p_send_result is filled with less elements respect the required ones,
so 2 calls will be successful the third one will fail """
async def mock_send_rows(x):
""" mock the results of the sending operation """
return p_send_result[x]["data_sent"], p_send_result[x]["new_last_object_id"], p_send_result[x]["num_sent"]
# Properly configures the SendingProcess
fixture_sp._tracked_assets = []
fixture_sp._config = {
'memory_buffer_size': p_buffer_size,
'plugin': 'pi_server'
}
# Allocates the in memory buffer
fixture_sp._memory_buffer = [None for x in range(p_buffer_size)]
# Fills the buffer
for x in range(len(p_rows)):
fixture_sp._memory_buffer[x] = p_rows[x]
# WHEN - Starts the send 'task'
with patch.object(fixture_sp, '_update_position_reached', return_value=mock_async_call()):
with patch.object(SendingProcess._logger, 'error') as patched_logger:
with patch.object(fixture_sp._audit, 'failure', return_value=mock_audit_failure()) as patched_audit:
with patch.object(
fixture_sp._plugin,
'plugin_send',
side_effect=[
asyncio.ensure_future(mock_send_rows(x)) for x in range(0, len(p_send_result))]):
with patch.object(fixture_sp._core_microservice_management_client, 'create_asset_tracker_event'):
with pytest.raises(RuntimeError):
task_id = asyncio.ensure_future(fixture_sp._task_send_data())
# Lets _task_send_data run for a while
await asyncio.sleep(3)
# Tear down
fixture_sp._task_send_data_run = False
fixture_sp._task_fetch_data_sem.release()
await task_id
# THEN - Checks that log and audit are called in case of an error and that the in memory buffer is as expected
assert patched_logger.called
assert patched_audit.called
patched_audit.assert_called_with(SendingProcess._AUDIT_CODE, ANY)
assert fixture_sp._memory_buffer == expected_buffer
@pytest.mark.asyncio
async def test_update_position_reached(self, event_loop):
""" Unit tests - _update_position_reached """
async def mock_task():
""" Dummy async task """
return True
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._audit = MagicMock(spec=AuditLogger)
with patch.object(sp, '_last_object_id_update', return_value=mock_task()) as mock_last_object_id_update:
with patch.object(sp, '_update_statistics', return_value=mock_task()) as mock__update_statistics:
with patch.object(sp._audit, 'information', return_value=mock_task()) as mock_audit_information:
await sp._update_position_reached(1000, 100)
mock_last_object_id_update.assert_called_with(1000)
mock__update_statistics.assert_called_with(100)
mock_audit_information.assert_called_with(SendingProcess._AUDIT_CODE, {"sentRows": 100})
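# As asserted above, _update_position_reached is expected to chain three awaits:
# persist the new last object id, update the statistics counter, and write an
# audit entry with the number of rows sent.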
@pytest.mark.parametrize("plugin_file, plugin_type, plugin_name", [
("empty", "north", "Empty North Plugin"),
("pi_server", "north", "PI Server North"),
("ocs", "north", "OCS North")
])
@pytest.mark.asyncio
async def test_standard_plugins(self, plugin_file, plugin_type, plugin_name, event_loop):
"""Tests if the standard plugins are available and loadable and if they have the required methods """
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
# Tries to load the plugin
sp._config['plugin'] = plugin_file
sp._plugin_load()
# Evaluates if the plugin has all the required methods
assert callable(getattr(sp._plugin, 'plugin_info'))
assert callable(getattr(sp._plugin, 'plugin_init'))
assert callable(getattr(sp._plugin, 'plugin_send'))
assert callable(getattr(sp._plugin, 'plugin_shutdown'))
assert callable(getattr(sp._plugin, 'plugin_reconfigure'))
# Retrieves the info from the plugin
plugin_info = sp._plugin.plugin_info()
assert plugin_info['type'] == plugin_type
assert plugin_info['name'] == plugin_name
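# For reference, a minimal module satisfying the interface asserted above could
# look like the sketch below (the signatures are assumptions for illustration,
# not the documented plugin API):
#
#     def plugin_info():
#         return {'name': 'Empty North Plugin', 'type': 'north', 'version': '1.0.0'}
#     def plugin_init(config): ...
#     async def plugin_send(handle, payload, stream_id): ...
#     def plugin_shutdown(handle): ...
#     def plugin_reconfigure(handle, new_config): ...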
@pytest.mark.parametrize(
"p_config,"
"expected_config",
[
# Case 1
(
# p_config
{
"enable": {"value": "true"},
"duration": {"value": "10"},
"source": {"value": 'readings'},
"blockSize": {"value": "10"},
"memory_buffer_size": {"value": "10"},
"sleepInterval": {"value": "10"},
"plugin": {"value": "omf"},
"stream_id": {"value": "1"}
},
# expected_config
{
"enable": True,
"duration": 10,
"source": 'readings',
"blockSize": 10,
"memory_buffer_size": 10,
"sleepInterval": 10,
"plugin": "omf",
"stream_id": 1
},
),
]
)
@pytest.mark.asyncio
async def test_retrieve_configuration_good(self,
event_loop,
p_config,
expected_config):
""" Unit tests - _retrieve_configuration - tests the transformations """
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
with patch.object(sp, '_fetch_configuration', return_value=p_config):
sp._retrieve_configuration()
assert sp._config['enable'] == expected_config['enable']
assert sp._config['duration'] == expected_config['duration']
assert sp._config['source'] == expected_config['source']
assert sp._config['blockSize'] == expected_config['blockSize']
assert sp._config['memory_buffer_size'] == expected_config['memory_buffer_size']
assert sp._config['sleepInterval'] == expected_config['sleepInterval']
assert sp._config['plugin'] == expected_config['plugin']
assert sp._config['stream_id'] == expected_config['stream_id']
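# Conceptually, _retrieve_configuration coerces the manager's string values
# into typed entries; a sketch of the mapping exercised above (hypothetical,
# not the actual implementation):
#
#     _config['enable'] = raw['enable']['value'].upper() == 'TRUE'
#     _config['duration'] = int(raw['duration']['value'])
#     _config['blockSize'] = int(raw['blockSize']['value'])
#     _config['stream_id'] = int(raw['stream_id']['value'])
#     _config['plugin'] = raw['plugin']['value']  # strings pass through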
@pytest.mark.skip(reason="Stream ID tests no longer valid")
@pytest.mark.asyncio
async def test_start_stream_not_valid(self, event_loop):
""" Unit tests - _start - stream_id is not valid """
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
with patch.object(sp, '_plugin_load') as mocked_plugin_load:
result = await sp._start()
assert not result
assert not mocked_plugin_load.called
@pytest.mark.asyncio
async def test_start_sp_disabled(self, event_loop):
""" Unit tests - _start - sending process is disabled """
async def mock_stream():
return 1, True
async def mock_stat_key():
return "sp"
async def mock_master_stat_key():
return 'Readings Sent'
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._plugin = MagicMock()
sp._config['plugin'] = MagicMock()
sp._config['enable'] = False
sp._config['stream_id'] = 1
sp._config_from_manager = {}
with patch.object(sp, '_get_stream_id', return_value=mock_stream()) as mocked_get_stream_id:
with patch.object(sp, '_get_statistics_key', return_value=mock_stat_key()) as mocked_get_statistics_key:
with patch.object(sp, '_get_master_statistics_key', return_value=mock_master_stat_key()):
with patch.object(sp._core_microservice_management_client, 'update_configuration_item'):
with patch.object(sp, '_retrieve_configuration'):
with patch.object(sp, '_plugin_load') as mocked_plugin_load:
result = await sp._start()
assert not result
assert not mocked_plugin_load.called
@pytest.mark.asyncio
async def test_start_not_north(self, event_loop):
""" Unit tests - _start - it is not a north plugin """
async def mock_stream():
return 1, True
async def mock_stat_key():
return "sp"
async def mock_master_stat_key():
return 'Readings Sent'
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._plugin = MagicMock()
sp._config['plugin'] = MagicMock()
sp._config['enable'] = True
sp._config['stream_id'] = 1
sp._config_from_manager = {}
with patch.object(sp._core_microservice_management_client, 'update_configuration_item'):
with patch.object(sp, '_get_stream_id', return_value=mock_stream()) as mocked_get_stream_id:
with patch.object(sp, '_get_statistics_key', return_value=mock_stat_key()) as mocked_get_statistics_key:
with patch.object(sp, '_get_master_statistics_key', return_value=mock_master_stat_key()):
with patch.object(sp, '_retrieve_configuration'):
with patch.object(sp, '_plugin_load') as mocked_plugin_load:
with patch.object(sp._plugin, 'plugin_info') as mocked_plugin_info:
with patch.object(sp, '_is_north_valid', return_value=False) as mocked_is_north_valid:
result = await sp._start()
assert not result
assert mocked_plugin_load.called
assert mocked_plugin_info.called
assert mocked_is_north_valid.called
@pytest.mark.asyncio
async def test_start_good(self, event_loop):
""" Unit tests - _start """
async def mock_stream():
return 1, True
async def mock_stat_key():
return "sp"
async def mock_master_stat_key():
return 'Readings Sent'
with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:
with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:
with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:
with patch.object(asyncio, 'get_event_loop', return_value=event_loop):
sp = SendingProcess()
sp._plugin = MagicMock()
sp._config['plugin'] = MagicMock()
sp._config['enable'] = True
sp._config['stream_id'] = 1
sp._config_from_manager = {}
with patch.object(sp._core_microservice_management_client, 'update_configuration_item'):
with patch.object(sp, '_get_stream_id', return_value=mock_stream()) as mocked_get_stream_id:
with patch.object(sp, '_get_statistics_key', return_value=mock_stat_key()) as mocked_get_statistics_key:
with patch.object(sp, '_get_master_statistics_key', return_value=mock_master_stat_key()):
with patch.object(sp, '_retrieve_configuration') as mocked_retrieve_configuration:
with patch.object(sp, '_plugin_load') as mocked_plugin_load:
with patch.object(sp._plugin, 'plugin_info') as mocked_plugin_info:
with patch.object(sp, '_is_north_valid', return_value=True) as mocked_is_north_valid:
with patch.object(sp._plugin, 'plugin_init') as mocked_plugin_init:
result = await sp._start()
assert result
# mocked_is_stream_id_valid.called_with(STREAM_ID)
mocked_retrieve_configuration.assert_called_with(True)
assert mocked_plugin_load.called
assert mocked_plugin_info.called
assert mocked_is_north_valid.called
assert mocked_retrieve_configuration.called
assert mocked_plugin_init.called
| 41.132761 | 140 | 0.465375 | 9,256 | 100,693 | 4.742653 | 0.047645 | 0.035879 | 0.059798 | 0.016402 | 0.863843 | 0.839423 | 0.810037 | 0.795321 | 0.774705 | 0.748189 | 0 | 0.072361 | 0.437985 | 100,693 | 2,447 | 141 | 41.149571 | 0.703345 | 0.079271 | 0 | 0.598456 | 0 | 0 | 0.176584 | 0.040104 | 0 | 0 | 0 | 0 | 0.040265 | 1 | 0.001103 | false | 0.001655 | 0.006619 | 0 | 0.023718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
02af6b4eb54b64fe8c9a1bf971056346f0787f0c | 36 | py | Python | __init__.py | xmonader/expectless | cebbf5c1312c675a007196d3a8faf218a35c775c | [
"MIT"
] | 2 | 2018-07-24T10:59:11.000Z | 2021-04-15T13:31:16.000Z | __init__.py | xmonader/expectless | cebbf5c1312c675a007196d3a8faf218a35c775c | [
"MIT"
] | 1 | 2017-06-17T18:21:28.000Z | 2017-06-17T18:21:28.000Z | __init__.py | xmonader/expectless | cebbf5c1312c675a007196d3a8faf218a35c775c | [
"MIT"
] | null | null | null | from .expect import expect, interact | 36 | 36 | 0.833333 | 5 | 36 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |