hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
07084b46612e153b19abe5e5203e935edb741b50 | 1,001 | py | Python | sharpy/tools/interval_func.py | MadManSC2/sharpy-sc2 | 13950357df2db58033daab24f076e3ae83f0b2a8 | [
"MIT"
] | 1 | 2020-03-05T19:21:56.000Z | 2020-03-05T19:21:56.000Z | sharpy/tools/interval_func.py | MadManSC2/sharpy-sc2 | 13950357df2db58033daab24f076e3ae83f0b2a8 | [
"MIT"
] | null | null | null | sharpy/tools/interval_func.py | MadManSC2/sharpy-sc2 | 13950357df2db58033daab24f076e3ae83f0b2a8 | [
"MIT"
] | null | null | null | import sc2
class IntervalFunc:
def __init__(self, ai: sc2.BotAI, func, timer_seconds: float):
self.timer_seconds = timer_seconds
self.ai = ai
self.func = func
self.cached_value = None
self.last_call = None
def execute(self):
if self.last_call is None or self.ai.time > self.last_call + self.timer_seconds:
self.last_call = self.ai.time
self.cached_value = self.func()
return self.cached_value
class IntervalFuncAsync:
def __init__(self, ai: sc2.BotAI, func, timer_seconds: float):
self.timer_seconds = timer_seconds
self.ai = ai
self.func = func
self.cached_value = None
self.last_call = None
async def execute(self):
if self.last_call is None or self.ai.time > self.last_call + self.timer_seconds:
self.last_call = self.ai.time
self.cached_value = await self.func()
return self.cached_value | 33.366667 | 89 | 0.617383 | 134 | 1,001 | 4.38806 | 0.201493 | 0.081633 | 0.163265 | 0.095238 | 0.901361 | 0.901361 | 0.802721 | 0.802721 | 0.802721 | 0.802721 | 0 | 0.004286 | 0.300699 | 1,001 | 30 | 90 | 33.366667 | 0.835714 | 0 | 0 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.04 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
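The `IntervalFunc` pattern in the row above (call a function at most once per interval, otherwise return the cached result) can be sketched standalone. This is a minimal sketch, not the library's code: the `sc2.BotAI` dependency is replaced with an injected clock callable, and the fake-clock driver below is purely illustrative.

```python
class IntervalFunc:
    """Call func at most once per timer_seconds; otherwise return the cached value."""

    def __init__(self, get_time, func, timer_seconds: float):
        self.get_time = get_time      # clock source, e.g. time.monotonic (assumption: injected instead of sc2.BotAI.time)
        self.func = func
        self.timer_seconds = timer_seconds
        self.cached_value = None
        self.last_call = None

    def execute(self):
        now = self.get_time()
        # Run func only on the first call or after the interval has elapsed.
        if self.last_call is None or now > self.last_call + self.timer_seconds:
            self.last_call = now
            self.cached_value = self.func()
        return self.cached_value


# Drive it with a fake clock to show the caching behaviour.
calls = []
clock = [0.0]
f = IntervalFunc(lambda: clock[0], lambda: calls.append(1) or len(calls), 5.0)
f.execute()        # first call: func runs
clock[0] = 3.0
f.execute()        # within the interval: cached value returned
clock[0] = 6.0
f.execute()        # interval elapsed: func runs again
print(len(calls))  # -> 2
```

Injecting the clock keeps the caching logic testable without a running bot loop; the async variant in the row differs only in `await self.func()`.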
0732fb1cf7fa30ee12a88298f567a7ce4ab69660 | 3,483 | py | Python | tests/test_services/test_add_user_to_vm/actions.py | openvcloud/ays_templates | 7ce0bd5844ccfaefae554dd0dedeab2730a365cc | [
"Apache-2.0"
] | null | null | null | tests/test_services/test_add_user_to_vm/actions.py | openvcloud/ays_templates | 7ce0bd5844ccfaefae554dd0dedeab2730a365cc | [
"Apache-2.0"
] | 10 | 2017-10-25T13:23:23.000Z | 2018-03-28T16:00:06.000Z | tests/test_services/test_add_user_to_vm/actions.py | openvcloud/ays_templates | 7ce0bd5844ccfaefae554dd0dedeab2730a365cc | [
"Apache-2.0"
] | null | null | null | def test_user_access(job):
import requests, sys, time
service = job.service
RESULT_OK = 'OK : %s ' % service.name
RESULT_FAILED = 'FAILED : %s'
RESULT_ERROR = 'ERROR : %s'
failures = []
try:
service.model.data.result = RESULT_OK
g8client = service.producers.get('g8client')[0]
config_instance = "{}_{}".format(g8client.aysrepo.name, g8client.model.data.instance)
openv_client = j.clients.openvcloud.get(instance=config_instance, create=False, die=True, sshkey_path="/root/.ssh/ays_repos_key")
node_srv = service.producers.get('node')[0]
vdc_srv = node_srv.producers.get('vdc')[0]
for vdc in openv_client.api.cloudapi.cloudspaces.list():
if vdc['name'] == vdc_srv.name:
break
if vdc['name'] != vdc_srv.name:
raise RuntimeError('No matching cloudspace found')
for vm in openv_client.api.cloudapi.machines.list(cloudspaceId=vdc['id']):
if vm['name'] == node_srv.name:
break
if vm['name'] != node_srv.name:
raise RuntimeError('No matching VM found')
machine_info = openv_client.api.cloudapi.machines.get(machineId=vm['id'])
configured_uservdc = node_srv.model.data.uservdc[0]
acls = machine_info['acl']
for acl in acls:
if acl['userGroupId'] == configured_uservdc.name:
if acl['right'] != configured_uservdc.accesstype:
service.model.data.result = RESULT_FAILED %\
('User is not configured correctly: Expected acl: [%s] Found acl: [%s]' % (configured_uservdc.accesstype, acl['right']))
except Exception as e:
service.model.data.result = RESULT_ERROR % (str(sys.exc_info()[:2]) + str(e))
finally:
service.save()
def test_delete_user_access(job):
import requests, sys, time
service = job.service
RESULT_OK = 'OK : %s ' % service.name
RESULT_FAILED = 'FAILED : %s'
RESULT_ERROR = 'ERROR : %s'
failures = []
try:
service.model.data.result = RESULT_OK
g8client = service.producers.get('g8client')[0]
config_instance = "{}_{}".format(g8client.aysrepo.name, g8client.model.data.instance)
openv_client = j.clients.openvcloud.get(instance=config_instance, create=False, die=True, sshkey_path="/root/.ssh/ays_repos_key")
node_srv = service.producers.get('node')[0]
vdc_srv = node_srv.producers.get('vdc')[0]
for vdc in openv_client.api.cloudapi.cloudspaces.list():
if vdc['name'] == vdc_srv.name:
break
if vdc['name'] != vdc_srv.name:
raise RuntimeError('No matching cloudspace found')
for vm in openv_client.api.cloudapi.machines.list(cloudspaceId=vdc['id']):
if vm['name'] == node_srv.name:
break
if vm['name'] != node_srv.name:
raise RuntimeError('No matching VM found')
machine_info = openv_client.api.cloudapi.machines.get(machineId=vm['id'])
# after deleting all the users' access rights, only the owner of the machine should have access rights
if len(machine_info['acl']) > 1:
service.model.data.result = RESULT_FAILED % 'Unconfigured users have access to machine [%s]' % machine_info['name']
except Exception as e:
service.model.data.result = RESULT_ERROR % (str(sys.exc_info()[:2]) + str(e))
finally:
service.save()
| 46.44 | 148 | 0.623026 | 440 | 3,483 | 4.788636 | 0.238636 | 0.038443 | 0.045562 | 0.062648 | 0.809682 | 0.809682 | 0.777409 | 0.777409 | 0.777409 | 0.777409 | 0 | 0.006852 | 0.245765 | 3,483 | 74 | 149 | 47.067568 | 0.795204 | 0.028137 | 0 | 0.835821 | 0 | 0 | 0.125924 | 0.014189 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.029851 | 0 | 0.059701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
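The acl-matching loop in the test row above can be isolated as a small pure function. The acl dict shape (`userGroupId`, `right`) follows the snippet; the function name, sample acl values, and return convention are hypothetical illustrations, not the repo's API.

```python
def check_user_acl(acls, expected_user, expected_right):
    """Return None if expected_user holds expected_right, else an error string."""
    for acl in acls:
        if acl['userGroupId'] == expected_user:
            if acl['right'] != expected_right:
                # Mirrors the RESULT_FAILED message format used in the test actions.
                return ('User is not configured correctly: Expected acl: [%s] Found acl: [%s]'
                        % (expected_right, acl['right']))
            return None
    return 'User [%s] has no acl entry' % expected_user


# Hypothetical acl list of the shape returned by machines.get(...)['acl'].
acls = [{'userGroupId': 'owner', 'right': 'ACDRUX'},
        {'userGroupId': 'alice', 'right': 'R'}]
print(check_user_acl(acls, 'alice', 'R'))  # -> None
```

Factoring the check out of the service plumbing also fixes a subtle hazard in the original loop, which reuses the loop variable after `break` and misreports when no entry matches at all.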
4aed0ce63429c556adeca24cf36ad2a847cd1677 | 132 | py | Python | envdsys/shared/utilities/__init__.py | NOAA-PMEL/envDataSystem | 4db4a3569d2329658799a3eef06ce36dd5c0597d | [
"Unlicense"
] | 1 | 2021-11-06T19:22:53.000Z | 2021-11-06T19:22:53.000Z | envdsys/shared/utilities/__init__.py | NOAA-PMEL/envDataSystem | 4db4a3569d2329658799a3eef06ce36dd5c0597d | [
"Unlicense"
] | 25 | 2019-06-18T20:40:36.000Z | 2021-07-23T20:56:48.000Z | envdsys/shared/utilities/__init__.py | NOAA-PMEL/envDataSystem | 4db4a3569d2329658799a3eef06ce36dd5c0597d | [
"Unlicense"
] | null | null | null | # @Author: derek
# @Date: 2018-12-13T13:29:27-08:00
# @Last modified by: derek
# @Last modified time: 2018-12-13T13:57:14-08:00
| 26.4 | 48 | 0.666667 | 24 | 132 | 3.666667 | 0.666667 | 0.136364 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321429 | 0.151515 | 132 | 4 | 49 | 33 | 0.464286 | 0.931818 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
db78641cf10baee9b9924a09f89d966768c5186f | 350,803 | py | Python | pmu-tools-master/ucevent/jkt_uc.py | patinnc/60secs | 45ad68e4359e0dfd506f9e3a898c216ed38e7fd0 | [
"MIT"
] | null | null | null | pmu-tools-master/ucevent/jkt_uc.py | patinnc/60secs | 45ad68e4359e0dfd506f9e3a898c216ed38e7fd0 | [
"MIT"
] | null | null | null | pmu-tools-master/ucevent/jkt_uc.py | patinnc/60secs | 45ad68e4359e0dfd506f9e3a898c216ed38e7fd0 | [
"MIT"
] | 1 | 2021-03-22T20:38:10.000Z | 2021-03-22T20:38:10.000Z | # Support for Intel Xeon E5 2600 series uncore monitoring
# see http://www.intel.com/content/dam/www/public/us/en/documents/design-guides/xeon-e5-2600-uncore-guide.pdf
# for more details on the events and formulas.
# aliases
aliases = {
"QPIMatch1": "Q_Py_PCI_PMON_PKT_MATCH1",
"QPIMask0": "Q_Py_PCI_PMON_PKT_MASK0",
"QPIMatch0": "Q_Py_PCI_PMON_BOX_MATCH0",
"PCUFilter": "PCU_MSR_PMON_BOX_FILTER",
"CBoFilter": "Cn_MSR_PMON_BOX_FILTER",
"QPIMask1": "Q_Py_PCI_PMON_PKT_MASK1",
}
events = {
# R3QPI:
"R3QPI.CLOCKTICKS": {
"Box": "R3QPI",
"Category": "R3QPI UCLK Events",
"Counters": "0-2",
"Defn": "Counts the number of uclks in the QPI uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the QPI Agent is close to the Ubox, they generally should not diverge by more than a handful of cycles.",
"Desc": "Number of uclks in domain",
"EvSel": 1,
},
"R3QPI.IIO_CREDITS_ACQUIRED": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Acquired",
"EvSel": 32,
},
"R3QPI.IIO_CREDITS_ACQUIRED.NCS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Acquired",
"EvSel": 32,
"Umask": "bxx1xxxxx",
},
"R3QPI.IIO_CREDITS_ACQUIRED.NCB": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Acquired",
"EvSel": 32,
"Umask": "bxxx1xxxx",
},
"R3QPI.IIO_CREDITS_ACQUIRED.DRS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Acquired",
"EvSel": 32,
"Umask": "bxxxx1xxx",
},
"R3QPI.IIO_CREDITS_REJECT": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Rejected",
"EvSel": 33,
},
"R3QPI.IIO_CREDITS_REJECT.NCS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Rejected",
"EvSel": 33,
"Umask": "bxx1xxxxx",
},
"R3QPI.IIO_CREDITS_REJECT.NCB": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Rejected",
"EvSel": 33,
"Umask": "bxxx1xxxx",
},
"R3QPI.IIO_CREDITS_REJECT.DRS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit Rejected",
"EvSel": 33,
"Umask": "bxxxx1xxx",
},
"R3QPI.IIO_CREDITS_USED": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit In Use",
"EvSel": 34,
},
"R3QPI.IIO_CREDITS_USED.NCS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit In Use",
"EvSel": 34,
"Umask": "bxx1xxxxx",
},
"R3QPI.IIO_CREDITS_USED.NCB": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit In Use",
"EvSel": 34,
"Umask": "bxxx1xxxx",
},
"R3QPI.IIO_CREDITS_USED.DRS": {
"Box": "R3QPI",
"Category": "R3QPI IIO_CREDITS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.",
"Desc": "to IIO BL Credit In Use",
"EvSel": 34,
"Umask": "bxxxx1xxx",
},
"R3QPI.RING_AD_USED": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 AD Ring in Use",
"EvSel": 7,
},
"R3QPI.RING_AD_USED.CW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxxxx1",
},
"R3QPI.RING_AD_USED.CCW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxx1xx",
},
"R3QPI.RING_AD_USED.CW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxxx1x",
},
"R3QPI.RING_AD_USED.CCW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxx1xxx",
},
"R3QPI.RING_AK_USED": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.",
"Desc": "R3 AK Ring in Use",
"EvSel": 8,
},
"R3QPI.RING_AK_USED.CW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.",
"Desc": "R3 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxxxx1",
},
"R3QPI.RING_AK_USED.CCW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.",
"Desc": "R3 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxx1xx",
},
"R3QPI.RING_AK_USED.CW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.",
"Desc": "R3 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxxx1x",
},
"R3QPI.RING_AK_USED.CCW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.",
"Desc": "R3 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxx1xxx",
},
"R3QPI.RING_BL_USED": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 BL Ring in Use",
"EvSel": 9,
},
"R3QPI.RING_BL_USED.CW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxxxx1",
},
"R3QPI.RING_BL_USED.CCW_EVEN": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxx1xx",
},
"R3QPI.RING_BL_USED.CW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxxx1x",
},
"R3QPI.RING_BL_USED.CCW_ODD": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R3 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxx1xxx",
},
"R3QPI.RING_IV_USED": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.",
"Desc": "R3 IV Ring in Use",
"EvSel": 10,
},
"R3QPI.RING_IV_USED.ANY": {
"Box": "R3QPI",
"Category": "R3QPI RING Events",
"Counters": "0-2",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.",
"Desc": "R3 IV Ring in Use",
"EvSel": 10,
"Umask": "b00001111",
},
"R3QPI.RxR_BYPASSED": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of times when the Ingress was bypassed and an incoming transaction was bypassed directly across the BGF and into the qfclk domain.",
"Desc": "Ingress Bypassed",
"EvSel": 18,
},
"R3QPI.RxR_BYPASSED.AD": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of times when the Ingress was bypassed and an incoming transaction was bypassed directly across the BGF and into the qfclk domain.",
"Desc": "Ingress Bypassed",
"EvSel": 18,
"Umask": "bxxxxxxx1",
},
"R3QPI.RxR_CYCLES_NE": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
},
"R3QPI.RxR_CYCLES_NE.NCS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxx1xxxxx",
},
"R3QPI.RxR_CYCLES_NE.NCB": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxx1xxxx",
},
"R3QPI.RxR_CYCLES_NE.DRS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxxx1xxx",
},
"R3QPI.RxR_CYCLES_NE.SNP": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxxxxx1x",
},
"R3QPI.RxR_CYCLES_NE.HOM": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxxxxxx1",
},
"R3QPI.RxR_CYCLES_NE.NDR": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxxxx1xx",
},
"R3QPI.RxR_INSERTS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
},
"R3QPI.RxR_INSERTS.NCS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxx1xxxxx",
},
"R3QPI.RxR_INSERTS.NCB": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxxx1xxxx",
},
"R3QPI.RxR_INSERTS.DRS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxxxx1xxx",
},
"R3QPI.RxR_INSERTS.SNP": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxxxxxx1x",
},
"R3QPI.RxR_INSERTS.HOM": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxxxxxxx1",
},
"R3QPI.RxR_INSERTS.NDR": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Allocations",
"EvSel": 17,
"Umask": "bxxxxx1xx",
},
"R3QPI.RxR_OCCUPANCY": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
},
"R3QPI.RxR_OCCUPANCY.NCS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxx1xxxxx",
},
"R3QPI.RxR_OCCUPANCY.NCB": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxxx1xxxx",
},
"R3QPI.RxR_OCCUPANCY.DRS": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxxxx1xxx",
},
"R3QPI.RxR_OCCUPANCY.SNP": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxxxxxx1x",
},
"R3QPI.RxR_OCCUPANCY.HOM": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxxxxxxx1",
},
"R3QPI.RxR_OCCUPANCY.NDR": {
"Box": "R3QPI",
"Category": "R3QPI INGRESS Events",
"Counters": 0,
"Defn": "Accumulates the occupancy of a given QPI Ingress queue in each cycles. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.",
"Desc": "Ingress Occupancy Accumulator",
"EvSel": 19,
"MaxIncCyc": 32,
"SubCtr": 1,
"Umask": "bxxxxx1xx",
},
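# Derived metrics (a sketch based on the Defn text above; the names are the
# event keys in this table, all sampled over the same interval):
#   avg Ingress queue occupancy = RxR_OCCUPANCY / RxR_CYCLES_NE
#   avg Ingress queuing latency = RxR_OCCUPANCY / RxR_INSERTS   (Little's Law)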
"R3QPI.TxR_CYCLES_FULL": {
"Box": "R3QPI",
"Category": "R3QPI EGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the R2PCIe Egress buffer is full.",
"Desc": "Egress Cycles Full",
"EvSel": 37,
},
"R3QPI.TxR_CYCLES_NE": {
"Box": "R3QPI",
"Category": "R3QPI EGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the QPI Egress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Cycles Not Empty",
"EvSel": 35,
},
"R3QPI.TxR_INSERTS": {
"Box": "R3QPI",
"Category": "R3QPI EGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of allocations into the QPI Egress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Egress Occupancy Accumulator event in order to calculate average queue latency. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Allocations",
"EvSel": 36,
},
"R3QPI.TxR_NACK": {
"Box": "R3QPI",
"Category": "R3QPI EGRESS Events",
"Counters": "0-1",
"Desc": "Egress NACK",
"EvSel": 38,
},
"R3QPI.VN0_CREDITS_REJECT": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
},
"R3QPI.VN0_CREDITS_REJECT.NCS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxx1xxxxx",
},
"R3QPI.VN0_CREDITS_REJECT.NCB": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxxx1xxxx",
},
"R3QPI.VN0_CREDITS_REJECT.DRS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxxxx1xxx",
},
"R3QPI.VN0_CREDITS_REJECT.SNP": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxxxxxx1x",
},
"R3QPI.VN0_CREDITS_REJECT.HOM": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxxxxxxx1",
},
"R3QPI.VN0_CREDITS_REJECT.NDR": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation.",
"Desc": "VN0 Credit Acquisition Failed on DRS",
"EvSel": 55,
"Umask": "bxxxxx1xx",
},
"R3QPI.VN0_CREDITS_USED": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
},
"R3QPI.VN0_CREDITS_USED.NCS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxx1xxxxx",
},
"R3QPI.VN0_CREDITS_USED.NCB": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxxx1xxxx",
},
"R3QPI.VN0_CREDITS_USED.DRS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxxxx1xxx",
},
"R3QPI.VN0_CREDITS_USED.SNP": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxxxxxx1x",
},
"R3QPI.VN0_CREDITS_USED.HOM": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxxxxxxx1",
},
"R3QPI.VN0_CREDITS_USED.NDR": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VN0_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.",
"Desc": "VN0 Credit Used",
"EvSel": 54,
"Umask": "bxxxxx1xx",
},
"R3QPI.VNA_CREDITS_ACQUIRED": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credts from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transfered). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided. One can count the number of packets transfered in a given message class using an qfclk event.",
"Desc": "VNA credit Acquisitions",
"EvSel": 51,
"MaxIncCyc": 4,
},
"R3QPI.VNA_CREDITS_REJECT": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
},
"R3QPI.VNA_CREDITS_REJECT.NCS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxx1xxxxx",
},
"R3QPI.VNA_CREDITS_REJECT.NCB": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxxx1xxxx",
},
"R3QPI.VNA_CREDITS_REJECT.DRS": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxxxx1xxx",
},
"R3QPI.VNA_CREDITS_REJECT.SNP": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxxxxxx1x",
},
"R3QPI.VNA_CREDITS_REJECT.HOM": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxxxxxxx1",
},
"R3QPI.VNA_CREDITS_REJECT.NDR": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.",
"Desc": "VNA Credit Reject",
"EvSel": 52,
"Umask": "bxxxxx1xx",
},
"R3QPI.VNA_CREDIT_CYCLES_OUT": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of QPI uclk cycles when the transmitted has no VNA credits available and therefore cannot send any requests on this channel. Note that this does not mean that no flits can be transmitted, as those holding VN0 credits will still (potentially) be able to transmit. Generally it is the goal of the uncore that VNA credits should not run out, as this can substantially throttle back useful QPI bandwidth.",
"Desc": "Cycles with no VNA credits available",
"EvSel": 49,
},
"R3QPI.VNA_CREDIT_CYCLES_USED": {
"Box": "R3QPI",
"Category": "R3QPI LINK_VNA_CREDITS Events",
"Counters": "0-1",
"Defn": "Number of QPI uclk cycles with one or more VNA credits in use. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average number of used VNA credits.",
"Desc": "Cycles with 1 or more VNA credits in use",
"EvSel": 50,
},
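# Derived metrics (a sketch based on the Defn text above; the VNA In-Use
# Accumulator event referenced there is not defined in this table):
#   avg VNA credits in use     = VNA_in_use_accumulator / VNA_CREDIT_CYCLES_USED
#   avg credit-holder lifetime = VNA_in_use_accumulator / VNA_CREDITS_ACQUIRED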
# CBO:
"CBO.CLOCKTICKS": {
"Box": "CBO",
"Category": "CBO UCLK Events",
"Counters": "0-3",
"Desc": "Uncore Clocks",
"EvSel": 0,
},
"CBO.COUNTER0_OCCUPANCY": {
"Box": "CBO",
"Category": "CBO OCCUPANCY Events",
"Counters": "1-3",
"Defn": "Since occupancy counts can only be captured in the Cbo's 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.",
"Desc": "Counter 0 Occupancy",
"EvSel": 31,
"MaxIncCyc": 20,
"SubCtr": 1,
},
"CBO.ISMQ_DRD_MISS_OCC": {
"Box": "CBO",
"Category": "CBO ISMQ Events",
"Counters": "0-1",
"EvSel": 33,
"MaxIncCyc": 20,
"SubCtr": 1,
},
"CBO.LLC_LOOKUP": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.",
"Desc": "Cache Lookups",
"EvSel": 52,
"Notes": "Bit 0 of the umask must always be set for this event. This allows us to match a given state (or states). The state is programmed in Cn_MSR_PMON_BOX_FILTER.state. The state field is a bit mask, so you can select (and monitor) multiple states at a time. 0 = I (miss), 1 = S, 2 = E, 3 = M, 4 = F. For example, if you wanted to monitor F and S hits, you could set 10010b in the 5-bit state field. To monitor any lookup, set the field to 0x1F.",
},
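# Worked example of the LLC_LOOKUP state filter (per the Notes above): the
# 5-bit mask in Cn_MSR_PMON_BOX_FILTER.state uses bit 0 = I (miss), bit 1 = S,
# bit 2 = E, bit 3 = M, bit 4 = F. So F+S hits = 0b10010 = 0x12, and any
# lookup = 0x1F.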
"CBO.LLC_LOOKUP.DATA_READ": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.",
"Desc": "Cache Lookups",
"EvSel": 52,
"Notes": "Bit 0 of the umask must always be set for this event. This allows us to match a given state (or states). The state is programmed in Cn_MSR_PMON_BOX_FILTER.state. The state field is a bit mask, so you can select (and monitor) multiple states at a time. 0 = I (miss), 1 = S, 2 = E, 3 = M, 4 = F. For example, if you wanted to monitor F and S hits, you could set 10010b in the 5-bit state field. To monitor any lookup, set the field to 0x1F.",
"Umask": "b00000011",
},
"CBO.LLC_LOOKUP.REMOTE_SNOOP": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.",
"Desc": "Cache Lookups",
"EvSel": 52,
"Notes": "Bit 0 of the umask must always be set for this event. This allows us to match a given state (or states). The state is programmed in Cn_MSR_PMON_BOX_FILTER.state. The state field is a bit mask, so you can select (and monitor) multiple states at a time. 0 = I (miss), 1 = S, 2 = E, 3 = M, 4 = F. For example, if you wanted to monitor F and S hits, you could set 10010b in the 5-bit state field. To monitor any lookup, set the field to 0x1F.",
"Umask": "b00001001",
},
"CBO.LLC_LOOKUP.WRITE": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.",
"Desc": "Cache Lookups",
"EvSel": 52,
"Notes": "Bit 0 of the umask must always be set for this event. This allows us to match a given state (or states). The state is programmed in Cn_MSR_PMON_BOX_FILTER.state. The state field is a bit mask, so you can select (and monitor) multiple states at a time. 0 = I (miss), 1 = S, 2 = E, 3 = M, 4 = F. For example, if you wanted to monitor F and S hits, you could set 10010b in the 5-bit state field. To monitor any lookup, set the field to 0x1F.",
"Umask": "b00000101",
},
"CBO.LLC_LOOKUP.NID": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.",
"Desc": "Cache Lookups",
"EvSel": 52,
"Notes": "Bit 0 of the umask must always be set for this event. This allows us to match a given state (or states). The state is programmed in Cn_MSR_PMON_BOX_FILTER.state. The state field is a bit mask, so you can select (and monitor) multiple states at a time. 0 = I (miss), 1 = S, 2 = E, 3 = M, 4 = F. For example, if you wanted to monitor F and S hits, you could set 10010b in the 5-bit state field. To monitor any lookup, set the field to 0x1F.",
"Umask": "b01000001",
},
"CBO.LLC_VICTIMS": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
},
"CBO.LLC_VICTIMS.MISS": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
"Umask": "bxxxx1xxx",
},
"CBO.LLC_VICTIMS.NID": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
"Umask": "bx1xxxxxx",
},
"CBO.LLC_VICTIMS.S_STATE": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
"Umask": "bxxxxx1xx",
},
"CBO.LLC_VICTIMS.E_STATE": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
"Umask": "bxxxxxx1x",
},
"CBO.LLC_VICTIMS.M_STATE": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Counters": "0-1",
"Defn": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Desc": "Lines Victimized",
"EvSel": 55,
"Umask": "bxxxxxxx1",
},
"CBO.MISC": {
"Box": "CBO",
"Category": "CBO MISC Events",
"Counters": "0-1",
"Defn": "Miscellaneous events in the Cbo.",
"Desc": "Cbo Misc",
"EvSel": 57,
},
"CBO.MISC.RFO_HIT_S": {
"Box": "CBO",
"Category": "CBO MISC Events",
"Counters": "0-1",
"Defn": "Miscellaneous events in the Cbo.",
"Desc": "Cbo Misc",
"EvSel": 57,
"Umask": "bxxxx1xxx",
},
"CBO.MISC.RSPI_WAS_FSE": {
"Box": "CBO",
"Category": "CBO MISC Events",
"Counters": "0-1",
"Defn": "Miscellaneous events in the Cbo.",
"Desc": "Cbo Misc",
"EvSel": 57,
"Umask": "bxxxxxxx1",
},
"CBO.MISC.STARTED": {
"Box": "CBO",
"Category": "CBO MISC Events",
"Counters": "0-1",
"Defn": "Miscellaneous events in the Cbo.",
"Desc": "Cbo Misc",
"EvSel": 57,
"Umask": "bxxxxx1xx",
},
"CBO.MISC.WC_ALIASING": {
"Box": "CBO",
"Category": "CBO MISC Events",
"Counters": "0-1",
"Defn": "Miscellaneous events in the Cbo.",
"Desc": "Cbo Misc",
"EvSel": 57,
"Umask": "bxxxxxx1x",
},
"CBO.RING_AD_USED": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"Desc": "AD Ring In Use",
"EvSel": 27,
},
"CBO.RING_AD_USED.UP_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"Desc": "AD Ring In Use",
"EvSel": 27,
"Umask": "bxxxxxx1x",
},
"CBO.RING_AD_USED.DOWN_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"Desc": "AD Ring In Use",
"EvSel": 27,
"Umask": "bxxxx1xxx",
},
"CBO.RING_AD_USED.DOWN_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"Desc": "AD Ring In Use",
"EvSel": 27,
"Umask": "bxxxxx1xx",
},
"CBO.RING_AD_USED.UP_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"Desc": "AD Ring In Use",
"EvSel": 27,
"Umask": "bxxxxxxx1",
},
"CBO.RING_AK_USED": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AK is NOT the same ring as CBo 2 UP AK because they are on opposite sides of the ring.",
"Desc": "AK Ring In Use",
"EvSel": 28,
},
"CBO.RING_AK_USED.UP_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AK is NOT the same ring as CBo 2 UP AK because they are on opposite sides of the ring.",
"Desc": "AK Ring In Use",
"EvSel": 28,
"Umask": "bxxxxxx1x",
},
"CBO.RING_AK_USED.DOWN_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AK is NOT the same ring as CBo 2 UP AK because they are on opposite sides of the ring.",
"Desc": "AK Ring In Use",
"EvSel": 28,
"Umask": "bxxxx1xxx",
},
"CBO.RING_AK_USED.DOWN_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AK is NOT the same ring as CBo 2 UP AK because they are on opposite sides of the ring.",
"Desc": "AK Ring In Use",
"EvSel": 28,
"Umask": "bxxxxx1xx",
},
"CBO.RING_AK_USED.UP_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AK is NOT the same ring as CBo 2 UP AK because they are on opposite sides of the ring.",
"Desc": "AK Ring In Use",
"EvSel": 28,
"Umask": "bxxxxxxx1",
},
"CBO.RING_BL_USED": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP BL is NOT the same ring as CBo 2 UP BL because they are on opposite sides of the ring.",
"Desc": "BL Ring in Use",
"EvSel": 29,
},
"CBO.RING_BL_USED.UP_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP BL is NOT the same ring as CBo 2 UP BL because they are on opposite sides of the ring.",
"Desc": "BL Ring in Use",
"EvSel": 29,
"Umask": "bxxxxxx1x",
},
"CBO.RING_BL_USED.DOWN_ODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP BL is NOT the same ring as CBo 2 UP BL because they are on opposite sides of the ring.",
"Desc": "BL Ring in Use",
"EvSel": 29,
"Umask": "bxxxx1xxx",
},
"CBO.RING_BL_USED.DOWN_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP BL is NOT the same ring as CBo 2 UP BL because they are on opposite sides of the ring.",
"Desc": "BL Ring in Use",
"EvSel": 29,
"Umask": "bxxxxx1xx",
},
"CBO.RING_BL_USED.UP_EVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the \"UP\" direction is on the clockwise ring and \"DN\" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP BL is NOT the same ring as CBo 2 UP BL because they are on opposite sides of the ring.",
"Desc": "BL Ring in Use",
"EvSel": 29,
"Umask": "bxxxxxxx1",
},
"CBO.RING_BOUNCES": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "0-1",
"Desc": "Number of LLC responses that bounced on the Ring.",
"EvSel": 5,
},
"CBO.RING_BOUNCES.IV_CORE": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "0-1",
"Desc": "Number of LLC responses that bounced on the Ring.",
"EvSel": 5,
"Umask": "bxxxx1xxx",
},
"CBO.RING_BOUNCES.AK_CORE": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "0-1",
"Desc": "Number of LLC responses that bounced on the Ring.",
"EvSel": 5,
"Umask": "bxxxxxx1x",
},
"CBO.RING_BOUNCES.BL_CORE": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "0-1",
"Desc": "Number of LLC responses that bounced on the Ring.",
"EvSel": 5,
"Umask": "bxxxxx1xx",
},
"CBO.RING_IV_USED": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring in JKT. Therefore, if one wants to monitor the \"Even\" ring, they should select both UP_EVEN and DOWN_EVEN. To monitor the \"Odd\" ring, they should select both UP_ODD and DOWN_ODD.",
"Desc": "IV Ring in Use",
"EvSel": 30,
},
"CBO.RING_IV_USED.ANY": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "2-3",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring in JKT. Therefore, if one wants to monitor the \"Even\" ring, they should select both UP_EVEN and DOWN_EVEN. To monitor the \"Odd\" ring, they should select both UP_ODD and DOWN_ODD.",
"Desc": "IV Ring in Use",
"EvSel": 30,
"Umask": "b00001111",
},
"CBO.RING_SRC_THRTL": {
"Box": "CBO",
"Category": "CBO RING Events",
"Counters": "0-1",
"EvSel": 7,
},
"CBO.RxR_EXT_STARVED": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.",
"Desc": "Ingress Arbiter Blocking Cycles",
"EvSel": 18,
},
"CBO.RxR_EXT_STARVED.IPQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.",
"Desc": "Ingress Arbiter Blocking Cycles",
"EvSel": 18,
"Umask": "bxxxxxx1x",
},
"CBO.RxR_EXT_STARVED.ISMQ_BIDS": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.",
"Desc": "Ingress Arbiter Blocking Cycles",
"EvSel": 18,
"Umask": "bxxxx1xxx",
},
"CBO.RxR_EXT_STARVED.ISMQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.",
"Desc": "Ingress Arbiter Blocking Cycles",
"EvSel": 18,
"Umask": "bxxxxx1xx",
},
"CBO.RxR_EXT_STARVED.IRQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.",
"Desc": "Ingress Arbiter Blocking Cycles",
"EvSel": 18,
"Umask": "bxxxxxxx1",
},
"CBO.RxR_INSERTS": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts number of allocations per cycle into the specified Ingress queue.",
"Desc": "Ingress Allocations",
"EvSel": 19,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
},
"CBO.RxR_INSERTS.VFIFO": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts number of allocations per cycle into the specified Ingress queue.",
"Desc": "Ingress Allocations",
"EvSel": 19,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"Umask": "bxxx1xxxx",
},
"CBO.RxR_INSERTS.IPQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts number of allocations per cycle into the specified Ingress queue.",
"Desc": "Ingress Allocations",
"EvSel": 19,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"Umask": "bxxxxx1xx",
},
"CBO.RxR_INSERTS.IRQ_REJECTED": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts number of allocations per cycle into the specified Ingress queue.",
"Desc": "Ingress Allocations",
"EvSel": 19,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"Umask": "bxxxxxx1x",
},
"CBO.RxR_INSERTS.IRQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0-1",
"Defn": "Counts number of allocations per cycle into the specified Ingress queue.",
"Desc": "Ingress Allocations",
"EvSel": 19,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"Umask": "bxxxxxxx1",
},
"CBO.RxR_IPQ_RETRY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.",
"Desc": "Probe Queue Retries",
"EvSel": 49,
},
"CBO.RxR_IPQ_RETRY.QPI_CREDITS": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.",
"Desc": "Probe Queue Retries",
"EvSel": 49,
"Umask": "bxxx1xxxx",
},
"CBO.RxR_IPQ_RETRY.ADDR_CONFLICT": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.",
"Desc": "Probe Queue Retries",
"EvSel": 49,
"Umask": "bxxxxx1xx",
},
"CBO.RxR_IPQ_RETRY.ANY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.",
"Desc": "Probe Queue Retries",
"EvSel": 49,
"Umask": "bxxxxxxx1",
},
"CBO.RxR_IPQ_RETRY.FULL": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.",
"Desc": "Probe Queue Retries",
"EvSel": 49,
"Umask": "bxxxxxx1x",
},
"CBO.RxR_IRQ_RETRY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
},
"CBO.RxR_IRQ_RETRY.RTID": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
"Umask": "bxxxx1xxx",
},
"CBO.RxR_IRQ_RETRY.QPI_CREDITS": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
"Umask": "bxxx1xxxx",
},
"CBO.RxR_IRQ_RETRY.ADDR_CONFLICT": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
"Umask": "bxxxxx1xx",
},
"CBO.RxR_IRQ_RETRY.ANY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
"Umask": "bxxxxxxx1",
},
"CBO.RxR_IRQ_RETRY.FULL": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Desc": "Ingress Request Queue Rejects",
"EvSel": 50,
"Umask": "bxxxxxx1x",
},
"CBO.RxR_ISMQ_RETRY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
},
"CBO.RxR_ISMQ_RETRY.RTID": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
"Umask": "bxxxx1xxx",
},
"CBO.RxR_ISMQ_RETRY.QPI_CREDITS": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
"Umask": "bxxx1xxxx",
},
"CBO.RxR_ISMQ_RETRY.ANY": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
"Umask": "bxxxxxxx1",
},
"CBO.RxR_ISMQ_RETRY.IIO_CREDITS": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
"Umask": "bxx1xxxxx",
},
"CBO.RxR_ISMQ_RETRY.FULL": {
"Box": "CBO",
"Category": "CBO INGRESS_RETRY Events",
"Counters": "0-1",
"Defn": "Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"Desc": "ISMQ Retries",
"EvSel": 51,
"Umask": "bxxxxxx1x",
},
"CBO.RxR_OCCUPANCY": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0",
"Defn": "Counts number of entries in the specified Ingress queue in each cycle.",
"Desc": "Ingress Occupancy",
"EvSel": 17,
"MaxIncCyc": 20,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"SubCtr": 1,
},
"CBO.RxR_OCCUPANCY.VFIFO": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0",
"Defn": "Counts number of entries in the specified Ingress queue in each cycle.",
"Desc": "Ingress Occupancy",
"EvSel": 17,
"MaxIncCyc": 20,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"SubCtr": 1,
"Umask": "bxxx1xxxx",
},
"CBO.RxR_OCCUPANCY.IPQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0",
"Defn": "Counts number of entries in the specified Ingress queue in each cycle.",
"Desc": "Ingress Occupancy",
"EvSel": 17,
"MaxIncCyc": 20,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"SubCtr": 1,
"Umask": "bxxxxx1xx",
},
"CBO.RxR_OCCUPANCY.IRQ_REJECTED": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0",
"Defn": "Counts number of entries in the specified Ingress queue in each cycle.",
"Desc": "Ingress Occupancy",
"EvSel": 17,
"MaxIncCyc": 20,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"SubCtr": 1,
"Umask": "bxxxxxx1x",
},
"CBO.RxR_OCCUPANCY.IRQ": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Counters": "0",
"Defn": "Counts number of entries in the specified Ingress queue in each cycle.",
"Desc": "Ingress Occupancy",
"EvSel": 17,
"MaxIncCyc": 20,
"Notes": "IRQ_REJECTED should not be ORed with the other umasks.",
"SubCtr": 1,
"Umask": "bxxxxxxx1",
},
"CBO.TOR_INSERTS": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
},
"CBO.TOR_INSERTS.NID_MISS_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01001010",
},
"CBO.TOR_INSERTS.NID_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01000001",
},
"CBO.TOR_INSERTS.MISS_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b00000011",
},
"CBO.TOR_INSERTS.NID_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01001000",
},
"CBO.TOR_INSERTS.NID_EVICTION": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01000100",
},
"CBO.TOR_INSERTS.NID_MISS_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01000011",
},
"CBO.TOR_INSERTS.EVICTION": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b00000100",
},
"CBO.TOR_INSERTS.WB": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b00010000",
},
"CBO.TOR_INSERTS.NID_WB": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b01010000",
},
"CBO.TOR_INSERTS.OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b00000001",
},
"CBO.TOR_INSERTS.MISS_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": "0-1",
"Defn": "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).",
"Desc": "TOR Inserts",
"EvSel": 53,
"Umask": "b00001010",
},
"CBO.TOR_OCCUPANCY": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
},
"CBO.TOR_OCCUPANCY.NID_MISS_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b01001010",
},
"CBO.TOR_OCCUPANCY.NID_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b01000001",
},
"CBO.TOR_OCCUPANCY.MISS_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b00000011",
},
"CBO.TOR_OCCUPANCY.ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b00001000",
},
"CBO.TOR_OCCUPANCY.NID_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b01001000",
},
"CBO.TOR_OCCUPANCY.NID_EVICTION": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b01000100",
},
"CBO.TOR_OCCUPANCY.NID_MISS_OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b01000011",
},
"CBO.TOR_OCCUPANCY.EVICTION": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b00000100",
},
"CBO.TOR_OCCUPANCY.OPCODE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b00000001",
},
"CBO.TOR_OCCUPANCY.MISS_ALL": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Counters": 0,
"Defn": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select \"MISS_OPC_MATCH\" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182)",
"Desc": "TOR Occupancy",
"EvSel": 54,
"MaxIncCyc": 20,
"SubCtr": 1,
"Umask": "b00001010",
},
"CBO.TxR_ADS_USED": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"EvSel": 4,
},
"CBO.TxR_INSERTS": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
},
"CBO.TxR_INSERTS.BL_CACHE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxxxxx1xx",
},
"CBO.TxR_INSERTS.AK_CORE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxx1xxxxx",
},
"CBO.TxR_INSERTS.AD_CORE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxxx1xxxx",
},
"CBO.TxR_INSERTS.IV_CACHE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxxxx1xxx",
},
"CBO.TxR_INSERTS.BL_CORE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bx1xxxxxx",
},
"CBO.TxR_INSERTS.AK_CACHE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxxxxxx1x",
},
"CBO.TxR_INSERTS.AD_CACHE": {
"Box": "CBO",
"Category": "CBO EGRESS Events",
"Counters": "0-1",
"Defn": "Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.",
"Desc": "Egress Allocations",
"EvSel": 2,
"Umask": "bxxxxxxx1",
},
# HA:
"HA.ADDR_OPC_MATCH": {
"Box": "HA",
"Category": "HA ADDR_OPCODE_MATCH Events",
"Counters": "0-3",
"Desc": "QPI Address/Opcode Match",
"EvSel": 32,
},
"HA.ADDR_OPC_MATCH.FILT": {
"Box": "HA",
"Category": "HA ADDR_OPCODE_MATCH Events",
"Counters": "0-3",
"Desc": "QPI Address/Opcode Match",
"EvSel": 32,
"Umask": "b00000011",
},
"HA.CLOCKTICKS": {
"Box": "HA",
"Category": "HA UCLK Events",
"Counters": "0-3",
"Defn": "Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent.",
"Desc": "uclks",
"EvSel": 0,
},
"HA.CONFLICT_CYCLES": {
"Box": "HA",
"Category": "HA CONFLICTS Events",
"Counters": "0-3",
"Desc": "Conflict Checks",
"EvSel": 11,
"Broken": 1,
},
"HA.CONFLICT_CYCLES.CONFLICT": {
"Box": "HA",
"Category": "HA CONFLICTS Events",
"Counters": "0-3",
"Desc": "Conflict Checks",
"EvSel": 11,
"Umask": "bxxxxxx1x",
},
"HA.CONFLICT_CYCLES.NO_CONFLICT": {
"Box": "HA",
"Category": "HA CONFLICTS Events",
"Counters": "0-3",
"Desc": "Conflict Checks",
"EvSel": 11,
"Umask": "bxxxxxxx1",
"Broken": 1,
},
"HA.DIRECT2CORE_COUNT": {
"Box": "HA",
"Category": "HA DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Number of Direct2Core messages sent",
"Desc": "Direct2Core Messages Sent",
"EvSel": 17,
"Broken": 1,
},
"HA.DIRECT2CORE_CYCLES_DISABLED": {
"Box": "HA",
"Category": "HA DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Number of cycles in which Direct2Core was disabled",
"Desc": "Cycles when Direct2Core was Disabled",
"EvSel": 18,
"Obscure": 1,
"Broken": 1,
},
"HA.DIRECT2CORE_TXN_OVERRIDE": {
"Box": "HA",
"Category": "HA DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Number of Reads where Direct2Core overridden",
"Desc": "Number of Reads that had Direct2Core Overridden",
"EvSel": 19,
"Broken": 1,
},
"HA.DIRECTORY_LOOKUP": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.",
"Desc": "Directory Lookups",
"EvSel": 12,
"Notes": "Only valid for parts that implement the Directory",
"Broken": 1,
},
"HA.DIRECTORY_LOOKUP.NO_SNP": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.",
"Desc": "Directory Lookups",
"EvSel": 12,
"Notes": "Only valid for parts that implement the Directory",
"Umask": "bxxxxxx1x",
"Broken": 1,
},
"HA.DIRECTORY_LOOKUP.SNP": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.",
"Desc": "Directory Lookups",
"EvSel": 12,
"Notes": "Only valid for parts that implement the Directory",
"Umask": "bxxxxxxx1",
"Broken": 1,
},
"HA.DIRECTORY_UPDATE": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.",
"Desc": "Directory Updates",
"EvSel": 13,
"Notes": "Only valid for parts that implement the Directory",
"Broken": 1,
},
"HA.DIRECTORY_UPDATE.SET": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.",
"Desc": "Directory Updates",
"EvSel": 13,
"Notes": "Only valid for parts that implement the Directory",
"Umask": "bxxxxxxx1",
"Broken": 1,
},
"HA.DIRECTORY_UPDATE.ANY": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.",
"Desc": "Directory Updates",
"EvSel": 13,
"Notes": "Only valid for parts that implement the Directory",
"Umask": "bxxxxxx11",
"Broken": 1,
},
"HA.DIRECTORY_UPDATE.CLEAR": {
"Box": "HA",
"Category": "HA DIRECTORY Events",
"Counters": "0-3",
"Defn": "Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.",
"Desc": "Directory Updates",
"EvSel": 13,
"Notes": "Only valid for parts that implement the Directory",
"Umask": "bxxxxxx1x",
"Broken": 1,
},
"HA.IGR_NO_CREDIT_CYCLES": {
"Box": "HA",
"Category": "HA QPI_IGR_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.",
"Desc": "Cycles without QPI Ingress Credits",
"EvSel": 34,
},
"HA.IGR_NO_CREDIT_CYCLES.AD_QPI1": {
"Box": "HA",
"Category": "HA QPI_IGR_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.",
"Desc": "Cycles without QPI Ingress Credits",
"EvSel": 34,
"Umask": "bxxxxxx1x",
},
"HA.IGR_NO_CREDIT_CYCLES.AD_QPI0": {
"Box": "HA",
"Category": "HA QPI_IGR_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.",
"Desc": "Cycles without QPI Ingress Credits",
"EvSel": 34,
"Umask": "bxxxxxxx1",
},
"HA.IGR_NO_CREDIT_CYCLES.BL_QPI1": {
"Box": "HA",
"Category": "HA QPI_IGR_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.",
"Desc": "Cycles without QPI Ingress Credits",
"EvSel": 34,
"Umask": "bxxxx1xxx",
},
"HA.IGR_NO_CREDIT_CYCLES.BL_QPI0": {
"Box": "HA",
"Category": "HA QPI_IGR_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.",
"Desc": "Cycles without QPI Ingress Credits",
"EvSel": 34,
"Umask": "bxxxxx1xx",
},
"HA.IMC_RETRY": {
"Box": "HA",
"Category": "HA IMC_MISC Events",
"Counters": "0-3",
"Desc": "Retry Events",
"EvSel": 30,
},
"HA.IMC_WRITES": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
},
"HA.IMC_WRITES.PARTIAL_ISOCH": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
"Umask": "bxxxx1xxx",
},
"HA.IMC_WRITES.ALL": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
"Umask": "b00001111",
},
"HA.IMC_WRITES.PARTIAL": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
"Umask": "bxxxxxx1x",
},
"HA.IMC_WRITES.FULL": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
"Umask": "bxxxxxxx1",
},
"HA.IMC_WRITES.FULL_ISOCH": {
"Box": "HA",
"Category": "HA IMC_WRITES Events",
"Counters": "0-3",
"Defn": "Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"Desc": "HA to iMC Full Line Writes Issued",
"EvSel": 26,
"Umask": "bxxxxx1xx",
},
"HA.REQUESTS": {
"Box": "HA",
"Category": "HA REQUESTS Events",
"Counters": "0-3",
"Defn": "Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc).",
"Desc": "Read and Write Requests",
"EvSel": 1,
},
"HA.REQUESTS.READS": {
"Box": "HA",
"Category": "HA REQUESTS Events",
"Counters": "0-3",
"Defn": "Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc).",
"Desc": "Read and Write Requests",
"EvSel": 1,
"Umask": "b00000011",
},
"HA.REQUESTS.WRITES": {
"Box": "HA",
"Category": "HA REQUESTS Events",
"Counters": "0-3",
"Defn": "Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc).",
"Desc": "Read and Write Requests",
"EvSel": 1,
"Umask": "b00001100",
},
"HA.RPQ_CYCLES_NO_REG_CREDITS": {
"Box": "HA",
"Category": "HA RPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and \"special\" requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "iMC RPQ Credits Empty - Regular",
"EvSel": 21,
"MaxIncCyc": 4,
},
"HA.RPQ_CYCLES_NO_REG_CREDITS.CHN1": {
"Box": "HA",
"Category": "HA RPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and \"special\" requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "iMC RPQ Credits Empty - Regular",
"EvSel": 21,
"MaxIncCyc": 4,
"Umask": "bxxxxxx1x",
},
"HA.RPQ_CYCLES_NO_REG_CREDITS.CHN2": {
"Box": "HA",
"Category": "HA RPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and \"special\" requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "iMC RPQ Credits Empty - Regular",
"EvSel": 21,
"MaxIncCyc": 4,
"Umask": "bxxxxx1xx",
},
"HA.RPQ_CYCLES_NO_REG_CREDITS.CHN3": {
"Box": "HA",
"Category": "HA RPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and \"special\" requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "iMC RPQ Credits Empty - Regular",
"EvSel": 21,
"MaxIncCyc": 4,
"Umask": "bxxxx1xxx",
},
"HA.RPQ_CYCLES_NO_REG_CREDITS.CHN0": {
"Box": "HA",
"Category": "HA RPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and \"special\" requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "iMC RPQ Credits Empty - Regular",
"EvSel": 21,
"MaxIncCyc": 4,
"Umask": "bxxxxxxx1",
},
"HA.TAD_REQUESTS_G0": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
},
"HA.TAD_REQUESTS_G0.REGION0": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
},
"HA.TAD_REQUESTS_G0.REGION7": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "b1xxxxxxx",
},
"HA.TAD_REQUESTS_G0.REGION3": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
},
"HA.TAD_REQUESTS_G0.REGION4": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxxx1xxxx",
},
"HA.TAD_REQUESTS_G0.REGION2": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
},
"HA.TAD_REQUESTS_G0.REGION1": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
},
"HA.TAD_REQUESTS_G0.REGION5": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bxx1xxxxx",
},
"HA.TAD_REQUESTS_G0.REGION6": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 0",
"EvSel": 27,
"MaxIncCyc": 2,
"Umask": "bx1xxxxxx",
},
"HA.TAD_REQUESTS_G1": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 1",
"EvSel": 28,
"MaxIncCyc": 2,
},
"HA.TAD_REQUESTS_G1.REGION9": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 1",
"EvSel": 28,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
},
"HA.TAD_REQUESTS_G1.REGION10": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 1",
"EvSel": 28,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
},
"HA.TAD_REQUESTS_G1.REGION11": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 1",
"EvSel": 28,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
},
"HA.TAD_REQUESTS_G1.REGION8": {
"Box": "HA",
"Category": "HA TAD Events",
"Counters": "0-3",
"Defn": "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for \"Monroe\" systems that use the TAD to enable individual channels to enter self-refresh to save power.",
"Desc": "HA Requests to a TAD Region - Group 1",
"EvSel": 28,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
},
"HA.TRACKER_INSERTS": {
"Box": "HA",
"Category": "HA TRACKER Events",
"Counters": "0-3",
"Defn": "Counts the number of allocations into the local HA tracker pool. This can be used in conjunction with the occupancy accumulation event in order to calculate average latency. One cannot filter between reads and writes. HA trackers are allocated as soon as a request enters the HA and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.",
"Desc": "Tracker Allocations",
"EvSel": 6,
},
"HA.TRACKER_INSERTS.ALL": {
"Box": "HA",
"Category": "HA TRACKER Events",
"Counters": "0-3",
"Defn": "Counts the number of allocations into the local HA tracker pool. This can be used in conjunction with the occupancy accumulation event in order to calculate average latency. One cannot filter between reads and writes. HA trackers are allocated as soon as a request enters the HA and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.",
"Desc": "Tracker Allocations",
"EvSel": 6,
"Umask": "b00000011",
},
"HA.TxR_AD": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.",
"Desc": "Outbound NDR Ring Transactions",
"EvSel": 15,
},
"HA.TxR_AD.SNP": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.",
"Desc": "Outbound NDR Ring Transactions",
"EvSel": 15,
"Umask": "bxxxxxx1x",
},
"HA.TxR_AD.NDR": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.",
"Desc": "Outbound NDR Ring Transactions",
"EvSel": 15,
"Umask": "bxxxxxxx1",
},
"HA.TxR_AD_CYCLES_FULL": {
"Box": "HA",
"Category": "HA AD_EGRESS Events",
"Counters": "0-3",
"Defn": "AD Egress Full",
"Desc": "AD Egress Full",
"EvSel": 42,
},
"HA.TxR_AD_CYCLES_FULL.SCHED1": {
"Box": "HA",
"Category": "HA AD_EGRESS Events",
"Counters": "0-3",
"Defn": "AD Egress Full",
"Desc": "AD Egress Full",
"EvSel": 42,
"Umask": "bxxxxxx1x",
},
"HA.TxR_AD_CYCLES_FULL.ALL": {
"Box": "HA",
"Category": "HA AD_EGRESS Events",
"Counters": "0-3",
"Defn": "AD Egress Full",
"Desc": "AD Egress Full",
"EvSel": 42,
"Umask": "bxxxxxx11",
},
"HA.TxR_AD_CYCLES_FULL.SCHED0": {
"Box": "HA",
"Category": "HA AD_EGRESS Events",
"Counters": "0-3",
"Defn": "AD Egress Full",
"Desc": "AD Egress Full",
"EvSel": 42,
"Umask": "bxxxxxxx1",
},
"HA.TxR_AK_CYCLES_FULL": {
"Box": "HA",
"Category": "HA AK_EGRESS Events",
"Counters": "0-3",
"Defn": "AK Egress Full",
"Desc": "AK Egress Full",
"EvSel": 50,
},
"HA.TxR_AK_CYCLES_FULL.SCHED1": {
"Box": "HA",
"Category": "HA AK_EGRESS Events",
"Counters": "0-3",
"Defn": "AK Egress Full",
"Desc": "AK Egress Full",
"EvSel": 50,
"Umask": "bxxxxxx1x",
},
"HA.TxR_AK_CYCLES_FULL.ALL": {
"Box": "HA",
"Category": "HA AK_EGRESS Events",
"Counters": "0-3",
"Defn": "AK Egress Full",
"Desc": "AK Egress Full",
"EvSel": 50,
"Umask": "bxxxxxx11",
},
"HA.TxR_AK_CYCLES_FULL.SCHED0": {
"Box": "HA",
"Category": "HA AK_EGRESS Events",
"Counters": "0-3",
"Defn": "AK Egress Full",
"Desc": "AK Egress Full",
"EvSel": 50,
"Umask": "bxxxxxxx1",
},
"HA.TxR_AK_NDR": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of outbound NDR transactions sent on the AK ring. NDR stands for \"non-data response\" and is generally used for completions that do not include data. AK NDR is used for messages to the local socket.",
"Desc": "Outbound NDR Ring Transactions",
"EvSel": 14,
},
"HA.TxR_BL": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.",
"Desc": "Outbound DRS Ring Transactions to Cache",
"EvSel": 16,
},
"HA.TxR_BL.DRS_QPI": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.",
"Desc": "Outbound DRS Ring Transactions to Cache",
"EvSel": 16,
"Umask": "bxxxxx1xx",
},
"HA.TxR_BL.DRS_CACHE": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.",
"Desc": "Outbound DRS Ring Transactions to Cache",
"EvSel": 16,
"Umask": "bxxxxxxx1",
},
"HA.TxR_BL.DRS_CORE": {
"Box": "HA",
"Category": "HA OUTBOUND_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.",
"Desc": "Outbound DRS Ring Transactions to Cache",
"EvSel": 16,
"Umask": "bxxxxxx1x",
},
"HA.TxR_BL_CYCLES_FULL": {
"Box": "HA",
"Category": "HA BL_EGRESS Events",
"Counters": "0-3",
"Defn": "BL Egress Full",
"Desc": "BL Egress Full",
"EvSel": 54,
},
"HA.TxR_BL_CYCLES_FULL.SCHED1": {
"Box": "HA",
"Category": "HA BL_EGRESS Events",
"Counters": "0-3",
"Defn": "BL Egress Full",
"Desc": "BL Egress Full",
"EvSel": 54,
"Umask": "bxxxxxx1x",
},
"HA.TxR_BL_CYCLES_FULL.ALL": {
"Box": "HA",
"Category": "HA BL_EGRESS Events",
"Counters": "0-3",
"Defn": "BL Egress Full",
"Desc": "BL Egress Full",
"EvSel": 54,
"Umask": "bxxxxxx11",
},
"HA.TxR_BL_CYCLES_FULL.SCHED0": {
"Box": "HA",
"Category": "HA BL_EGRESS Events",
"Counters": "0-3",
"Defn": "BL Egress Full",
"Desc": "BL Egress Full",
"EvSel": 54,
"Umask": "bxxxxxxx1",
},
"HA.WPQ_CYCLES_NO_REG_CREDITS": {
"Box": "HA",
"Category": "HA WPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and \"special\" requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "HA iMC CHN0 WPQ Credits Empty - Regular",
"EvSel": 24,
"MaxIncCyc": 4,
},
"HA.WPQ_CYCLES_NO_REG_CREDITS.CHN1": {
"Box": "HA",
"Category": "HA WPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and \"special\" requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "HA iMC CHN0 WPQ Credits Empty - Regular",
"EvSel": 24,
"MaxIncCyc": 4,
"Umask": "bxxxxxx1x",
},
"HA.WPQ_CYCLES_NO_REG_CREDITS.CHN2": {
"Box": "HA",
"Category": "HA WPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and \"special\" requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "HA iMC CHN0 WPQ Credits Empty - Regular",
"EvSel": 24,
"MaxIncCyc": 4,
"Umask": "bxxxxx1xx",
},
"HA.WPQ_CYCLES_NO_REG_CREDITS.CHN3": {
"Box": "HA",
"Category": "HA WPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and \"special\" requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "HA iMC CHN0 WPQ Credits Empty - Regular",
"EvSel": 24,
"MaxIncCyc": 4,
"Umask": "bxxxx1xxx",
},
"HA.WPQ_CYCLES_NO_REG_CREDITS.CHN0": {
"Box": "HA",
"Category": "HA WPQ_CREDITS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when there are no \"regular\" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and \"special\" requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.",
"Desc": "HA iMC CHN0 WPQ Credits Empty - Regular",
"EvSel": 24,
"MaxIncCyc": 4,
"Umask": "bxxxxxxx1",
},
# iMC:
"iMC.ACT_COUNT": {
"Box": "iMC",
"Category": "iMC ACT Events",
"Counters": "0-3",
"Defn": "Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"Desc": "DRAM Activate Count",
"EvSel": 1,
},
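# Editor's sketch (not part of the original event data): the ACT_COUNT Defn
# above describes a derived page-miss metric. Written out, assuming a
# page-miss precharge event name of "iMC.PRE_COUNT.PAGE_MISS" (a hypothetical
# name; it is not defined in this section):
#
#     page_misses = counts["iMC.ACT_COUNT"] - counts["iMC.PRE_COUNT.PAGE_MISS"]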
"iMC.CAS_COUNT": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
},
"iMC.CAS_COUNT.WR_RMM": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "bxxxx1xxx",
},
"iMC.CAS_COUNT.RD_UNDERFILL": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "bxxxxxx1x",
},
"iMC.CAS_COUNT.ALL": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "b00001111",
},
"iMC.CAS_COUNT.RD": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "b00000011",
},
"iMC.CAS_COUNT.RD_REG": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "bxxxxxxx1",
},
"iMC.CAS_COUNT.WR_WMM": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "bxxxxx1xx",
},
"iMC.CAS_COUNT.WR": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Counters": "0-3",
"Defn": "DRAM RD_CAS and WR_CAS Commands",
"Desc": "DRAM RD_CAS and WR_CAS Commands.",
"EvSel": 4,
"Umask": "b00001100",
},
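# Editor's sketch (not part of the original event data): a common use of the
# CAS_COUNT sub-events above is estimating channel bandwidth. Assuming each
# CAS moves one 64-byte cache line (a DDR3 burst-of-8 on a 64-bit channel):
#
#     read_bytes  = counts["iMC.CAS_COUNT.RD"] * 64
#     write_bytes = counts["iMC.CAS_COUNT.WR"] * 64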
"iMC.DRAM_PRE_ALL": {
"Box": "iMC",
"Category": "iMC DRAM_PRE_ALL Events",
"Counters": "0-3",
"Defn": "Counts the number of times that the precharge all command was sent.",
"Desc": "DRAM Precharge All Commands",
"EvSel": 6,
},
"iMC.DRAM_REFRESH": {
"Box": "iMC",
"Category": "iMC DRAM_REFRESH Events",
"Counters": "0-3",
"Defn": "Counts the number of refreshes issued.",
"Desc": "Number of DRAM Refreshes Issued",
"EvSel": 5,
},
"iMC.DRAM_REFRESH.PANIC": {
"Box": "iMC",
"Category": "iMC DRAM_REFRESH Events",
"Counters": "0-3",
"Defn": "Counts the number of refreshes issued.",
"Desc": "Number of DRAM Refreshes Issued",
"EvSel": 5,
"Umask": "bxxxxxx1x",
},
"iMC.DRAM_REFRESH.HIGH": {
"Box": "iMC",
"Category": "iMC DRAM_REFRESH Events",
"Counters": "0-3",
"Defn": "Counts the number of refreshes issued.",
"Desc": "Number of DRAM Refreshes Issued",
"EvSel": 5,
"Umask": "bxxxxx1xx",
},
"iMC.ECC_CORRECTABLE_ERRORS": {
"Box": "iMC",
"Category": "iMC ECC Events",
"Counters": "0-3",
"Defn": "Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit erros in lockstep mode.",
"Desc": "ECC Correctable Errors",
"EvSel": 9,
},
"iMC.MAJOR_MODES": {
"Box": "iMC",
"Category": "iMC MAJOR_MODES Events",
"Counters": "0-3",
"Defn": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.",
"Desc": "Cycles in a Major Mode",
"EvSel": 7,
},
"iMC.MAJOR_MODES.ISOCH": {
"Box": "iMC",
"Category": "iMC MAJOR_MODES Events",
"Counters": "0-3",
"Defn": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.",
"Desc": "Cycles in a Major Mode",
"EvSel": 7,
"Umask": "bxxxx1xxx",
},
"iMC.MAJOR_MODES.READ": {
"Box": "iMC",
"Category": "iMC MAJOR_MODES Events",
"Counters": "0-3",
"Defn": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.",
"Desc": "Cycles in a Major Mode",
"EvSel": 7,
"Umask": "bxxxxxxx1",
},
"iMC.MAJOR_MODES.PARTIAL": {
"Box": "iMC",
"Category": "iMC MAJOR_MODES Events",
"Counters": "0-3",
"Defn": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.",
"Desc": "Cycles in a Major Mode",
"EvSel": 7,
"Umask": "bxxxxx1xx",
},
"iMC.MAJOR_MODES.WRITE": {
"Box": "iMC",
"Category": "iMC MAJOR_MODES Events",
"Counters": "0-3",
"Defn": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.",
"Desc": "Cycles in a Major Mode",
"EvSel": 7,
"Umask": "bxxxxxx1x",
},
"iMC.POWER_CHANNEL_DLLOFF": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode.",
"Desc": "Channel DLLOFF Cycles",
"EvSel": 132,
"Notes": "IBT = Input Buffer Termination = Off",
},
"iMC.POWER_CHANNEL_PPD": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.",
"Desc": "Channel PPD Cycles",
"EvSel": 133,
"MaxIncCyc": 4,
"Notes": "IBT = Input Buffer Termination = On",
},
"iMC.POWER_CKE_CYCLES": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
},
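# Editor's sketch (not part of the original event data): per the Defn above,
# power-saving cycles for a rank are obtained by programming this event with
# the rank's umask and the Invert control bit set (Edge Detect left clear),
# so the counter increments on cycles when CKE is NOT on for that rank:
#
#     EvSel=131, Umask=<rank bit>, Invert=1  ->  cycles in CKE power saving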
"iMC.POWER_CKE_CYCLES.RANK5": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxx1xxxxx",
},
"iMC.POWER_CKE_CYCLES.RANK6": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bx1xxxxxx",
},
"iMC.POWER_CKE_CYCLES.RANK3": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxxxx1xxx",
},
"iMC.POWER_CKE_CYCLES.RANK4": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxxx1xxxx",
},
"iMC.POWER_CKE_CYCLES.RANK1": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxxxxxx1x",
},
"iMC.POWER_CKE_CYCLES.RANK0": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxxxxxxx1",
},
"iMC.POWER_CKE_CYCLES.RANK2": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "bxxxxx1xx",
},
"iMC.POWER_CKE_CYCLES.RANK7": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"Desc": "CKE_ON_CYCLES by Rank",
"EvSel": 131,
"MaxIncCyc": 16,
"Umask": "b1xxxxxxx",
},
"iMC.POWER_CRITICAL_THROTTLE_CYCLES": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the iMC is in critical thermal throttling. When this happens, all traffic is blocked. This should be rare unless something bad is going on in the platform. There is no filtering by rank for this event.",
"Desc": "Critical Throttle Cycles",
"EvSel": 134,
},
"iMC.POWER_SELF_REFRESH": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.",
"Desc": "Clock-Enabled Self-Refresh",
"EvSel": 67,
},
"iMC.POWER_THROTTLE_CYCLES": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for any Rank",
"EvSel": 65,
},
"iMC.POWER_THROTTLE_CYCLES.RANK5": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 5",
"EvSel": 65,
"Umask": "bxx1xxxxx",
},
"iMC.POWER_THROTTLE_CYCLES.RANK6": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 6",
"EvSel": 65,
"Umask": "bx1xxxxxx",
},
"iMC.POWER_THROTTLE_CYCLES.RANK3": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 3",
"EvSel": 65,
"Umask": "bxxxx1xxx",
},
"iMC.POWER_THROTTLE_CYCLES.RANK4": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 4",
"EvSel": 65,
"Umask": "bxxx1xxxx",
},
"iMC.POWER_THROTTLE_CYCLES.RANK1": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 1",
"EvSel": 65,
"Umask": "bxxxxxx1x",
},
"iMC.POWER_THROTTLE_CYCLES.RANK0": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 0",
"EvSel": 65,
"Umask": "bxxxxxxx1",
},
"iMC.POWER_THROTTLE_CYCLES.RANK2": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 2",
"EvSel": 65,
"Umask": "bxxxxx1xx",
},
"iMC.POWER_THROTTLE_CYCLES.RANK7": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"Desc": "Throttle Cycles for Rank 7",
"EvSel": 65,
"Umask": "b1xxxxxxx",
},
"iMC.PREEMPTION": {
"Box": "iMC",
"Category": "iMC PREEMPTION Events",
"Counters": "0-3",
"Defn": "Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.",
"Desc": "Read Preemption Count",
"EvSel": 8,
},
"iMC.PREEMPTION.RD_PREEMPT_WR": {
"Box": "iMC",
"Category": "iMC PREEMPTION Events",
"Counters": "0-3",
"Defn": "Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.",
"Desc": "Read Preemption Count",
"EvSel": 8,
"Umask": "bxxxxxx1x",
},
"iMC.PREEMPTION.RD_PREEMPT_RD": {
"Box": "iMC",
"Category": "iMC PREEMPTION Events",
"Counters": "0-3",
"Defn": "Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.",
"Desc": "Read Preemption Count",
"EvSel": 8,
"Umask": "bxxxxxxx1",
},
"iMC.PRE_COUNT": {
"Box": "iMC",
"Category": "iMC PRE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRAM Precharge commands sent on this channel.",
"Desc": "DRAM Precharge commands.",
"EvSel": 2,
},
"iMC.PRE_COUNT.PAGE_CLOSE": {
"Box": "iMC",
"Category": "iMC PRE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRAM Precharge commands sent on this channel.",
"Desc": "DRAM Precharge commands.",
"EvSel": 2,
"Umask": "bxxxxxx1x",
},
"iMC.PRE_COUNT.PAGE_MISS": {
"Box": "iMC",
"Category": "iMC PRE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRAM Precharge commands sent on this channel.",
"Desc": "DRAM Precharge commands.",
"EvSel": 2,
"Umask": "bxxxxxxx1",
},
"iMC.RPQ_CYCLES_FULL": {
"Box": "iMC",
"Category": "iMC RPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar to the count in the HA that tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see the RPQ become full except potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Desc": "Read Pending Queue Full Cycles",
"EvSel": 18,
},
"iMC.RPQ_CYCLES_NE": {
"Box": "iMC",
"Category": "iMC RPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"Desc": "Read Pending Queue Not Empty",
"EvSel": 17,
},
"iMC.RPQ_INSERTS": {
"Box": "iMC",
"Category": "iMC RPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
"Desc": "Read Pending Queue Allocations",
"EvSel": 16,
},
"iMC.RPQ_OCCUPANCY": {
"Box": "iMC",
"Category": "iMC RPQ Events",
"Counters": "0-3",
"Defn": "Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.",
"Desc": "Read Pending Queue Occupancy",
"EvSel": 128,
"MaxIncCyc": 22,
"SubCtr": 1,
},
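# Derived-metric sketch (kept as comments so the dict literal stays valid;
# names are hypothetical and the raw counts are assumed to come from whatever
# PMU reader consumes this table). Per the Defn strings above, RPQ_OCCUPANCY
# accumulates the queue depth every cycle, so by Little's Law:
#
#   avg_rpq_occupancy      = RPQ_OCCUPANCY / RPQ_CYCLES_NE   # entries, over non-empty cycles
#   avg_read_queue_latency = RPQ_OCCUPANCY / RPQ_INSERTS     # dclk cycles per read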
"iMC.WPQ_CYCLES_FULL": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar to the count in the HA that tracks the number of cycles that the HA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Desc": "Write Pending Queue Full Cycles",
"EvSel": 34,
},
"iMC.WPQ_CYCLES_NE": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have \"posted\" to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is not useful for deconstructing intermediate write latencies.",
"Desc": "Write Pending Queue Not Empty",
"EvSel": 33,
},
"iMC.WPQ_INSERTS": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have \"posted\" to the iMC.",
"Desc": "Write Pending Queue Allocations",
"EvSel": 32,
},
"iMC.WPQ_OCCUPANCY": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have \"posted\" to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is not useful for deconstructing intermediate write latencies. So, we provide filtering based on whether the request has posted or not. By using the \"not posted\" filter, we can track how long writes spent in the iMC before completions were sent to the HA. The \"posted\" filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.",
"Desc": "Write Pending Queue Occupancy",
"EvSel": 129,
"MaxIncCyc": 32,
"SubCtr": 1,
},
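# The same Little's Law sketch applies to the write-side events (comments
# only, hypothetical names; counts assumed collected elsewhere). Note the
# Defn's caveat: writes "post" early, so this measures queueing time inside
# the iMC, not end-to-end write latency as seen by the rest of the system.
#
#   avg_wpq_occupancy       = WPQ_OCCUPANCY / WPQ_CYCLES_NE
#   avg_write_queue_cycles  = WPQ_OCCUPANCY / WPQ_INSERTS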
"iMC.WPQ_READ_HIT": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"Desc": "Write Pending Queue CAM Match",
"EvSel": 35,
},
"iMC.WPQ_WRITE_HIT": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Counters": "0-3",
"Defn": "Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"Desc": "Write Pending Queue CAM Match",
"EvSel": 36,
},
# R2PCIe:
"R2PCIe.CLOCKTICKS": {
"Box": "R2PCIe",
"Category": "R2PCIe UCLK Events",
"Counters": "0-3",
"Defn": "Counts the number of uclks in the R2PCIe uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the R2PCIe is close to the Ubox, they generally should not diverge by more than a handful of cycles.",
"Desc": "Number of uclks in domain",
"EvSel": 1,
},
"R2PCIe.RING_AD_USED": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AD Ring in Use",
"EvSel": 7,
},
"R2PCIe.RING_AD_USED.CW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxxxx1",
},
"R2PCIe.RING_AD_USED.CCW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxx1xx",
},
"R2PCIe.RING_AD_USED.CW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxxxx1x",
},
"R2PCIe.RING_AD_USED.CCW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AD Ring in Use",
"EvSel": 7,
"Umask": "bxxxx1xxx",
},
"R2PCIe.RING_AK_USED": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AK Ring in Use",
"EvSel": 8,
},
"R2PCIe.RING_AK_USED.CW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxxxx1",
},
"R2PCIe.RING_AK_USED.CCW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxx1xx",
},
"R2PCIe.RING_AK_USED.CW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxxxx1x",
},
"R2PCIe.RING_AK_USED.CCW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 AK Ring in Use",
"EvSel": 8,
"Umask": "bxxxx1xxx",
},
"R2PCIe.RING_BL_USED": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 BL Ring in Use",
"EvSel": 9,
},
"R2PCIe.RING_BL_USED.CW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxxxx1",
},
"R2PCIe.RING_BL_USED.CCW_EVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxx1xx",
},
"R2PCIe.RING_BL_USED.CW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxxxx1x",
},
"R2PCIe.RING_BL_USED.CCW_ODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.",
"Desc": "R2 BL Ring in Use",
"EvSel": 9,
"Umask": "bxxxx1xxx",
},
"R2PCIe.RING_IV_USED": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.",
"Desc": "R2 IV Ring in Use",
"EvSel": 10,
},
"R2PCIe.RING_IV_USED.ANY": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.",
"Desc": "R2 IV Ring in Use",
"EvSel": 10,
"Umask": "b00001111",
},
"R2PCIe.RxR_AK_BOUNCES": {
"Box": "R2PCIe",
"Category": "R2PCIe INGRESS Events",
"Counters": 0,
"Defn": "Counts the number of times when a request destined for the AK ingress bounced.",
"Desc": "AK Ingress Bounced",
"EvSel": 18,
},
"R2PCIe.RxR_CYCLES_NE": {
"Box": "R2PCIe",
"Category": "R2PCIe INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
},
"R2PCIe.RxR_CYCLES_NE.NCS": {
"Box": "R2PCIe",
"Category": "R2PCIe INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxx1xxxxx",
},
"R2PCIe.RxR_CYCLES_NE.NCB": {
"Box": "R2PCIe",
"Category": "R2PCIe INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxx1xxxx",
},
"R2PCIe.RxR_CYCLES_NE.DRS": {
"Box": "R2PCIe",
"Category": "R2PCIe INGRESS Events",
"Counters": "0-1",
"Defn": "Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.",
"Desc": "Ingress Cycles Not Empty",
"EvSel": 16,
"Umask": "bxxxx1xxx",
},
"R2PCIe.TxR_CYCLES_FULL": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress buffer is full.",
"Desc": "Egress Cycles Full",
"EvSel": 37,
},
"R2PCIe.TxR_CYCLES_FULL.AK": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress buffer is full.",
"Desc": "Egress Cycles Full",
"EvSel": 37,
"Umask": "bxxxxxx1x",
},
"R2PCIe.TxR_CYCLES_FULL.BL": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress buffer is full.",
"Desc": "Egress Cycles Full",
"EvSel": 37,
"Umask": "bxxxxx1xx",
},
"R2PCIe.TxR_CYCLES_FULL.AD": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress buffer is full.",
"Desc": "Egress Cycles Full",
"EvSel": 37,
"Umask": "bxxxxxxx1",
},
"R2PCIe.TxR_CYCLES_NE": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Cycles Not Empty",
"EvSel": 35,
},
"R2PCIe.TxR_CYCLES_NE.AK": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Cycles Not Empty",
"EvSel": 35,
"Umask": "bxxxxxx1x",
},
"R2PCIe.TxR_CYCLES_NE.BL": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Cycles Not Empty",
"EvSel": 35,
"Umask": "bxxxxx1xx",
},
"R2PCIe.TxR_CYCLES_NE.AD": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Cycles Not Empty",
"EvSel": 35,
"Umask": "bxxxxxxx1",
},
"R2PCIe.TxR_INSERTS": {
"Box": "R2PCIe",
"Category": "R2PCIe EGRESS Events",
"Counters": 0,
"Defn": "Counts the number of allocations into the R2PCIe Egress. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue latency. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"Desc": "Egress Allocations",
"EvSel": 36,
},
# PCU:
"PCU.CLOCKTICKS": {
"Box": "PCU",
"Category": "PCU PCLK Events",
"Counters": "0-3",
"Defn": "The PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
"Desc": "pclk Cycles",
"EvSel": 0,
},
"PCU.CORE0_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"EvSel": 3,
"ExtSel": 1,
"Notes": "This only tracks the hardware portion in the RCFSM (CFCFSM). This portion is just doing the core C state transition. It does not include any necessary frequency/voltage transitions.",
},
"PCU.CORE1_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 4,
},
"PCU.CORE2_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 5,
},
"PCU.CORE3_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 6,
},
"PCU.CORE4_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 7,
},
"PCU.CORE5_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 8,
},
"PCU.CORE6_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 9,
},
"PCU.CORE7_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions. There is one event per core.",
"Desc": "Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 10,
},
"PCU.DEMOTIONS_CORE0": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"ExtSel": 1,
"EvSel": 30,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE1": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 31,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE2": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 32,
},
"PCU.DEMOTIONS_CORE3": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 33,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE4": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 34,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE5": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 35,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE6": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 36,
"Filter": "PCUFilter[7:0]",
},
"PCU.DEMOTIONS_CORE7": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Counts the number of times when a configurable cores had a C-state demotion",
"Desc": "Core C State Demotions",
"EvSel": 37,
"Filter": "PCUFilter[7:0]",
},
"PCU.FREQ_BAND0_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"EvSel": 11,
"Filter": "PCUFilter[7:0]",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
},
"PCU.FREQ_BAND1_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"EvSel": 12,
"Filter": "PCUFilter[15:8]",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
},
"PCU.FREQ_BAND2_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"EvSel": 13,
"Filter": "PCUFilter[23:16]",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
},
"PCU.FREQ_BAND3_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"EvSel": 14,
"Filter": "PCUFilter[31:24]",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
},
"PCU.FREQ_MAX_CURRENT_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when current is the upper limit on frequency.",
"Desc": "Current Strongest Upper Limit Cycles",
"EvSel": 7,
"Notes": "This is fast path, will clear our other limits when it happens. The slow loop portion, which covers the other limits, can double count EDP. Clearing should fix this up in the next fast path event, but this will happen. Add up all the cycles and it won't make sense, but the general distribution is true.",
},
"PCU.FREQ_MAX_LIMIT_THERMAL_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when thermal conditions are the upper limit on frequency. This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the input.",
"Desc": "Thermal Strongest Upper Limit Cycles",
"EvSel": 4,
},
"PCU.FREQ_MAX_OS_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the OS is the upper limit on frequency.",
"Desc": "OS Strongest Upper Limit Cycles",
"EvSel": 6,
"Notes": "Essentially, this event says the OS is getting the frequency it requested.",
},
"PCU.FREQ_MAX_POWER_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when power is the upper limit on frequency.",
"Desc": "Power Strongest Upper Limit Cycles",
"EvSel": 5,
},
"PCU.FREQ_MIN_IO_P_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MIN_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.",
"Desc": "IO P Limit Strongest Lower Limit Cycles",
"ExtSel": 1,
"EvSel": 1,
},
"PCU.FREQ_MIN_PERF_P_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_MIN_LIMIT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when Perf P Limit is preventing us from dropping the frequency lower. Perf P Limit is an algorithm that takes input from remote sockets when determining if a socket should drop it's frequency down. This is largely to minimize increases in snoop and remote read latencies.",
"Desc": "Perf P Limit Strongest Lower Limit Cycles",
"ExtSel": 1,
"EvSel": 2,
},
"PCU.FREQ_TRANS_CYCLES": {
"Box": "PCU",
"Category": "PCU FREQ_TRANS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
"Desc": "Cycles spent changing Frequency",
"ExtSel": 1,
"EvSel": 0,
},
"PCU.MEMORY_PHASE_SHEDDING_CYCLES": {
"Box": "PCU",
"Category": "PCU MEMORY_PHASE_SHEDDING Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.",
"Desc": "Memory Phase Shedding Cycles",
"EvSel": 47,
"Notes": "Is this the package C one? Yes",
},
"PCU.POWER_STATE_OCCUPANCY": {
"Box": "PCU",
"Category": "PCU POWER_STATE_OCC Events",
"Counters": "0-3",
"Defn": "This is an occupancy event that tracks the number of cores that are in C0. It can be used by itself to get the average number of cores in C0, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Desc": "Number of cores in C0",
"EvSel": 128,
"MaxIncCyc": 8,
"SubCtr": 1,
},
"PCU.POWER_STATE_OCCUPANCY.CORES_C3": {
"Box": "PCU",
"Category": "PCU POWER_STATE_OCC Events",
"Counters": "0-3",
"Defn": "This is an occupancy event that tracks the number of cores that are in C3. It can be used by itself to get the average number of cores in C3, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Desc": "Number of cores in C3",
"EvSel": 128,
"MaxIncCyc": 8,
"SubCtr": 1,
"Umask": "b10000000",
},
"PCU.POWER_STATE_OCCUPANCY.CORES_C0": {
"Box": "PCU",
"Category": "PCU POWER_STATE_OCC Events",
"Counters": "0-3",
"Defn": "This is an occupancy event that tracks the number of cores that are in C0. It can be used by itself to get the average number of cores in C0, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Desc": "Number of cores in C0",
"EvSel": 128,
"MaxIncCyc": 8,
"SubCtr": 1,
"Umask": "b01000000",
},
"PCU.POWER_STATE_OCCUPANCY.CORES_C6": {
"Box": "PCU",
"Category": "PCU POWER_STATE_OCC Events",
"Counters": "0-3",
"Defn": "This is an occupancy event that tracks the number of cores that are in C6. It can be used by itself to get the average number of cores in C6, with threshholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Desc": "Number of cores in C6",
"EvSel": 128,
"MaxIncCyc": 8,
"SubCtr": 1,
"Umask": "b11000000",
},
"PCU.PROCHOT_EXTERNAL_CYCLES": {
"Box": "PCU",
"Category": "PCU PROCHOT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
"Desc": "External Prochot",
"EvSel": 10,
},
"PCU.PROCHOT_INTERNAL_CYCLES": {
"Box": "PCU",
"Category": "PCU PROCHOT Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that we are in Interal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
"Desc": "ExtSel Prochot",
"EvSel": 9,
},
"PCU.TOTAL_TRANSITION_CYCLES": {
"Box": "PCU",
"Category": "PCU CORE_C_STATE_TRANSITION Events",
"Counters": "0-3",
"Defn": "Number of cycles spent performing core C state transitions across all cores.",
"Desc": "Total Core C State Transition Cycles",
"ExtSel": 1,
"EvSel": 11,
},
"PCU.VOLT_TRANS_CYCLES_CHANGE": {
"Box": "PCU",
"Category": "PCU VOLT_TRANS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the system is changing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. This event is calculated by or'ing together the increasing and decreasing events.",
"Desc": "Cycles Changing Voltage",
"EvSel": 3,
},
"PCU.VOLT_TRANS_CYCLES_DECREASE": {
"Box": "PCU",
"Category": "PCU VOLT_TRANS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the system is decreasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition.",
"Desc": "Cycles Decreasing Voltage",
"EvSel": 2,
},
"PCU.VOLT_TRANS_CYCLES_INCREASE": {
"Box": "PCU",
"Category": "PCU VOLT_TRANS Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the system is increasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition.",
"Desc": "Cycles Increasing Voltage",
"EvSel": 1,
},
"PCU.VR_HOT_CYCLES": {
"Box": "PCU",
"Category": "PCU VR_HOT Events",
"Counters": "0-3",
"Desc": "VR Hot",
"EvSel": 50,
},
# QPI_LL:
"QPI_LL.CLOCKTICKS": {
"Box": "QPI_LL",
"Category": "QPI_LL CFCLK Events",
"Counters": "0-3",
"Defn": "Counts the number of clocks in the QPI LL. This clock runs at 1/8th the \"GT/s\" speed of the QPI link. For example, a 8GT/s link will have qfclk or 1GHz. JKT does not support dynamic link speeds, so this frequency is fixed.",
"Desc": "Number of qfclks",
"EvSel": 20,
},
"QPI_LL.CTO_COUNT": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Counters": "0-3",
"Defn": "Counts the number of CTO (cluster trigger outs) events that were asserted across the two slots. If both slots trigger in a given cycle, the event will increment by 2. You can use edge detect to count the number of cases when both events triggered.",
"Desc": "Count of CTO Events",
"ExtSel": 1,
"EvSel": 56,
"MaxIncCyc": 2,
"SubCtr": 1,
},
"QPI_LL.DIRECT2CORE": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"Desc": "Direct 2 Core Spawning",
"EvSel": 19,
},
"QPI_LL.DIRECT2CORE.FAILURE_RBT": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"Desc": "Direct 2 Core Spawning",
"EvSel": 19,
"Umask": "bxxxxx1xx",
},
"QPI_LL.DIRECT2CORE.SUCCESS": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"Desc": "Direct 2 Core Spawning",
"EvSel": 19,
"Umask": "bxxxxxxx1",
},
"QPI_LL.DIRECT2CORE.FAILURE_CREDITS_RBT": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"Desc": "Direct 2 Core Spawning",
"EvSel": 19,
"Umask": "bxxxx1xxx",
},
"QPI_LL.DIRECT2CORE.FAILURE_CREDITS": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Counters": "0-3",
"Defn": "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"Desc": "Direct 2 Core Spawning",
"EvSel": 19,
"Umask": "bxxxxxx1x",
},
"QPI_LL.L1_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER Events",
"Counters": "0-3",
"Defn": "Number of QPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a QPI link. Use edge detect to count the number of instances when the QPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode.",
"Desc": "Cycles in L1",
"EvSel": 18,
},
"QPI_LL.RxL0P_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_RX Events",
"Counters": "0-3",
"Defn": "Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.",
"Desc": "Cycles in L0p",
"EvSel": 16,
"Notes": "Using .edge_det to count transitions does not function if L1_POWER_CYCLES > 0.",
},
"QPI_LL.RxL0_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_RX Events",
"Counters": "0-3",
"Defn": "Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Desc": "Cycles in L0",
"EvSel": 15,
},
"QPI_LL.RxL_BYPASSED": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transfered, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"Desc": "Rx Flit Buffer Bypassed",
"EvSel": 9,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.NCS": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxxxxx1xx",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.NCB": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxxxxxx1x",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.SNP": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxxx1xxxx",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.HOM": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxxxx1xxx",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.DRS": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxxxxxxx1",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VN0.NDR": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VN0 Credit Consumed",
"EvSel": 30,
"Umask": "bxx1xxxxx",
"ExtSel": 1,
},
"QPI_LL.RxL_CREDITS_CONSUMED_VNA": {
"Box": "QPI_LL",
"Category": "QPI_LL RX_CREDITS_CONSUMED Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Desc": "VNA Credit Consumed",
"EvSel": 29,
"ExtSel": 1,
},
"QPI_LL.RxL_CYCLES_NE": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy.",
"Desc": "RxQ Cycles Not Empty",
"EvSel": 10,
},
"QPI_LL.RxL_FLITS_G0": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Received - Group 0",
"EvSel": 1,
"MaxIncCyc": 2,
},
"QPI_LL.RxL_FLITS_G0.NON_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Received - Group 0",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
},
"QPI_LL.RxL_FLITS_G0.DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Received - Group 0",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
},
"QPI_LL.RxL_FLITS_G0.IDLE": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Received - Group 0",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
},
"QPI_LL.RxL_FLITS_G1": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
},
"QPI_LL.RxL_FLITS_G1.DRS_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.HOM_NONREQ": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.HOM_REQ": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.DRS": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "b00011000",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.HOM": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "b00000110",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.SNP": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G1.DRS_NONDATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 1",
"EvSel": 2,
"MaxIncCyc": 2,
"Umask": "bxxx1xxxx",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NCS": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "bxxx1xxxx",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NCB": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "b00001100",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NDR_AD": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NCB_NONDATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NDR_AK": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
"ExtSel": 1,
},
"QPI_LL.RxL_FLITS_G2.NCB_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits received from the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Received - Group 2",
"EvSel": 3,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"Desc": "Rx Flit Buffer Allocations",
"EvSel": 8,
},
"QPI_LL.RxL_INSERTS_DRS": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only DRS flits.",
"Desc": "Rx Flit Buffer Allocations - DRS",
"EvSel": 9,
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS_HOM": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only HOM flits.",
"Desc": "Rx Flit Buffer Allocations - HOM",
"EvSel": 12,
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS_NCB": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCB flits.",
"Desc": "Rx Flit Buffer Allocations - NCB",
"EvSel": 10,
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS_NCS": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCS flits.",
"Desc": "Rx Flit Buffer Allocations - NCS",
"EvSel": 11,
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS_NDR": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NDR flits.",
"Desc": "Rx Flit Buffer Allocations - NDR",
"EvSel": 14,
"ExtSel": 1,
},
"QPI_LL.RxL_INSERTS_SNP": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only SNP flits.",
"Desc": "Rx Flit Buffer Allocations - SNP",
"EvSel": 13,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"Desc": "RxQ Occupancy - All Packets",
"EvSel": 11,
"MaxIncCyc": 128,
"SubCtr": 1,
},
"QPI_LL.RxL_OCCUPANCY_DRS": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors DRS flits only.",
"Desc": "RxQ Occupancy - DRS",
"EvSel": 21,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY_HOM": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors HOM flits only.",
"Desc": "RxQ Occupancy - HOM",
"EvSel": 24,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY_NCB": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCB flits only.",
"Desc": "RxQ Occupancy - NCB",
"EvSel": 22,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY_NCS": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCS flits only.",
"Desc": "RxQ Occupancy - NCS",
"EvSel": 23,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY_NDR": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NDR flits only.",
"Desc": "RxQ Occupancy - NDR",
"EvSel": 26,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.RxL_OCCUPANCY_SNP": {
"Box": "QPI_LL",
"Category": "QPI_LL RXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors SNP flits only.",
"Desc": "RxQ Occupancy - SNP",
"EvSel": 25,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
"QPI_LL.TxL0P_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_TX Events",
"Counters": "0-3",
"Defn": "Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.",
"Desc": "Cycles in L0p",
"EvSel": 13,
"Notes": "Using .edge_det to count transitions does not function if L1_POWER_CYCLES > 0.",
},
"QPI_LL.TxL0_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_TX Events",
"Counters": "0-3",
"Defn": "Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Desc": "Cycles in L0",
"EvSel": 12,
},
"QPI_LL.TxL_BYPASSED": {
"Box": "QPI_LL",
"Category": "QPI_LL TXQ Events",
"Counters": "0-3",
"Defn": "Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the QPI Link. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.",
"Desc": "Tx Flit Buffer Bypassed",
"EvSel": 5,
},
"QPI_LL.TxL_CYCLES_NE": {
"Box": "QPI_LL",
"Category": "QPI_LL TXQ Events",
"Counters": "0-3",
"Defn": "Counts the number of cycles when the TxQ is not empty. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.",
"Desc": "Tx Flit Buffer Cycles not Empty",
"EvSel": 6,
},
"QPI_LL.TxL_FLITS_G0": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Transferred - Group 0",
"EvSel": 0,
"MaxIncCyc": 2,
},
"QPI_LL.TxL_FLITS_G0.NON_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Transferred - Group 0",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
},
"QPI_LL.TxL_FLITS_G0.DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Transferred - Group 0",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
},
"QPI_LL.TxL_FLITS_G0.IDLE": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.",
"Desc": "Flits Transferred - Group 0",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
},
"QPI_LL.TxL_FLITS_G1": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.DRS_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.HOM_NONREQ": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.HOM_REQ": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.DRS": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "b00011000",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.HOM": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "b00000110",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.SNP": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G1.DRS_NONDATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 1",
"EvSel": 0,
"MaxIncCyc": 2,
"Umask": "bxxx1xxxx",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NCS": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxx1xxxx",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NCB": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "b00001100",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NDR_AD": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxxxx1",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NCB_NONDATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxx1xxx",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NDR_AK": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxxx1x",
"ExtSel": 1,
},
"QPI_LL.TxL_FLITS_G2.NCB_DATA": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_TX Events",
"Counters": "0-3",
"Defn": "Counts the number of flits trasmitted across the QPI Link. This is one of three \"groups\" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each \"flit\" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four \"fits\", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI \"speed\" (for example, 8.0 GT/s), the \"transfers\" here refer to \"fits\". Therefore, in L0, the system will transfer 1 \"flit\" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as \"data\" bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual \"data\" and an additional 16 bits of other information. To calculate \"data\" bandwidth, one should therefore do: data flits * 8B / time.",
"Desc": "Flits Transferred - Group 2",
"EvSel": 1,
"MaxIncCyc": 2,
"Umask": "bxxxxx1xx",
"ExtSel": 1,
},
"QPI_LL.TxL_INSERTS": {
"Box": "QPI_LL",
"Category": "QPI_LL TXQ Events",
"Counters": "0-3",
"Defn": "Number of allocations into the QPI Tx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"Desc": "Tx Flit Buffer Allocations",
"EvSel": 4,
},
"QPI_LL.TxL_OCCUPANCY": {
"Box": "QPI_LL",
"Category": "QPI_LL TXQ Events",
"Counters": "0-3",
"Defn": "Accumulates the number of flits in the TxQ. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.",
"Desc": "Tx Flit Buffer Occupancy",
"EvSel": 7,
},
"QPI_LL.VNA_CREDIT_RETURNS": {
"Box": "QPI_LL",
"Category": "QPI_LL VNA_CREDIT_RETURN Events",
"Counters": "0-3",
"Defn": "Number of VNA credits returned.",
"Desc": "VNA Credits Returned",
"EvSel": 28,
"ExtSel": 1,
},
"QPI_LL.VNA_CREDIT_RETURN_OCCUPANCY": {
"Box": "QPI_LL",
"Category": "QPI_LL VNA_CREDIT_RETURN Events",
"Counters": "0-3",
"Defn": "Number of VNA credits in the Rx side that are waitng to be returned back across the link.",
"Desc": "VNA Credits Pending Return - Occupancy",
"EvSel": 27,
"MaxIncCyc": 128,
"SubCtr": 1,
"ExtSel": 1,
},
# UBOX:
"UBOX.EVENT_MSG": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
},
"UBOX.EVENT_MSG.DOORBELL_RCVD": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
"Umask": "bxxxx1xxx",
},
"UBOX.EVENT_MSG.IPI_RCVD": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
"Umask": "bxxxxx1xx",
},
"UBOX.EVENT_MSG.INT_PRIO": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
"Umask": "bxxx1xxxx",
},
"UBOX.EVENT_MSG.VLW_RCVD": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
"Umask": "bxxxxxxx1",
},
"UBOX.EVENT_MSG.MSI_RCVD": {
"Box": "UBOX",
"Category": "UBOX EVENT_MSG Events",
"Counters": "0-1",
"Defn": "Virtual Logical Wire (legacy) message were received from Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "VLW Received",
"EvSel": 66,
"Umask": "bxxxxxx1x",
},
"UBOX.FILTER_MATCH": {
"Box": "UBOX",
"Category": "UBOX FILTER_MATCH Events",
"Counters": "0-1",
"Defn": "Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "Filter Match",
"EvSel": 65,
},
"UBOX.FILTER_MATCH.U2C_ENABLE": {
"Box": "UBOX",
"Category": "UBOX FILTER_MATCH Events",
"Counters": "0-1",
"Defn": "Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "Filter Match",
"EvSel": 65,
"Umask": "bxxxxx1xx",
},
"UBOX.FILTER_MATCH.U2C_DISABLE": {
"Box": "UBOX",
"Category": "UBOX FILTER_MATCH Events",
"Counters": "0-1",
"Defn": "Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "Filter Match",
"EvSel": 65,
"Umask": "bxxxx1xxx",
},
"UBOX.FILTER_MATCH.DISABLE": {
"Box": "UBOX",
"Category": "UBOX FILTER_MATCH Events",
"Counters": "0-1",
"Defn": "Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "Filter Match",
"EvSel": 65,
"Umask": "bxxxxxx1x",
},
"UBOX.FILTER_MATCH.ENABLE": {
"Box": "UBOX",
"Category": "UBOX FILTER_MATCH Events",
"Counters": "0-1",
"Defn": "Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.",
"Desc": "Filter Match",
"EvSel": 65,
"Umask": "bxxxxxxx1",
},
"UBOX.LOCK_CYCLES": {
"Box": "UBOX",
"Category": "UBOX LOCK Events",
"Counters": "0-1",
"Defn": "Number of times an IDI Lock/SplitLock sequence was started",
"Desc": "IDI Lock/SplitLock Cycles",
"EvSel": 68,
},
}
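# The FLITS_TX definitions above describe how to turn raw flit counts into
# bandwidth: link bandwidth is flits * 80b / time, and "data" bandwidth is
# data flits * 8B / time in L0 (4B per data flit in L0p). A minimal
# illustrative helper, not part of the generated event tables; the function
# and parameter names below are assumptions, not Intel-defined identifiers.
def qpi_bandwidth_bytes_per_sec(total_flits, data_flits, seconds, half_width=False):
    """Return (link_bw, data_bw) in bytes/sec from TxL_FLITS-style counts."""
    link_bw = total_flits * 80 / 8 / seconds  # each flit carries 80 bits
    payload = 4 if half_width else 8          # per the Defn: 8B in L0, 4B in L0p
    data_bw = data_flits * payload / seconds
    return link_bw, data_bw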
derived = {
# HA:
"HA.PCT_CYCLES_BL_FULL": {
"Box": "HA",
"Category": "HA BL_EGRESS Events",
"Defn": "Percentage of time the BL Egress Queue is full",
"Desc": "Percent BL Egress Full",
"Equation": "TxR_BL_CYCLES_FULL.ALL / SAMPLE_INTERVAL",
"Obscure": 1,
},
"HA.PCT_CYCLES_CONFLICT": {
"Box": "HA",
"Category": "HA CONFLICTS Events",
"Defn": "Percentage of time in Conflict Resolution",
"Desc": "Percent Conflict",
"Equation": "CONFLICT_CYCLES.CONFLICT / SAMPLE_INTERVAL",
"Broken": 1,
},
"HA.PCT_CYCLES_D2C_DISABLED": {
"Box": "HA",
"Category": "HA DIRECT2CORE Events",
"Defn": "Percentage of time that Direct2Core was disabled.",
"Desc": "Percent D2C Disabled",
"Equation": "DIRECT2CORE_CYCLES_DISABLED / SAMPLE_INTERVAL",
"Obscure": 1,
},
"HA.PCT_RD_REQUESTS": {
"Box": "HA",
"Category": "HA REQUESTS Events",
"Defn": "Percentage of HA traffic that is from Read Requests",
"Desc": "Percent Read Requests",
"Equation": "REQUESTS.READS / (REQUESTS.READS + REQUESTS.WRITES)",
},
"HA.PCT_WR_REQUESTS": {
"Box": "HA",
"Category": "HA REQUESTS Events",
"Defn": "Percentage of HA traffic that is from Write Requests",
"Desc": "Percent Write Requests",
"Equation": "REQUESTS.WRITES / (REQUESTS.READS + REQUESTS.WRITES)",
},
# iMC:
"iMC.MEM_BW_READS": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Memory bandwidth consumed by reads. Expressed in bytes.",
"Desc": "Read Memory Bandwidth",
"Equation": "(CAS_COUNT.RD * 64)",
},
"iMC.MEM_BW_TOTAL": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Total memory bandwidth. Expressed in bytes.",
"Desc": "Total Memory Bandwidth",
"Equation": "MEM_BW_READS + MEM_BW_WRITES",
},
"iMC.MEM_BW_WRITES": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Memory bandwidth consumed by writes Expressed in bytes.",
"Desc": "Write Memory Bandwidth",
"Equation": "(CAS_COUNT.WR * 64)",
},
"iMC.PCT_CYCLES_CRITICAL_THROTTLE": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles all DRAM ranks in critical thermal throttling",
"Desc": "Percent Cycles Critical Throttle",
"Equation": "POWER_CRITICAL_THROTTLE_CYCLES / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_CYCLES_DLOFF": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles all DRAM ranks in CKE slow (DLOFF) mode",
"Desc": "Percent Cycles DLOFF",
"Equation": "POWER_CHANNEL_DLLOFF / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_CYCLES_DRAM_RANKx_IN_CKE": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles DRAM rank (x) spent in CKE ON mode.",
"Desc": "Percent Cycles DRAM Rank x in CKE",
"Equation": "POWER_CKE_CYCLES.RANKx / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_CYCLES_DRAM_RANKx_IN_THR": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles DRAM rank (x) spent in thermal throttling.",
"Desc": "Percent Cycles DRAM Rank x in CKE",
"Equation": "POWER_THROTTLE_CYCLES.RANKx / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_CYCLES_PPD": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles all DRAM ranks in PPD mode",
"Desc": "Percent Cycles PPD",
"Equation": "POWER_CHANNEL_PPD / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_CYCLES_SELF_REFRESH": {
"Box": "iMC",
"Category": "iMC POWER Events",
"Defn": "The percentage of cycles Memory is in self refresh power mode",
"Desc": "Percent Cycles Self Refresh",
"Equation": "POWER_SELF_REFRESH / MC_Chy_PCI_PMON_CTR_FIXED",
},
"iMC.PCT_RD_REQUESTS": {
"Box": "iMC",
"Category": "iMC RPQ Events",
"Defn": "Percentage of read requests from total requests.",
"Desc": "Percent Read Requests",
"Equation": "RPQ_INSERTS / (RPQ_INSERTS + WPQ_INSERTS)",
},
"iMC.PCT_REQUESTS_PAGE_EMPTY": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Percentage of memory requests that resulted in Page Empty",
"Desc": "Percent Requests Page Empty",
"Equation": "(ACT_COUNT - PRE_COUNT.PAGE_MISS)/ (CAS_COUNT.RD + CAS_COUNT.WR)",
},
"iMC.PCT_REQUESTS_PAGE_HIT": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Percentage of memory requests that resulted in Page Hits",
"Desc": "Percent Requests Page Hit",
"Equation": "1 - (PCT_REQUESTS_PAGE_EMPTY + PCT_REQUESTS_PAGE_MISS)",
},
"iMC.PCT_REQUESTS_PAGE_MISS": {
"Box": "iMC",
"Category": "iMC CAS Events",
"Defn": "Percentage of memory requests that resulted in Page Misses",
"Desc": "Percent Requests Page Miss",
"Equation": "PRE_COUNT.PAGE_MISS / (CAS_COUNT.RD + CAS_COUNT.WR)",
},
"iMC.PCT_WR_REQUESTS": {
"Box": "iMC",
"Category": "iMC WPQ Events",
"Defn": "Percentage of write requests from total requests.",
"Desc": "Percent Write Requests",
"Equation": "WPQ_INSERTS / (RPQ_INSERTS + WPQ_INSERTS)",
},
# R2PCIe:
"R2PCIe.CYC_USED_DNEVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Cycles Used in the Down direction, Even polarity",
"Desc": "Cycles Used Down and Even",
"Equation": "RING_BL_USED.CCW_EVEN / SAMPLE_INTERVAL",
"Obscure": 1,
},
"R2PCIe.CYC_USED_DNODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Cycles Used in the Down direction, Odd polarity",
"Desc": "Cycles Used Down and Odd",
"Equation": "RING_BL_USED.CCW_ODD / SAMPLE_INTERVAL",
"Obscure": 1,
},
"R2PCIe.CYC_USED_UPEVEN": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Cycles Used in the Up direction, Even polarity",
"Desc": "Cycles Used Up and Even",
"Equation": "RING_BL_USED.CW_EVEN / SAMPLE_INTERVAL",
"Obscure": 1,
},
"R2PCIe.CYC_USED_UPODD": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Cycles Used in the Up direction, Odd polarity",
"Desc": "Cycles Used Up and Odd",
"Equation": "RING_BL_USED.CW_ODD / SAMPLE_INTERVAL",
"Obscure": 1,
},
"R2PCIe.RING_THRU_DNEVEN_BYTES": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Ring throughput in the Down direction, Even polarity in Bytes",
"Desc": "Ring Throughput Down and Even",
"Equation": "RING_BL_USED.CCW_EVEN * 32",
"Obscure": 1,
},
"R2PCIe.RING_THRU_DNODD_BYTES": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Ring throughput in the Down direction, Odd polarity in Bytes",
"Desc": "Ring Throughput Down and Odd",
"Equation": "RING_BL_USED.CCW_ODD * 32",
"Obscure": 1,
},
"R2PCIe.RING_THRU_UPEVEN_BYTES": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Ring throughput in the Up direction, Even polarity in Bytes",
"Desc": "Ring Throughput Up and Even",
"Equation": "RING_BL_USED.CW_EVEN * 32",
"Obscure": 1,
},
"R2PCIe.RING_THRU_UPODD_BYTES": {
"Box": "R2PCIe",
"Category": "R2PCIe RING Events",
"Defn": "Ring throughput in the Up direction, Odd polarity in Bytes",
"Desc": "Ring Throughput Up and Odd",
"Equation": "RING_BL_USED.CW_ODD * 32",
"Obscure": 1,
},
# QPI_LL:
"QPI_LL.DATA_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "Data received from QPI in bytes ( = DRS + NCB Data messages received from QPI)",
"Desc": "Data From QPI",
"Equation": "DRS_DATA_MSGS_FROM_QPI + NCB_DATA_MSGS_FROM_QPI",
},
"QPI_LL.DATA_FROM_QPI_TO_HA_OR_IIO": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Defn": "Data received from QPI forwarded to HA or IIO. Expressed in Bytes",
"Desc": "Data From QPI To HA or IIO",
"Equation": "DATA_FROM_QPI - DATA_FROM_QPI_TO_LLC",
"Broken": 1,
},
"QPI_LL.DATA_FROM_QPI_TO_LLC": {
"Box": "QPI_LL",
"Category": "QPI_LL DIRECT2CORE Events",
"Defn": "Data received from QPI forwarded to LLC. Expressed in Bytes",
"Desc": "Data From QPI To LLC",
"Equation": "DIRECT2CORE.SUCCESS * 64",
"Broken": 1,
},
"QPI_LL.DATA_FROM_QPI_TO_NODEx": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "Data packets received from QPI sent to Node ID 'x'. Expressed in bytes",
"Desc": "Data From QPI To Node x",
"Equation": "DRS_DataC_FROM_QPI_TO_NODEx + DRS_WRITE_FROM_QPI_TO_NODEx + NCB_DATA_FROM_QPI_TO_NODEx",
},
"QPI_LL.DRS_DATA_MSGS_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Defn": "DRS Data Messages From QPI in bytes",
"Desc": "DRS Data Messages From QPI",
"Equation": "(RxL_FLITS_G1.DRS_DATA * 8)",
"Obscure": 1,
},
"QPI_LL.DRS_DataC_FROM_QPI_TO_NODEx": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS DataC packets received from QPI sent to Node ID 'x'. Expressed in bytes",
"Desc": "DRS DataC From QPI To Node x",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0{[12:0],dnid}={0x1C00,x},Q_Py_PCI_PMON_PKT_MASK0[17:0]=0x3FF80}) * 64",
"Filter": "QPIMask0[17:0],QPIMatch0[17:0]",
"Obscure": 1,
},
"QPI_LL.DRS_FULL_CACHELINE_MSGS_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS Full Cacheline Data Messages From QPI in bytes",
"Desc": "DRS Full Cacheline Data Messages From QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C00,Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1F00}) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0]",
"Obscure": 1,
},
"QPI_LL.DRS_F_OR_E_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS response in F or E states received from QPI in bytes. To calculate the total data response for each cache line state, it's necessary to add the contribution from three flavors {DataC, DataC_FrcAckCnflt, DataC_Cmp} of data response packets for each cache line state.",
"Desc": "DRS Data in F or E From QPI",
"Equation": "((CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C00, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x4, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) + (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C00, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x1, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) + (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C40, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x4, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) + (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C40, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x1, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) + (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C20, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x4, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) + (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C20, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x1, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF })) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0],QPIMask1[19:16],QPIMatch1[19:16]",
"Obscure": 1,
},
"QPI_LL.DRS_M_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS response in M state received from QPI in bytes",
"Desc": "DRS Data in M From QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C00, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0, Q_Py_PCI_PMON_PKT_MATCH1[19:16]=0x8, Q_Py_PCI_PMON_PKT_MASK1[19:16]=0xF }) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0],QPIMask1[19:16],QPIMatch1[19:16]",
"Obscure": 1,
},
"QPI_LL.DRS_PTL_CACHELINE_MSGS_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS Partial Cacheline Data Messages From QPI in bytes",
"Desc": "DRS Partial Cacheline Data Messages From QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1D00, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1F00}) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0]",
"Obscure": 1,
},
"QPI_LL.DRS_WB_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS writeback packets received from QPI in bytes. This is the sum of Wb{I,S,E} DRS packets",
"Desc": "DRS Writeback From QPI",
"Equation": "DRS_WbI_FROM_QPI + DRS_WbS_FROM_QPI + DRS_WbE_FROM_QPI",
"Obscure": 1,
},
"QPI_LL.DRS_WRITE_FROM_QPI_TO_NODEx": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS Data packets (Any - DataC) received from QPI sent to Node ID 'x'. Expressed in bytes",
"Desc": "DRS Data From QPI To Node x",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0{[12:0],dnid}={0x1C00,x},Q_Py_PCI_PMON_PKT_MASK0[17:0]=0x3FE00} - CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0{[12:0],dnid}={0x1C00,x},Q_Py_PCI_PMON_PKT_MASK0[17:0]=0x3FF80}) * 64",
"Filter": "QPIMask0[17:0],QPIMatch0[17:0]",
"Obscure": 1,
},
"QPI_LL.DRS_WbE_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS writeback 'change to E state' packets received from QPI in bytes",
"Desc": "DRS WbE From QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1CC0, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0}) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0]",
"Obscure": 1,
},
"QPI_LL.DRS_WbI_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS writeback 'change to I state' packets received from QPI in bytes",
"Desc": "DRS WbI From QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1C80, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0}) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0]",
"Obscure": 1,
},
"QPI_LL.DRS_WbS_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "DRS writeback 'change to S state' packets received from QPI in bytes",
"Desc": "DRS WbSFrom QPI",
"Equation": "(CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0[12:0]=0x1CA0, Q_Py_PCI_PMON_PKT_MASK0[12:0]=0x1FE0}) * 64",
"Filter": "QPIMask0[12:0],QPIMatch0[12:0]",
"Obscure": 1,
},
"QPI_LL.NCB_DATA_FROM_QPI_TO_NODEx": {
"Box": "QPI_LL",
"Category": "QPI_LL CTO Events",
"Defn": "NCB Data packets (Any - Interrupts) received from QPI sent to Node ID 'x'. Expressed in bytes",
"Desc": "NCB Data From QPI To Node x",
"Equation": "((CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0{[12:0],dnid}={0x1800,x},Q_Py_PCI_PMON_PKT_MASK0[17:0]=0x3FE00}) - (CTO_COUNT with:{Q_Py_PCI_PMON_PKT_MATCH0{[12:0],dnid}={0x1900,x},Q_Py_PCI_PMON_PKT_MASK0[17:0]=0x3FF80})) * 64",
"Filter": "QPIMask0[17:0],QPIMatch0[17:0]",
"Obscure": 1,
},
"QPI_LL.NCB_DATA_MSGS_FROM_QPI": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Defn": "NCB Data Messages From QPI in bytes",
"Desc": "NCB Data Messages From QPI",
"Equation": "(RxL_FLITS_G2.NCB_DATA * 8)",
"Obscure": 1,
},
"QPI_LL.PCT_LINK_FULL_POWER_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_RX Events",
"Defn": "Percent of Cycles the QPI link is at Full Power",
"Desc": "Percent Link Full Power Cycles",
"Equation": "RxL0_POWER_CYCLES / CLOCKTICKS",
"Obscure": 1,
},
"QPI_LL.PCT_LINK_HALF_DISABLED_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER_RX Events",
"Defn": "Percent of Cycles the QPI link in power mode where half of the lanes are disabled.",
"Desc": "Percent Link Half Disabled Cycles",
"Equation": "RxL0P_POWER_CYCLES / CLOCKTICKS",
"Obscure": 1,
},
"QPI_LL.PCT_LINK_SHUTDOWN_CYCLES": {
"Box": "QPI_LL",
"Category": "QPI_LL POWER Events",
"Defn": "Percent of Cycles the QPI link is Shutdown",
"Desc": "Percent Link Shutdown Cycles",
"Equation": "L1_POWER_CYCLES / CLOCKTICKS",
"Obscure": 1,
},
"QPI_LL.QPI_LINK_UTIL": {
"Box": "QPI_LL",
"Category": "QPI_LL FLITS_RX Events",
"Defn": "Percentage of cycles that QPI Link was utilized. Calculated from 1 - Number of idle flits - time the link was 'off'",
"Desc": "QPI Link Utilization",
"Equation": "(RxL_FLITS_G0.DATA + RxL_FLITS_G0.NON_DATA) / (2 * CLOCKTICKS)",
},
# PCU:
"PCU.PCT_FREQ_BAND0": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Defn": "Counts the percent that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
"Equation": "FREQ_BAND0_CYCLES / CLOCKTICKS"
},
"PCU.PCT_FREQ_BAND1": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Defn": "Counts the percent that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
"Equation": "FREQ_BAND1_CYCLES / CLOCKTICKS"
},
"PCU.PCT_FREQ_BAND2": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Defn": "Counts the percent that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
"Equation": "FREQ_BAND2_CYCLES / CLOCKTICKS"
},
"PCU.PCT_FREQ_BAND3": {
"Box": "PCU",
"Category": "PCU FREQ_RESIDENCY Events",
"Defn": "Counts the percent that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.",
"Desc": "Frequency Residency",
"Notes": "The PMON control registers in the PCU only update on a frequency transition. Changing the measuring threshold during a sample interval may introduce errors in the counts. This is especially true when running at a constant frequency for an extended period of time. There is a corner case here: we set this code on the GV transition. So, if we never GV we will never call this code. This event does not include transition times. It is handled on fast path.",
"Equation": "FREQ_BAND3_CYCLES / CLOCKTICKS"
},
"PCU.PCT_FREQ_CURRENT_LTD": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Defn": "Percent of Cycles the Max Frequency is limited by current",
"Desc": "Percent of Cycles Frequency Current Limited",
"Equation": "FREQ_MAX_CURRENT_CYCLES / CLOCKTICKS",
},
"PCU.PCT_FREQ_OS_LTD": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Defn": "Percent of Cycles the Max Frequency is limited by the OS",
"Desc": "Percent of Cycles Frequency OS Limited",
"Equation": "FREQ_MAX_OS_CYCLES / CLOCKTICKS",
},
"PCU.PCT_FREQ_POWER_LTD": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Defn": "Percent of Cycles the Max Frequency is limited by power",
"Desc": "Percent of Cycles Frequency Power Limited",
"Equation": "FREQ_MAX_POWER_CYCLES / CLOCKTICKS",
},
"PCU.PCT_FREQ_THERMAL_LTD": {
"Box": "PCU",
"Category": "PCU FREQ_MAX_LIMIT Events",
"Defn": "Percent of Cycles the Max Frequency is limited by thermal issues",
"Desc": "Percent of Cycles Frequency Thermal Limited",
"Equation": "FREQ_MAX_CURRENT_CYCLES / CLOCKTICKS",
},
# CBO:
"CBO.AVG_INGRESS_DEPTH": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Defn": "Average Depth of the Ingress Queue through the sample interval",
"Desc": "Average Ingress Depth",
"Equation": "RxR_OCCUPANCY.IRQ / SAMPLE_INTERVAL",
"Obscure": 1,
},
"CBO.AVG_INGRESS_LATENCY": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Defn": "Average Latency of Requests through the Ingress Queue in Uncore Clocks",
"Desc": "Average Ingress Latency",
"Equation": "RxR_OCCUPANCY.IRQ / RxR_INSERTS.IRQ",
"Obscure": 1,
},
"CBO.AVG_INGRESS_LATENCY_WHEN_NE": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Defn": "Average Latency of Requests through the Ingress Queue in Uncore Clocks when Ingress Queue has at least one entry",
"Desc": "Average Latency in Non-Empty Ingress",
"Equation": "RxR_OCCUPANCY.IRQ / COUNTER0_OCCUPANCY with:{edge_det=1,thresh=0x1}",
"Obscure": 1,
},
"CBO.AVG_TOR_DRDS_MISS_WHEN_NE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Average Number of Data Read Entries that Miss the LLC when the TOR is not empty.",
"Desc": "Average Data Read Misses in Non-Empty TOR",
"Equation": "(TOR_OCCUPANCY.MISS_OPCODE / COUNTER0_OCCUPANCY with:{edge_det=1,thresh=0x1}) with:Cn_MSR_PMON_BOX_FILTER.opc=0x182",
"Filter": "CBoFilter[31:23]",
"Obscure": 1,
},
"CBO.AVG_TOR_DRDS_WHEN_NE": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Average Number of Data Read Entries when the TOR is not empty.",
"Desc": "Average Data Reads in Non-Empty TOR",
"Equation": "(TOR_OCCUPANCY.OPCODE / COUNTER0_OCCUPANCY with:{edge_det=1,thresh=0x1}) with:Cn_MSR_PMON_BOX_FILTER.opc=0x182",
"Filter": "CBoFilter[31:23]",
"Obscure": 1,
},
"CBO.AVG_TOR_DRD_HIT_LATENCY": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Average Latency of Data Reads through the TOR that hit the LLC",
"Desc": "Data Read Hit Latency through TOR",
"Equation": "((TOR_OCCUPANCY.OPCODE - TOR_OCCUPANCY.MISS_OPCODE) / (TOR_INSERTS.OPCODE - TOR_INSERTS.MISS_OPCODE)) with:Cn_MSR_PMON_BOX_FILTER.opc=0x182",
"Filter": "CBoFilter[31:23]",
"Obscure": 1,
},
"CBO.AVG_TOR_DRD_LATENCY": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Average Latency of Data Read Entries making their way through the TOR",
"Desc": "Data Read Latency through TOR",
"Equation": "(TOR_OCCUPANCY.OPCODE / TOR_INSERTS.OPCODE) with:Cn_MSR_PMON_BOX_FILTER.opc=0x182",
"Filter": "CBoFilter[31:23]",
"Obscure": 1,
},
"CBO.AVG_TOR_DRD_MISS_LATENCY": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Average Latency of Data Reads through the TOR that miss the LLC",
"Desc": "Data Read Miss Latency through TOR",
"Equation": "(TOR_OCCUPANCY.MISS_OPCODE / TOR_INSERTS.MISS_OPCODE) with:Cn_MSR_PMON_BOX_FILTER.opc=0x182",
"Filter": "CBoFilter[31:23]",
"Obscure": 1,
},
"CBO.CYC_INGRESS_BLOCKED": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Defn": "Cycles the Ingress Request Queue arbiter was Blocked",
"Desc": "Cycles Ingress Blocked",
"Equation": "RxR_EXT_STARVED.IRQ / SAMPLE_INTERVAL",
"Obscure": 1,
},
"CBO.CYC_USED_DNEVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Cycles Used in the Down direction, Even polarity",
"Desc": "Cycles Used Down and Even",
"Equation": "RING_BL_USED.DOWN_EVEN / SAMPLE_INTERVAL",
"Obscure": 1
},
"CBO.CYC_USED_DNODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Cycles Used in the Down direction, Odd polarity",
"Desc": "Cycles Used Down and Odd",
"Equation": "RING_BL_USED.DOWN_ODD / SAMPLE_INTERVAL",
"Obscure": 1
},
"CBO.CYC_USED_UPEVEN": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Cycles Used in the Up direction, Even polarity",
"Desc": "Cycles Used Up and Even",
"Equation": "RING_BL_USED.UP_EVEN / SAMPLE_INTERVAL",
"Obscure": 1
},
"CBO.CYC_USED_UPODD": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Cycles Used in the Up direction, Odd polarity",
"Desc": "Cycles Used Up and Odd",
"Equation": "RING_BL_USED.UP_ODD / SAMPLE_INTERVAL",
"Obscure": 1
},
"CBO.INGRESS_REJ_V_INS": {
"Box": "CBO",
"Category": "CBO INGRESS Events",
"Defn": "Ratio of Ingress Request Entries that were rejected vs. inserted",
"Desc": "Ingress Rejects vs. Inserts",
"Equation": "RxR_INSERTS.IRQ_REJECTED / RxR_INSERTS.IRQ",
"Obscure": 1
},
"CBO.LLC_DRD_MISS_PCT": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Defn": "LLC Data Read miss ratio",
"Desc": "LLC DRD Miss Ratio",
"Equation": "LLC_LOOKUP.DATA_READ with:Cn_MSR_PMON_BOX_FILTER.state=0x1 / LLC_LOOKUP.DATA_READ with:Cn_MSR_PMON_BOX_FILTER.state=0x1F",
"Filter": "CBoFilter[22:18]",
"Obscure": 1, # too much multiplexing error
},
"CBO.LLC_PCIE_DATA_BYTES": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "LLC Miss Data from PCIe in Number of Bytes",
"Desc": "LLC Miss Data from PCIe",
"Equation": "TOR_INSERTS.OPCODE with:Cn_MSR_PMON_BOX_FILTER.opc=0x19C * 64",
"Filter": "CBoFilter[31:23]",
"Broken": 1,
},
"CBO.LLC_RFO_MISS_PCT": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "LLC RFO Miss Ratio",
"Desc": "LLC RFO Miss Ratio",
"Equation": "(TOR_INSERTS.MISS_OPCODE / TOR_INSERTS.OPCODE) with:Cn_MSR_PMON_BOX_FILTER.opc=0x180",
"Filter": "CBoFilter[31:23]",
},
"CBO.MEM_WB_BYTES": {
"Box": "CBO",
"Category": "CBO CACHE Events",
"Defn": "Data written back to memory in Number of Bytes",
"Desc": "Memory Writebacks",
"Equation": "LLC_VICTIMS.M_STATE * 64",
},
"CBO.PCIE_DATA_BYTES": {
"Box": "CBO",
"Category": "CBO TOR Events",
"Defn": "Data from PCIe in Number of Bytes",
"Desc": "PCIe Data Traffic",
"Equation": "(TOR_INSERTS.OPCODE with:Cn_MSR_PMON_BOX_FILTER.opc=0x194 + TOR_INSERTS.OPCODE with:Cn_MSR_PMON_BOX_FILTER.opc=0x19c) * 64",
"Filter": "CBoFilter[31:23]",
"Broken": 1,
},
"CBO.RING_THRU_DNEVEN_BYTES": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Ring throughput in the Down direction, Even polarity in Bytes",
"Desc": "Ring Throughput Down and Even",
"Equation": "RING_BL_USED.DOWN_EVEN * 32",
"Obscure": 1,
},
"CBO.RING_THRU_DNODD_BYTES": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Ring throughput in the Down direction, Odd polarity in Bytes",
"Desc": "Ring Throughput Down and Odd",
"Equation": "RING_BL_USED.DOWN_ODD * 32",
"Obscure": 1,
},
"CBO.RING_THRU_UPEVEN_BYTES": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Ring throughput in the Up direction, Even polarity in Bytes",
"Desc": "Ring Throughput Up and Even",
"Equation": "RING_BL_USED.UP_EVEN * 32",
"Obscure": 1,
},
"CBO.RING_THRU_UPODD_BYTES": {
"Box": "CBO",
"Category": "CBO RING Events",
"Defn": "Ring throughput in the Up direction, Odd polarity in Bytes",
"Desc": "Ring Throughput Up and Odd",
"Equation": "RING_BL_USED.UP_ODD * 32",
"Obscure": 1,
},
}
categories = (
"CBO CACHE Events",
"CBO EGRESS Events",
"CBO INGRESS Events",
"CBO INGRESS_RETRY Events",
"CBO ISMQ Events",
"CBO MISC Events",
"CBO OCCUPANCY Events",
"CBO RING Events",
"CBO TOR Events",
"CBO UCLK Events",
"HA ADDR_OPCODE_MATCH Events",
"HA AD_EGRESS Events",
"HA AK_EGRESS Events",
"HA BL_EGRESS Events",
"HA CONFLICTS Events",
"HA DIRECT2CORE Events",
"HA DIRECTORY Events",
"HA IMC_MISC Events",
"HA IMC_WRITES Events",
"HA OUTBOUND_TX Events",
"HA QPI_IGR_CREDITS Events",
"HA REQUESTS Events",
"HA RPQ_CREDITS Events",
"HA TAD Events",
"HA TRACKER Events",
"HA UCLK Events",
"HA WPQ_CREDITS Events",
"PCU CORE_C_STATE_TRANSITION Events",
"PCU FREQ_MAX_LIMIT Events",
"PCU FREQ_MIN_LIMIT Events",
"PCU FREQ_RESIDENCY Events",
"PCU FREQ_TRANS Events",
"PCU MEMORY_PHASE_SHEDDING Events",
"PCU PCLK Events",
"PCU POWER_STATE_OCC Events",
"PCU PROCHOT Events",
"PCU VOLT_TRANS Events",
"PCU VR_HOT Events",
"QPI_LL CFCLK Events",
"QPI_LL CRC_ERRORS_RX Events",
"QPI_LL CTO Events",
"QPI_LL DIRECT2CORE Events",
"QPI_LL FLITS_RX Events",
"QPI_LL FLITS_TX Events",
"QPI_LL POWER Events",
"QPI_LL POWER_RX Events",
"QPI_LL POWER_TX Events",
"QPI_LL RXQ Events",
"QPI_LL RX_CREDITS_CONSUMED Events",
"QPI_LL TXQ Events",
"QPI_LL VNA_CREDIT_RETURN Events",
"R2PCIe EGRESS Events",
"R2PCIe INGRESS Events",
"R2PCIe RING Events",
"R2PCIe UCLK Events",
"R3QPI EGRESS Events",
"R3QPI IIO_CREDITS Events",
"R3QPI INGRESS Events",
"R3QPI LINK_VN0_CREDITS Events",
"R3QPI LINK_VNA_CREDITS Events",
"R3QPI RING Events",
"R3QPI UCLK Events",
"UBOX EVENT_MSG Events",
"UBOX FILTER_MATCH Events",
"UBOX LOCK Events",
"iMC ACT Events",
"iMC CAS Events",
"iMC DRAM_PRE_ALL Events",
"iMC DRAM_REFRESH Events",
"iMC ECC Events",
"iMC MAJOR_MODES Events",
"iMC POWER Events",
"iMC PRE Events",
"iMC PREEMPTION Events",
"iMC RPQ Events",
"iMC WPQ Events",
)
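The Equation strings above are plain arithmetic over raw counter names. A minimal sketch of how such a string could be evaluated against collected counter values (a hypothetical helper, not part of this table; it handles only the simple name/`+`/`-`/`*`/`/` equations, not the `with:{...}` filter syntax):

```python
import re

# Matches counter names such as 'CAS_COUNT.RD' or 'RPQ_INSERTS'.
_COUNTER_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_.]*")

def compute_metric(equation, counters):
    """Evaluate a derived-metric Equation string against raw counter readings."""
    # Substitute each counter name with its numeric value, then evaluate
    # the remaining arithmetic expression.
    expr = _COUNTER_NAME.sub(lambda m: repr(float(counters[m.group(0)])), equation)
    return eval(expr)

# e.g. compute_metric("(CAS_COUNT.RD * 64)", {"CAS_COUNT.RD": 1000}) -> 64000.0
```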
import pytest
from streamsets.testframework.decorators import stub
@stub
@pytest.mark.parametrize('stage_attributes', [{'async_send': False}, {'async_send': True}])
def test_async_send(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'async_send': True, 'enable_batching': True}])
def test_batch_max_publish_latency_in_ms(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'enable_tls': True}])
def test_ca_certificate_pem(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'enable_mutual_authentication': True, 'enable_tls': True}])
def test_client_certificate_pem(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'enable_mutual_authentication': True, 'enable_tls': True}])
def test_client_key_pem(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'compression_type': 'LZ4'},
{'compression_type': 'NONE'},
{'compression_type': 'ZLIB'}])
def test_compression_type(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'async_send': True, 'enable_batching': False},
{'async_send': True, 'enable_batching': True}])
def test_enable_batching(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'enable_mutual_authentication': False, 'enable_tls': True},
{'enable_mutual_authentication': True, 'enable_tls': True}])
def test_enable_mutual_authentication(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'enable_tls': False}, {'enable_tls': True}])
def test_enable_tls(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'hashing_scheme': 'JAVA_STRING_HASH'},
{'hashing_scheme': 'MUMUR3_32HASH'}])
def test_hashing_scheme(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
def test_keep_alive_interval_in_ms(sdc_builder, sdc_executor):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'async_send': True, 'enable_batching': True}])
def test_max_batch_size_in_messages(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'async_send': True}])
def test_max_pending_messages(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
def test_message_key(sdc_builder, sdc_executor):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'on_record_error': 'DISCARD'},
{'on_record_error': 'STOP_PIPELINE'},
{'on_record_error': 'TO_ERROR'}])
def test_on_record_error(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
def test_operation_timeout_in_ms(sdc_builder, sdc_executor):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'partition_type': 'ROUND_ROBIN'}, {'partition_type': 'SINGLE'}])
def test_partition_type(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
def test_preconditions(sdc_builder, sdc_executor):
pass
@stub
def test_pulsar_configuration_properties(sdc_builder, sdc_executor):
pass
@stub
def test_pulsar_url(sdc_builder, sdc_executor):
pass
@stub
def test_required_fields(sdc_builder, sdc_executor):
pass
@stub
def test_topic(sdc_builder, sdc_executor):
pass
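The hand-written `stage_attributes` matrices above can also be generated rather than enumerated; a small sketch (hypothetical helper, not part of the StreamSets test framework):

```python
import itertools

def attribute_matrix(**options):
    """Yield one stage_attributes dict per combination of the given option values."""
    keys = list(options)
    for values in itertools.product(*options.values()):
        yield dict(zip(keys, values))

# list(attribute_matrix(async_send=[True], enable_batching=[False, True]))
# -> [{'async_send': True, 'enable_batching': False},
#     {'async_send': True, 'enable_batching': True}]
```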
#!/usr/bin/env python3
# -*- coding=utf-8 -*-
# Written by the lead instructor at Qytang for the Qianyidun (乾颐盾) Python course!
# Instructor QQ: 605658506
# Qytang official site: www.qytang.com
# The instructor's "Tech Evolution" course expands your technical frontier
# https://ke.qq.com/course/271956?tuin=24199d8a
from ldap3 import Connection
from vip_ldap3_0_login_info import server, ad_admin_username, ad_admin_password
def get_user_info(username):
# 返回用户属于的组与用户名
try:
# 连接服务器
c = Connection(server, auto_bind=True, user="ebaotech\\"+ad_admin_username, password=ad_admin_password)
# 提取域qytang.com, 用户的memberOf,sn和department信息
c.search(search_base='dc=ebaotech,dc=com', search_filter='(&(samAccountName=' + username + '))', attributes=['memberOf', 'sn', 'department', 'createTimeStamp', 'accountExpires', 'userAccountControl', 'objectClass', 'pwdLastSet'], paged_size=5)
# 返回获取的memberOf,sn和department信息
return {'dn': c.response[0]['dn'],
'memberOf': c.response[0]['attributes']['memberOf'],
'sn': c.response[0]['attributes']['sn'],
'department': c.response[0]['attributes']['department'],
'createTimeStamp': c.response[0]['attributes']['createTimeStamp'],
'accountExpires': c.response[0]['attributes']['accountExpires'],
'userAccountControl': c.response[0]['attributes']['userAccountControl'],
'objectClass': c.response[0]['attributes']['objectClass'],
'pwdLastSet': c.response[0]['attributes']['pwdLastSet'],}
except Exception:
return None
def get_user_self_info(username, password):
# 返回用户属于的组与用户名
try:
# 连接服务器
c = Connection(server, auto_bind=True, user="ebaotech\\"+username, password=password)
# 提取域qytang.com, 用户的memberOf,sn和department信息
c.search(search_base='dc=ebaotech,dc=com', search_filter='(&(samAccountName=' + username + '))', attributes=['memberOf', 'sn', 'department', 'createTimeStamp', 'accountExpires', 'userAccountControl', 'objectClass', 'pwdLastSet'], paged_size=5)
# 返回获取的memberOf,sn和department信息
return {'dn': c.response[0]['dn'],
'memberOf': c.response[0]['attributes']['memberOf'],
'sn': c.response[0]['attributes']['sn'],
'department': c.response[0]['attributes']['department'],
'createTimeStamp': c.response[0]['attributes']['createTimeStamp'],
'accountExpires': c.response[0]['attributes']['accountExpires'],
'userAccountControl': c.response[0]['attributes']['userAccountControl'],
'objectClass': c.response[0]['attributes']['objectClass'],
'pwdLastSet': c.response[0]['attributes']['pwdLastSet'],}
except Exception as e:
print(e)
return None
if __name__ == '__main__':
# 可以查詢用戶
print(get_user_info('david.wei'))
# print(get_user_self_info('vip-qinke42', 'Cisc0123'))
# 可以查詢組
# print(get_user_info('vipgroup'))
# userAccountControl
# https://lesca.me/archives/common-useraccountcontrol-values.html
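The `userAccountControl` attribute returned above is a bit field; the reference linked in the closing comment lists the flag values. A small helper for decoding the common flags (an illustrative addition, not part of the original script; the bit values below come from Microsoft's AD documentation) could look like:

```python
# Common userAccountControl bit flags (values per Microsoft's AD documentation).
UAC_FLAGS = {
    0x0002: 'ACCOUNTDISABLE',
    0x0010: 'LOCKOUT',
    0x0020: 'PASSWD_NOTREQD',
    0x0200: 'NORMAL_ACCOUNT',
    0x10000: 'DONT_EXPIRE_PASSWORD',
    0x800000: 'PASSWORD_EXPIRED',
}


def decode_uac(value):
    """Return the names of all known flags set in a userAccountControl value."""
    return [name for bit, name in sorted(UAC_FLAGS.items()) if value & bit]
```

A typical enabled account has the value 0x200 (`NORMAL_ACCOUNT`); 0x202 is the same account disabled.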
| 48.903226 | 251 | 0.637863 | 298 | 3,032 | 6.355705 | 0.312081 | 0.085533 | 0.095037 | 0.168955 | 0.718057 | 0.718057 | 0.718057 | 0.718057 | 0.718057 | 0.718057 | 0 | 0.02132 | 0.19558 | 3,032 | 61 | 252 | 49.704918 | 0.755228 | 0.176781 | 0 | 0.685714 | 0 | 0 | 0.326869 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0.114286 | 0.057143 | 0 | 0.228571 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
915f1034407fadfe4e3ccfa4317935b9036ed1c0 | 104,713 | py | Python | sdk/python/pulumi_azure_native/cdn/v20190415/_inputs.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_native/cdn/v20190415/_inputs.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_native/cdn/v20190415/_inputs.py | sebtelko/pulumi-azure-native | 711ec021b5c73da05611c56c8a35adb0ce3244e4 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from ._enums import *
__all__ = [
'CacheExpirationActionParametersArgs',
'CacheKeyQueryStringActionParametersArgs',
'CookiesMatchConditionParametersArgs',
'DeepCreatedOriginArgs',
'DeliveryRuleArgs',
'DeliveryRuleCacheExpirationActionArgs',
'DeliveryRuleCacheKeyQueryStringActionArgs',
'DeliveryRuleCookiesConditionArgs',
'DeliveryRuleHttpVersionConditionArgs',
'DeliveryRuleIsDeviceConditionArgs',
'DeliveryRulePostArgsConditionArgs',
'DeliveryRuleQueryStringConditionArgs',
'DeliveryRuleRemoteAddressConditionArgs',
'DeliveryRuleRequestBodyConditionArgs',
'DeliveryRuleRequestHeaderActionArgs',
'DeliveryRuleRequestHeaderConditionArgs',
'DeliveryRuleRequestMethodConditionArgs',
'DeliveryRuleRequestSchemeConditionArgs',
'DeliveryRuleRequestUriConditionArgs',
'DeliveryRuleResponseHeaderActionArgs',
'DeliveryRuleUrlFileExtensionConditionArgs',
'DeliveryRuleUrlFileNameConditionArgs',
'DeliveryRuleUrlPathConditionArgs',
'EndpointPropertiesUpdateParametersDeliveryPolicyArgs',
'GeoFilterArgs',
'HeaderActionParametersArgs',
'HttpVersionMatchConditionParametersArgs',
'IsDeviceMatchConditionParametersArgs',
'PostArgsMatchConditionParametersArgs',
'QueryStringMatchConditionParametersArgs',
'RemoteAddressMatchConditionParametersArgs',
'RequestBodyMatchConditionParametersArgs',
'RequestHeaderMatchConditionParametersArgs',
'RequestMethodMatchConditionParametersArgs',
'RequestSchemeMatchConditionParametersArgs',
'RequestUriMatchConditionParametersArgs',
'SkuArgs',
'UrlFileExtensionMatchConditionParametersArgs',
'UrlFileNameMatchConditionParametersArgs',
'UrlPathMatchConditionParametersArgs',
'UrlRedirectActionArgs',
'UrlRedirectActionParametersArgs',
'UrlRewriteActionArgs',
'UrlRewriteActionParametersArgs',
]
@pulumi.input_type
class CacheExpirationActionParametersArgs:
def __init__(__self__, *,
cache_behavior: pulumi.Input[Union[str, 'CacheBehavior']],
cache_type: pulumi.Input[Union[str, 'CacheType']],
odata_type: pulumi.Input[str],
cache_duration: Optional[pulumi.Input[str]] = None):
"""
Defines the parameters for the cache expiration action.
:param pulumi.Input[Union[str, 'CacheBehavior']] cache_behavior: Caching behavior for the requests
:param pulumi.Input[Union[str, 'CacheType']] cache_type: The level at which the content needs to be cached.
:param pulumi.Input[str] cache_duration: The duration for which the content needs to be cached. Allowed format is [d.]hh:mm:ss
"""
pulumi.set(__self__, "cache_behavior", cache_behavior)
pulumi.set(__self__, "cache_type", cache_type)
pulumi.set(__self__, "odata_type", odata_type)
if cache_duration is not None:
pulumi.set(__self__, "cache_duration", cache_duration)
@property
@pulumi.getter(name="cacheBehavior")
def cache_behavior(self) -> pulumi.Input[Union[str, 'CacheBehavior']]:
"""
Caching behavior for the requests
"""
return pulumi.get(self, "cache_behavior")
@cache_behavior.setter
def cache_behavior(self, value: pulumi.Input[Union[str, 'CacheBehavior']]):
pulumi.set(self, "cache_behavior", value)
@property
@pulumi.getter(name="cacheType")
def cache_type(self) -> pulumi.Input[Union[str, 'CacheType']]:
"""
The level at which the content needs to be cached.
"""
return pulumi.get(self, "cache_type")
@cache_type.setter
def cache_type(self, value: pulumi.Input[Union[str, 'CacheType']]):
pulumi.set(self, "cache_type", value)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter(name="cacheDuration")
def cache_duration(self) -> Optional[pulumi.Input[str]]:
"""
The duration for which the content needs to be cached. Allowed format is [d.]hh:mm:ss
"""
return pulumi.get(self, "cache_duration")
@cache_duration.setter
def cache_duration(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cache_duration", value)
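The docstring above states that `cache_duration` must use the `[d.]hh:mm:ss` format. A standalone regex check of that format (a sketch for illustration only; the generated SDK does not include such a validator, and the service enforces the format on its side) might be:

```python
import re

# Matches the [d.]hh:mm:ss duration format described in the docstring,
# e.g. '12:00:00' (12 hours) or '7.00:00:00' (7 days).
_DURATION_RE = re.compile(r'^(?:\d+\.)?(?:[01]\d|2[0-3]):[0-5]\d:[0-5]\d$')


def is_valid_cache_duration(value: str) -> bool:
    """Return True if `value` looks like a valid [d.]hh:mm:ss duration."""
    return bool(_DURATION_RE.match(value))
```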
@pulumi.input_type
class CacheKeyQueryStringActionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
query_string_behavior: pulumi.Input[Union[str, 'QueryStringBehavior']],
query_parameters: Optional[pulumi.Input[str]] = None):
"""
Defines the parameters for the cache-key query string action.
:param pulumi.Input[Union[str, 'QueryStringBehavior']] query_string_behavior: Caching behavior for the requests
:param pulumi.Input[str] query_parameters: query parameters to include or exclude (comma separated).
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "query_string_behavior", query_string_behavior)
if query_parameters is not None:
pulumi.set(__self__, "query_parameters", query_parameters)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter(name="queryStringBehavior")
def query_string_behavior(self) -> pulumi.Input[Union[str, 'QueryStringBehavior']]:
"""
Caching behavior for the requests
"""
return pulumi.get(self, "query_string_behavior")
@query_string_behavior.setter
def query_string_behavior(self, value: pulumi.Input[Union[str, 'QueryStringBehavior']]):
pulumi.set(self, "query_string_behavior", value)
@property
@pulumi.getter(name="queryParameters")
def query_parameters(self) -> Optional[pulumi.Input[str]]:
"""
query parameters to include or exclude (comma separated).
"""
return pulumi.get(self, "query_parameters")
@query_parameters.setter
def query_parameters(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "query_parameters", value)
@pulumi.input_type
class CookiesMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'CookiesOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
selector: Optional[pulumi.Input[str]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for Cookies match conditions
:param pulumi.Input[Union[str, 'CookiesOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
:param pulumi.Input[str] selector: Name of Cookies to be matched
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if selector is not None:
pulumi.set(__self__, "selector", selector)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'CookiesOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'CookiesOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def selector(self) -> Optional[pulumi.Input[str]]:
"""
Name of Cookies to be matched
"""
return pulumi.get(self, "selector")
@selector.setter
def selector(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "selector", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class DeepCreatedOriginArgs:
def __init__(__self__, *,
host_name: pulumi.Input[str],
name: pulumi.Input[str],
http_port: Optional[pulumi.Input[int]] = None,
https_port: Optional[pulumi.Input[int]] = None):
"""
The main origin of CDN content which is added when creating a CDN endpoint.
:param pulumi.Input[str] host_name: The address of the origin. It can be a domain name, IPv4 address, or IPv6 address.
:param pulumi.Input[str] name: Origin name
:param pulumi.Input[int] http_port: The value of the HTTP port. Must be between 1 and 65535
:param pulumi.Input[int] https_port: The value of the HTTPS port. Must be between 1 and 65535
"""
pulumi.set(__self__, "host_name", host_name)
pulumi.set(__self__, "name", name)
if http_port is not None:
pulumi.set(__self__, "http_port", http_port)
if https_port is not None:
pulumi.set(__self__, "https_port", https_port)
@property
@pulumi.getter(name="hostName")
def host_name(self) -> pulumi.Input[str]:
"""
The address of the origin. It can be a domain name, IPv4 address, or IPv6 address.
"""
return pulumi.get(self, "host_name")
@host_name.setter
def host_name(self, value: pulumi.Input[str]):
pulumi.set(self, "host_name", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Origin name
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="httpPort")
def http_port(self) -> Optional[pulumi.Input[int]]:
"""
The value of the HTTP port. Must be between 1 and 65535
"""
return pulumi.get(self, "http_port")
@http_port.setter
def http_port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "http_port", value)
@property
@pulumi.getter(name="httpsPort")
def https_port(self) -> Optional[pulumi.Input[int]]:
"""
The value of the HTTPS port. Must be between 1 and 65535
"""
return pulumi.get(self, "https_port")
@https_port.setter
def https_port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "https_port", value)
@pulumi.input_type
class DeliveryRuleArgs:
def __init__(__self__, *,
actions: pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCacheExpirationActionArgs', 'DeliveryRuleCacheKeyQueryStringActionArgs', 'DeliveryRuleRequestHeaderActionArgs', 'DeliveryRuleResponseHeaderActionArgs', 'UrlRedirectActionArgs', 'UrlRewriteActionArgs']]]],
order: pulumi.Input[int],
conditions: Optional[pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCookiesConditionArgs', 'DeliveryRuleHttpVersionConditionArgs', 'DeliveryRuleIsDeviceConditionArgs', 'DeliveryRulePostArgsConditionArgs', 'DeliveryRuleQueryStringConditionArgs', 'DeliveryRuleRemoteAddressConditionArgs', 'DeliveryRuleRequestBodyConditionArgs', 'DeliveryRuleRequestHeaderConditionArgs', 'DeliveryRuleRequestMethodConditionArgs', 'DeliveryRuleRequestSchemeConditionArgs', 'DeliveryRuleRequestUriConditionArgs', 'DeliveryRuleUrlFileExtensionConditionArgs', 'DeliveryRuleUrlFileNameConditionArgs', 'DeliveryRuleUrlPathConditionArgs']]]]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
A rule that specifies a set of actions and conditions
:param pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCacheExpirationActionArgs', 'DeliveryRuleCacheKeyQueryStringActionArgs', 'DeliveryRuleRequestHeaderActionArgs', 'DeliveryRuleResponseHeaderActionArgs', 'UrlRedirectActionArgs', 'UrlRewriteActionArgs']]]] actions: A list of actions that are executed when all the conditions of a rule are satisfied.
:param pulumi.Input[int] order: The order in which the rules are applied for the endpoint. Possible values {0,1,2,3,………}. A rule with a lesser order will be applied before a rule with a greater order. Rule with order 0 is a special rule. It does not require any condition and actions listed in it will always be applied.
:param pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCookiesConditionArgs', 'DeliveryRuleHttpVersionConditionArgs', 'DeliveryRuleIsDeviceConditionArgs', 'DeliveryRulePostArgsConditionArgs', 'DeliveryRuleQueryStringConditionArgs', 'DeliveryRuleRemoteAddressConditionArgs', 'DeliveryRuleRequestBodyConditionArgs', 'DeliveryRuleRequestHeaderConditionArgs', 'DeliveryRuleRequestMethodConditionArgs', 'DeliveryRuleRequestSchemeConditionArgs', 'DeliveryRuleRequestUriConditionArgs', 'DeliveryRuleUrlFileExtensionConditionArgs', 'DeliveryRuleUrlFileNameConditionArgs', 'DeliveryRuleUrlPathConditionArgs']]]] conditions: A list of conditions that must be matched for the actions to be executed
:param pulumi.Input[str] name: Name of the rule
"""
pulumi.set(__self__, "actions", actions)
pulumi.set(__self__, "order", order)
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def actions(self) -> pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCacheExpirationActionArgs', 'DeliveryRuleCacheKeyQueryStringActionArgs', 'DeliveryRuleRequestHeaderActionArgs', 'DeliveryRuleResponseHeaderActionArgs', 'UrlRedirectActionArgs', 'UrlRewriteActionArgs']]]]:
"""
A list of actions that are executed when all the conditions of a rule are satisfied.
"""
return pulumi.get(self, "actions")
@actions.setter
def actions(self, value: pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCacheExpirationActionArgs', 'DeliveryRuleCacheKeyQueryStringActionArgs', 'DeliveryRuleRequestHeaderActionArgs', 'DeliveryRuleResponseHeaderActionArgs', 'UrlRedirectActionArgs', 'UrlRewriteActionArgs']]]]):
pulumi.set(self, "actions", value)
@property
@pulumi.getter
def order(self) -> pulumi.Input[int]:
"""
The order in which the rules are applied for the endpoint. Possible values {0,1,2,3,………}. A rule with a lesser order will be applied before a rule with a greater order. Rule with order 0 is a special rule. It does not require any condition and actions listed in it will always be applied.
"""
return pulumi.get(self, "order")
@order.setter
def order(self, value: pulumi.Input[int]):
pulumi.set(self, "order", value)
@property
@pulumi.getter
def conditions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCookiesConditionArgs', 'DeliveryRuleHttpVersionConditionArgs', 'DeliveryRuleIsDeviceConditionArgs', 'DeliveryRulePostArgsConditionArgs', 'DeliveryRuleQueryStringConditionArgs', 'DeliveryRuleRemoteAddressConditionArgs', 'DeliveryRuleRequestBodyConditionArgs', 'DeliveryRuleRequestHeaderConditionArgs', 'DeliveryRuleRequestMethodConditionArgs', 'DeliveryRuleRequestSchemeConditionArgs', 'DeliveryRuleRequestUriConditionArgs', 'DeliveryRuleUrlFileExtensionConditionArgs', 'DeliveryRuleUrlFileNameConditionArgs', 'DeliveryRuleUrlPathConditionArgs']]]]]:
"""
A list of conditions that must be matched for the actions to be executed
"""
return pulumi.get(self, "conditions")
@conditions.setter
def conditions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union['DeliveryRuleCookiesConditionArgs', 'DeliveryRuleHttpVersionConditionArgs', 'DeliveryRuleIsDeviceConditionArgs', 'DeliveryRulePostArgsConditionArgs', 'DeliveryRuleQueryStringConditionArgs', 'DeliveryRuleRemoteAddressConditionArgs', 'DeliveryRuleRequestBodyConditionArgs', 'DeliveryRuleRequestHeaderConditionArgs', 'DeliveryRuleRequestMethodConditionArgs', 'DeliveryRuleRequestSchemeConditionArgs', 'DeliveryRuleRequestUriConditionArgs', 'DeliveryRuleUrlFileExtensionConditionArgs', 'DeliveryRuleUrlFileNameConditionArgs', 'DeliveryRuleUrlPathConditionArgs']]]]]):
pulumi.set(self, "conditions", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the rule
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
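The `order` docstring above says rules are applied in ascending order and that an order-0 rule is special: it needs no conditions and always applies. A toy model of that evaluation order (hypothetical helper, independent of the Pulumi input types) can make the semantics concrete:

```python
def applied_rules(rules, matched_names):
    """Toy model of delivery-rule ordering.

    rules: list of (name, order) pairs; matched_names: set of rule names
    whose conditions matched the request. Lower order runs first, and an
    order-0 rule applies unconditionally.
    """
    ordered = sorted(rules, key=lambda rule: rule[1])
    return [name for name, order in ordered
            if order == 0 or name in matched_names]
```

For example, with rules `[('compress', 2), ('global', 0), ('redirect', 1)]` and only `redirect` matching, the applied sequence is `global` then `redirect`.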
@pulumi.input_type
class DeliveryRuleCacheExpirationActionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['CacheExpirationActionParametersArgs']):
"""
Defines the cache expiration action for the delivery rule.
:param pulumi.Input[str] name: The name of the action for the delivery rule.
Expected value is 'CacheExpiration'.
:param pulumi.Input['CacheExpirationActionParametersArgs'] parameters: Defines the parameters for the action.
"""
pulumi.set(__self__, "name", 'CacheExpiration')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the action for the delivery rule.
Expected value is 'CacheExpiration'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['CacheExpirationActionParametersArgs']:
"""
Defines the parameters for the action.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['CacheExpirationActionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleCacheKeyQueryStringActionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['CacheKeyQueryStringActionParametersArgs']):
"""
Defines the cache-key query string action for the delivery rule.
:param pulumi.Input[str] name: The name of the action for the delivery rule.
Expected value is 'CacheKeyQueryString'.
:param pulumi.Input['CacheKeyQueryStringActionParametersArgs'] parameters: Defines the parameters for the action.
"""
pulumi.set(__self__, "name", 'CacheKeyQueryString')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the action for the delivery rule.
Expected value is 'CacheKeyQueryString'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['CacheKeyQueryStringActionParametersArgs']:
"""
Defines the parameters for the action.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['CacheKeyQueryStringActionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleCookiesConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['CookiesMatchConditionParametersArgs']):
"""
Defines the Cookies condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'Cookies'.
:param pulumi.Input['CookiesMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'Cookies')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'Cookies'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['CookiesMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['CookiesMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleHttpVersionConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['HttpVersionMatchConditionParametersArgs']):
"""
Defines the HttpVersion condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'HttpVersion'.
:param pulumi.Input['HttpVersionMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'HttpVersion')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'HttpVersion'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['HttpVersionMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['HttpVersionMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleIsDeviceConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['IsDeviceMatchConditionParametersArgs']):
"""
Defines the IsDevice condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'IsDevice'.
:param pulumi.Input['IsDeviceMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'IsDevice')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'IsDevice'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['IsDeviceMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['IsDeviceMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRulePostArgsConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['PostArgsMatchConditionParametersArgs']):
"""
Defines the PostArgs condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'PostArgs'.
:param pulumi.Input['PostArgsMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'PostArgs')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'PostArgs'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['PostArgsMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['PostArgsMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleQueryStringConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['QueryStringMatchConditionParametersArgs']):
"""
Defines the QueryString condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'QueryString'.
:param pulumi.Input['QueryStringMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'QueryString')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'QueryString'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['QueryStringMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['QueryStringMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRemoteAddressConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RemoteAddressMatchConditionParametersArgs']):
"""
Defines the RemoteAddress condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RemoteAddress'.
:param pulumi.Input['RemoteAddressMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RemoteAddress')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RemoteAddress'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RemoteAddressMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RemoteAddressMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestBodyConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RequestBodyMatchConditionParametersArgs']):
"""
Defines the RequestBody condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RequestBody'.
:param pulumi.Input['RequestBodyMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RequestBody')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RequestBody'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RequestBodyMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RequestBodyMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestHeaderActionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['HeaderActionParametersArgs']):
"""
Defines the request header action for the delivery rule.
:param pulumi.Input[str] name: The name of the action for the delivery rule.
Expected value is 'ModifyRequestHeader'.
:param pulumi.Input['HeaderActionParametersArgs'] parameters: Defines the parameters for the action.
"""
pulumi.set(__self__, "name", 'ModifyRequestHeader')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the action for the delivery rule.
Expected value is 'ModifyRequestHeader'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['HeaderActionParametersArgs']:
"""
Defines the parameters for the action.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['HeaderActionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestHeaderConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RequestHeaderMatchConditionParametersArgs']):
"""
Defines the RequestHeader condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RequestHeader'.
:param pulumi.Input['RequestHeaderMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RequestHeader')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RequestHeader'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RequestHeaderMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RequestHeaderMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestMethodConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RequestMethodMatchConditionParametersArgs']):
"""
Defines the RequestMethod condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RequestMethod'.
:param pulumi.Input['RequestMethodMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RequestMethod')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RequestMethod'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RequestMethodMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RequestMethodMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestSchemeConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RequestSchemeMatchConditionParametersArgs']):
"""
Defines the RequestScheme condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RequestScheme'.
:param pulumi.Input['RequestSchemeMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RequestScheme')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RequestScheme'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RequestSchemeMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RequestSchemeMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleRequestUriConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['RequestUriMatchConditionParametersArgs']):
"""
Defines the RequestUri condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'RequestUri'.
:param pulumi.Input['RequestUriMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'RequestUri')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'RequestUri'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['RequestUriMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['RequestUriMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleResponseHeaderActionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['HeaderActionParametersArgs']):
"""
Defines the response header action for the delivery rule.
:param pulumi.Input[str] name: The name of the action for the delivery rule.
Expected value is 'ModifyResponseHeader'.
:param pulumi.Input['HeaderActionParametersArgs'] parameters: Defines the parameters for the action.
"""
pulumi.set(__self__, "name", 'ModifyResponseHeader')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the action for the delivery rule.
Expected value is 'ModifyResponseHeader'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['HeaderActionParametersArgs']:
"""
Defines the parameters for the action.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['HeaderActionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleUrlFileExtensionConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['UrlFileExtensionMatchConditionParametersArgs']):
"""
Defines the UrlFileExtension condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'UrlFileExtension'.
:param pulumi.Input['UrlFileExtensionMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'UrlFileExtension')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'UrlFileExtension'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['UrlFileExtensionMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['UrlFileExtensionMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleUrlFileNameConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['UrlFileNameMatchConditionParametersArgs']):
"""
Defines the UrlFileName condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'UrlFileName'.
:param pulumi.Input['UrlFileNameMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'UrlFileName')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'UrlFileName'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['UrlFileNameMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['UrlFileNameMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DeliveryRuleUrlPathConditionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
parameters: pulumi.Input['UrlPathMatchConditionParametersArgs']):
"""
Defines the UrlPath condition for the delivery rule.
:param pulumi.Input[str] name: The name of the condition for the delivery rule.
Expected value is 'UrlPath'.
:param pulumi.Input['UrlPathMatchConditionParametersArgs'] parameters: Defines the parameters for the condition.
"""
pulumi.set(__self__, "name", 'UrlPath')
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the condition for the delivery rule.
Expected value is 'UrlPath'.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def parameters(self) -> pulumi.Input['UrlPathMatchConditionParametersArgs']:
"""
Defines the parameters for the condition.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: pulumi.Input['UrlPathMatchConditionParametersArgs']):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class EndpointPropertiesUpdateParametersDeliveryPolicyArgs:
def __init__(__self__, *,
rules: pulumi.Input[Sequence[pulumi.Input['DeliveryRuleArgs']]],
description: Optional[pulumi.Input[str]] = None):
"""
A policy that specifies the delivery rules to be used for an endpoint.
:param pulumi.Input[Sequence[pulumi.Input['DeliveryRuleArgs']]] rules: A list of the delivery rules.
:param pulumi.Input[str] description: User-friendly description of the policy.
"""
pulumi.set(__self__, "rules", rules)
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter
def rules(self) -> pulumi.Input[Sequence[pulumi.Input['DeliveryRuleArgs']]]:
"""
A list of the delivery rules.
"""
return pulumi.get(self, "rules")
@rules.setter
def rules(self, value: pulumi.Input[Sequence[pulumi.Input['DeliveryRuleArgs']]]):
pulumi.set(self, "rules", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
User-friendly description of the policy.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@pulumi.input_type
class GeoFilterArgs:
def __init__(__self__, *,
action: pulumi.Input['GeoFilterActions'],
country_codes: pulumi.Input[Sequence[pulumi.Input[str]]],
relative_path: pulumi.Input[str]):
"""
Rules defining the user's geo access within a CDN endpoint.
:param pulumi.Input['GeoFilterActions'] action: Action of the geo filter, i.e. allow or block access.
:param pulumi.Input[Sequence[pulumi.Input[str]]] country_codes: Two-letter country codes defining user country access in a geo filter, e.g. AU, MX, US.
:param pulumi.Input[str] relative_path: Relative path applicable to geo filter. (e.g. '/mypictures', '/mypicture/kitty.jpg', etc.)
"""
pulumi.set(__self__, "action", action)
pulumi.set(__self__, "country_codes", country_codes)
pulumi.set(__self__, "relative_path", relative_path)
@property
@pulumi.getter
def action(self) -> pulumi.Input['GeoFilterActions']:
"""
Action of the geo filter, i.e. allow or block access.
"""
return pulumi.get(self, "action")
@action.setter
def action(self, value: pulumi.Input['GeoFilterActions']):
pulumi.set(self, "action", value)
@property
@pulumi.getter(name="countryCodes")
def country_codes(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
Two-letter country codes defining user country access in a geo filter, e.g. AU, MX, US.
"""
return pulumi.get(self, "country_codes")
@country_codes.setter
def country_codes(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "country_codes", value)
@property
@pulumi.getter(name="relativePath")
def relative_path(self) -> pulumi.Input[str]:
"""
Relative path applicable to geo filter. (e.g. '/mypictures', '/mypicture/kitty.jpg', etc.)
"""
return pulumi.get(self, "relative_path")
@relative_path.setter
def relative_path(self, value: pulumi.Input[str]):
pulumi.set(self, "relative_path", value)
@pulumi.input_type
class HeaderActionParametersArgs:
def __init__(__self__, *,
header_action: pulumi.Input[Union[str, 'HeaderAction']],
header_name: pulumi.Input[str],
odata_type: pulumi.Input[str],
value: Optional[pulumi.Input[str]] = None):
"""
Defines the parameters for the request header action.
:param pulumi.Input[Union[str, 'HeaderAction']] header_action: Action to perform
:param pulumi.Input[str] header_name: Name of the header to modify
:param pulumi.Input[str] value: Value for the specified action
"""
pulumi.set(__self__, "header_action", header_action)
pulumi.set(__self__, "header_name", header_name)
pulumi.set(__self__, "odata_type", odata_type)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="headerAction")
def header_action(self) -> pulumi.Input[Union[str, 'HeaderAction']]:
"""
Action to perform
"""
return pulumi.get(self, "header_action")
@header_action.setter
def header_action(self, value: pulumi.Input[Union[str, 'HeaderAction']]):
pulumi.set(self, "header_action", value)
@property
@pulumi.getter(name="headerName")
def header_name(self) -> pulumi.Input[str]:
"""
Name of the header to modify
"""
return pulumi.get(self, "header_name")
@header_name.setter
def header_name(self, value: pulumi.Input[str]):
pulumi.set(self, "header_name", value)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value for the specified action
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@pulumi.input_type
class HttpVersionMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'HttpVersionOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None):
"""
Defines the parameters for HttpVersion match conditions
:param pulumi.Input[Union[str, 'HttpVersionOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'HttpVersionOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'HttpVersionOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@pulumi.input_type
class IsDeviceMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'IsDeviceOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for IsDevice match conditions
:param pulumi.Input[Union[str, 'IsDeviceOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'IsDeviceOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'IsDeviceOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class PostArgsMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'PostArgsOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
selector: Optional[pulumi.Input[str]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for PostArgs match conditions
:param pulumi.Input[Union[str, 'PostArgsOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[str] selector: Name of PostArg to be matched
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if selector is not None:
pulumi.set(__self__, "selector", selector)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'PostArgsOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'PostArgsOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def selector(self) -> Optional[pulumi.Input[str]]:
"""
Name of PostArg to be matched
"""
return pulumi.get(self, "selector")
@selector.setter
def selector(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "selector", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class QueryStringMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'QueryStringOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for QueryString match conditions
:param pulumi.Input[Union[str, 'QueryStringOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'QueryStringOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'QueryStringOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class RemoteAddressMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'RemoteAddressOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for RemoteAddress match conditions
:param pulumi.Input[Union[str, 'RemoteAddressOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: Match values to match against. The operator will apply to each value in here with OR semantics. If any of them match the variable with the given operator this match condition is considered a match.
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'RemoteAddressOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'RemoteAddressOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Match values to match against. The operator will apply to each value in here with OR semantics. If any of them match the variable with the given operator this match condition is considered a match.
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class RequestBodyMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'RequestBodyOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for RequestBody match conditions
:param pulumi.Input[Union[str, 'RequestBodyOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'RequestBodyOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'RequestBodyOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is a negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)
@pulumi.input_type
class RequestHeaderMatchConditionParametersArgs:
def __init__(__self__, *,
odata_type: pulumi.Input[str],
operator: pulumi.Input[Union[str, 'RequestHeaderOperator']],
match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
negate_condition: Optional[pulumi.Input[bool]] = None,
selector: Optional[pulumi.Input[str]] = None,
transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
"""
Defines the parameters for RequestHeader match conditions
:param pulumi.Input[Union[str, 'RequestHeaderOperator']] operator: Describes operator to be matched
:param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
:param pulumi.Input[bool] negate_condition: Describes if this is a negate condition or not
:param pulumi.Input[str] selector: Name of Header to be matched
:param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
"""
pulumi.set(__self__, "odata_type", odata_type)
pulumi.set(__self__, "operator", operator)
if match_values is not None:
pulumi.set(__self__, "match_values", match_values)
if negate_condition is not None:
pulumi.set(__self__, "negate_condition", negate_condition)
if selector is not None:
pulumi.set(__self__, "selector", selector)
if transforms is not None:
pulumi.set(__self__, "transforms", transforms)
@property
@pulumi.getter(name="odataType")
def odata_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "odata_type")
@odata_type.setter
def odata_type(self, value: pulumi.Input[str]):
pulumi.set(self, "odata_type", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[Union[str, 'RequestHeaderOperator']]:
"""
Describes operator to be matched
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[Union[str, 'RequestHeaderOperator']]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="matchValues")
def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The match value for the condition of the delivery rule
"""
return pulumi.get(self, "match_values")
@match_values.setter
def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "match_values", value)
@property
@pulumi.getter(name="negateCondition")
def negate_condition(self) -> Optional[pulumi.Input[bool]]:
"""
Describes if this is negate condition or not
"""
return pulumi.get(self, "negate_condition")
@negate_condition.setter
def negate_condition(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "negate_condition", value)
@property
@pulumi.getter
def selector(self) -> Optional[pulumi.Input[str]]:
"""
Name of Header to be matched
"""
return pulumi.get(self, "selector")
@selector.setter
def selector(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "selector", value)
@property
@pulumi.getter
def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
"""
List of transforms
"""
return pulumi.get(self, "transforms")
@transforms.setter
def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
pulumi.set(self, "transforms", value)

@pulumi.input_type
class RequestMethodMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[Union[str, 'RequestMethodOperator']],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None):
        """
        Defines the parameters for RequestMethod match conditions
        :param pulumi.Input[Union[str, 'RequestMethodOperator']] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[Union[str, 'RequestMethodOperator']]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[Union[str, 'RequestMethodOperator']]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

@pulumi.input_type
class RequestSchemeMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[str],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None):
        """
        Defines the parameters for RequestScheme match conditions
        :param pulumi.Input[str] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[str]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[str]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

@pulumi.input_type
class RequestUriMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[Union[str, 'RequestUriOperator']],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None,
                 transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
        """
        Defines the parameters for RequestUri match conditions
        :param pulumi.Input[Union[str, 'RequestUriOperator']] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        :param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)
        if transforms is not None:
            pulumi.set(__self__, "transforms", transforms)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[Union[str, 'RequestUriOperator']]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[Union[str, 'RequestUriOperator']]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

    @property
    @pulumi.getter
    def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
        """
        List of transforms
        """
        return pulumi.get(self, "transforms")

    @transforms.setter
    def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
        pulumi.set(self, "transforms", value)

@pulumi.input_type
class SkuArgs:
    def __init__(__self__, *,
                 name: Optional[pulumi.Input[Union[str, 'SkuName']]] = None):
        """
        The pricing tier (defines a CDN provider, feature list and rate) of the CDN profile.
        :param pulumi.Input[Union[str, 'SkuName']] name: Name of the pricing tier.
        """
        if name is not None:
            pulumi.set(__self__, "name", name)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[Union[str, 'SkuName']]]:
        """
        Name of the pricing tier.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[Union[str, 'SkuName']]]):
        pulumi.set(self, "name", value)

@pulumi.input_type
class UrlFileExtensionMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[Union[str, 'UrlFileExtensionOperator']],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None,
                 transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
        """
        Defines the parameters for UrlFileExtension match conditions
        :param pulumi.Input[Union[str, 'UrlFileExtensionOperator']] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        :param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)
        if transforms is not None:
            pulumi.set(__self__, "transforms", transforms)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[Union[str, 'UrlFileExtensionOperator']]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[Union[str, 'UrlFileExtensionOperator']]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

    @property
    @pulumi.getter
    def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
        """
        List of transforms
        """
        return pulumi.get(self, "transforms")

    @transforms.setter
    def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
        pulumi.set(self, "transforms", value)

@pulumi.input_type
class UrlFileNameMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[Union[str, 'UrlFileNameOperator']],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None,
                 transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
        """
        Defines the parameters for UrlFilename match conditions
        :param pulumi.Input[Union[str, 'UrlFileNameOperator']] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        :param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)
        if transforms is not None:
            pulumi.set(__self__, "transforms", transforms)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[Union[str, 'UrlFileNameOperator']]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[Union[str, 'UrlFileNameOperator']]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

    @property
    @pulumi.getter
    def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
        """
        List of transforms
        """
        return pulumi.get(self, "transforms")

    @transforms.setter
    def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
        pulumi.set(self, "transforms", value)

@pulumi.input_type
class UrlPathMatchConditionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 operator: pulumi.Input[Union[str, 'UrlPathOperator']],
                 match_values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 negate_condition: Optional[pulumi.Input[bool]] = None,
                 transforms: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]] = None):
        """
        Defines the parameters for UrlPath match conditions
        :param pulumi.Input[Union[str, 'UrlPathOperator']] operator: Describes operator to be matched
        :param pulumi.Input[Sequence[pulumi.Input[str]]] match_values: The match value for the condition of the delivery rule
        :param pulumi.Input[bool] negate_condition: Describes if this is negate condition or not
        :param pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]] transforms: List of transforms
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "operator", operator)
        if match_values is not None:
            pulumi.set(__self__, "match_values", match_values)
        if negate_condition is not None:
            pulumi.set(__self__, "negate_condition", negate_condition)
        if transforms is not None:
            pulumi.set(__self__, "transforms", transforms)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter
    def operator(self) -> pulumi.Input[Union[str, 'UrlPathOperator']]:
        """
        Describes operator to be matched
        """
        return pulumi.get(self, "operator")

    @operator.setter
    def operator(self, value: pulumi.Input[Union[str, 'UrlPathOperator']]):
        pulumi.set(self, "operator", value)

    @property
    @pulumi.getter(name="matchValues")
    def match_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        The match value for the condition of the delivery rule
        """
        return pulumi.get(self, "match_values")

    @match_values.setter
    def match_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "match_values", value)

    @property
    @pulumi.getter(name="negateCondition")
    def negate_condition(self) -> Optional[pulumi.Input[bool]]:
        """
        Describes if this is negate condition or not
        """
        return pulumi.get(self, "negate_condition")

    @negate_condition.setter
    def negate_condition(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "negate_condition", value)

    @property
    @pulumi.getter
    def transforms(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]:
        """
        List of transforms
        """
        return pulumi.get(self, "transforms")

    @transforms.setter
    def transforms(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[Union[str, 'Transform']]]]]):
        pulumi.set(self, "transforms", value)

@pulumi.input_type
class UrlRedirectActionArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 parameters: pulumi.Input['UrlRedirectActionParametersArgs']):
        """
        Defines the url redirect action for the delivery rule.
        :param pulumi.Input[str] name: The name of the action for the delivery rule.
               Expected value is 'UrlRedirect'.
        :param pulumi.Input['UrlRedirectActionParametersArgs'] parameters: Defines the parameters for the action.
        """
        pulumi.set(__self__, "name", 'UrlRedirect')
        pulumi.set(__self__, "parameters", parameters)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        The name of the action for the delivery rule.
        Expected value is 'UrlRedirect'.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def parameters(self) -> pulumi.Input['UrlRedirectActionParametersArgs']:
        """
        Defines the parameters for the action.
        """
        return pulumi.get(self, "parameters")

    @parameters.setter
    def parameters(self, value: pulumi.Input['UrlRedirectActionParametersArgs']):
        pulumi.set(self, "parameters", value)

@pulumi.input_type
class UrlRedirectActionParametersArgs:
    def __init__(__self__, *,
                 odata_type: pulumi.Input[str],
                 redirect_type: pulumi.Input[Union[str, 'RedirectType']],
                 custom_fragment: Optional[pulumi.Input[str]] = None,
                 custom_hostname: Optional[pulumi.Input[str]] = None,
                 custom_path: Optional[pulumi.Input[str]] = None,
                 custom_query_string: Optional[pulumi.Input[str]] = None,
                 destination_protocol: Optional[pulumi.Input[Union[str, 'DestinationProtocol']]] = None):
        """
        Defines the parameters for the url redirect action.
        :param pulumi.Input[Union[str, 'RedirectType']] redirect_type: The redirect type the rule will use when redirecting traffic.
        :param pulumi.Input[str] custom_fragment: Fragment to add to the redirect URL. Fragment is the part of the URL that comes after #. Do not include the #.
        :param pulumi.Input[str] custom_hostname: Host to redirect. Leave empty to use the incoming host as the destination host.
        :param pulumi.Input[str] custom_path: The full path to redirect. Path cannot be empty and must start with /. Leave empty to use the incoming path as destination path.
        :param pulumi.Input[str] custom_query_string: The set of query strings to be placed in the redirect URL. Setting this value would replace any existing query string; leave empty to preserve the incoming query string. Query string must be in <key>=<value> format. ? and & will be added automatically so do not include them.
        :param pulumi.Input[Union[str, 'DestinationProtocol']] destination_protocol: Protocol to use for the redirect. The default value is MatchRequest
        """
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "redirect_type", redirect_type)
        if custom_fragment is not None:
            pulumi.set(__self__, "custom_fragment", custom_fragment)
        if custom_hostname is not None:
            pulumi.set(__self__, "custom_hostname", custom_hostname)
        if custom_path is not None:
            pulumi.set(__self__, "custom_path", custom_path)
        if custom_query_string is not None:
            pulumi.set(__self__, "custom_query_string", custom_query_string)
        if destination_protocol is not None:
            pulumi.set(__self__, "destination_protocol", destination_protocol)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter(name="redirectType")
    def redirect_type(self) -> pulumi.Input[Union[str, 'RedirectType']]:
        """
        The redirect type the rule will use when redirecting traffic.
        """
        return pulumi.get(self, "redirect_type")

    @redirect_type.setter
    def redirect_type(self, value: pulumi.Input[Union[str, 'RedirectType']]):
        pulumi.set(self, "redirect_type", value)

    @property
    @pulumi.getter(name="customFragment")
    def custom_fragment(self) -> Optional[pulumi.Input[str]]:
        """
        Fragment to add to the redirect URL. Fragment is the part of the URL that comes after #. Do not include the #.
        """
        return pulumi.get(self, "custom_fragment")

    @custom_fragment.setter
    def custom_fragment(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "custom_fragment", value)

    @property
    @pulumi.getter(name="customHostname")
    def custom_hostname(self) -> Optional[pulumi.Input[str]]:
        """
        Host to redirect. Leave empty to use the incoming host as the destination host.
        """
        return pulumi.get(self, "custom_hostname")

    @custom_hostname.setter
    def custom_hostname(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "custom_hostname", value)

    @property
    @pulumi.getter(name="customPath")
    def custom_path(self) -> Optional[pulumi.Input[str]]:
        """
        The full path to redirect. Path cannot be empty and must start with /. Leave empty to use the incoming path as destination path.
        """
        return pulumi.get(self, "custom_path")

    @custom_path.setter
    def custom_path(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "custom_path", value)

    @property
    @pulumi.getter(name="customQueryString")
    def custom_query_string(self) -> Optional[pulumi.Input[str]]:
        """
        The set of query strings to be placed in the redirect URL. Setting this value would replace any existing query string; leave empty to preserve the incoming query string. Query string must be in <key>=<value> format. ? and & will be added automatically so do not include them.
        """
        return pulumi.get(self, "custom_query_string")

    @custom_query_string.setter
    def custom_query_string(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "custom_query_string", value)

    @property
    @pulumi.getter(name="destinationProtocol")
    def destination_protocol(self) -> Optional[pulumi.Input[Union[str, 'DestinationProtocol']]]:
        """
        Protocol to use for the redirect. The default value is MatchRequest
        """
        return pulumi.get(self, "destination_protocol")

    @destination_protocol.setter
    def destination_protocol(self, value: Optional[pulumi.Input[Union[str, 'DestinationProtocol']]]):
        pulumi.set(self, "destination_protocol", value)

@pulumi.input_type
class UrlRewriteActionArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 parameters: pulumi.Input['UrlRewriteActionParametersArgs']):
        """
        Defines the url rewrite action for the delivery rule.
        :param pulumi.Input[str] name: The name of the action for the delivery rule.
               Expected value is 'UrlRewrite'.
        :param pulumi.Input['UrlRewriteActionParametersArgs'] parameters: Defines the parameters for the action.
        """
        pulumi.set(__self__, "name", 'UrlRewrite')
        pulumi.set(__self__, "parameters", parameters)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        The name of the action for the delivery rule.
        Expected value is 'UrlRewrite'.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def parameters(self) -> pulumi.Input['UrlRewriteActionParametersArgs']:
        """
        Defines the parameters for the action.
        """
        return pulumi.get(self, "parameters")

    @parameters.setter
    def parameters(self, value: pulumi.Input['UrlRewriteActionParametersArgs']):
        pulumi.set(self, "parameters", value)

@pulumi.input_type
class UrlRewriteActionParametersArgs:
    def __init__(__self__, *,
                 destination: pulumi.Input[str],
                 odata_type: pulumi.Input[str],
                 source_pattern: pulumi.Input[str],
                 preserve_unmatched_path: Optional[pulumi.Input[bool]] = None):
        """
        Defines the parameters for the url rewrite action.
        :param pulumi.Input[str] destination: Define the destination path to be used in the rewrite. This will overwrite the source pattern
        :param pulumi.Input[str] source_pattern: define a request URI pattern that identifies the type of requests that may be rewritten. Currently, source pattern uses a prefix-based match. To match all URL paths, use "/" as the source pattern value. To match only the root directory and re-write this path, use the origin path field
        :param pulumi.Input[bool] preserve_unmatched_path: If True, the remaining path after the source pattern will be appended to the new destination path.
        """
        pulumi.set(__self__, "destination", destination)
        pulumi.set(__self__, "odata_type", odata_type)
        pulumi.set(__self__, "source_pattern", source_pattern)
        if preserve_unmatched_path is not None:
            pulumi.set(__self__, "preserve_unmatched_path", preserve_unmatched_path)

    @property
    @pulumi.getter
    def destination(self) -> pulumi.Input[str]:
        """
        Define the destination path to be used in the rewrite. This will overwrite the source pattern
        """
        return pulumi.get(self, "destination")

    @destination.setter
    def destination(self, value: pulumi.Input[str]):
        pulumi.set(self, "destination", value)

    @property
    @pulumi.getter(name="odataType")
    def odata_type(self) -> pulumi.Input[str]:
        return pulumi.get(self, "odata_type")

    @odata_type.setter
    def odata_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "odata_type", value)

    @property
    @pulumi.getter(name="sourcePattern")
    def source_pattern(self) -> pulumi.Input[str]:
        """
        define a request URI pattern that identifies the type of requests that may be rewritten. Currently, source pattern uses a prefix-based match. To match all URL paths, use "/" as the source pattern value. To match only the root directory and re-write this path, use the origin path field
        """
        return pulumi.get(self, "source_pattern")

    @source_pattern.setter
    def source_pattern(self, value: pulumi.Input[str]):
        pulumi.set(self, "source_pattern", value)

    @property
    @pulumi.getter(name="preserveUnmatchedPath")
    def preserve_unmatched_path(self) -> Optional[pulumi.Input[bool]]:
        """
        If True, the remaining path after the source pattern will be appended to the new destination path.
        """
        return pulumi.get(self, "preserve_unmatched_path")

    @preserve_unmatched_path.setter
    def preserve_unmatched_path(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "preserve_unmatched_path", value)
###for AITKEN
#### /data/aqf/barryb/anaconda2/bin/python
###for WCOSS
### /naqfc/noscrub/Barry.Baker/anaconda2/bin/python
import f90nml
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from numpy import unique,sort
from datetime import datetime, timedelta
import pandas as pd
from glob import glob
from verify_airnow import verify_airnow
#Read the Namelist
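# The run is driven by the Fortran namelist file read just below. A minimal
# sketch of the expected layout, inferred from the keys this script accesses
# (the values shown are illustrative assumptions, not shipped defaults):
#
#   &files
#     basename        = 'aqm'
#     gridcro         = '/path/to/GRIDCRO2D.ncf'
#     airnow_data_dir = '/path/to/airnow'
#     sim1      = '/path/to/sim1/'
#     sim1label = 'SIM1'
#     sim2      = 'none'
#     sim3      = 'none'
#     sim4      = 'none'
#   /
#   &interp
#     method              = 'nearest'
#     neighbors           = 1
#     radius_of_influence = 12000.
#   /
#   &domain
#     params  = 'all'
#     tseries = .true.
#   /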
nml = f90nml.read('comparesim_testbed.namelist')
base = nml['files']['basename']
gridcro = nml['files']['gridcro']
datapath = nml['files']['airnow_data_dir']
interp = nml['interp']['method']
neighbors = nml['interp']['neighbors']
radius = nml['interp']['radius_of_influence']
#airnow user and pass
usr = 'Barry.Baker'
p = 'p00pST!ck123'
date2days = datetime.now() - timedelta(days=2)
dateconc = date2days.strftime('%Y%m%d/aqm.t12z.aconc.ncf')
datemet = date2days.strftime('%Y%m%d/aqm.t12z.metcro2d.ncf')
#INTERP SIMULATIONS TO OBSERVATIONS
if nml['files']['sim1'].lower() != 'none':
    print 'Pairing Sim1...'
    print ' '
    if nml['files']['sim1'].lower()[-4:] == '.hdf':
        print '    Loading Paired Data: ', nml['files']['sim1']
        sim1 = verify_airnow()
        sim1.df = pd.read_hdf(nml['files']['sim1'])
    else:
        import monet as m
        print nml['files']['sim1'] + dateconc
        files = sort(glob(nml['files']['sim1'] + dateconc))
        metfiles = nml['files']['sim1'] + datemet
        print metfiles
        print files
        sim1 = m.vairnow(concpath=files, gridcro=gridcro, met2dpath=metfiles, datapath=datapath, interp=interp, neighbors=neighbors, radius=radius, user=usr, passw=p)
    print sim1.df.keys()
if nml['files']['sim2'].lower()!= 'none':
print ' '
print 'Pairing Sim2...'
if nml['files']['sim2'].lower()[-4:] =='.hdf':
print ' '
print ' Loading Paired Data: ', nml['files']['sim2']
sim2 = verify_airnow()
sim2.df = pd.read_hdf(nml['files']['sim2'])
else:
files = sort(glob(nml['files']['sim2']))
import monet as mm
print files
sim2 = mm.vairnow(concpath=files,gridcro=gridcro,datapath=datapath,interp=interp,neighbors=neighbors,radius=radius,user=usr,passw=p)
else:
sim2 = False
if nml['files']['sim3'].lower()!= 'none':
print ' '
print 'Pairing Sim3...'
print ' '
if nml['files']['sim3'].lower()[-4:] =='.hdf':
print ' Loading Paired Data: ', nml['files']['sim3']
sim3 = verify_airnow()
sim3.df = pd.read_hdf(nml['files']['sim3'])
else:
import monet as mmm
files = sort(glob(nml['files']['sim3']))
sim3 = mmm.vairnow(concpath=files,gridcro=gridcro,datapath=datapath,interp=interp,neighbors=neighbors,radius=radius,user=usr,passw=p)
else:
sim3 = False
if nml['files']['sim4'].lower()!= 'none':
print 'Pairing Sim4...'
print ' '
if nml['files']['sim4'].lower()[-4:] =='.hdf':
print ' Loading Paired Data: ', nml['files']['sim4']
sim4 = verify_airnow()
sim4.df = pd.read_hdf(nml['files']['sim4'])
else:
import monet
files = sort(glob(nml['files']['sim4']))
sim4 = monet.vairnow(concpath=files,gridcro=gridcro,datapath=datapath,interp=interp,neighbors=neighbors,radius=radius,user=usr,passw=p)
else:
sim4 = False
date = sim1.cmaq.dates[0]
ymd = date.strftime('%Y%m%d')
#DOMAIN PLOTTING
if nml['domain']['params'].lower() != 'none':
if nml['domain']['params'] == 'all':
params = sort(sim1.df.Species.unique())
else:
params = nml['domain']['params'].split(',')
for i in params:
print i,'domain'
if nml['domain']['tseries']:
try:
sim1.compare_param(param=i,timeseries=True,label=nml['files']['sim1label'])
if sim2 is not False:
sim2.compare_param(param=i,timeseries=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,timeseries=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,timeseries=True,fig=plt.figure(1),label=nml['files']['sim4label'])
code = '00000'
savename = ymd + '.5X.'+i.replace('.','P') +'.ts.'+code+'.png'
plt.savefig(savename,dpi=75)
print 'Saving: ' + savename
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['tseriesrmse']:
try:
sim1.compare_param(param=i,timeseries_rmse=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'timeseries_rmse.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'timeseries_rmse.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['tseriesbias']:
try:
sim1.compare_param(param=i,timeseries_mb=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'timeseries_mb.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'timeseries_mb.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['scatter']:
try:
sim1.compare_param(param=i,scatter=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,scatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,scatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,scatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'scatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['diffscatter']:
try:
sim1.compare_param(param=i,diffscatter=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'diffscatter.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'diffscatter.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['pdfs']:
try:
sim1.compare_param(param=i,pdfs=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,pdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,pdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,pdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'pdfs.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'pdfs.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['diffpdfs']:
try:
sim1.compare_param(param=i,diffpdfs=True,label=nml['files']['sim1label'],footer=nml['domain']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+'diffpdfs.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'diffpdfs.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['domain']['taylordiagram']:
try:
dia = sim1.compare_param(param=i,taylordiagram=True,label=nml['files']['sim1label'])
if sim2 is not False:
sim2.compare_param(param=i,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim2label'],dia=dia)
if sim3 is not False:
sim3.compare_param(param=i,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim3label'],dia=dia)
if sim4 is not False:
sim4.compare_param(param=i,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim4label'],dia=dia)
plt.savefig(base +'_'+i.replace('.','')+'_'+'taylor.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+'taylor.jpg'
plt.close('all')
except:
plt.close('all')
pass
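Every plotting section above selects species the same way: a namelist value of `'all'` expands to every available species, and anything else is split on commas. A self-contained sketch of that parsing (`sorted` stands in for `numpy.sort`):

```python
def parse_params(option, available):
    # 'all' selects every available species; otherwise split the comma list
    if option == 'all':
        return sorted(available)
    return option.split(',')

print(parse_params('OZONE,PM2.5', ['OZONE', 'PM2.5', 'NO2']))
```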
#EPA Regions
if (nml['epa_region']['params'].lower() != 'none') & (nml['epa_region']['epa_region'].lower() !='none'):
if nml['epa_region']['params'] == 'all':
params = sort(sim1.df.Species.unique())
else:
params = nml['epa_region']['params'].split(',')
if nml['epa_region']['epa_region'] =='all':
regions = sim1.df.EPA_region.dropna().unique()
else:
regions = nml['epa_region']['epa_region'].split(',')
for j in regions:
for i in params:
print i,j
if nml['epa_region']['tseries']:
try:
sim1.compare_param(param=i,region=j,timeseries=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim4label'])
r = j.strip('R')
code = r.zfill(5)
savename = ymd + '.5X.'+ i.replace('.','P') +'.ts.'+code+'.png'
print savename
plt.savefig(savename,dpi=75)
print 'Saving: ', savename
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['tseriesrmse']:
try:
sim1.compare_param(param=i,region=j,timeseries_rmse=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_rmse.jpg',dpi=75)
print 'Saving: ' + base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_rmse.jpg'
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['tseriesbias']:
try:
sim1.compare_param(param=i,region=j,timeseries_mb=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_mb.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['scatter']:
try:
sim1.compare_param(param=i,region=j,scatter=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'scatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['diffscatter']:
try:
sim1.compare_param(param=i,region=j,diffscatter=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffscatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['pdfs']:
try:
sim1.compare_param(param=i,region=j,pdfs=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'pdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['diffpdfs']:
try:
sim1.compare_param(param=i,region=j,diffpdfs=True,label=nml['files']['sim1label'],footer=nml['epa_region']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,region=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,region=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffpdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['epa_region']['taylordiagram']:
try:
dia = sim1.compare_param(param=i,region=j,taylordiagram=True,label=nml['files']['sim1label'])
if sim2 is not False:
sim2.compare_param(param=i,region=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim2label'],dia=dia)
if sim3 is not False:
sim3.compare_param(param=i,region=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim3label'],dia=dia)
if sim4 is not False:
sim4.compare_param(param=i,region=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim4label'],dia=dia)
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'taylor.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
#States
if (nml['state']['params'].lower() != 'none') & (nml['state']['state'].lower() !='none'):
plt.close('all')
if nml['state']['params'] == 'all':
params = sim1.df.Species.unique()
else:
params = nml['state']['params'].split(',')
if nml['state']['state'] =='all':
states = sim1.df.State_Name.unique()
else:
states = nml['state']['state'].split(',')
for j in states:
for i in params:
print i,j
if nml['state']['tseries']:
try:
sim1.compare_param(param=i,state=j,timeseries=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['tseriesrmse']:
try:
sim1.compare_param(param=i,state=j,timeseries_rmse=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_rmse.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['tseriesbias']:
try:
sim1.compare_param(param=i,state=j,timeseries_mb=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_mb.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['scatter']:
try:
sim1.compare_param(param=i,state=j,scatter=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'scatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['diffscatter']:
try:
sim1.compare_param(param=i,state=j,diffscatter=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffscatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['pdfs']:
try:
sim1.compare_param(param=i,state=j,pdfs=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'pdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['diffpdfs']:
try:
sim1.compare_param(param=i,state=j,diffpdfs=True,label=nml['files']['sim1label'],footer=nml['state']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'])
if sim3 is not False:
sim3.compare_param(param=i,state=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'])
if sim4 is not False:
sim4.compare_param(param=i,state=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffpdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['state']['taylordiagram']:
try:
dia = sim1.compare_param(param=i,state=j,taylordiagram=True,label=nml['files']['sim1label'])
if sim2 is not False:
sim2.compare_param(param=i,state=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim2label'],dia=dia)
if sim3 is not False:
sim3.compare_param(param=i,state=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim3label'],dia=dia)
if sim4 is not False:
sim4.compare_param(param=i,state=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim4label'],dia=dia)
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'taylor.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
#CITY
if (nml['city']['params'].lower() != 'none') & (nml['city']['city'].lower() !='none'):
if nml['city']['params'] == 'all':
params = sim1.df.Species.unique()
else:
params = nml['city']['params'].split(',')
if nml['city']['city'] =='all':
citys = sim1.df.MSA_Name.unique()
else:
citys = nml['city']['city'].split(',')
for j in citys:
for i in params:
print i,j
names = sim1.df.MSA_Name.dropna().values
codes = sim1.df.MSA_Code.dropna().values
names,index = unique(names,return_index=True)
codes = codes[index].astype('|S5')
for k,p in zip(names,codes):
if j.lower() in k.lower():
name = k
code = p
if nml['city']['tseries']:
try:
sim1.compare_param(param=i,city=j,timeseries=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,timeseries=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
savename = ymd + '.5X.'+ i.replace('.','P') +'.ts.'+code+'.png'
plt.savefig(savename,dpi=75)
print 'Saving: ', savename
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['tseriesrmse']:
try:
sim1.compare_param(param=i,city=j,timeseries_rmse=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,timeseries_rmse=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_rmse.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['tseriesbias']:
try:
sim1.compare_param(param=i,city=j,timeseries_mb=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,timeseries_mb=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'timeseries_mb.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['scatter']:
try:
sim1.compare_param(param=i,city=j,scatter=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,scatter=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'scatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['diffscatter']:
try:
sim1.compare_param(param=i,city=j,diffscatter=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,diffscatter=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffscatter.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['pdfs']:
try:
sim1.compare_param(param=i,city=j,pdfs=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,pdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'pdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['diffpdfs']:
try:
sim1.compare_param(param=i,city=j,diffpdfs=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim2label'],footer=nml['city']['footers'])
if sim3 is not False:
sim3.compare_param(param=i,city=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim3label'],footer=nml['city']['footers'])
if sim4 is not False:
sim4.compare_param(param=i,city=j,diffpdfs=True,fig=plt.figure(1),label=nml['files']['sim4label'],footer=nml['city']['footers'])
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'diffpdfs.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
if nml['city']['taylordiagram']:
try:
dia = sim1.compare_param(param=i,city=j,taylordiagram=True,label=nml['files']['sim1label'],footer=nml['city']['footers'])
if sim2 is not False:
sim2.compare_param(param=i,city=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim2label'],dia=dia)
if sim3 is not False:
sim3.compare_param(param=i,city=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim3label'],dia=dia)
if sim4 is not False:
sim4.compare_param(param=i,city=j,taylordiagram=True,fig=plt.figure(1),label=nml['files']['sim4label'],dia=dia)
plt.savefig(base +'_'+i.replace('.','')+'_'+j.replace(' ','')+'_'+'taylor.jpg',dpi=75)
plt.close('all')
except:
plt.close('all')
pass
def make_tseries_name(sim,var,city='',region=''):
date = sim.cmaq.dates[0]
print date
ymd = date.strftime('%Y.%m.%d')
print var,city,region
if city != '':
names = sim.df.MSA_Name.dropna().values
codes = sim.df.MSA_Code.dropna().values
names,index = unique(names,return_index=True)
codes = codes[index].astype('|S5')
for i,j in zip(names,codes):
if city.upper() in i.upper():
name = i
code = j
elif region != '':
r = region.strip('R')
code = r.zfill(5)
elif (region == '') & (city == ''):
code = '00000'
savename = ymd + '.5X.'+var +'.ts.'+code+'.png'
return savename
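The region branch of `make_tseries_name` builds the five-character code by stripping the `'R'` prefix and zero-padding, the same logic used in the EPA-region loop earlier. A self-contained sketch (region labels like `'R3'` are assumed, matching the loop over `regions`):

```python
def region_code(region):
    # 'R3' -> '3' -> '00003', matching the strip('R') + zfill(5) logic above
    return region.strip('R').zfill(5)

print(region_code('R3'))   # prints 00003
print(region_code('R10'))  # prints 00010
```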
| 56.684049 | 159 | 0.523973 | 4,383 | 36,958 | 4.34611 | 0.043121 | 0.064255 | 0.073915 | 0.120006 | 0.898997 | 0.871857 | 0.858733 | 0.827707 | 0.793637 | 0.756733 | 0 | 0.023602 | 0.300693 | 36,958 | 651 | 160 | 56.771121 | 0.713446 | 0.006764 | 0 | 0.525723 | 0 | 0 | 0.130704 | 0.002181 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.057878 | 0.022508 | null | null | 0.059486 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
37e525c68ed043e8e459857cb4bbeb77ba8b230f | 5,153 | py | Python | anomaly_detection/utils/datasets.py | ninatu/anomaly_detection | 6fa35f3fd35976ce2b857801d288e17f454241b9 | [
"Apache-2.0"
] | 22 | 2020-10-21T07:59:33.000Z | 2022-03-18T08:07:49.000Z | anomaly_detection/utils/datasets.py | ninatu/anomaly_detection | 6fa35f3fd35976ce2b857801d288e17f454241b9 | [
"Apache-2.0"
] | 2 | 2020-10-26T05:19:39.000Z | 2021-09-21T18:16:02.000Z | anomaly_detection/utils/datasets.py | ninatu/anomaly_detection | 6fa35f3fd35976ce2b857801d288e17f454241b9 | [
"Apache-2.0"
] | 7 | 2020-11-19T12:32:29.000Z | 2022-03-06T21:02:30.000Z | import os
from torchvision import datasets
from torch.utils.data import Dataset
import numpy as np
import tqdm
from enum import Enum
import PIL.Image
__all__ = ['DatasetType', 'DATASETS', 'CIFAR10Dataset', 'Camelyon16Dataset', 'NIHDataset', 'SVHNDataset']
class CIFAR10Dataset(Dataset):
def __init__(self, root, split, transform=None, target_classes=None, target_indexes_path=None):
super().__init__()
if split == 'train':
self._dataset = datasets.CIFAR10(root=root, train=True, transform=transform, download=True)
else:
self._dataset = datasets.CIFAR10(root=root, train=False, transform=transform, download=True)
if (target_classes is not None) and (target_indexes_path is not None):
raise ValueError("You must specify either 'target_classes' either 'target_indexes_path',"
"but not both")
if target_classes is not None:
self._target_indexes = []
for index, label in enumerate(self._dataset.targets):
if label in target_classes:
self._target_indexes.append(index)
elif target_indexes_path is not None:
self._target_indexes = np.load(target_indexes_path)
else:
self._target_indexes = list(range(len(self._dataset)))
def __getitem__(self, index):
image, _ = self._dataset[self._target_indexes[index]]
return image
def __len__(self):
return len(self._target_indexes)
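`CIFAR10Dataset` above keeps only the indices whose label is in `target_classes`, which is how a single class becomes the "normal" data for anomaly detection. A self-contained sketch of that filtering over stand-in labels (no download involved):

```python
labels = [3, 1, 3, 0, 3]  # stand-in for dataset.targets
target_classes = [3]

# same loop as in __init__: keep indices whose label is a target class
target_indexes = [i for i, y in enumerate(labels) if y in target_classes]
# target_indexes == [0, 2, 4]
```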
class SVHNDataset(Dataset):
def __init__(self, root, split, transform=None, target_classes=None, target_indexes_path=None):
super().__init__()
self._dataset = datasets.SVHN(root=root, split=split, transform=transform, download=True)
if (target_classes is not None) and (target_indexes_path is not None):
raise ValueError("You must specify either 'target_classes' either 'target_indexes_path',"
"but not both")
if target_classes is not None:
self._target_indexes = []
for index, (_, label) in enumerate(self._dataset):
if label in target_classes:
self._target_indexes.append(index)
elif target_indexes_path is not None:
self._target_indexes = np.load(target_indexes_path)
else:
self._target_indexes = list(range(len(self._dataset)))
def __getitem__(self, index):
image, _ = self._dataset[self._target_indexes[index]]
return image
def __len__(self):
return len(self._target_indexes)
class Camelyon16Dataset(Dataset):
def __init__(self, image_root, split_root, split, transform=None, cache_data=False):
super().__init__()
self._image_root = image_root
self._transform = transform
split_info_path = os.path.join(split_root, split)
with open(split_info_path) as f_in:
self._image_filenames = [filename.strip() for filename in f_in.readlines()]
self._cached_images = {}
if cache_data:
self._cache_data = False  # keep cache lookups disabled while warming so __getitem__ loads from disk
print('Loading dataset ... ')
for index in tqdm.tqdm(range(len(self))):
self._cached_images[index] = self[index]
self._cache_data = cache_data
def __getitem__(self, index):
if self._cache_data:
return self._cached_images[index]
else:
image_path = os.path.join(self._image_root, self._image_filenames[index])
image = PIL.Image.open(image_path)
if self._transform is not None:
image = self._transform(image)
return image
def __len__(self):
return len(self._image_filenames)
class NIHDataset(Dataset):
def __init__(self, image_root, split_root, split, transform=None, cache_data=False):
super().__init__()
self._image_root = image_root
self._transform = transform
split_info_path = os.path.join(split_root, split)
with open(split_info_path) as f_in:
self._image_filenames = [filename.strip() for filename in f_in.readlines()]
self._cached_images = {}
if cache_data:
self._cache_data = False  # keep cache lookups disabled while warming so __getitem__ loads from disk
print('Loading dataset ... ')
for index in tqdm.tqdm(range(len(self))):
self._cached_images[index] = self[index]
self._cache_data = cache_data
def __getitem__(self, index):
if self._cache_data:
return self._cached_images[index]
else:
image_path = os.path.join(self._image_root, self._image_filenames[index])
image = PIL.Image.open(image_path)
if self._transform is not None:
image = self._transform(image)
return image
def __len__(self):
return len(self._image_filenames)
class DatasetType(Enum):
cifar10 = 'cifar10'
camelyon16 = 'camelyon16'
nih = 'nih'
svhn = 'svhn'
DATASETS = {
DatasetType.cifar10: CIFAR10Dataset,
DatasetType.camelyon16: Camelyon16Dataset,
DatasetType.nih: NIHDataset,
DatasetType.svhn: SVHNDataset,
}
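The enum-keyed `DATASETS` dict above lets a config file name a dataset by string and have it resolved to a class. A self-contained sketch of that dispatch pattern (stand-in classes are used so nothing is constructed from real data):

```python
from enum import Enum

class Kind(Enum):
    cifar10 = 'cifar10'
    svhn = 'svhn'

# stand-ins for the dataset classes in the registry
REGISTRY = {Kind.cifar10: list, Kind.svhn: tuple}

kind = Kind('svhn')          # parse the config string into the enum
factory = REGISTRY[kind]     # resolve the class
instance = factory([1, 2])   # construct it -> (1, 2)
```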
| 34.817568 | 104 | 0.638269 | 608 | 5,153 | 5.057566 | 0.138158 | 0.093008 | 0.066341 | 0.023415 | 0.826667 | 0.826667 | 0.826667 | 0.801301 | 0.801301 | 0.801301 | 0 | 0.006898 | 0.268581 | 5,153 | 147 | 105 | 35.054422 | 0.808968 | 0 | 0 | 0.75 | 0 | 0 | 0.055696 | 0.008539 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.060345 | 0.034483 | 0.327586 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5300ede824221ba01b007bed326e893d0664fa5f | 1,387 | py | Python | src/plyer_lach/facades/camera.py | locksmith47/turing-sim-kivy | f57de9d52494245c56f67dd7e63121434bb0553f | [
"MIT"
] | null | null | null | src/plyer_lach/facades/camera.py | locksmith47/turing-sim-kivy | f57de9d52494245c56f67dd7e63121434bb0553f | [
"MIT"
] | null | null | null | src/plyer_lach/facades/camera.py | locksmith47/turing-sim-kivy | f57de9d52494245c56f67dd7e63121434bb0553f | [
"MIT"
] | null | null | null | class Camera(object):
    '''Camera facade.
    '''

    def take_picture(self, filename, on_complete):
        '''Ask the OS to capture a picture, and store it at filename.

        When the capture is done, on_complete will be called with the
        filename as an argument. If the callback returns True, the filename
        will be unlinked.

        :param filename: Name of the image file
        :param on_complete: Callback that will be called when the operation
            is done

        :type filename: str
        :type on_complete: callable
        '''
        self._take_picture(filename=filename, on_complete=on_complete)

    def take_video(self, filename, on_complete):
        '''Ask the OS to capture a video, and store it at filename.

        When the capture is done, on_complete will be called with the
        filename as an argument. If the callback returns True, the filename
        will be unlinked.

        :param filename: Name of the video file
        :param on_complete: Callback that will be called when the operation
            is done

        :type filename: str
        :type on_complete: callable
        '''
        self._take_video(filename=filename, on_complete=on_complete)

    # private

    def _take_picture(self, **kwargs):
        raise NotImplementedError()

    def _take_video(self, **kwargs):
        raise NotImplementedError()
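A platform backend implements the facade by overriding the private hooks. A hypothetical in-memory backend sketching the contract documented above ("captures" by writing an empty file, then honours the unlink-on-True callback rule):

```python
import os


class Camera(object):
    '''Re-statement of the facade, for a self-contained example.'''

    def take_picture(self, filename, on_complete):
        self._take_picture(filename=filename, on_complete=on_complete)

    def _take_picture(self, **kwargs):
        raise NotImplementedError()


class FakeCamera(Camera):
    '''Hypothetical backend: no real camera, just the callback contract.'''

    def _take_picture(self, filename, on_complete):
        # "Capture": create an empty file at the requested path.
        with open(filename, 'wb'):
            pass
        # Per the facade docs: a True return from the callback means
        # the caller is done with the file, so unlink it.
        if on_complete(filename):
            os.unlink(filename)
```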
| 31.522727 | 78 | 0.651766 | 182 | 1,387 | 4.846154 | 0.28022 | 0.136054 | 0.081633 | 0.040816 | 0.784581 | 0.784581 | 0.702948 | 0.702948 | 0.702948 | 0.702948 | 0 | 0 | 0.284066 | 1,387 | 43 | 79 | 32.255814 | 0.888218 | 0.563807 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
5311a916adc84ea5c2d696f160bc7556e22c3a07 | 229 | py | Python | 000818CoursPyGusto/Coursera000818PyBasicsHSEw01v02_test_sep_err_20200507.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 000818CoursPyGusto/Coursera000818PyBasicsHSEw01v02_test_sep_err_20200507.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 000818CoursPyGusto/Coursera000818PyBasicsHSEw01v02_test_sep_err_20200507.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | print('1', '2', '3', sep=' + ')
print('=', 1 + 2 + 3)
print('1', '2', '3', sep=' + ', end='')
print('=', 1 + 2 + 3)
print('1', '2', '3', sep=' + ', end=' ')  # trailing space at the end; compare with the previous output
print('=', 1 + 2 + 3)
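The joining behaviour of `sep` can be reproduced with `str.join`, which is effectively what `print` does with its positional arguments:

```python
parts = ['1', '2', '3']
line = ' + '.join(parts)      # the same string that sep=' + ' makes print emit
print(line, '=', 1 + 2 + 3)   # -> 1 + 2 + 3 = 6
```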
| 25.444444 | 88 | 0.441048 | 36 | 229 | 2.805556 | 0.361111 | 0.356436 | 0.415842 | 0.475248 | 0.544554 | 0.435644 | 0.435644 | 0.435644 | 0.435644 | 0.435644 | 0 | 0.1 | 0.213974 | 229 | 8 | 89 | 28.625 | 0.461111 | 0.196507 | 0 | 0.833333 | 0 | 0 | 0.120879 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
532156db4a964d9aa621ad67ba99b36830eda8de | 24,689 | py | Python | tests.py | wueric/fastconv | 7b34f09eb83439241737e764b93e584d582ca917 | [
"MIT"
] | null | null | null | tests.py | wueric/fastconv | 7b34f09eb83439241737e764b93e584d582ca917 | [
"MIT"
] | null | null | null | tests.py | wueric/fastconv | 7b34f09eb83439241737e764b93e584d582ca917 | [
"MIT"
] | null | null | null | from fastconv import corr1d
import numpy as np
import os
TESTCASE_PATH = 'testcases'
MULTIDATA_MULTICHAN_FNAME = 'multidata_multichan.npy'
MULTICHAN_FILTERS_FNAME = 'multifilters.npy'
FILTER_PATH = os.path.join(TESTCASE_PATH, MULTICHAN_FILTERS_FNAME)
DATA_PATH = os.path.join(TESTCASE_PATH, MULTIDATA_MULTICHAN_FNAME)
FILTERS = np.load(FILTER_PATH)
DATA = np.load(DATA_PATH)
def test_corr1D_AA_double():
    test_data = DATA[0, 35, :].astype(np.float64)
    test_filter = FILTERS[0, 35, :].astype(np.float64)
    comparison = np.correlate(test_data, test_filter, mode='valid')  # type: np.ndarray
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)  # type: np.ndarray
    assert my_output.dtype == np.float64, 'data type should be np.float64'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_corr1D_AA_float():
    test_data = DATA[0, 35, :].astype(np.float32)
    test_filter = FILTERS[0, 35, :].astype(np.float32)
    comparison = np.correlate(test_data, test_filter, mode='valid')
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32, 'data type should be np.float32'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
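Every test in this file uses `np.correlate(..., mode='valid')` as the reference. In `'valid'` mode the filter slides only over positions where it fully overlaps the data, giving `len(data) - len(filter) + 1` outputs, and (unlike convolution) the filter is not flipped. A dependency-free sketch of that reference semantics:

```python
def correlate_valid(data, filt):
    """Cross-correlation in 'valid' mode: output has
    len(data) - len(filt) + 1 entries; the filter is not reversed."""
    n_out = len(data) - len(filt) + 1
    return [sum(d * f for d, f in zip(data[i:i + len(filt)], filt))
            for i in range(n_out)]

# correlate_valid([1, 2, 3, 4], [1, 1]) -> [3, 5, 7]
```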
def test_corr1D_AB_double():
    test_data = DATA[0, 35, 1000:2001].astype(np.float64)
    test_filter = FILTERS[0, 35, :].astype(np.float64)
    comparison = np.correlate(test_data, test_filter, mode='valid')  # type: np.ndarray
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)  # type: np.ndarray
    assert my_output.dtype == np.float64, 'data type should be np.float64'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_corr1D_AB_float():
    test_data = DATA[0, 35, 1000:2001].astype(np.float32)
    test_filter = FILTERS[0, 35, :].astype(np.float32)
    comparison = np.correlate(test_data, test_filter, mode='valid')
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32, 'data type should be np.float32'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_corr1D_AC_double():
    test_data = FILTERS[0, 35, :].astype(np.float64)
    test_filter = FILTERS[0, 35, :].astype(np.float64)
    comparison = np.correlate(test_data, test_filter, mode='valid')  # type: np.ndarray
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)  # type: np.ndarray
    assert my_output.dtype == np.float64, 'data type should be np.float64'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_corr1D_AC_float():
    test_data = FILTERS[0, 35, :].astype(np.float32)
    test_filter = FILTERS[0, 35, :].astype(np.float32)
    comparison = np.correlate(test_data, test_filter, mode='valid')
    my_output = corr1d.short_filter_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32, 'data type should be np.float32'
    assert comparison.shape == my_output.shape, 'shape is {0}, should be {1}'.format(my_output.shape,
                                                                                    comparison.shape)
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_single_filter_multidata_1D_A_double():
    test_data = DATA[0, ...].astype(np.float64)
    test_filter = FILTERS[0, 35, :].astype(np.float64)
    comparison = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[0] + 1),
                          dtype=np.float64)
    for i in range(test_data.shape[0]):
        comparison[i, ...] = np.correlate(test_data[i, :], test_filter)
    my_output = corr1d.single_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_filter_multidata_1D_A_float():
    test_data = DATA[0, ...].astype(np.float32)
    test_filter = FILTERS[0, 35, :].astype(np.float32)
    comparison = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[0] + 1),
                          dtype=np.float32)
    for i in range(test_data.shape[0]):
        comparison[i, ...] = np.correlate(test_data[i, :], test_filter)
    my_output = corr1d.single_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_filter_multidata_1D_B_double():
    test_data = DATA[0, :, :1001].astype(np.float64)
    test_filter = FILTERS[0, 35, :].astype(np.float64)
    comparison = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[0] + 1),
                          dtype=np.float64)
    for i in range(test_data.shape[0]):
        comparison[i, ...] = np.correlate(test_data[i, :], test_filter)
    my_output = corr1d.single_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_filter_multidata_1D_B_float():
    test_data = DATA[0, :, :1001].astype(np.float32)
    test_filter = FILTERS[0, 35, :].astype(np.float32)
    comparison = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[0] + 1),
                          dtype=np.float32)
    for i in range(test_data.shape[0]):
        comparison[i, ...] = np.correlate(test_data[i, :], test_filter)
    my_output = corr1d.single_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_single_data_multifilter_1D_A_double():
    test_data = DATA[0, 35, :].astype(np.float64)
    test_filter = FILTERS[0, :, :].astype(np.float64)
    comparison = np.zeros((test_filter.shape[0], test_data.shape[0] - test_filter.shape[1] + 1),
                          dtype=np.float64)
    for i in range(test_filter.shape[0]):
        comparison[i, ...] = np.correlate(test_data, test_filter[i, :])
    my_output = corr1d.multiple_filter_single_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_data_multifilter_1D_A_float():
    test_data = DATA[0, 35, :].astype(np.float32)
    test_filter = FILTERS[0, :, :].astype(np.float32)
    comparison = np.zeros((test_filter.shape[0], test_data.shape[0] - test_filter.shape[1] + 1),
                          dtype=np.float32)
    for i in range(test_filter.shape[0]):
        comparison[i, ...] = np.correlate(test_data, test_filter[i, :])
    my_output = corr1d.multiple_filter_single_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_data_multifilter_1D_B_double():
    test_data = DATA[0, 54, 1337:9999].astype(np.float64)
    test_filter = FILTERS[0, :, :].astype(np.float64)
    comparison = np.zeros((test_filter.shape[0], test_data.shape[0] - test_filter.shape[1] + 1),
                          dtype=np.float64)
    for i in range(test_filter.shape[0]):
        comparison[i, ...] = np.correlate(test_data, test_filter[i, :])
    my_output = corr1d.multiple_filter_single_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_data_multifilter_1D_B_float():
    test_data = DATA[0, 54, 1337:9999].astype(np.float32)
    test_filter = FILTERS[0, :, :].astype(np.float32)
    comparison = np.zeros((test_filter.shape[0], test_data.shape[0] - test_filter.shape[1] + 1),
                          dtype=np.float32)
    for i in range(test_filter.shape[0]):
        comparison[i, ...] = np.correlate(test_data, test_filter[i, :])
    my_output = corr1d.multiple_filter_single_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_multidata_multifilter_1D_A_double():
    test_data = DATA[0, 13:26, :].astype(np.float64)
    test_filter = FILTERS[0, 15:19, :].astype(np.float64)
    comparison = np.zeros((test_data.shape[0], test_filter.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                          dtype=np.float64)
    for j in range(test_data.shape[0]):
        for i in range(test_filter.shape[0]):
            comparison[j, i, :] = np.correlate(test_data[j, :], test_filter[i, :])
    my_output = corr1d.multiple_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_multidata_multifilter_1D_A_float():
    test_data = DATA[0, 13:26, :].astype(np.float32)
    test_filter = FILTERS[0, 15:19, :].astype(np.float32)
    comparison = np.zeros((test_data.shape[0], test_filter.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                          dtype=np.float32)
    for j in range(test_data.shape[0]):
        for i in range(test_filter.shape[0]):
            comparison[j, i, :] = np.correlate(test_data[j, :], test_filter[i, :])
    my_output = corr1d.multiple_filter_multiple_data_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_single_data_single_filter_accum_A_double():
    test_data = (DATA[0, :, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[0, :, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                      dtype=np.float64)
    for j in range(test_data.shape[0]):
        buffer[j, :] = np.correlate(test_data[j, :], test_filter[j, :])
    comparison = np.sum(buffer, axis=0)
    my_output = corr1d.multichan_accum_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_data_single_filter_accum_A_float():
    test_data = (DATA[0, :, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[0, :, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                      dtype=np.float32)
    for j in range(test_data.shape[0]):
        buffer[j, :] = np.correlate(test_data[j, :], test_filter[j, :])
    comparison = np.sum(buffer, axis=0)
    my_output = corr1d.multichan_accum_correlate1D(test_data, test_filter)
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_single_data_multifilter_accum_A_double():
    test_data = (DATA[0, :13, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[:, :13, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_filter.shape[0], test_data.shape[0], test_data.shape[1] - test_filter.shape[2] + 1),
                      dtype=np.float64)
    for k in range(test_filter.shape[0]):
        for j in range(test_data.shape[0]):
            buffer[k, j, :] = np.correlate(test_data[j, :], test_filter[k, j, :])
    comparison = np.sum(buffer, axis=1)
    my_output = corr1d.batch_filter_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)


def test_single_data_multifilter_accum_A_float():
    test_data = (DATA[0, :13, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[:, :13, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_filter.shape[0], test_data.shape[0], test_data.shape[1] - test_filter.shape[2] + 1),
                      dtype=np.float32)
    for k in range(test_filter.shape[0]):
        for j in range(test_data.shape[0]):
            buffer[k, j, :] = np.correlate(test_data[j, :], test_filter[k, j, :])
    comparison = np.sum(buffer, axis=1)
    my_output = corr1d.batch_filter_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-3, atol=1e-2)
def test_multidata_single_filter_accum_A_double():
    test_data = (DATA[:, :13, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[0, :13, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1], test_data.shape[2] - test_filter.shape[1] + 1),
                      dtype=np.float64)
    for k in range(test_data.shape[0]):
        for j in range(test_data.shape[1]):
            buffer[k, j, :] = np.correlate(test_data[k, j, :], test_filter[j, :])
    comparison = np.sum(buffer, axis=1)
    my_output = corr1d.batch_data_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-2, atol=1e-2)


def test_multidata_single_filter_accum_A_float():
    test_data = (DATA[:, :13, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[0, :13, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1], test_data.shape[2] - test_filter.shape[1] + 1),
                      dtype=np.float32)
    for k in range(test_data.shape[0]):
        for j in range(test_data.shape[1]):
            buffer[k, j, :] = np.correlate(test_data[k, j, :], test_filter[j, :])
    comparison = np.sum(buffer, axis=1)
    my_output = corr1d.batch_data_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-2, atol=1e-2)
def test_multidata_multifilter_accum_A_double():
    test_data = (DATA[:, :13, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[:, :13, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_filter.shape[0],
                       test_data.shape[1], test_data.shape[2] - test_filter.shape[2] + 1),
                      dtype=np.float64)
    for k in range(test_data.shape[0]):
        for l in range(test_filter.shape[0]):
            for j in range(test_data.shape[1]):
                buffer[k, l, j, :] = np.correlate(test_data[k, j, :], test_filter[l, j, :])
    comparison = np.sum(buffer, axis=2)
    my_output = corr1d.batch_data_batch_filter_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-2, atol=1e-2)


def test_multidata_multifilter_accum_A_float():
    test_data = (DATA[:, :13, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[:, :13, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_filter.shape[0],
                       test_data.shape[1], test_data.shape[2] - test_filter.shape[2] + 1),
                      dtype=np.float32)
    for k in range(test_data.shape[0]):
        for l in range(test_filter.shape[0]):
            for j in range(test_data.shape[1]):
                buffer[k, l, j, :] = np.correlate(test_data[k, j, :], test_filter[l, j, :])
    comparison = np.sum(buffer, axis=2)
    my_output = corr1d.batch_data_batch_filter_multichan_accum_correlate1D(test_data, test_filter)
    print(np.max(np.abs(comparison - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == comparison.shape
    assert np.allclose(comparison, my_output, rtol=1e-2, atol=1e-2)
def test__single_filter_single_data_channel_correlate1D():
    N_CH = 13
    test_data = (DATA[0, :N_CH, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[0, :N_CH, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                      dtype=np.float64)
    for k in range(N_CH):
        buffer[k, :] = np.correlate(test_data[k, :], test_filter[k, :])
    my_output = corr1d.single_filter_single_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)


def test__single_filter_single_data_channel_correlate1D__float():
    N_CH = 13
    test_data = (DATA[0, :N_CH, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[0, :N_CH, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1] - test_filter.shape[1] + 1),
                      dtype=np.float32)
    for k in range(N_CH):
        buffer[k, :] = np.correlate(test_data[k, :], test_filter[k, :])
    my_output = corr1d.single_filter_single_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)
def test__single_filter_batch_data_channel_correlate1D():
    N_CH = 13
    test_data = (DATA[:, :N_CH, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[0, :N_CH, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1], test_data.shape[2] - test_filter.shape[1] + 1),
                      dtype=np.float64)
    for i in range(test_data.shape[0]):
        for k in range(test_data.shape[1]):
            buffer[i, k, :] = np.correlate(test_data[i, k, :], test_filter[k, :])
    my_output = corr1d.single_filter_batch_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)


def test__single_filter_batch_data_channel_correlate1D__float():
    N_CH = 13
    test_data = (DATA[:, :N_CH, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[0, :N_CH, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_data.shape[1], test_data.shape[2] - test_filter.shape[1] + 1),
                      dtype=np.float32)
    for i in range(test_data.shape[0]):
        for k in range(test_data.shape[1]):
            buffer[i, k, :] = np.correlate(test_data[i, k, :], test_filter[k, :])
    my_output = corr1d.single_filter_batch_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)
def test__batch_filter_single_data_channel_correlate1D():
    N_CH = 13
    test_data = (DATA[0, :N_CH, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[:, :N_CH, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_filter.shape[0], test_filter.shape[1], test_data.shape[1] - test_filter.shape[2] + 1),
                      dtype=np.float64)
    for i in range(test_filter.shape[0]):
        for k in range(N_CH):
            buffer[i, k, :] = np.correlate(test_data[k, :], test_filter[i, k, :])
    my_output = corr1d.batch_filter_single_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)


def test__batch_filter_single_data_channel_correlate1D__float():
    N_CH = 13
    test_data = (DATA[0, :N_CH, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[:, :N_CH, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_filter.shape[0], test_filter.shape[1], test_data.shape[1] - test_filter.shape[2] + 1),
                      dtype=np.float32)
    for i in range(test_filter.shape[0]):
        for k in range(N_CH):
            buffer[i, k, :] = np.correlate(test_data[k, :], test_filter[i, k, :])
    my_output = corr1d.batch_filter_single_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)
def test__batch_filter_batch_data_channel_correlate1D():
    N_CH = 13
    test_data = (DATA[:, :N_CH, :] / 10.0).astype(np.float64)
    test_filter = (FILTERS[:, :N_CH, :] / 10.0).astype(np.float64)
    buffer = np.zeros((test_data.shape[0], test_filter.shape[0], test_filter.shape[1],
                       test_data.shape[2] - test_filter.shape[2] + 1),
                      dtype=np.float64)
    for j in range(test_data.shape[0]):
        for i in range(test_filter.shape[0]):
            for k in range(N_CH):
                buffer[j, i, k, :] = np.correlate(test_data[j, k, :], test_filter[i, k, :])
    my_output = corr1d.batch_filter_batch_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float64
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)


def test__batch_filter_batch_data_channel_correlate1D__float():
    N_CH = 13
    test_data = (DATA[:, :N_CH, :] / 10.0).astype(np.float32)
    test_filter = (FILTERS[:, :N_CH, :] / 10.0).astype(np.float32)
    buffer = np.zeros((test_data.shape[0], test_filter.shape[0], test_filter.shape[1],
                       test_data.shape[2] - test_filter.shape[2] + 1),
                      dtype=np.float32)
    for j in range(test_data.shape[0]):
        for i in range(test_filter.shape[0]):
            for k in range(N_CH):
                buffer[j, i, k, :] = np.correlate(test_data[j, k, :], test_filter[i, k, :])
    my_output = corr1d.batch_filter_batch_data_channel_correlate1D(test_data, test_filter)
    print(np.max(np.abs(buffer - my_output)))
    assert my_output.dtype == np.float32
    assert my_output.shape == buffer.shape
    assert np.allclose(buffer, my_output, rtol=1e-2, atol=1e-2)
| 39.5024 | 116 | 0.64389 | 3,576 | 24,689 | 4.212808 | 0.026007 | 0.091338 | 0.065582 | 0.050183 | 0.982011 | 0.982011 | 0.977829 | 0.968138 | 0.958579 | 0.945835 | 0 | 0.048407 | 0.219328 | 24,689 | 624 | 117 | 39.565705 | 0.733216 | 0.004091 | 0 | 0.843902 | 0 | 0 | 0.017086 | 0.000936 | 0 | 0 | 0 | 0 | 0.234146 | 1 | 0.078049 | false | 0 | 0.007317 | 0 | 0.085366 | 0.034146 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
536f2bdb29944271b164a167070c771a89c42962 | 1,472 | py | Python | Plugins/UnrealEnginePython/Binaries/Win64/Lib/site-packages/tensorflow/_api/v1/losses/__init__.py | JustinACoder/H22-GR3-UnrealAI | 361eb9ef1147f8a2991e5f98c4118cd823184adf | [
"MIT"
] | 6 | 2022-02-04T18:12:24.000Z | 2022-03-21T23:57:12.000Z | Lib/site-packages/tensorflow/_api/v1/losses/__init__.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | [
"bzip2-1.0.6"
] | null | null | null | Lib/site-packages/tensorflow/_api/v1/losses/__init__.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | [
"bzip2-1.0.6"
] | 1 | 2022-02-08T03:53:23.000Z | 2022-02-08T03:53:23.000Z | # This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/python/tools/api/generator/create_python_api.py script.
"""Loss operations for use in neural networks.
Note: All the losses are added to the `GraphKeys.LOSSES` collection by default.
"""
from __future__ import print_function
from tensorflow.python.ops.losses.losses import Reduction
from tensorflow.python.ops.losses.losses import absolute_difference
from tensorflow.python.ops.losses.losses import add_loss
from tensorflow.python.ops.losses.losses import compute_weighted_loss
from tensorflow.python.ops.losses.losses import cosine_distance
from tensorflow.python.ops.losses.losses import get_losses
from tensorflow.python.ops.losses.losses import get_regularization_loss
from tensorflow.python.ops.losses.losses import get_regularization_losses
from tensorflow.python.ops.losses.losses import get_total_loss
from tensorflow.python.ops.losses.losses import hinge_loss
from tensorflow.python.ops.losses.losses import huber_loss
from tensorflow.python.ops.losses.losses import log_loss
from tensorflow.python.ops.losses.losses import mean_pairwise_squared_error
from tensorflow.python.ops.losses.losses import mean_squared_error
from tensorflow.python.ops.losses.losses import sigmoid_cross_entropy
from tensorflow.python.ops.losses.losses import softmax_cross_entropy
from tensorflow.python.ops.losses.losses import sparse_softmax_cross_entropy
del print_function
| 49.066667 | 83 | 0.841033 | 211 | 1,472 | 5.701422 | 0.303318 | 0.239402 | 0.282627 | 0.325021 | 0.692436 | 0.692436 | 0.692436 | 0.590191 | 0.319202 | 0 | 0 | 0 | 0.096467 | 1,472 | 29 | 84 | 50.758621 | 0.904511 | 0.170516 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.947368 | 0 | 0.947368 | 0.105263 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
725af5ee775d9897b31e54c364fe0db228862d78 | 44,452 | py | Python | html_cleaner/cleaner.py | ProstoKSI/html-cleaner | 0454ab40772e42b2e60995b7327cd01ef3c6c75d | [
"MIT"
] | null | null | null | html_cleaner/cleaner.py | ProstoKSI/html-cleaner | 0454ab40772e42b2e60995b7327cd01ef3c6c75d | [
"MIT"
] | 2 | 2015-04-29T14:01:30.000Z | 2015-04-30T08:55:32.000Z | html_cleaner/cleaner.py | ProstoKSI/html-cleaner | 0454ab40772e42b2e60995b7327cd01ef3c6c75d | [
"MIT"
] | null | null | null |
###
### This file is generated automatically
### Do not change anything in it
### Generated from 'strict.cfg'
###
import re
from BeautifulSoup import BeautifulSoup
tag_check = re.compile(r'^(a|blockquote|p|u|b|i|em|strike|strong|ul|ol|li|sub|sup|br|hr|object|param|embed|h1|h2|h3|h4|h5|h6|center|address|pre|iframe|img|font|span|table|tr|td|tname|tbody|div|dd|dt)$', re.IGNORECASE)
attr_check = {}
attr_value_check = {}
style_check = {}
style_value_check = {}
attr_check['a'] = re.compile(r'^(name|href|style)$', re.IGNORECASE)
attr_value_check['a'] = {}
attr_value_check['a']['name'] = re.compile(r'^(.*)$', re.IGNORECASE)
attr_value_check['a']['href'] = re.compile(r'^(((https?|ftp)://|/).*)$', re.IGNORECASE)
attr_value_check['a']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['blockquote'] = re.compile(r'^(style)$', re.IGNORECASE)
attr_value_check['blockquote'] = {}
attr_value_check['blockquote']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['p'] = re.compile(r'^(style)$', re.IGNORECASE)
attr_value_check['p'] = {}
attr_value_check['p']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['u'] = re.compile(r'^(style)$', re.IGNORECASE)
attr_value_check['u'] = {}
attr_value_check['u']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['b'] = re.compile(r'^(style)$', re.IGNORECASE)
attr_value_check['b'] = {}
attr_value_check['b']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
# Inline and list tags allow only a ``style`` attribute, restricted to basic
# text-formatting properties.
_STYLE_PROPS_RE = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
for _tag in ('i', 'em', 'strike', 'strong', 'ul', 'ol', 'li', 'sub', 'sup', 'br'):
    attr_check[_tag] = re.compile(r'^(style)$', re.IGNORECASE)
    attr_value_check[_tag] = {'style': _STYLE_PROPS_RE}
attr_check['hr'] = re.compile(r'^(class|style)$', re.IGNORECASE)
attr_value_check['hr'] = {}
attr_value_check['hr']['class'] = re.compile(r'^(redactor_cut)$', re.IGNORECASE)
attr_value_check['hr']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['object'] = re.compile(r'^(data|width|height|style)$', re.IGNORECASE)
attr_value_check['object'] = {}
attr_value_check['object']['data'] = re.compile(r'^(.*)$', re.IGNORECASE)  # accepts any value, unlike the YouTube-only checks below
attr_value_check['object']['width'] = re.compile(r'^(\d+)$', re.IGNORECASE)
attr_value_check['object']['height'] = re.compile(r'^(\d+)$', re.IGNORECASE)
attr_value_check['object']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['param'] = re.compile(r'^(name|value|style)$', re.IGNORECASE)
attr_value_check['param'] = {}
attr_value_check['param']['name'] = re.compile(r'^(movie|allowFullScreen|allowscriptaccess)$', re.IGNORECASE)
attr_value_check['param']['value'] = re.compile(r'^(http://www\.youtube\.com/.*|true|always)$', re.IGNORECASE)
attr_value_check['param']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['embed'] = re.compile(r'^(src|type|allowfullscreen|width|height|style)$', re.IGNORECASE)
attr_value_check['embed'] = {}
attr_value_check['embed']['src'] = re.compile(r'^((http://www\.youtube\.com/|http://youtube\.com/).*)$', re.IGNORECASE)
attr_value_check['embed']['type'] = re.compile(r'^(application/x-shockwave-flash)$', re.IGNORECASE)
attr_value_check['embed']['allowfullscreen'] = re.compile(r'^(true|false)$', re.IGNORECASE)
attr_value_check['embed']['width'] = re.compile(r'^(\d+)$', re.IGNORECASE)
attr_value_check['embed']['height'] = re.compile(r'^(\d+)$', re.IGNORECASE)
attr_value_check['embed']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
# Headings and block text tags: ``style`` attribute only, same property set.
_STYLE_PROPS_RE = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
for _tag in ('h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'center', 'address', 'pre'):
    attr_check[_tag] = re.compile(r'^(style)$', re.IGNORECASE)
    attr_value_check[_tag] = {'style': _STYLE_PROPS_RE}
attr_check['iframe'] = re.compile(r'^(src|width|height|frameborder|style)$', re.IGNORECASE)
attr_value_check['iframe'] = {}
attr_value_check['iframe']['src'] = re.compile(r'^((http://player\.vimeo\.com/|http://(vkontakte\.ru|vk\.com)/video_ext\.php|http://www\.youtube\.com/|http://youtube\.com/).*)$', re.IGNORECASE)
attr_value_check['iframe']['width'] = re.compile(r'^([\d]+)$', re.IGNORECASE)
attr_value_check['iframe']['height'] = re.compile(r'^([\d]+)$', re.IGNORECASE)
attr_value_check['iframe']['frameborder'] = re.compile(r'^([\d]+)$', re.IGNORECASE)
attr_value_check['iframe']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_check['img'] = re.compile(r'^(width|height|style|src|alt|class|align)$', re.IGNORECASE)
attr_value_check['img'] = {}
attr_value_check['img']['width'] = re.compile(r'^([\d]+)$', re.IGNORECASE)
attr_value_check['img']['height'] = re.compile(r'^([\d]+)$', re.IGNORECASE)
attr_value_check['img']['style'] = re.compile(r'^(width|height|float|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
attr_value_check['img']['src'] = re.compile(r'^(((https?|ftp)://|/).*)$', re.IGNORECASE)
attr_value_check['img']['alt'] = re.compile(r'^(.*)$', re.IGNORECASE)
attr_value_check['img']['class'] = re.compile(r'^(img_(left|right))$', re.IGNORECASE)
attr_value_check['img']['align'] = re.compile(r'^(left|right|center)$', re.IGNORECASE)
attr_check['font'] = re.compile(r'^(style)$', re.IGNORECASE)
attr_value_check['font'] = {}
attr_value_check['font']['style'] = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
# Block-level and table tags additionally allow width/height/align attributes.
# (Note: 'tname' is not a standard HTML element.)
_INT_RE = re.compile(r'^(\d+)$', re.IGNORECASE)
_ALIGN_RE = re.compile(r'^(\w+)$', re.IGNORECASE)
_SIZED_STYLE_PROPS_RE = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
for _tag in ('span', 'table', 'tr', 'td', 'tname', 'tbody', 'div', 'dd', 'dt'):
    attr_check[_tag] = re.compile(r'^(width|height|align|style)$', re.IGNORECASE)
    attr_value_check[_tag] = {
        'width': _INT_RE,
        'height': _INT_RE,
        'align': _ALIGN_RE,
        'style': _SIZED_STYLE_PROPS_RE,
    }
# Per-property style value checks shared by most tags; note that keys such as
# 'margin.*' are themselves patterns over CSS property names.
_STYLE_PROPS_RE = re.compile(r'^(font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
_STYLE_VALUE_DEFAULTS = {
    'font-weight': re.compile(r'^(bold)$', re.IGNORECASE),
    'font-style': re.compile(r'^(italic)$', re.IGNORECASE),
    'text-decoration': re.compile(r'^(line-through|underline)$', re.IGNORECASE),
    'text-align': re.compile(r'^(center|left|right|justify)$', re.IGNORECASE),
    'margin.*': re.compile(r'^(.*)$', re.IGNORECASE),
    'padding.*': re.compile(r'^(.*)$', re.IGNORECASE),
}
for _tag in ('a', 'blockquote', 'p', 'u', 'b', 'i', 'em', 'strike', 'strong',
             'ul', 'ol', 'li', 'sub', 'sup', 'br', 'hr', 'object', 'param',
             'embed', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'center', 'address',
             'pre', 'iframe', 'font'):
    style_check[_tag] = _STYLE_PROPS_RE
    style_value_check[_tag] = dict(_STYLE_VALUE_DEFAULTS)

# <img> additionally allows width/height (px, em or %) and float.
_CSS_SIZE_RE = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_check['img'] = re.compile(r'^(width|height|float|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['img'] = dict(_STYLE_VALUE_DEFAULTS)
style_value_check['img'].update({
    'width': _CSS_SIZE_RE,
    'height': _CSS_SIZE_RE,
    'float': re.compile(r'^(right|left)$', re.IGNORECASE),
})
# <span>, <table> and <tr> additionally allow width/height style properties.
_CSS_SIZE_RE = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
for _tag in ('span', 'table', 'tr'):
    style_check[_tag] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
    style_value_check[_tag] = {
        'width': _CSS_SIZE_RE,
        'height': _CSS_SIZE_RE,
        'font-weight': re.compile(r'^(bold)$', re.IGNORECASE),
        'font-style': re.compile(r'^(italic)$', re.IGNORECASE),
        'text-decoration': re.compile(r'^(line-through|underline)$', re.IGNORECASE),
        'text-align': re.compile(r'^(center|left|right|justify)$', re.IGNORECASE),
        'margin.*': re.compile(r'^(.*)$', re.IGNORECASE),
        'padding.*': re.compile(r'^(.*)$', re.IGNORECASE),
    }
style_value_check['tr']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['td'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['td'] = {}
style_value_check['td']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['td']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['td']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['td']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['td']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['td']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['td']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['td']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['tname'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['tname'] = {}
style_value_check['tname']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['tname']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['tname']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['tname']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['tname']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['tname']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['tname']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['tname']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['tbody'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['tbody'] = {}
style_value_check['tbody']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['tbody']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['tbody']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['tbody']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['tbody']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['tbody']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['tbody']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['tbody']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['div'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['div'] = {}
style_value_check['div']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['div']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['div']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['div']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['div']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['div']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['div']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['div']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['dd'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['dd'] = {}
style_value_check['dd']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['dd']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['dd']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['dd']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['dd']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['dd']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['dd']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['dd']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_check['dt'] = re.compile(r'^(width|height|font-weight|font-style|text-decoration|text-align|margin.*|padding.*)$', re.IGNORECASE)
style_value_check['dt'] = {}
style_value_check['dt']['width'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['dt']['height'] = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
style_value_check['dt']['font-weight'] = re.compile(r'^(bold)$', re.IGNORECASE)
style_value_check['dt']['font-style'] = re.compile(r'^(italic)$', re.IGNORECASE)
style_value_check['dt']['text-decoration'] = re.compile(r'^(line-through|underline)$', re.IGNORECASE)
style_value_check['dt']['text-align'] = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)
style_value_check['dt']['margin.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
style_value_check['dt']['padding.*'] = re.compile(r'^(.*)$', re.IGNORECASE)
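To see what the compiled validators accept and reject, a small self-contained check (the patterns are copied from the whitelist above; `size_re` and `align_re` are local names for illustration):

```python
import re

size_re = re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)
align_re = re.compile(r'^(center|left|right|justify)$', re.IGNORECASE)

assert size_re.match('100px')           # integer with unit
assert size_re.match('33.3%')           # decimal with unit
assert not size_re.match('100')         # a unit is required
assert not size_re.match('calc(100%)')  # CSS expressions are rejected
assert align_re.match('CENTER')         # re.IGNORECASE makes matching case-insensitive
assert not align_re.match('middle')
```

Because every pattern is anchored with `^...$`, a value must match in full; there is no way to smuggle extra tokens after a valid one.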
def clear_html_code(text):
    # Strip HTML comments before parsing.
    text_re = re.compile(r'<!--.*?-->', flags=re.DOTALL)
    text = text_re.sub('', text)
    # BeautifulSoup 3 / Python 2 API: tag.attrs is a list of (name, value)
    # tuples and the document serializes via unicode().
    soup = BeautifulSoup(text)
    tags = soup.findAll()
    for tag in tags:
        if not tag_check.match(tag.name):
            # Tag is not whitelisted: remove it together with its contents.
            tag.extract()
        else:
            # Iterate over a copy, since tag.attrs is mutated inside the loop.
            for attr in tag.attrs[:]:
                if not attr_check[tag.name].match(attr[0]):
                    tag.attrs.remove(attr)
                elif attr[0].lower() == "style":
                    # Keep only whitelisted CSS properties with valid values.
                    declarations = attr[1].split(';')
                    res = ""
                    for x in declarations:
                        if x.find(':') > 0:
                            keys = [q.strip() for q in x.split(':', 1)]
                            if style_check[tag.name].match(keys[0]) and (keys[0] not in style_value_check[tag.name] or style_value_check[tag.name][keys[0]].match(keys[1])):
                                res += x + ';'
                    tag.attrs.remove(attr)
                    tag.attrs.append(("style", res))
                else:
                    if not attr_value_check[tag.name][attr[0]].match(attr[1]):
                        tag.attrs.remove(attr)
    return unicode(soup)
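The inner style-filtering loop can be exercised on its own, without BeautifulSoup. A minimal stand-alone sketch (`filter_style`, `style_props`, and `style_values` are illustrative names, not part of the code above):

```python
import re

# A tiny per-property whitelist mirroring the tables used by clear_html_code.
style_props = re.compile(r'^(width|text-align)$', re.IGNORECASE)
style_values = {'width': re.compile(r'^(\d+(\.\d+)?(px|em|%))$', re.IGNORECASE)}

def filter_style(style):
    # Mirrors the inner style-filtering loop of clear_html_code: keep a
    # declaration only if its property is whitelisted and, when a value
    # validator exists for that property, the value matches it.
    res = ""
    for decl in style.split(';'):
        if decl.find(':') > 0:
            prop, value = [q.strip() for q in decl.split(':', 1)]
            if style_props.match(prop) and (prop not in style_values or style_values[prop].match(value)):
                res += decl + ';'
    return res

assert filter_style('width: 10px; color: red; text-align: left') == 'width: 10px; text-align: left;'
```

Note the permissive default: a property with no entry in the value table (here `text-align`) is kept with any value, which is exactly how `margin.*`/`padding.*` slip through above, since those literal keys never equal a real property name.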
# File: Chapter 3/more_guests.py (WilliamJaber/Python-Crash-Course, MIT license)
guest_list = ['Alex', 'Dan', 'Dave']
guest_list.insert(0, 'Greg')
guest_list.insert(3, 'Sam')
guest_list.append('Eddy')
print(f'Hey {guest_list[0]}! you are invited for my celebration dinner next week.')
print(f'Hey {guest_list[1]}! you are invited for my celebration dinner next week.')
print(f'Hey {guest_list[2]}! you are invited for my celebration dinner next week.')
print(f'Hey {guest_list[3]}! you are invited for my celebration dinner next week.')
print(f'Hey {guest_list[4]}! you are invited for my celebration dinner next week.')
print(f'Hey {guest_list[5]}! you are invited for my celebration dinner next week.')
# for guest in guest_list:
# print(f'Hey {guest}! you are invited for my celebration dinner next week')
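The list operations above can be traced step by step; `insert(0, ...)` shifts everything right, `insert(3, ...)` lands before the former index 3, and `append` goes to the end:

```python
guest_list = ['Alex', 'Dan', 'Dave']
guest_list.insert(0, 'Greg')   # ['Greg', 'Alex', 'Dan', 'Dave']
guest_list.insert(3, 'Sam')    # ['Greg', 'Alex', 'Dan', 'Sam', 'Dave']
guest_list.append('Eddy')      # ['Greg', 'Alex', 'Dan', 'Sam', 'Dave', 'Eddy']
assert guest_list == ['Greg', 'Alex', 'Dan', 'Sam', 'Dave', 'Eddy']
```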
# File: FINITO_FEM_TOOLBOX/__init__.py (wmpjrufg/FINITO_ALGORITMOS, MIT license)
from .FINITO import *
from .FINITO_COMMON_LIBRARY import *
from .FINITO_MEF1D_LIBRARY import *
from .FINITO_MEF2D_LIBRARY import *

# File: sdk/appconfiguration/azure-appconfiguration/azure/appconfiguration/_generated/operations/_azure_app_configuration_operations.py
# Repo: rsdoherty/azure-sdk-for-python (MIT license)
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpRequest, HttpResponse
from .. import models as _models
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, Iterable, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
class AzureAppConfigurationOperationsMixin(object):
def get_keys(
self,
name=None, # type: Optional[str]
after=None, # type: Optional[str]
accept_datetime=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> Iterable["_models.KeyListResult"]
"""Gets a list of keys.
Gets a list of keys.
:param name: A filter for the name of the returned keys.
:type name: str
:param after: Instructs the server to return elements that appear after the element referred to
by the specified token.
:type after: str
:param accept_datetime: Requests the server to respond with the state of the resource at the
specified time.
:type accept_datetime: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either KeyListResult or the result of cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.appconfiguration.models.KeyListResult]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.KeyListResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "1.0"
accept = "application/vnd.microsoft.appconfig.keyset+json, application/json, application/problem+json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if self._config.sync_token is not None:
header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
if accept_datetime is not None:
header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_keys.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if name is not None:
query_parameters['name'] = self._serialize.query("name", name, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if after is not None:
query_parameters['After'] = self._serialize.query("after", after, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
request = self._client.get(url, query_parameters, header_parameters)
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('KeyListResult', pipeline_response)
list_of_elem = deserialized.items
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize.failsafe_deserialize(_models.Error, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return ItemPaged(
get_next, extract_data
)
get_keys.metadata = {'url': '/keys'} # type: ignore
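The `prepare_request` / `extract_data` / `get_next` closures above implement the continuation-token paging protocol that `ItemPaged` consumes: fetch a page, yield its items, then follow `next_link` until it is empty. A minimal stand-in with fake pages instead of HTTP calls (`PAGES` and `iter_pages` are illustrative names, not part of the SDK):

```python
# Two fake pages keyed by continuation token; None is the first request.
PAGES = {
    None: {'items': [1, 2], 'next_link': 'page2'},
    'page2': {'items': [3], 'next_link': None},
}

def iter_pages():
    next_link = None
    while True:
        page = PAGES[next_link]     # stands in for get_next(next_link)
        for item in page['items']:  # stands in for extract_data(...)
            yield item
        next_link = page['next_link']
        if next_link is None:       # server signals the last page
            break

assert list(iter_pages()) == [1, 2, 3]
```

`ItemPaged(get_next, extract_data)` wraps exactly this loop, so callers iterate items transparently while the transport issues one request per page.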
def check_keys(
self,
name=None, # type: Optional[str]
after=None, # type: Optional[str]
accept_datetime=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> None
"""Requests the headers and status of the given resource.
Requests the headers and status of the given resource.
:param name: A filter for the name of the returned keys.
:type name: str
:param after: Instructs the server to return elements that appear after the element referred to
by the specified token.
:type after: str
:param accept_datetime: Requests the server to respond with the state of the resource at the
specified time.
:type accept_datetime: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "1.0"
# Construct URL
url = self.check_keys.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if name is not None:
query_parameters['name'] = self._serialize.query("name", name, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if after is not None:
query_parameters['After'] = self._serialize.query("after", after, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if self._config.sync_token is not None:
header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
if accept_datetime is not None:
header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
request = self._client.head(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Sync-Token']=self._deserialize('str', response.headers.get('Sync-Token'))
if cls:
return cls(pipeline_response, None, response_headers)
check_keys.metadata = {'url': '/keys'} # type: ignore
def get_key_values(
self,
key=None, # type: Optional[str]
label=None, # type: Optional[str]
after=None, # type: Optional[str]
accept_datetime=None, # type: Optional[str]
select=None, # type: Optional[List[Union[str, "_models.Get6ItemsItem"]]]
**kwargs # type: Any
):
# type: (...) -> Iterable["_models.KeyValueListResult"]
"""Gets a list of key-values.
Gets a list of key-values.
:param key: A filter used to match keys.
:type key: str
:param label: A filter used to match labels.
:type label: str
:param after: Instructs the server to return elements that appear after the element referred to
by the specified token.
:type after: str
:param accept_datetime: Requests the server to respond with the state of the resource at the
specified time.
:type accept_datetime: str
:param select: Used to select what fields are present in the returned resource(s).
:type select: list[str or ~azure.appconfiguration.models.Get6ItemsItem]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either KeyValueListResult or the result of cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.appconfiguration.models.KeyValueListResult]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.KeyValueListResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "1.0"
accept = "application/vnd.microsoft.appconfig.kvset+json, application/json, application/problem+json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if self._config.sync_token is not None:
header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
if accept_datetime is not None:
header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_key_values.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if key is not None:
query_parameters['key'] = self._serialize.query("key", key, 'str')
if label is not None:
query_parameters['label'] = self._serialize.query("label", label, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if after is not None:
query_parameters['After'] = self._serialize.query("after", after, 'str')
if select is not None:
query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
request = self._client.get(url, query_parameters, header_parameters)
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('KeyValueListResult', pipeline_response)
list_of_elem = deserialized.items
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize.failsafe_deserialize(_models.Error, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return ItemPaged(
get_next, extract_data
)
get_key_values.metadata = {'url': '/kv'} # type: ignore
def check_key_values(
self,
key=None, # type: Optional[str]
label=None, # type: Optional[str]
after=None, # type: Optional[str]
accept_datetime=None, # type: Optional[str]
select=None, # type: Optional[List[Union[str, "_models.Head6ItemsItem"]]]
**kwargs # type: Any
):
# type: (...) -> None
"""Requests the headers and status of the given resource.
Requests the headers and status of the given resource.
:param key: A filter used to match keys.
:type key: str
:param label: A filter used to match labels.
:type label: str
:param after: Instructs the server to return elements that appear after the element referred to
by the specified token.
:type after: str
:param accept_datetime: Requests the server to respond with the state of the resource at the
specified time.
:type accept_datetime: str
:param select: Used to select what fields are present in the returned resource(s).
:type select: list[str or ~azure.appconfiguration.models.Head6ItemsItem]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "1.0"
# Construct URL
url = self.check_key_values.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if key is not None:
query_parameters['key'] = self._serialize.query("key", key, 'str')
if label is not None:
query_parameters['label'] = self._serialize.query("label", label, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if after is not None:
query_parameters['After'] = self._serialize.query("after", after, 'str')
if select is not None:
query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if self._config.sync_token is not None:
header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
if accept_datetime is not None:
header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
request = self._client.head(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Sync-Token']=self._deserialize('str', response.headers.get('Sync-Token'))
if cls:
return cls(pipeline_response, None, response_headers)
check_key_values.metadata = {'url': '/kv'} # type: ignore
def get_key_value(
self,
key, # type: str
label=None, # type: Optional[str]
accept_datetime=None, # type: Optional[str]
if_match=None, # type: Optional[str]
if_none_match=None, # type: Optional[str]
select=None, # type: Optional[List[Union[str, "_models.Get7ItemsItem"]]]
**kwargs # type: Any
):
# type: (...) -> "_models.KeyValue"
"""Gets a single key-value.
Gets a single key-value.
:param key: The key of the key-value to retrieve.
:type key: str
:param label: The label of the key-value to retrieve.
:type label: str
:param accept_datetime: Requests the server to respond with the state of the resource at the
specified time.
:type accept_datetime: str
:param if_match: Used to perform an operation only if the targeted resource's etag matches the
value provided.
:type if_match: str
:param if_none_match: Used to perform an operation only if the targeted resource's etag does
not match the value provided.
:type if_none_match: str
:param select: Used to select what fields are present in the returned resource(s).
:type select: list[str or ~azure.appconfiguration.models.Get7ItemsItem]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: KeyValue, or the result of cls(response)
:rtype: ~azure.appconfiguration.models.KeyValue
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.KeyValue"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "1.0"
accept = "application/vnd.microsoft.appconfig.kv+json, application/json, application/problem+json"
# Construct URL
url = self.get_key_value.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'key': self._serialize.url("key", key, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if label is not None:
query_parameters['label'] = self._serialize.query("label", label, 'str')
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if select is not None:
query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if self._config.sync_token is not None:
header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
if accept_datetime is not None:
header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
if if_none_match is not None:
header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.Error, response)
raise HttpResponseError(response=response, model=error)
response_headers = {}
response_headers['Sync-Token']=self._deserialize('str', response.headers.get('Sync-Token'))
response_headers['ETag']=self._deserialize('str', response.headers.get('ETag'))
response_headers['Last-Modified']=self._deserialize('str', response.headers.get('Last-Modified'))
deserialized = self._deserialize('KeyValue', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_key_value.metadata = {'url': '/kv/{key}'} # type: ignore

    def put_key_value(
        self,
        key,  # type: str
        label=None,  # type: Optional[str]
        if_match=None,  # type: Optional[str]
        if_none_match=None,  # type: Optional[str]
        entity=None,  # type: Optional["_models.KeyValue"]
        **kwargs  # type: Any
    ):
        # type: (...) -> "_models.KeyValue"
        """Creates a key-value.

        Creates a key-value.

        :param key: The key of the key-value to create.
        :type key: str
        :param label: The label of the key-value to create.
        :type label: str
        :param if_match: Used to perform an operation only if the targeted resource's etag matches the
         value provided.
        :type if_match: str
        :param if_none_match: Used to perform an operation only if the targeted resource's etag does
         not match the value provided.
        :type if_none_match: str
        :param entity: The key-value to create.
        :type entity: ~azure.appconfiguration.models.KeyValue
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: KeyValue, or the result of cls(response)
        :rtype: ~azure.appconfiguration.models.KeyValue
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.KeyValue"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        content_type = kwargs.pop("content_type", "application/vnd.microsoft.appconfig.kv+json")
        accept = "application/vnd.microsoft.appconfig.kv+json, application/json, application/problem+json"

        # Construct URL
        url = self.put_key_value.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
            'key': self._serialize.url("key", key, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        if if_none_match is not None:
            header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        if entity is not None:
            body_content = self._serialize.body(entity, 'KeyValue')
        else:
            body_content = None
        body_content_kwargs['content'] = body_content
        request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.Error, response)
            raise HttpResponseError(response=response, model=error)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))
        response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))

        deserialized = self._deserialize('KeyValue', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    put_key_value.metadata = {'url': '/kv/{key}'}  # type: ignore

    def delete_key_value(
        self,
        key,  # type: str
        label=None,  # type: Optional[str]
        if_match=None,  # type: Optional[str]
        **kwargs  # type: Any
    ):
        # type: (...) -> Optional["_models.KeyValue"]
        """Deletes a key-value.

        Deletes a key-value.

        :param key: The key of the key-value to delete.
        :type key: str
        :param label: The label of the key-value to delete.
        :type label: str
        :param if_match: Used to perform an operation only if the targeted resource's etag matches the
         value provided.
        :type if_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: KeyValue, or the result of cls(response)
        :rtype: ~azure.appconfiguration.models.KeyValue or None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[Optional["_models.KeyValue"]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        accept = "application/vnd.microsoft.appconfig.kv+json, application/json, application/problem+json"

        # Construct URL
        url = self.delete_key_value.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
            'key': self._serialize.url("key", key, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200, 204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.Error, response)
            raise HttpResponseError(response=response, model=error)

        response_headers = {}
        deserialized = None
        if response.status_code == 200:
            response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))
            response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
            deserialized = self._deserialize('KeyValue', pipeline_response)

        if response.status_code == 204:
            response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    delete_key_value.metadata = {'url': '/kv/{key}'}  # type: ignore

    def check_key_value(
        self,
        key,  # type: str
        label=None,  # type: Optional[str]
        accept_datetime=None,  # type: Optional[str]
        if_match=None,  # type: Optional[str]
        if_none_match=None,  # type: Optional[str]
        select=None,  # type: Optional[List[Union[str, "_models.Head7ItemsItem"]]]
        **kwargs  # type: Any
    ):
        # type: (...) -> None
        """Requests the headers and status of the given resource.

        Requests the headers and status of the given resource.

        :param key: The key of the key-value to retrieve.
        :type key: str
        :param label: The label of the key-value to retrieve.
        :type label: str
        :param accept_datetime: Requests the server to respond with the state of the resource at the
         specified time.
        :type accept_datetime: str
        :param if_match: Used to perform an operation only if the targeted resource's etag matches the
         value provided.
        :type if_match: str
        :param if_none_match: Used to perform an operation only if the targeted resource's etag does
         not match the value provided.
        :type if_none_match: str
        :param select: Used to select what fields are present in the returned resource(s).
        :type select: list[str or ~azure.appconfiguration.models.Head7ItemsItem]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"

        # Construct URL
        url = self.check_key_value.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
            'key': self._serialize.url("key", key, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
        if select is not None:
            query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if accept_datetime is not None:
            header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        if if_none_match is not None:
            header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')

        request = self._client.head(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            raise HttpResponseError(response=response)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))
        response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))
        response_headers['Last-Modified'] = self._deserialize('str', response.headers.get('Last-Modified'))

        if cls:
            return cls(pipeline_response, None, response_headers)
    check_key_value.metadata = {'url': '/kv/{key}'}  # type: ignore

    def get_labels(
        self,
        name=None,  # type: Optional[str]
        after=None,  # type: Optional[str]
        accept_datetime=None,  # type: Optional[str]
        select=None,  # type: Optional[List[str]]
        **kwargs  # type: Any
    ):
        # type: (...) -> Iterable["_models.LabelListResult"]
        """Gets a list of labels.

        Gets a list of labels.

        :param name: A filter for the name of the returned labels.
        :type name: str
        :param after: Instructs the server to return elements that appear after the element referred to
         by the specified token.
        :type after: str
        :param accept_datetime: Requests the server to respond with the state of the resource at the
         specified time.
        :type accept_datetime: str
        :param select: Used to select what fields are present in the returned resource(s).
        :type select: list[str]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either LabelListResult or the result of cls(response)
        :rtype: ~azure.core.paging.ItemPaged[~azure.appconfiguration.models.LabelListResult]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.LabelListResult"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        accept = "application/vnd.microsoft.appconfig.labelset+json, application/json, application/problem+json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            if self._config.sync_token is not None:
                header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
            if accept_datetime is not None:
                header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.get_labels.metadata['url']  # type: ignore
                path_format_arguments = {
                    'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
                }
                url = self._client.format_url(url, **path_format_arguments)
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if name is not None:
                    query_parameters['name'] = self._serialize.query("name", name, 'str')
                query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
                if after is not None:
                    query_parameters['After'] = self._serialize.query("after", after, 'str')
                if select is not None:
                    query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                path_format_arguments = {
                    'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
                }
                url = self._client.format_url(url, **path_format_arguments)
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        def extract_data(pipeline_response):
            deserialized = self._deserialize('LabelListResult', pipeline_response)
            list_of_elem = deserialized.items
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.next_link or None, iter(list_of_elem)

        def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize.failsafe_deserialize(_models.Error, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error)

            return pipeline_response

        return ItemPaged(
            get_next, extract_data
        )
    get_labels.metadata = {'url': '/labels'}  # type: ignore

    def check_labels(
        self,
        name=None,  # type: Optional[str]
        after=None,  # type: Optional[str]
        accept_datetime=None,  # type: Optional[str]
        select=None,  # type: Optional[List[str]]
        **kwargs  # type: Any
    ):
        # type: (...) -> None
        """Requests the headers and status of the given resource.

        Requests the headers and status of the given resource.

        :param name: A filter for the name of the returned labels.
        :type name: str
        :param after: Instructs the server to return elements that appear after the element referred to
         by the specified token.
        :type after: str
        :param accept_datetime: Requests the server to respond with the state of the resource at the
         specified time.
        :type accept_datetime: str
        :param select: Used to select what fields are present in the returned resource(s).
        :type select: list[str]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"

        # Construct URL
        url = self.check_labels.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if name is not None:
            query_parameters['name'] = self._serialize.query("name", name, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
        if after is not None:
            query_parameters['After'] = self._serialize.query("after", after, 'str')
        if select is not None:
            query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if accept_datetime is not None:
            header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')

        request = self._client.head(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            raise HttpResponseError(response=response)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))

        if cls:
            return cls(pipeline_response, None, response_headers)
    check_labels.metadata = {'url': '/labels'}  # type: ignore

    def put_lock(
        self,
        key,  # type: str
        label=None,  # type: Optional[str]
        if_match=None,  # type: Optional[str]
        if_none_match=None,  # type: Optional[str]
        **kwargs  # type: Any
    ):
        # type: (...) -> "_models.KeyValue"
        """Locks a key-value.

        Locks a key-value.

        :param key: The key of the key-value to lock.
        :type key: str
        :param label: The label, if any, of the key-value to lock.
        :type label: str
        :param if_match: Used to perform an operation only if the targeted resource's etag matches the
         value provided.
        :type if_match: str
        :param if_none_match: Used to perform an operation only if the targeted resource's etag does
         not match the value provided.
        :type if_none_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: KeyValue, or the result of cls(response)
        :rtype: ~azure.appconfiguration.models.KeyValue
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.KeyValue"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        accept = "application/vnd.microsoft.appconfig.kv+json, application/json, application/problem+json"

        # Construct URL
        url = self.put_lock.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
            'key': self._serialize.url("key", key, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        if if_none_match is not None:
            header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.put(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.Error, response)
            raise HttpResponseError(response=response, model=error)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))
        response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))

        deserialized = self._deserialize('KeyValue', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    put_lock.metadata = {'url': '/locks/{key}'}  # type: ignore

    def delete_lock(
        self,
        key,  # type: str
        label=None,  # type: Optional[str]
        if_match=None,  # type: Optional[str]
        if_none_match=None,  # type: Optional[str]
        **kwargs  # type: Any
    ):
        # type: (...) -> "_models.KeyValue"
        """Unlocks a key-value.

        Unlocks a key-value.

        :param key: The key of the key-value to unlock.
        :type key: str
        :param label: The label, if any, of the key-value to unlock.
        :type label: str
        :param if_match: Used to perform an operation only if the targeted resource's etag matches the
         value provided.
        :type if_match: str
        :param if_none_match: Used to perform an operation only if the targeted resource's etag does
         not match the value provided.
        :type if_none_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: KeyValue, or the result of cls(response)
        :rtype: ~azure.appconfiguration.models.KeyValue
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.KeyValue"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        accept = "application/vnd.microsoft.appconfig.kv+json, application/json, application/problem+json"

        # Construct URL
        url = self.delete_lock.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
            'key': self._serialize.url("key", key, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        if if_none_match is not None:
            header_parameters['If-None-Match'] = self._serialize.header("if_none_match", if_none_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.Error, response)
            raise HttpResponseError(response=response, model=error)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))
        response_headers['ETag'] = self._deserialize('str', response.headers.get('ETag'))

        deserialized = self._deserialize('KeyValue', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    delete_lock.metadata = {'url': '/locks/{key}'}  # type: ignore

    def get_revisions(
        self,
        key=None,  # type: Optional[str]
        label=None,  # type: Optional[str]
        after=None,  # type: Optional[str]
        accept_datetime=None,  # type: Optional[str]
        select=None,  # type: Optional[List[Union[str, "_models.Enum4"]]]
        **kwargs  # type: Any
    ):
        # type: (...) -> Iterable["_models.KeyValueListResult"]
        """Gets a list of key-value revisions.

        Gets a list of key-value revisions.

        :param key: A filter used to match keys.
        :type key: str
        :param label: A filter used to match labels.
        :type label: str
        :param after: Instructs the server to return elements that appear after the element referred to
         by the specified token.
        :type after: str
        :param accept_datetime: Requests the server to respond with the state of the resource at the
         specified time.
        :type accept_datetime: str
        :param select: Used to select what fields are present in the returned resource(s).
        :type select: list[str or ~azure.appconfiguration.models.Enum4]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either KeyValueListResult or the result of cls(response)
        :rtype: ~azure.core.paging.ItemPaged[~azure.appconfiguration.models.KeyValueListResult]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.KeyValueListResult"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"
        accept = "application/vnd.microsoft.appconfig.kvset+json, application/json, application/problem+json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            if self._config.sync_token is not None:
                header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
            if accept_datetime is not None:
                header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.get_revisions.metadata['url']  # type: ignore
                path_format_arguments = {
                    'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
                }
                url = self._client.format_url(url, **path_format_arguments)
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if key is not None:
                    query_parameters['key'] = self._serialize.query("key", key, 'str')
                if label is not None:
                    query_parameters['label'] = self._serialize.query("label", label, 'str')
                query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
                if after is not None:
                    query_parameters['After'] = self._serialize.query("after", after, 'str')
                if select is not None:
                    query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                path_format_arguments = {
                    'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
                }
                url = self._client.format_url(url, **path_format_arguments)
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        def extract_data(pipeline_response):
            deserialized = self._deserialize('KeyValueListResult', pipeline_response)
            list_of_elem = deserialized.items
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.next_link or None, iter(list_of_elem)

        def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize.failsafe_deserialize(_models.Error, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error)

            return pipeline_response

        return ItemPaged(
            get_next, extract_data
        )
    get_revisions.metadata = {'url': '/revisions'}  # type: ignore

    def check_revisions(
        self,
        key=None,  # type: Optional[str]
        label=None,  # type: Optional[str]
        after=None,  # type: Optional[str]
        accept_datetime=None,  # type: Optional[str]
        select=None,  # type: Optional[List[Union[str, "_models.Enum5"]]]
        **kwargs  # type: Any
    ):
        # type: (...) -> None
        """Requests the headers and status of the given resource.

        Requests the headers and status of the given resource.

        :param key: A filter used to match keys.
        :type key: str
        :param label: A filter used to match labels.
        :type label: str
        :param after: Instructs the server to return elements that appear after the element referred to
         by the specified token.
        :type after: str
        :param accept_datetime: Requests the server to respond with the state of the resource at the
         specified time.
        :type accept_datetime: str
        :param select: Used to select what fields are present in the returned resource(s).
        :type select: list[str or ~azure.appconfiguration.models.Enum5]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        api_version = "1.0"

        # Construct URL
        url = self.check_revisions.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if key is not None:
            query_parameters['key'] = self._serialize.query("key", key, 'str')
        if label is not None:
            query_parameters['label'] = self._serialize.query("label", label, 'str')
        query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
        if after is not None:
            query_parameters['After'] = self._serialize.query("after", after, 'str')
        if select is not None:
            query_parameters['$Select'] = self._serialize.query("select", select, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if self._config.sync_token is not None:
            header_parameters['Sync-Token'] = self._serialize.header("self._config.sync_token", self._config.sync_token, 'str')
        if accept_datetime is not None:
            header_parameters['Accept-Datetime'] = self._serialize.header("accept_datetime", accept_datetime, 'str')

        request = self._client.head(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            raise HttpResponseError(response=response)

        response_headers = {}
        response_headers['Sync-Token'] = self._deserialize('str', response.headers.get('Sync-Token'))

        if cls:
            return cls(pipeline_response, None, response_headers)
    check_revisions.metadata = {'url': '/revisions'}  # type: ignore
import torch
from cvmodels.segmentation.backbones import resnet as rn, xception as xc


def test_resnet_out(random_seg_batch: torch.Tensor):
    batch, _, height, width = random_seg_batch.shape
    factors = [16, 8]
    high_features = [512, 512, 2048, 2048, 2048]
    low_features = [64, 64, 256, 256, 256]
    for i, stride in enumerate(rn.OutputStrides):
        for j, variant in enumerate(rn.ResNetVariants):
            model = rn.ResNetBackbone(variant=variant, output_strides=stride, pretrained=False)
            model.eval()
            with torch.no_grad():
                out = model(random_seg_batch)
            assert len(out) == 2
            high, low = out
            assert high.shape == (batch, high_features[j], height / factors[i], width / factors[i])
            assert low.shape == (batch, low_features[j], height / 4, width / 4)
def test_resnet_pretrain(random_seg_batch: torch.Tensor):
batch, _, height, width = random_seg_batch.shape
factors = [16, 8]
high_features = [512, 512, 2048, 2048, 2048]
low_features = [64, 64, 256, 256, 256]
for i, stride in enumerate(rn.OutputStrides):
for j, variant in enumerate(rn.ResNetVariants):
model = rn.ResNetBackbone(variant=variant, output_strides=stride, pretrained=True)
model.eval()
with torch.no_grad():
out = model(random_seg_batch)
assert len(out) == 2
high, low = out
            assert high.shape == (batch, high_features[j], height // factors[i], width // factors[i])
            assert low.shape == (batch, low_features[j], height // 4, width // 4)
def test_resnet_custom():
random_batch = torch.rand((2, 4, 512, 512))
batch, _, height, width = random_batch.shape
factors = [16, 8]
high_features = [512, 512, 2048, 2048, 2048]
low_features = [64, 64, 256, 256, 256]
for i, stride in enumerate(rn.OutputStrides):
for j, variant in enumerate(rn.ResNetVariants):
model = rn.ResNetBackbone(in_channels=4, variant=variant, output_strides=stride, pretrained=True)
model.eval()
with torch.no_grad():
out = model(random_batch)
assert len(out) == 2
high, low = out
            assert high.shape == (batch, high_features[j], height // factors[i], width // factors[i])
            assert low.shape == (batch, low_features[j], height // 4, width // 4)
def test_xception_out(random_seg_batch: torch.Tensor):
model = xc.XceptionBackbone(variant=xc.XceptionVariants.MF08, output_strides=xc.OutputStrides.OS16)
model.eval()
with torch.no_grad():
out = model(random_seg_batch)
assert len(out) == 2
high, low = out
assert high.shape == (2, 2048, 32, 32)
assert low.shape == (2, 128, 128, 128)
def test_xception_pretrain(random_seg_batch: torch.Tensor):
model = xc.XceptionBackbone(variant=xc.XceptionVariants.MF08,
output_strides=xc.OutputStrides.OS16,
pretrained=True)
model.eval()
with torch.no_grad():
out = model(random_seg_batch)
assert len(out) == 2
high, low = out
assert high.shape == (2, 2048, 32, 32)
assert low.shape == (2, 128, 128, 128)
def test_xception_custom():
random_batch = torch.rand((2, 4, 512, 512))
model = xc.XceptionBackbone(in_channels=4,
variant=xc.XceptionVariants.MF08,
output_strides=xc.OutputStrides.OS16,
pretrained=True)
model.eval()
with torch.no_grad():
out = model(random_batch)
assert len(out) == 2
high, low = out
assert high.shape == (2, 2048, 32, 32)
assert low.shape == (2, 128, 128, 128)
| 41.202128 | 109 | 0.593597 | 482 | 3,873 | 4.63278 | 0.134855 | 0.040305 | 0.062696 | 0.048365 | 0.926108 | 0.926108 | 0.916256 | 0.916256 | 0.916256 | 0.885804 | 0 | 0.073331 | 0.29228 | 3,873 | 93 | 110 | 41.645161 | 0.741335 | 0 | 0 | 0.802469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.074074 | false | 0 | 0.024691 | 0 | 0.098765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f481b33f5d1568be7d32e09737b759ad97fc6b58 | 199 | py | Python | extra_tests/snippets/literals.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 11,058 | 2018-05-29T07:40:06.000Z | 2022-03-31T11:38:42.000Z | extra_tests/snippets/literals.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 2,105 | 2018-06-01T10:07:16.000Z | 2022-03-31T14:56:42.000Z | extra_tests/snippets/literals.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 914 | 2018-07-27T09:36:14.000Z | 2022-03-31T19:56:34.000Z | # Integer literals
assert 0b101010 == 42
assert 0B101010 == 42
assert 0o777 == 511
assert 0O777 == 511
assert 0xcafebabe == 3405691582
assert 0Xcafebabe == 3405691582
assert 0xCAFEBABE == 3405691582
| 22.111111 | 31 | 0.768844 | 23 | 199 | 6.652174 | 0.391304 | 0.313725 | 0.509804 | 0.287582 | 0.509804 | 0.509804 | 0 | 0 | 0 | 0 | 0 | 0.386905 | 0.155779 | 199 | 8 | 32 | 24.875 | 0.52381 | 0.080402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165746 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f48547a143775966465863cb09e19dadbf0cf9c7 | 8,163 | py | Python | tests/integration/test_executable_user_defined_function/test.py | zzachimed/ClickHouse | a403f1cd1b2655a60ca196d209ef443ef6d91b39 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_executable_user_defined_function/test.py | zzachimed/ClickHouse | a403f1cd1b2655a60ca196d209ef443ef6d91b39 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_executable_user_defined_function/test.py | zzachimed/ClickHouse | a403f1cd1b2655a60ca196d209ef443ef6d91b39 | [
"Apache-2.0"
] | null | null | null | import os
import sys
import time
import pytest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
from helpers.cluster import ClickHouseCluster
cluster = ClickHouseCluster(__file__)
node = cluster.add_instance("node", stay_alive=True, main_configs=[])
def skip_test_msan(instance):
if instance.is_built_with_memory_sanitizer():
pytest.skip("Memory Sanitizer cannot work with vfork")
def copy_file_to_container(local_path, dist_path, container_id):
os.system(
"docker cp {local} {cont_id}:{dist}".format(
local=local_path, cont_id=container_id, dist=dist_path
)
)
config = """<clickhouse>
<user_defined_executable_functions_config>/etc/clickhouse-server/functions/test_function_config.xml</user_defined_executable_functions_config>
</clickhouse>"""
@pytest.fixture(scope="module")
def started_cluster():
try:
cluster.start()
node.replace_config(
"/etc/clickhouse-server/config.d/executable_user_defined_functions_config.xml",
config,
)
copy_file_to_container(
os.path.join(SCRIPT_DIR, "functions/."),
"/etc/clickhouse-server/functions",
node.docker_id,
)
copy_file_to_container(
os.path.join(SCRIPT_DIR, "user_scripts/."),
"/var/lib/clickhouse/user_scripts",
node.docker_id,
)
node.restart_clickhouse()
yield cluster
finally:
cluster.shutdown()
def test_executable_function_bash(started_cluster):
skip_test_msan(node)
assert node.query("SELECT test_function_bash(toUInt64(1))") == "Key 1\n"
assert node.query("SELECT test_function_bash(1)") == "Key 1\n"
assert node.query("SELECT test_function_pool_bash(toUInt64(1))") == "Key 1\n"
assert node.query("SELECT test_function_pool_bash(1)") == "Key 1\n"
def test_executable_function_python(started_cluster):
skip_test_msan(node)
assert node.query("SELECT test_function_python(toUInt64(1))") == "Key 1\n"
assert node.query("SELECT test_function_python(1)") == "Key 1\n"
assert node.query("SELECT test_function_pool_python(toUInt64(1))") == "Key 1\n"
assert node.query("SELECT test_function_pool_python(1)") == "Key 1\n"
def test_executable_function_send_chunk_header_python(started_cluster):
skip_test_msan(node)
assert (
node.query("SELECT test_function_send_chunk_header_python(toUInt64(1))")
== "Key 1\n"
)
assert node.query("SELECT test_function_send_chunk_header_python(1)") == "Key 1\n"
assert (
node.query("SELECT test_function_send_chunk_header_pool_python(toUInt64(1))")
== "Key 1\n"
)
assert (
node.query("SELECT test_function_send_chunk_header_pool_python(1)") == "Key 1\n"
)
def test_executable_function_sum_python(started_cluster):
skip_test_msan(node)
assert (
node.query("SELECT test_function_sum_python(toUInt64(1), toUInt64(1))") == "2\n"
)
assert node.query("SELECT test_function_sum_python(1, 1)") == "2\n"
assert (
node.query("SELECT test_function_sum_pool_python(toUInt64(1), toUInt64(1))")
== "2\n"
)
assert node.query("SELECT test_function_sum_pool_python(1, 1)") == "2\n"
def test_executable_function_argument_python(started_cluster):
skip_test_msan(node)
assert (
node.query("SELECT test_function_argument_python(toUInt64(1))") == "Key 1 1\n"
)
assert node.query("SELECT test_function_argument_python(1)") == "Key 1 1\n"
assert (
node.query("SELECT test_function_argument_pool_python(toUInt64(1))")
== "Key 1 1\n"
)
assert node.query("SELECT test_function_argument_pool_python(1)") == "Key 1 1\n"
def test_executable_function_signalled_python(started_cluster):
skip_test_msan(node)
assert node.query_and_get_error(
"SELECT test_function_signalled_python(toUInt64(1))"
)
assert node.query_and_get_error("SELECT test_function_signalled_python(1)")
assert node.query_and_get_error(
"SELECT test_function_signalled_pool_python(toUInt64(1))"
)
assert node.query_and_get_error("SELECT test_function_signalled_pool_python(1)")
def test_executable_function_slow_python(started_cluster):
skip_test_msan(node)
assert node.query_and_get_error("SELECT test_function_slow_python(toUInt64(1))")
assert node.query_and_get_error("SELECT test_function_slow_python(1)")
assert node.query_and_get_error(
"SELECT test_function_slow_pool_python(toUInt64(1))"
)
assert node.query_and_get_error("SELECT test_function_slow_pool_python(1)")
def test_executable_function_non_direct_bash(started_cluster):
skip_test_msan(node)
assert node.query("SELECT test_function_non_direct_bash(toUInt64(1))") == "Key 1\n"
assert node.query("SELECT test_function_non_direct_bash(1)") == "Key 1\n"
assert (
node.query("SELECT test_function_non_direct_pool_bash(toUInt64(1))")
== "Key 1\n"
)
assert node.query("SELECT test_function_non_direct_pool_bash(1)") == "Key 1\n"
def test_executable_function_sum_json_python(started_cluster):
skip_test_msan(node)
node.query("CREATE TABLE test_table (lhs UInt64, rhs UInt64) ENGINE=TinyLog;")
node.query("INSERT INTO test_table VALUES (0, 0), (1, 1), (2, 2);")
assert (
node.query("SELECT test_function_sum_json_unnamed_args_python(1, 2);") == "3\n"
)
assert (
node.query(
"SELECT test_function_sum_json_unnamed_args_python(lhs, rhs) FROM test_table;"
)
== "0\n2\n4\n"
)
assert (
node.query("SELECT test_function_sum_json_partially_named_args_python(1, 2);")
== "3\n"
)
assert (
node.query(
"SELECT test_function_sum_json_partially_named_args_python(lhs, rhs) FROM test_table;"
)
== "0\n2\n4\n"
)
assert node.query("SELECT test_function_sum_json_named_args_python(1, 2);") == "3\n"
assert (
node.query(
"SELECT test_function_sum_json_named_args_python(lhs, rhs) FROM test_table;"
)
== "0\n2\n4\n"
)
assert (
node.query("SELECT test_function_sum_json_unnamed_args_pool_python(1, 2);")
== "3\n"
)
assert (
node.query(
"SELECT test_function_sum_json_unnamed_args_pool_python(lhs, rhs) FROM test_table;"
)
== "0\n2\n4\n"
)
    assert (
        node.query("SELECT test_function_sum_json_partially_named_args_pool_python(1, 2);")
        == "3\n"
    )
    assert (
        node.query(
            "SELECT test_function_sum_json_partially_named_args_pool_python(lhs, rhs) FROM test_table;"
        )
        == "0\n2\n4\n"
    )
assert (
node.query("SELECT test_function_sum_json_named_args_pool_python(1, 2);")
== "3\n"
)
assert (
node.query(
"SELECT test_function_sum_json_named_args_pool_python(lhs, rhs) FROM test_table;"
)
== "0\n2\n4\n"
)
node.query("DROP TABLE test_table;")
def test_executable_function_input_nullable_python(started_cluster):
skip_test_msan(node)
node.query(
"CREATE TABLE test_table_nullable (value Nullable(UInt64)) ENGINE=TinyLog;"
)
node.query("INSERT INTO test_table_nullable VALUES (0), (NULL), (2);")
assert (
node.query(
"SELECT test_function_nullable_python(1), test_function_nullable_python(NULL)"
)
== "Key 1\tKey Nullable\n"
)
assert (
node.query(
"SELECT test_function_nullable_python(value) FROM test_table_nullable;"
)
== "Key 0\nKey Nullable\nKey 2\n"
)
assert (
node.query(
"SELECT test_function_nullable_pool_python(1), test_function_nullable_pool_python(NULL)"
)
== "Key 1\tKey Nullable\n"
)
assert (
node.query(
"SELECT test_function_nullable_pool_python(value) FROM test_table_nullable;"
)
== "Key 0\nKey Nullable\nKey 2\n"
)
| 30.68797 | 146 | 0.670587 | 1,091 | 8,163 | 4.657195 | 0.112741 | 0.093879 | 0.141704 | 0.165322 | 0.803779 | 0.761661 | 0.755363 | 0.744735 | 0.730958 | 0.694548 | 0 | 0.024716 | 0.211932 | 8,163 | 265 | 147 | 30.803774 | 0.765117 | 0 | 0 | 0.344498 | 0 | 0 | 0.441504 | 0.280412 | 0 | 0 | 0 | 0 | 0.229665 | 1 | 0.062201 | false | 0 | 0.023923 | 0 | 0.086124 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
be5b5f18b4218666676a16738524b02bdcfb372c | 3,596 | py | Python | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_power_cycle.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | 1 | 2022-01-16T10:00:24.000Z | 2022-01-16T10:00:24.000Z | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_power_cycle.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_power_cycle.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | import logging
import unittest
from unittest.mock import Mock
from genie.libs.clean.stages.stages import PowerCycle
from genie.libs.clean.stages.tests.utils import CommonStageTests, create_test_device
from pyats.aetest.steps import Steps
from pyats.results import Passed, Failed
from pyats.aetest.signals import TerminateStepSignal
# Disable logging. It may be useful to comment this out when developing tests.
logging.disable(logging.CRITICAL)
class Powercycle(unittest.TestCase):
def setUp(self):
# Instantiate class object
self.cls = PowerCycle()
# Instantiate device object. This also sets up commonly needed
# attributes and Mock objects associated with the device.
self.device = create_test_device('PE1', os='iosxe')
def test_pass(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the execute_power_cycle_device api to be mocked with device console output.
# To simulate pass case
self.device.api.execute_power_cycle_device = Mock()
# Call the method to be tested (clean step inside class)
self.cls.powercycle(
steps=steps, device=self.device, sleep_after_power_off=0
)
# Check that the result is expected
self.assertEqual(Passed, steps.details[0].result)
def test_fail_to_do_powercycle(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the execute_power_cycle_device api to raise an exception when called.
# To simulate fail case
self.device.api.execute_power_cycle_device = Mock(side_effect=Exception)
# We expect this step to fail so make sure it raises the signal
with self.assertRaises(TerminateStepSignal):
self.cls.powercycle(
steps=steps, device=self.device, sleep_after_power_off=0
)
# Check the overall result is as expected
self.assertEqual(Failed, steps.details[0].result)
class Reconnect(unittest.TestCase):
def setUp(self):
# Instantiate class object
self.cls = PowerCycle()
# Instantiate device object. This also sets up commonly needed
# attributes and Mock objects associated with the device.
self.device = create_test_device('PE1', os='iosxe')
def test_pass(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the connect method to be mocked with device console output.
# To simulate pass case
self.device.connect = Mock()
# Call the method to be tested (clean step inside class)
self.cls.reconnect(
steps=steps, device=self.device, boot_timeout=5
)
# Check that the result is expected
self.assertEqual(Passed, steps.details[0].result)
def test_fail_to_reconnect(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the connect method to raise an exception when called.
# To simulate fail case
self.device.connect = Mock(side_effect=Exception)
# We expect this step to fail so make sure it raises the signal
with self.assertRaises(TerminateStepSignal):
self.cls.reconnect(
steps=steps, device=self.device, boot_timeout=5
)
# Check the overall result is as expected
self.assertEqual(Failed, steps.details[0].result)
| 34.247619 | 97 | 0.677141 | 471 | 3,596 | 5.091295 | 0.239915 | 0.041701 | 0.040033 | 0.023353 | 0.843203 | 0.817348 | 0.817348 | 0.817348 | 0.817348 | 0.795663 | 0 | 0.003738 | 0.256118 | 3,596 | 104 | 98 | 34.576923 | 0.89271 | 0.389878 | 0 | 0.553191 | 0 | 0 | 0.00739 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 1 | 0.12766 | false | 0.106383 | 0.170213 | 0 | 0.340426 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
fe2be06b1929d4c4abce9d0d90c7f218712e8193 | 3,194 | py | Python | tests/markets/test_data_valid.py | overlay-market/v1-core | e18fabd242f21c243a555712d3f08ca059941f41 | [
"MIT"
] | 3 | 2022-02-17T16:11:39.000Z | 2022-03-10T23:46:19.000Z | tests/markets/test_data_valid.py | overlay-market/v1-core | e18fabd242f21c243a555712d3f08ca059941f41 | [
"MIT"
] | 10 | 2022-01-25T21:49:20.000Z | 2022-03-31T00:32:29.000Z | tests/markets/test_data_valid.py | overlay-market/v1-core | e18fabd242f21c243a555712d3f08ca059941f41 | [
"MIT"
] | 2 | 2022-01-21T01:04:54.000Z | 2022-02-23T08:38:20.000Z | from decimal import Decimal
from math import exp
from .utils import RiskParameter
def test_data_is_valid(market, rando):
tx = market.update({"from": rando})
data = tx.return_value
idx = RiskParameter.PRICE_DRIFT_UPPER_LIMIT.value
_, _, _, _, price_macro_now, price_macro_ago, _, _ = data
drift = (market.params(idx) / Decimal(1e18))
dp = price_macro_now / price_macro_ago
dp_lower_limit = exp(-drift * 3000)
dp_upper_limit = exp(drift * 3000)
expect = (dp >= dp_lower_limit and dp <= dp_upper_limit)
actual = market.dataIsValid(data)
assert expect == actual
def test_data_is_valid_when_dp_less_than_lower_limit(market):
tol = 1e-04
idx = RiskParameter.PRICE_DRIFT_UPPER_LIMIT.value
drift = (market.params(idx) / Decimal(1e18))
price_now = 2562676671798193257266
# check data is not valid when price is less than lower limit
pow = Decimal(drift) * Decimal(3000) * Decimal(1+tol)
price_ago = int(price_now * exp(pow))
data = (1643583611, 600, 3000, 2569091057405103628119,
price_now, price_ago,
4677792160494647834844974, True)
expect = False
actual = market.dataIsValid(data)
assert expect == actual
# check data is valid when price is just above the lower limit
pow = Decimal(drift) * Decimal(3000) * Decimal(1-tol)
price_ago = int(price_now * exp(pow))
data = (1643583611, 600, 3000, 2569091057405103628119,
price_now, price_ago,
4677792160494647834844974, True)
expect = True
actual = market.dataIsValid(data)
assert expect == actual
def test_data_is_valid_when_dp_greater_than_upper_limit(market):
tol = 1e-04
idx = RiskParameter.PRICE_DRIFT_UPPER_LIMIT.value
drift = (market.params(idx) / Decimal(1e18))
price_ago = 2562676671798193257266
# check data is not valid when price is greater than upper limit
pow = Decimal(drift) * Decimal(3000) * Decimal(1+tol)
price_now = int(price_ago * exp(pow))
data = (1643583611, 600, 3000, 2569091057405103628119,
price_now, price_ago,
4677792160494647834844974, True)
expect = False
actual = market.dataIsValid(data)
assert expect == actual
# check data is valid when price is just below the upper limit
pow = Decimal(drift) * Decimal(3000) * Decimal(1-tol)
price_now = int(price_ago * exp(pow))
data = (1643583611, 600, 3000, 2569091057405103628119,
price_now, price_ago,
4677792160494647834844974, True)
expect = True
actual = market.dataIsValid(data)
assert expect == actual
def test_data_is_valid_when_price_now_is_zero(market):
data = (1643583611, 600, 3000, 2569091057405103628119,
0, 2565497026032266989873,
4677792160494647834844974, True)
expect = False
actual = market.dataIsValid(data)
assert expect == actual
def test_data_is_valid_when_price_ago_is_zero(market):
data = (1643583611, 600, 3000, 2569091057405103628119,
2565497026032266989873, 0,
4677792160494647834844974, True)
expect = False
actual = market.dataIsValid(data)
assert expect == actual
| 31.94 | 68 | 0.690983 | 392 | 3,194 | 5.408163 | 0.158163 | 0.037736 | 0.036321 | 0.089151 | 0.842925 | 0.834434 | 0.795283 | 0.775943 | 0.724057 | 0.675 | 0 | 0.208552 | 0.223857 | 3,194 | 99 | 69 | 32.262626 | 0.646632 | 0.076393 | 0 | 0.732394 | 0 | 0 | 0.001358 | 0 | 0 | 0 | 0 | 0 | 0.098592 | 1 | 0.070423 | false | 0 | 0.042254 | 0 | 0.112676 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fe7121d8a53a43acdc9169e7aa8a4664880d2607 | 22,944 | py | Python | Yellow_Pages_Saint_Kitts_And_Nevis/unit_tests.py | Jay4C/Web-Scraping | 187679bee035dad661d983b5a8382240f820c337 | [
"MIT"
] | 1 | 2022-02-28T05:05:06.000Z | 2022-02-28T05:05:06.000Z | Yellow_Pages_Saint_Kitts_And_Nevis/unit_tests.py | Jay4C/Web-Scraping | 187679bee035dad661d983b5a8382240f820c337 | [
"MIT"
] | 23 | 2020-03-04T22:17:32.000Z | 2021-01-21T09:35:33.000Z | Yellow_Pages_Saint_Kitts_And_Nevis/unit_tests.py | Jay4C/Web-Scraping | 187679bee035dad661d983b5a8382240f820c337 | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
import requests
import time
import pymysql.cursors
import unittest
class UnitTestsDataMinerYellowPagesSaintKittsAndNevis(unittest.TestCase):
def test_extract_email_from_one_result(self):
print("test_extract_email_from_one_result")
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103'
}
url = "https://www.findyello.com/st-kitts/royal-st-kitts-hotel-casino/profile/"
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("li", {'class': 'profile-menu-website'}) is not None:
email = "info@" + soup \
.find("li", {'class': 'profile-menu-website'}) \
.find("a") \
.get("href") \
.replace('https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=', '') \
.replace('www.', '') \
.replace('%2F', '') \
.split('/')[0]
print("email : " + email)
else:
print("no email business")
def test_extract_each_email_from_one_page_of_results_for_one_activity_and_one_capital(self):
print("test_extract_each_email_from_one_page_of_results_for_one_activity_and_one_capital")
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103'
}
url = "https://www.findyello.com/st-kitts/?search=hotel&pageno=1"
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("h2", {'class': 'h4'}) is not None:
all_a = soup.find_all("h2", {'class': 'h4'})
for a in all_a:
url = "https://www.findyello.com" + a.find('a').get('href')
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("li", {'class': 'profile-menu-website'}) is not None:
email = "info@" + soup \
.find("li", {'class': 'profile-menu-website'}) \
.find("a") \
.get("href") \
.replace('https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=', '') \
.replace('www.', '') \
.replace('%2F', '') \
.split('/')[0]
print("email : " + email)
else:
print("no email business")
else:
print("no a class companyName")
def test_extract_each_email_from_all_pages_of_results_for_one_activity_and_one_capital(self):
print("test_extract_each_email_from_all_pages_of_results_for_one_activity_and_one_capital")
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103'
}
activity = "hotel"
city = "st-kitts"
number_of_pages = 0
url_page = "https://www.findyello.com/" + city + "/?search=" + activity
time.sleep(2)
html_search = requests.get(url_page, headers=headers)
soup = BeautifulSoup(html_search.content, 'html.parser')
if soup.find("p", {"class": "strong small lighter"}) is not None:
number_of_pages_with_coma = int(soup.find("p", {"class": "strong small lighter"})
.text
.split("of")[1]
.replace(" ", "")
.replace("Results", "")
) / 15
if int(str(number_of_pages_with_coma).split(".")[1][:1]) < 5:
number_of_pages += round(number_of_pages_with_coma) + 1
print('number_of_pages : ' + str(number_of_pages))
elif int(str(number_of_pages_with_coma).split(".")[1][:1]) >= 5:
number_of_pages += round(number_of_pages_with_coma)
print('number_of_pages : ' + str(number_of_pages))
else:
print("error pages")
i_1 = 0
if number_of_pages > 1:
for i in range(1, number_of_pages + 1):
if i < 20:
url = url_page + "&pageno=" + str(i)
print(url)
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("h2", {'class': 'h4'}) is not None:
all_a = soup.find_all("h2", {'class': 'h4'})
for a in all_a:
i_1 += 1
url = "https://www.findyello.com" + a.find('a').get('href')
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("li", {'class': 'profile-menu-website'}) is not None:
email = "info@" + soup \
.find("li", {'class': 'profile-menu-website'}) \
.find("a") \
.get("href") \
.replace('https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=', '') \
.replace('www.', '') \
.replace('%2F', '') \
.split('/')[0]
print(str(i_1) + " email : " + email)
else:
print(str(i_1) + " no email business")
else:
print("no a class companyName")
else:
print(url_page)
if soup.find("h2", {'class': 'h4'}) is not None:
all_a = soup.find_all("h2", {'class': 'h4'})
for a in all_a:
i_1 += 1
url = "https://www.findyello.com" + a.find('a').get('href')
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
if soup.find("li", {'class': 'profile-menu-website'}) is not None:
email = "info@" + soup \
.find("li", {'class': 'profile-menu-website'}) \
.find("a") \
.get("href") \
.replace('https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
.replace('https://www.findyello.com/redirector.php?yelref=', '') \
.replace('www.', '') \
.replace('%2F', '') \
.split('/')[0]
print(str(i_1) + " email : " + email)
else:
print(str(i_1) + " no email business")
else:
print("no a class companyName")
def test_extract_each_email_from_all_pages_of_results_for_all_activities_and_all_capitals(self):
print("test_extract_each_email_from_all_pages_of_results_for_all_activities_and_all_capitals")
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103'
}
activites = [
{'id': '1', 'url': 'employment'}, # Temporary employment agencies
{'id': '2', 'url': 'real-estate'}, # Real estate
{'id': '3', 'url': 'recruitment'}, # Recruiter
{'id': '4', 'url': 'software'}, # software
{'id': '5', 'url': 'hotel'}, # hotel
{'id': '6', 'url': 'social'}, # social landlord
{'id': '7', 'url': 'cleaning'}, # cleaning
{'id': '8', 'url': 'charity'}, # charity
{'id': '9', 'url': 'financial'}, # financial
{'id': '10', 'url': 'restaurant'}, # restaurant
{'id': '11', 'url': 'building'}, # building
{'id': '12', 'url': 'hairdresser'}, # hairdresser
{'id': '13', 'url': 'florist'}, # florist
{'id': '14', 'url': 'locksmith'}, # locksmith
{'id': '15', 'url': 'bakery'}, # bakery
{'id': '16', 'url': 'insurance'}, # insurance
{'id': '17', 'url': 'pharmacy'}, # pharmacy
{'id': '18', 'url': 'mover'}, # mover
{'id': '19', 'url': 'electricity'}, # electricity
{'id': '20', 'url': 'plumbing'}, # plumbing
{'id': '21', 'url': 'security'}, # security
{'id': '22', 'url': 'attorney'}, # attorney
{'id': '23', 'url': 'bank'}, # bank
{'id': '24', 'url': 'garage'}, # garage
{'id': '25', 'url': 'dentist'}, # dentist
{'id': '26', 'url': 'doctor'}, # doctor
{'id': '27', 'url': 'accountant'}, # accountant
{'id': '28', 'url': 'grocery'}, # grocery stores
{'id': '29', 'url': 'notary'}, # notary
{'id': '30', 'url': 'jewellery'}, # jewellery
{'id': '31', 'url': 'tailor'}, # tailor
{'id': '32', 'url': 'butcher'}, # butcher
{'id': '33', 'url': 'library'}, # library
{'id': '34', 'url': 'architect'}, # architect
{'id': '36', 'url': 'cement'}, # cement
{'id': '37', 'url': 'heating'}, # heating
{'id': '38', 'url': 'boat'}, # boat
{'id': '39', 'url': 'cold'}, # cold
{'id': '41', 'url': 'steel'}, # steel
{'id': '42', 'url': 'chemical'}, # chemical
{'id': '43', 'url': 'gas'}, # gas
{'id': '44', 'url': 'gold'} # gold
]
capitales_du_monde = [
{'id': '948', 'nom': 'st-kitts', 'pays': 'st-kitts'},
]
try:
for capitale in capitales_du_monde:
for activite in activites:
activity = activite.get('url')
city = capitale.get('nom')
number_of_pages = 0
url_page = "https://www.findyello.com/" + city + "/?search=" + activity
time.sleep(2)
html_search = requests.get(url_page, headers=headers)
soup = BeautifulSoup(html_search.content, 'html.parser')
if soup.find("p", {"class": "strong small lighter"}) is not None:
number_of_pages_with_coma = int(soup.find("p", {"class": "strong small lighter"})
.text
.split("of")[1]
.replace(" ", "")
.replace("Results", "")
) / 15
if int(str(number_of_pages_with_coma).split(".")[1][:1]) < 5:
number_of_pages += round(number_of_pages_with_coma) + 1
print('number_of_pages : ' + str(number_of_pages))
elif int(str(number_of_pages_with_coma).split(".")[1][:1]) >= 5:
number_of_pages += round(number_of_pages_with_coma)
print('number_of_pages : ' + str(number_of_pages))
else:
print("error pages")
i_1 = 0
if number_of_pages > 1:
for i in range(1, number_of_pages + 1):
if i < 20:
url = url_page + "&pageno=" + str(i)
print(url)
time.sleep(2)
# Request the content of a page from the url
html = requests.get(url, headers=headers)
# Parse the content of html_doc
soup = BeautifulSoup(html.content, 'html.parser')
                if soup.find("h2", {'class': 'h4'}) is not None:
                    all_a = soup.find_all("h2", {'class': 'h4'})
                    for a in all_a:
                        i_1 += 1
                        url = "https://www.findyello.com" + a.find('a').get('href')
                        time.sleep(2)
                        # Request the content of a page from the url
                        html = requests.get(url, headers=headers)
                        # Parse the content of html_doc
                        soup = BeautifulSoup(html.content, 'html.parser')
                        if soup.find("li", {'class': 'profile-menu-website'}) is not None:
                            email = "info@" + soup \
                                .find("li", {'class': 'profile-menu-website'}) \
                                .find("a") \
                                .get("href") \
                                .replace(
                                    'https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
                                .replace(
                                    'https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
                                .replace('https://www.findyello.com/redirector.php?yelref=', '') \
                                .replace('www.', '') \
                                .replace('%2F', '') \
                                .split('/')[0]
                            print(str(i_1) + " email : " + email)
                            try:
                                connection = pymysql.connect(
                                    host='localhost',
                                    port=3306,
                                    user='',
                                    password='',
                                    db='contacts_professionnels',
                                    charset='utf8mb4',
                                    cursorclass=pymysql.cursors.DictCursor
                                )
                                with connection.cursor() as cursor:
                                    try:
                                        sql = "INSERT INTO `emails` (" \
                                              "`id_activite`, " \
                                              "`id_capitale_du_monde`, " \
                                              "`email`) VALUE (%s, %s, %s)"
                                        cursor.execute(sql, (
                                            activite.get('id'),
                                            capitale.get('id'),
                                            email))
                                        connection.commit()
                                        print(str(i_1)
                                              + " The record is stored : "
                                              + email)
                                        connection.close()
                                    except Exception as e:
                                        print(str(i_1)
                                              + " The record already exists : "
                                              + email
                                              + " " + str(e))
                                        connection.close()
                            except Exception as e:
                                print(str(i_1) + " An error with the email : " + email + " " + str(e))
                        else:
                            print(str(i_1) + " no email business")
                else:
                    print("no a class companyName")
            else:
                print(url_page)
                if soup.find("h2", {'class': 'h4'}) is not None:
                    all_a = soup.find_all("h2", {'class': 'h4'})
                    for a in all_a:
                        i_1 += 1
                        url = "https://www.findyello.com" + a.find('a').get('href')
                        time.sleep(2)
                        # Request the content of a page from the url
                        html = requests.get(url, headers=headers)
                        # Parse the content of html_doc
                        soup = BeautifulSoup(html.content, 'html.parser')
                        if soup.find("li", {'class': 'profile-menu-website'}) is not None:
                            email = "info@" + soup \
                                .find("li", {'class': 'profile-menu-website'}) \
                                .find("a") \
                                .get("href") \
                                .replace('https://www.findyello.com/redirector.php?yelref=https%3A%2F%2F', '') \
                                .replace('https://www.findyello.com/redirector.php?yelref=http%3A%2F%2F', '') \
                                .replace('https://www.findyello.com/redirector.php?yelref=', '') \
                                .replace('www.', '') \
                                .replace('%2F', '') \
                                .split('/')[0]
                            print(str(i_1) + " email : " + email)
                            try:
                                connection = pymysql.connect(
                                    host='localhost',
                                    port=3306,
                                    user='',
                                    password='',
                                    db='contacts_professionnels',
                                    charset='utf8mb4',
                                    cursorclass=pymysql.cursors.DictCursor
                                )
                                with connection.cursor() as cursor:
                                    try:
                                        sql = "INSERT INTO `emails` (" \
                                              "`id_activite`, " \
                                              "`id_capitale_du_monde`, " \
                                              "`email`) VALUE (%s, %s, %s)"
                                        cursor.execute(sql, (
                                            activite.get('id'),
                                            capitale.get('id'),
                                            email))
                                        connection.commit()
                                        print(str(i_1)
                                              + " The record is stored : "
                                              + email)
                                        connection.close()
                                    except Exception as e:
                                        print(str(i_1)
                                              + " The record already exists : "
                                              + email
                                              + " " + str(e))
                                        connection.close()
                            except Exception as e:
                                print(str(i_1) + " An error with the email : " + email + " " + str(e))
                        else:
                            print(str(i_1) + " no email business")
                else:
                    print("no a class companyname")
        except Exception as e:
            print("error : " + str(e))


if __name__ == '__main__':
    unittest.main()
| 48.817021 | 120 | 0.372341 | 1,928 | 22,944 | 4.300311 | 0.129668 | 0.027017 | 0.043903 | 0.065131 | 0.838379 | 0.835605 | 0.828248 | 0.828248 | 0.828248 | 0.828248 | 0 | 0.028195 | 0.503792 | 22,944 | 469 | 121 | 48.921109 | 0.700044 | 0.045284 | 0 | 0.771831 | 0 | 0.047887 | 0.196438 | 0.017121 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011268 | false | 0.005634 | 0.014085 | 0 | 0.028169 | 0.107042 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
feaf549a6b49f1b2409da1aac9365eaefd2fcb6b | 152 | py | Python | meiduo_mall1/celery_tasks/config.py | songhaokk/SH | e0263ba51aa81f79e3473314cdb952ff2aabb6cd | [
"MIT"
] | 1 | 2019-10-24T03:30:07.000Z | 2019-10-24T03:30:07.000Z | meiduo_mall1/celery_tasks/config.py | songhaokk/SH | e0263ba51aa81f79e3473314cdb952ff2aabb6cd | [
"MIT"
] | null | null | null | meiduo_mall1/celery_tasks/config.py | songhaokk/SH | e0263ba51aa81f79e3473314cdb952ff2aabb6cd | [
"MIT"
] | null | null | null |
broker_url = "redis://127.0.0.1/14"
result_backend = "redis://127.0.0.1/15"
broker_url = "redis://127.0.0.1/14"
result_backend = "redis://127.0.0.1/15" | 30.4 | 39 | 0.657895 | 32 | 152 | 3 | 0.3125 | 0.333333 | 0.375 | 0.416667 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.228571 | 0.078947 | 152 | 5 | 40 | 30.4 | 0.457143 | 0 | 0 | 1 | 0 | 0 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
2290e4b60991083c8ca8db3553f944d5cb115c84 | 126 | py | Python | tests/test_data/test_data_utils/test_adj_matrix.py | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | 2 | 2021-12-13T17:52:17.000Z | 2021-12-13T17:52:18.000Z | tests/test_data/test_data_utils/test_adj_matrix.py | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | 4 | 2020-10-06T21:06:15.000Z | 2020-10-10T01:18:23.000Z | tests/test_data/test_data_utils/test_adj_matrix.py | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | null | null | null | from caldera.data.utils import adj_matrix


def test_adj_matrix(random_data_example):
    M = adj_matrix(random_data_example)
| 21 | 41 | 0.81746 | 20 | 126 | 4.75 | 0.6 | 0.284211 | 0.315789 | 0.4 | 0.547368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 126 | 5 | 42 | 25.2 | 0.855856 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a3a9eb0da636e63be0acab9ce7eb3b09a7d6817f | 4,783 | py | Python | examples/stochastic/visualize_simple.py | JamesBrofos/Adaptive-Normalizing-Flow-Chains | bfcbbf622b3c8472f28a46d33fa71030c6d7cece | [
"MIT"
] | null | null | null | examples/stochastic/visualize_simple.py | JamesBrofos/Adaptive-Normalizing-Flow-Chains | bfcbbf622b3c8472f28a46d33fa71030c6d7cece | [
"MIT"
] | null | null | null | examples/stochastic/visualize_simple.py | JamesBrofos/Adaptive-Normalizing-Flow-Chains | bfcbbf622b3c8472f28a46d33fa71030c6d7cece | [
"MIT"
] | null | null | null | import os
import pickle
import arviz
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as spst
import targets
with open(os.path.join('samples', 'haario-target-multimodal-num-samples-100000.pkl'), 'rb') as f:
    h = pickle.load(f)
with open(os.path.join('samples', 'adaptive-target-multimodal-step-size-0.001-step-size-decay-0.9999-num-samples-100000.pkl'), 'rb') as f:
    a = pickle.load(f)
with open(os.path.join('samples', 'langevin-target-multimodal-step-size-0.1-num-samples-100000.pkl'), 'rb') as f:
    l = pickle.load(f)

burn = 1000
ess = [
    np.array([arviz.ess(h['samples'][burn:, i]) for i in range(2)]),
    np.array([arviz.ess(a['samples'][burn:, i]) for i in range(2)]),
    np.array([arviz.ess(l['samples'][burn:, i]) for i in range(2)]),
]
ess_per_sec = [
    np.array([arviz.ess(h['samples'][burn:, i]) for i in range(2)]) / h['time'],
    np.array([arviz.ess(a['samples'][burn:, i]) for i in range(2)]) / a['time'],
    np.array([arviz.ess(l['samples'][burn:, i]) for i in range(2)]) / l['time'],
]

plt.figure()
plt.boxplot(ess_per_sec, vert=False)
plt.grid(linestyle=':')
plt.yticks([1, 2, 3], ['Haario\n(R.W.M.)', 'Pseudo-Likelihood\n(I.M.H.)', 'Langevin'], fontsize=20)
plt.xlabel('Effective Sample Size per Second', fontsize=20)
plt.tight_layout()
plt.savefig(os.path.join('images', 'multimodal-ess-per-sec.png'))

target = targets.multimodal_target()[0]
iid = np.array([target.sample() for _ in range(100000)])
ks = []
for m in (h, a, l):
    stats = np.zeros(100)
    for i in range(len(stats)):
        u = np.random.normal(size=(2, ))
        u = u / np.linalg.norm(u)
        stats[i] = spst.ks_2samp(m['samples']@u, iid@u).statistic
    ks.append(stats)

plt.figure()
plt.boxplot(ks, vert=False)
plt.grid(linestyle=':')
plt.yticks([1, 2, 3], ['Haario\n(R.W.M.)', 'Pseudo-Likelihood\n(I.M.H.)', 'Langevin'], fontsize=20)
plt.xlabel('Kolmogorov-Smirnov Statistic', fontsize=20)
plt.tight_layout()
plt.savefig(os.path.join('images', 'multimodal-ks.png'))

num_samples = 100000
w = 1000
r = np.arange(num_samples) + 1
plt.figure()
plt.plot(r, pd.Series(h['ap']).rolling(window=w).mean(), label='Haario')
plt.plot(r, pd.Series(a['ap']).rolling(window=w).mean(), label='Pseudo-Likelihood')
plt.legend(fontsize=20)
plt.grid(linestyle=':')
plt.xlabel('Sampling Iteration', fontsize=20)
plt.ylabel('Acceptance Probability', fontsize=20)
plt.savefig(os.path.join('images', 'multimodal-ap.png'))

with open(os.path.join('samples', 'haario-target-neal-funnel-num-samples-100000.pkl'), 'rb') as f:
    h = pickle.load(f)
with open(os.path.join('samples', 'adaptive-target-neal-funnel-step-size-0.001-step-size-decay-0.9999-num-samples-100000.pkl'), 'rb') as f:
    a = pickle.load(f)
with open(os.path.join('samples', 'langevin-target-neal-funnel-step-size-0.1-num-samples-100000.pkl'), 'rb') as f:
    l = pickle.load(f)

target = targets.neal_funnel_target()[0]
iid = np.array([target.sample() for _ in range(100000)])
ks = []
for m in (h, a, l):
    stats = np.zeros(100)
    for i in range(len(stats)):
        u = np.random.normal(size=(2, ))
        u = u / np.linalg.norm(u)
        stats[i] = spst.ks_2samp(m['samples']@u, iid@u).statistic
    ks.append(stats)

plt.figure()
plt.boxplot(ks, vert=False)
plt.grid(linestyle=':')
plt.yticks([1, 2, 3], ['Haario\n(R.W.M.)', 'Pseudo-Likelihood\n(I.M.H.)', 'Langevin'], fontsize=20)
plt.xlabel('Kolmogorov-Smirnov Statistic', fontsize=20)
plt.tight_layout()
plt.savefig(os.path.join('images', 'neal-funnel-ks.png'))

num_samples = 100000
w = 1000
r = np.arange(num_samples) + 1
plt.figure()
plt.plot(r, pd.Series(h['ap']).rolling(window=w).mean(), label='Haario')
plt.plot(r, pd.Series(a['ap']).rolling(window=w).mean(), label='Pseudo-Likelihood')
plt.legend(fontsize=20)
plt.grid(linestyle=':')
plt.xlabel('Sampling Iteration', fontsize=20)
plt.ylabel('Acceptance Probability', fontsize=20)
plt.savefig(os.path.join('images', 'neal-funnel-ap.png'))

burn = 1000
ess = [
    np.array([arviz.ess(h['samples'][burn:, i]) for i in range(2)]),
    np.array([arviz.ess(a['samples'][burn:, i]) for i in range(2)]),
    np.array([arviz.ess(l['samples'][burn:, i]) for i in range(2)]),
]
ess_per_sec = [
    np.array([arviz.ess(h['samples'][burn:, i]) for i in range(2)]) / h['time'],
    np.array([arviz.ess(a['samples'][burn:, i]) for i in range(2)]) / a['time'],
    np.array([arviz.ess(l['samples'][burn:, i]) for i in range(2)]) / l['time'],
]

plt.figure()
plt.boxplot(ess_per_sec, vert=False)
plt.grid(linestyle=':')
plt.yticks([1, 2, 3], ['Haario\n(R.W.M.)', 'Pseudo-Likelihood\n(I.M.H.)', 'Langevin'], fontsize=20)
plt.xlabel('Effective Sample Size per Second', fontsize=20)
plt.tight_layout()
plt.savefig(os.path.join('images', 'neal-funnel-ess-per-sec.png'))
| 35.962406 | 139 | 0.659628 | 813 | 4,783 | 3.852399 | 0.135301 | 0.03576 | 0.02682 | 0.04917 | 0.92848 | 0.922095 | 0.915709 | 0.912516 | 0.885696 | 0.885696 | 0 | 0.038817 | 0.116663 | 4,783 | 132 | 140 | 36.234848 | 0.702485 | 0 | 0 | 0.767857 | 0 | 0.035714 | 0.25047 | 0.117081 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
432e7b5a440bd7890ef382f4f9f02a1361faee8a | 170 | py | Python | zvt/recorders/tonglian/fundamental/__init__.py | markqiu/zvt | 1bcfb71279f2652c3600f0f8e45d941f98ceaa10 | [
"MIT"
] | 6 | 2020-09-03T10:02:00.000Z | 2021-02-04T02:51:47.000Z | zvt/recorders/tonglian/fundamental/__init__.py | wlwd13303/zvt | 23105a5bfdc3a5080c6c22d11e9e53d216688dea | [
"MIT"
] | null | null | null | zvt/recorders/tonglian/fundamental/__init__.py | wlwd13303/zvt | 23105a5bfdc3a5080c6c22d11e9e53d216688dea | [
"MIT"
] | 2 | 2020-07-08T04:15:40.000Z | 2021-06-08T08:51:31.000Z | # -*- coding: utf-8 -*-
from zvt.recorders.tonglian.fundamental.etf_valuation_recorder import *
from zvt.recorders.tonglian.fundamental.stock_valuation_recorder import * | 42.5 | 73 | 0.811765 | 21 | 170 | 6.380952 | 0.619048 | 0.104478 | 0.238806 | 0.358209 | 0.522388 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 0.076471 | 170 | 4 | 73 | 42.5 | 0.847134 | 0.123529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4a3a1d2ea05694ef4c04d2eb56d46e463647f38f | 36,545 | py | Python | test/azure/low-level/Expected/AcceptanceTests/PagingLowLevel/paginglowlevel/rest/paging/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | null | null | null | test/azure/low-level/Expected/AcceptanceTests/PagingLowLevel/paginglowlevel/rest/paging/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | null | null | null | test/azure/low-level/Expected/AcceptanceTests/PagingLowLevel/paginglowlevel/rest/paging/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | 1 | 2022-03-28T08:58:03.000Z | 2022-03-28T08:58:03.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, Dict, Optional
from azure.core.rest import HttpRequest
from msrest import Serializer
from ..._vendor import _format_url_section
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False


def build_get_no_item_name_pages_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that must return result of the default 'value' node.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "value": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/noitemname"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_null_next_link_name_pages_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that must ignore any kind of nextLink, and stop after page 1.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/nullnextlink"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_single_pages_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that finishes on the first call without a nextlink.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/single"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_first_response_empty_request(**kwargs: Any) -> HttpRequest:
    """A paging operation whose first response's items list is empty, but still returns a next link.
    Second (and final) call, will give you an items list of 1.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "value": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/firstResponseEmpty/1"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_request(
    *,
    client_request_id: Optional[str] = None,
    maxresults: Optional[int] = None,
    timeout: Optional[int] = 30,
    **kwargs: Any
) -> HttpRequest:
    """A paging operation that includes a nextLink that has 10 pages.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :keyword client_request_id:
    :paramtype client_request_id: str
    :keyword maxresults: Sets the maximum number of items to return in the response.
    :paramtype maxresults: int
    :keyword timeout: Sets the maximum time that the server can spend processing the request, in
     seconds. The default is 30 seconds.
    :paramtype timeout: int
    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    if client_request_id is not None:
        header_parameters["client-request-id"] = _SERIALIZER.header("client_request_id", client_request_id, "str")
    if maxresults is not None:
        header_parameters["maxresults"] = _SERIALIZER.header("maxresults", maxresults, "int")
    if timeout is not None:
        header_parameters["timeout"] = _SERIALIZER.header("timeout", timeout, "int")
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_with_query_params_request(*, required_query_parameter: int, **kwargs: Any) -> HttpRequest:
    """A paging operation that includes a next operation. It has a different query parameter from it's
    next operation nextOperationWithQueryParams. Returns a ProductResult.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :keyword query_constant: A constant. Must be True and will be passed as a query parameter to
     nextOperationWithQueryParams. The default value is True. Note that overriding this default
     value may result in unsupported behavior.
    :paramtype query_constant: bool
    :keyword required_query_parameter: A required integer query parameter. Put in value '100' to
     pass test.
    :paramtype required_query_parameter: int
    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    query_constant = kwargs.pop("query_constant", True) # type: bool
    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/getWithQueryParams"

    # Construct parameters
    query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
    query_parameters["requiredQueryParameter"] = _SERIALIZER.query(
        "required_query_parameter", required_query_parameter, "int"
    )
    query_parameters["queryConstant"] = _SERIALIZER.query("query_constant", query_constant, "bool")

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)


def build_next_operation_with_query_params_request(**kwargs: Any) -> HttpRequest:
    """Next operation for getWithQueryParams. Pass in next=True to pass test. Returns a ProductResult.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :keyword query_constant: A constant. Must be True. The default value is True. Note that
     overriding this default value may result in unsupported behavior.
    :paramtype query_constant: bool
    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    query_constant = kwargs.pop("query_constant", True) # type: bool
    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/nextOperationWithQueryParams"

    # Construct parameters
    query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
    query_parameters["queryConstant"] = _SERIALIZER.query("query_constant", query_constant, "bool")

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)


def build_get_odata_multiple_pages_request(
    *,
    client_request_id: Optional[str] = None,
    maxresults: Optional[int] = None,
    timeout: Optional[int] = 30,
    **kwargs: Any
) -> HttpRequest:
    """A paging operation that includes a nextLink in odata format that has 10 pages.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :keyword client_request_id:
    :paramtype client_request_id: str
    :keyword maxresults: Sets the maximum number of items to return in the response.
    :paramtype maxresults: int
    :keyword timeout: Sets the maximum time that the server can spend processing the request, in
     seconds. The default is 30 seconds.
    :paramtype timeout: int
    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "odata.nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/odata"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    if client_request_id is not None:
        header_parameters["client-request-id"] = _SERIALIZER.header("client_request_id", client_request_id, "str")
    if maxresults is not None:
        header_parameters["maxresults"] = _SERIALIZER.header("maxresults", maxresults, "int")
    if timeout is not None:
        header_parameters["timeout"] = _SERIALIZER.header("timeout", timeout, "int")
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_with_offset_request(
    offset: int,
    *,
    client_request_id: Optional[str] = None,
    maxresults: Optional[int] = None,
    timeout: Optional[int] = 30,
    **kwargs: Any
) -> HttpRequest:
    """A paging operation that includes a nextLink that has 10 pages.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :param offset: Offset of return value.
    :type offset: int
    :keyword client_request_id:
    :paramtype client_request_id: str
    :keyword maxresults: Sets the maximum number of items to return in the response.
    :paramtype maxresults: int
    :keyword timeout: Sets the maximum time that the server can spend processing the request, in
     seconds. The default is 30 seconds.
    :paramtype timeout: int
    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/withpath/{offset}"
    path_format_arguments = {
        "offset": _SERIALIZER.url("offset", offset, "int"),
    }

    url = _format_url_section(url, **path_format_arguments)

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    if client_request_id is not None:
        header_parameters["client-request-id"] = _SERIALIZER.header("client_request_id", client_request_id, "str")
    if maxresults is not None:
        header_parameters["maxresults"] = _SERIALIZER.header("maxresults", maxresults, "int")
    if timeout is not None:
        header_parameters["timeout"] = _SERIALIZER.header("timeout", timeout, "int")
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_retry_first_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that fails on the first call with 500 and then retries and then get a
    response including a nextLink that has 10 pages.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/retryfirst"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_retry_second_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that includes a nextLink that has 10 pages, of which the 2nd call fails
    first with 500. The client should retry and finish all 10 pages eventually.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/retrysecond"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_single_pages_failure_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that receives a 400 on the first call.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/single/failure"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_failure_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that receives a 400 on the second call.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/failure"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)


def build_get_multiple_pages_failure_uri_request(**kwargs: Any) -> HttpRequest:
    """A paging operation that receives an invalid nextLink.

    See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
    into your code flow.

    :return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
     `send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
     incorporate this response into your code flow.
    :rtype: ~azure.core.rest.HttpRequest

    Example:
        .. code-block:: python

            # response body for status code(s): 200
            response.json() == {
                "nextLink": "str", # Optional.
                "values": [
                    {
                        "properties": {
                            "id": 0, # Optional.
                            "name": "str" # Optional.
                        }
                    }
                ]
            }
    """

    accept = "application/json"
    # Construct URL
    url = "/paging/multiple/failureuri"

    # Construct headers
    header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
    header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)
def build_get_multiple_pages_fragment_next_link_request(tenant: str, *, api_version: str, **kwargs: Any) -> HttpRequest:
"""A paging operation that doesn't return a full URL, just a fragment.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:param tenant: Sets the tenant to use.
:type tenant: str
:keyword api_version: Sets the api version to use.
:paramtype api_version: str
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response.json() == {
"odata.nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/multiple/fragment/{tenant}"
path_format_arguments = {
"tenant": _SERIALIZER.url("tenant", tenant, "str"),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters["api_version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)
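The builders above substitute path parameters into the URL template via the imported `_format_url_section` helper (defined elsewhere in the package). As a rough illustration only — the real helper's signature may differ — the substitution with percent-encoding and a `skip_quote` escape hatch could be sketched like this:

```python
from urllib.parse import quote

def format_url_section(template, skip_quote=(), **path_args):
    # Replace each {name} placeholder, percent-encoding the value unless the
    # caller opts out (mirroring the skip_quote=True used for nextLink parts).
    for name, value in path_args.items():
        encoded = value if name in skip_quote else quote(str(value), safe="")
        template = template.replace("{" + name + "}", encoded)
    return template

url = format_url_section("/paging/multiple/fragment/{tenant}", tenant="contoso tenant")
print(url)  # → /paging/multiple/fragment/contoso%20tenant
```

Skipping quoting for `nextLink` matters because the fragment returned by the service is already a URL-encoded path segment and must not be double-encoded.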
def build_get_multiple_pages_fragment_with_grouping_next_link_request(
tenant: str, *, api_version: str, **kwargs: Any
) -> HttpRequest:
"""A paging operation that doesn't return a full URL, just a fragment with parameters grouped.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:param tenant: Sets the tenant to use.
:type tenant: str
:keyword api_version: Sets the api version to use.
:paramtype api_version: str
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response.json() == {
"odata.nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/multiple/fragmentwithgrouping/{tenant}"
path_format_arguments = {
"tenant": _SERIALIZER.url("tenant", tenant, "str"),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters["api_version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)
def build_get_multiple_pages_lro_request(
*,
client_request_id: Optional[str] = None,
maxresults: Optional[int] = None,
timeout: Optional[int] = 30,
**kwargs: Any
) -> HttpRequest:
"""A long-running paging operation that includes a nextLink that has 10 pages.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword client_request_id:
:paramtype client_request_id: str
:keyword maxresults: Sets the maximum number of items to return in the response.
:paramtype maxresults: int
:keyword timeout: Sets the maximum time that the server can spend processing the request, in
seconds. The default is 30 seconds.
:paramtype timeout: int
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 202
response.json() == {
"nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/multiple/lro"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if client_request_id is not None:
header_parameters["client-request-id"] = _SERIALIZER.header("client_request_id", client_request_id, "str")
if maxresults is not None:
header_parameters["maxresults"] = _SERIALIZER.header("maxresults", maxresults, "int")
if timeout is not None:
header_parameters["timeout"] = _SERIALIZER.header("timeout", timeout, "int")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, **kwargs)
def build_next_fragment_request(tenant: str, next_link: str, *, api_version: str, **kwargs: Any) -> HttpRequest:
"""A paging operation that doesn't return a full URL, just a fragment.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:param tenant: Sets the tenant to use.
:type tenant: str
:param next_link: Next link for list operation.
:type next_link: str
:keyword api_version: Sets the api version to use.
:paramtype api_version: str
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response.json() == {
"odata.nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/multiple/fragment/{tenant}/{nextLink}"
path_format_arguments = {
"tenant": _SERIALIZER.url("tenant", tenant, "str"),
"nextLink": _SERIALIZER.url("next_link", next_link, "str", skip_quote=True),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters["api_version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)
def build_next_fragment_with_grouping_request(
tenant: str, next_link: str, *, api_version: str, **kwargs: Any
) -> HttpRequest:
"""A paging operation that doesn't return a full URL, just a fragment.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:param tenant: Sets the tenant to use.
:type tenant: str
:param next_link: Next link for list operation.
:type next_link: str
:keyword api_version: Sets the api version to use.
:paramtype api_version: str
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response.json() == {
"odata.nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/multiple/fragmentwithgrouping/{tenant}/{nextLink}"
path_format_arguments = {
"tenant": _SERIALIZER.url("tenant", tenant, "str"),
"nextLink": _SERIALIZER.url("next_link", next_link, "str", skip_quote=True),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters["api_version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=url, params=query_parameters, headers=header_parameters, **kwargs)
def build_get_paging_model_with_item_name_with_xms_client_name_request(**kwargs: Any) -> HttpRequest:
"""A paging operation that returns a paging model whose item name is overridden by
x-ms-client-name 'indexes'.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response.json() == {
"nextLink": "str", # Optional.
"values": [
{
"properties": {
"id": 0, # Optional.
"name": "str" # Optional.
}
}
]
}
"""
accept = "application/json"
# Construct URL
url = "/paging/itemNameWithXMSClientName"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=url, headers=header_parameters, **kwargs)
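Every builder in this module follows the same shape: pop caller-supplied headers/params out of kwargs, add the builder's own values, and return a request object for the client's `send_request`. A dependency-free sketch of that pattern — `HttpRequest` here is a local stand-in for `azure.core.rest.HttpRequest`, and the URL is a made-up example, not one of the routes above:

```python
from typing import Any, Dict, NamedTuple

class HttpRequest(NamedTuple):
    # Stand-in for azure.core.rest.HttpRequest (assumption, simplified).
    method: str
    url: str
    headers: Dict[str, str]

def build_example_request(**kwargs: Any) -> HttpRequest:
    # Same pattern as the builders above: merge caller headers, set Accept.
    headers = kwargs.pop("headers", {})  # type: Dict[str, str]
    headers["Accept"] = "application/json"
    return HttpRequest(method="GET", url="/paging/example", headers=headers)

request = build_example_request()
print(request.method, request.url)  # → GET /paging/example
```

The caller would then hand the returned request to the client (`client.send_request(request)`) rather than sending it directly.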
# File: onnx/backend/test/case/node/reducel2.py (repo: stillmatic/onnx, license: Apache-2.0)
# SPDX-License-Identifier: Apache-2.0
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import onnx
from ..base import Base
from . import expect
class ReduceL2(Base):
@staticmethod
def export_do_not_keepdims(): # type: () -> None
shape = [3, 2, 2]
axes = [2]
keepdims = 0
node = onnx.helper.make_node(
'ReduceL2',
inputs=['data'],
outputs=['reduced'],
axes=axes,
keepdims=keepdims
)
data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
#print(data)
#[[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
#print(reduced)
#[[2.23606798, 5.],
# [7.81024968, 10.63014581],
# [13.45362405, 16.2788206]]
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_do_not_keepdims_example')
np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_do_not_keepdims_random')
@staticmethod
def export_keepdims(): # type: () -> None
shape = [3, 2, 2]
axes = [2]
keepdims = 1
node = onnx.helper.make_node(
'ReduceL2',
inputs=['data'],
outputs=['reduced'],
axes=axes,
keepdims=keepdims
)
data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
#print(data)
#[[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
#print(reduced)
#[[[2.23606798], [5.]]
# [[7.81024968], [10.63014581]]
# [[13.45362405], [16.2788206 ]]]
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_keep_dims_example')
np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced], name='test_reduce_l2_keep_dims_random')
@staticmethod
def export_default_axes_keepdims(): # type: () -> None
shape = [3, 2, 2]
axes = None
keepdims = 1
node = onnx.helper.make_node(
'ReduceL2',
inputs=['data'],
outputs=['reduced'],
keepdims=keepdims
)
data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
#print(data)
#[[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]
reduced = np.sqrt(np.sum(
a=np.square(data), axis=axes, keepdims=keepdims == 1))
#print(reduced)
#[[[25.49509757]]]
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_default_axes_keepdims_example')
np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
a=np.square(data), axis=axes, keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_default_axes_keepdims_random')
@staticmethod
def export_negative_axes_keepdims(): # type: () -> None
shape = [3, 2, 2]
axes = [-1]
keepdims = 1
node = onnx.helper.make_node(
'ReduceL2',
inputs=['data'],
outputs=['reduced'],
axes=axes,
keepdims=keepdims
)
data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
# print(data)
#[[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]], [[9., 10.], [11., 12.]]]
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
# print(reduced)
#[[[2.23606798], [5.]]
# [[7.81024968], [10.63014581]]
# [[13.45362405], [16.2788206 ]]]
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_negative_axes_keep_dims_example')
np.random.seed(0)
data = np.random.uniform(-10, 10, shape).astype(np.float32)
reduced = np.sqrt(np.sum(
a=np.square(data), axis=tuple(axes), keepdims=keepdims == 1))
expect(node, inputs=[data], outputs=[reduced],
name='test_reduce_l2_negative_axes_keep_dims_random')
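The hand-rolled `np.sqrt(np.sum(np.square(data), ...))` used throughout these cases is exactly the vector 2-norm, so it should agree with `np.linalg.norm` along the same axis. A quick sanity check on the same `[3, 2, 2]` data the examples use:

```python
import numpy as np

# Same input as the test cases: values 1..12 reshaped to (3, 2, 2).
data = np.reshape(np.arange(1, 13, dtype=np.float32), (3, 2, 2))
manual = np.sqrt(np.sum(np.square(data), axis=2))
via_norm = np.linalg.norm(data, ord=2, axis=2)
print(np.allclose(manual, via_norm))  # → True
print(float(manual[0, 0]))            # sqrt(1 + 4) ≈ 2.23606798
```

This matches the first expected value (2.23606798) shown in the commented output above.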
# File: tests/test.py (repo: justnat3/smlibppm, license: MIT)
# fill_rect(generate_rand_color(), pixels, w, h, 50, 50, [1,1])
# draw_rect(generate_rand_color(), pixels, w, h, 3, 3, [3,3])
# fill_rect(generate_rand_color(), pixels, w, h, 3, 3, [3,3])
# draw_rect_rand_color(generate_rand_color(), pixels, w, h, 16, 16, [0, 0])
# draw_line(generate_rand_color(), pixels, w, (1,1), (5,5))
# draw_line_rand_color(pixels, w, (1,1), (5,5))
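The commented calls above exercise helpers from smlibppm that aren't shown here, so their exact signatures are an assumption. A minimal self-contained sketch of the same idea — filling a rectangle in a flat RGB pixel buffer and emitting a P3 (plain-text) PPM:

```python
def fill_rect(color, pixels, w, h, rw, rh, origin):
    # Hypothetical fill_rect: paint an rw x rh rectangle at `origin`,
    # clamped to the w x h canvas. Real smlibppm may differ.
    ox, oy = origin
    for y in range(oy, min(oy + rh, h)):
        for x in range(ox, min(ox + rw, w)):
            pixels[y * w + x] = color

def to_ppm(pixels, w, h):
    # P3 header: magic number, dimensions, max channel value.
    header = "P3\n{} {}\n255\n".format(w, h)
    body = "\n".join(" ".join(map(str, px)) for px in pixels)
    return header + body + "\n"

w, h = 4, 4
pixels = [(0, 0, 0)] * (w * h)
fill_rect((255, 0, 0), pixels, w, h, 2, 2, (1, 1))
print(to_ppm(pixels, w, h).splitlines()[0])  # → P3
```

The flat `y * w + x` indexing is the usual row-major layout for a 1-D pixel list.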
# File: Ene-Jun-2020/lopez-flores-jorge-luis/Extraordinario/app/api.py (repo: Arbupa/DAS_Sistemas, license: MIT)
from .package import Package
class BaseClient(object):
def __init__(self):
self.album = Album(self)
self.artist = Artist(self)
self.auth = Auth(self)
self.chart = Chart(self)
self.event = Event(self)
self.geo = Geo(self)
self.group = Group(self)
self.library = Library(self)
self.playlist = Playlist(self)
self.radio = Radio(self)
self.tag = Tag(self)
self.tasteometer = Tasteometer(self)
self.track = Track(self)
self.user = User(self)
self.venue = Venue(self)
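`BaseClient` composes one `Package` subclass per Last.fm API namespace, and every method below delegates to `Package._call` (imported above, not shown). Purely as a hedged sketch — the real `_call` is not visible here, so the body, names, and return shape are assumptions — a dispatcher like it might drop unset optional parameters and derive the `namespace.method` name Last.fm expects:

```python
class Package:
    """Hedged stand-in for the imported Package base class (assumption)."""
    API_ROOT = "http://ws.audioscrobbler.com/2.0/"

    def __init__(self, client):
        self.client = client

    def _call(self, http_method, method, auth=False, **params):
        # Drop optional params left as None, then build the 'album.getInfo'-
        # style method name from the subclass name and the method argument.
        params = {k: v for k, v in params.items() if v is not None}
        params["method"] = "{}.{}".format(type(self).__name__.lower(), method)
        return http_method, self.API_ROOT, params  # a real client would send this

class Album(Package):
    def get_info(self, album, artist, mbid=None):
        return self._call('GET', 'getInfo', auth=False, album=album, artist=artist, mbid=mbid)

verb, root, params = Album(client=None).get_info("Believe", "Cher")
print(params["method"])  # → album.getInfo
```

Filtering out `None` values is what lets every optional keyword in the methods below default to `None` without polluting the query string.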
class Album(Package):
def add_tags(self, album, artist, tags):
"""
Tag an album using a list of user supplied tags.
Authorization required.
http://www.last.fm/api/show/album.addTags
:param album: required
(Required) : The album name
:param artist: required
(Required) : The artist name
:param tags: required
(Required) : A comma delimited list of user supplied tags to apply to
this album. Accepts a maximum of 10 tags.
"""
return self._call('POST', 'addTags', auth=True, album=album, artist=artist, tags=tags)
def get_buylinks(self, album, artist, country, autocorrect=None, mbid=None):
"""
Get a list of Buy Links for a particular Album. It is required that
you supply either the artist and track params or the mbid param.
Authorization not required.
http://www.last.fm/api/show/album.getBuylinks
:param album: required
(Required (unless mbid)) : The album
:param artist: required
(Required (unless mbid)) : The artist name
:param country: required
(Required) : A country name or two character country code, as defined
by the ISO 3166-1 country names standard.
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the album
"""
return self._call('GET', 'getBuylinks', auth=False, album=album, artist=artist, country=country,
autocorrect=autocorrect, mbid=mbid)
def get_info(self, album, artist, lang=None, autocorrect=None, username=None, mbid=None):
"""
Get the metadata and tracklist for an album on Last.fm using the album
name or a musicbrainz id.
Authorization not required.
http://www.last.fm/api/show/album.getInfo
:param album: required
(Required (unless mbid)) : The album name
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param lang: optional
(Optional) : The language to return the biography in, expressed as an
ISO 639 alpha-2 code.
:param mbid: optional
(Optional) : The musicbrainz id for the album
:param username: optional
(Optional) : The username for the context of the request. If supplied,
the user's playcount for this album is included in the response.
"""
return self._call('GET', 'getInfo', auth=False, album=album, artist=artist, autocorrect=autocorrect, lang=lang,
mbid=mbid, username=username)
def get_shouts(self, artist, autocorrect=None, page=None, limit=None, mbid=None):
"""
Get shouts for this album. Also available as an rss feed.
Authorization not required.
http://www.last.fm/api/show/album.getShouts
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getShouts', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid, page=page)
def get_tags(self, album, artist, autocorrect=None, user=None, mbid=None):
"""
Get the tags applied by an individual user to an album on Last.fm. To
retrieve the list of top tags applied to an album by all users use
album.getTopTags.
Authorization not required.
http://www.last.fm/api/show/album.getTags
:param album: required
(Required (unless mbid)) : The album name
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the album
:param user: optional
(Optional) : If called in non-authenticated mode you must specify the
user to look up
"""
return self._call('GET', 'getTags', auth=False, album=album, artist=artist, autocorrect=autocorrect, mbid=mbid,
user=user)
def get_top_tags(self, album, artist, autocorrect=None, mbid=None):
"""
Get the top tags for an album on Last.fm, ordered by popularity.
Authorization not required.
http://www.last.fm/api/show/album.getTopTags
:param album: required
(Required (unless mbid)) : The album name
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the album
"""
return self._call('GET', 'getTopTags', auth=False, album=album, artist=artist, autocorrect=autocorrect,
mbid=mbid)
def remove_tag(self, album, artist, tag):
"""
Remove a user's tag from an album.
Authorization required.
http://www.last.fm/api/show/album.removeTag
:param album: required
(Required) : The album name
:param artist: required
(Required) : The artist name
:param tag: required
(Required) : A single user tag to remove from this album.
"""
return self._call('POST', 'removeTag', auth=True, album=album, artist=artist, tag=tag)
def search(self, album, limit=None, page=None):
"""
Search for an album by name. Returns album matches sorted by
relevance.
Authorization not required.
http://www.last.fm/api/show/album.search
:param album: required
(Required) : The album name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 30.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'search', auth=False, album=album, limit=limit, page=page)
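The `limit`/`page` pair used by `search` (and the other list endpoints) is plain offset paging: request page after page until an empty batch comes back. A dependency-free sketch of walking all pages, where `fetch_page` is a hypothetical stand-in for the API call:

```python
def fetch_page(album, limit, page):
    # Hypothetical stand-in for Album.search: returns up to `limit` matches
    # from a fake 7-item catalogue.
    catalogue = ["match %d" % i for i in range(1, 8)]
    start = (page - 1) * limit
    return catalogue[start:start + limit]

def search_all(album, limit=3):
    page, results = 1, []
    while True:
        batch = fetch_page(album, limit, page)
        if not batch:
            break  # an empty page means we've walked past the last result
        results.extend(batch)
        page += 1
    return results

print(len(search_all("Believe")))  # → 7
```

With `limit=3` this issues three full-or-partial pages (3 + 3 + 1) plus one empty page that terminates the loop.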
def share(self, album, artist, recipient, message=None, public=None):
"""
Share an album with one or more Last.fm users or other friends.
Authorization required.
http://www.last.fm/api/show/album.share
:param album: required
(Required) : An album name.
:param artist: required
(Required) : An artist name.
:param recipient: required
(Required): Email Address | Last.fm Username - A comma delimited list
of email addresses or Last.fm usernames. Maximum is 10.
:param message: optional
(Optional): An optional message to send with the recommendation. If
not supplied a default message will be used.
:param public: optional
(Optional): Optionally show in the sharing users activity feed.
Defaults to 0 (false).
"""
return self._call('POST', 'share', auth=True, album=album, artist=artist, recipient=recipient, message=message,
public=public)
class Artist(Package):
def add_tags(self, artist, tags):
"""
Tag an artist with one or more user supplied tags.
Authorization required.
http://www.last.fm/api/show/artist.addTags
:param artist: required
(Required) : The artist name
:param tags: required
(Required) : A comma delimited list of user supplied tags to apply to
this artist. Accepts a maximum of 10 tags.
"""
return self._call('POST', 'addTags', auth=True, artist=artist, tags=tags)
def get_correction(self, artist):
"""
Use the last.fm corrections data to check whether the supplied artist
has a correction to a canonical artist
Authorization not required.
http://www.last.fm/api/show/artist.getCorrection
:param artist: required
(Required) : The artist name to correct.
"""
return self._call('GET', 'getCorrection', auth=False, artist=artist)
def get_events(self, artist, autocorrect=None, festivalsonly=None, mbid=None, limit=None, page=None):
"""
Get a list of upcoming events for this artist. Easily integratable
into calendars, using the ical standard (see feeds section below).
Authorization not required.
http://www.last.fm/api/show/artist.getEvents
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getEvents', auth=False, artist=artist, autocorrect=autocorrect,
festivalsonly=festivalsonly, limit=limit, mbid=mbid, page=page)
def get_info(self, artist, lang=None, username=None, autocorrect=None, mbid=None):
"""
Get the metadata for an artist. Includes biography, truncated at 300
characters.
Authorization not required.
http://www.last.fm/api/show/artist.getInfo
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param lang: optional
(Optional) : The language to return the biography in, expressed as an
ISO 639 alpha-2 code.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param username: optional
(Optional) : The username for the context of the request. If supplied,
the user's playcount for this artist is included in the response.
"""
return self._call('GET', 'getInfo', auth=False, artist=artist, autocorrect=autocorrect, lang=lang, mbid=mbid,
username=username)
def get_past_events(self, artist, autocorrect=None, page=None, limit=None, mbid=None):
"""
Get a paginated list of all the events this artist has played at in
the past.
Authorization not required.
http://www.last.fm/api/show/artist.getPastEvents
:param artist: required
(Required (unless mbid)) : The name of the artist you would like to
fetch event listings for.
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) :The page of results to return.
"""
return self._call('GET', 'getPastEvents', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid, page=page)
def get_podcast(self, artist, autocorrect=None, mbid=None):
"""
Get a podcast of free mp3s based on an artist
Authorization not required.
http://www.last.fm/api/show/artist.getPodcast
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
"""
return self._call('GET', 'getPodcast', auth=False, artist=artist, autocorrect=autocorrect, mbid=mbid)
def get_shouts(self, artist, autocorrect=None, page=None, limit=None, mbid=None):
"""
Get shouts for this artist. Also available as an rss feed.
Authorization not required.
http://www.last.fm/api/show/artist.getShouts
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getShouts', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid, page=page)
def get_similar(self, artist, autocorrect=None, limit=None, mbid=None):
"""
Get all the artists similar to this artist
Authorization not required.
http://www.last.fm/api/show/artist.getSimilar
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : Limit the number of similar artists returned
:param mbid: optional
(Optional) : The musicbrainz id for the artist
"""
return self._call('GET', 'getSimilar', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid)
def get_tags(self, artist, user=None, autocorrect=None, mbid=None):
"""
Get the tags applied by an individual user to an artist on Last.fm. If
accessed as an authenticated service /and/ you don't supply a user
parameter then this service will return tags for the authenticated
user. To retrieve the list of top tags applied to an artist by all
users use artist.getTopTags.
Authorization not required.
http://www.last.fm/api/show/artist.getTags
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param user: optional
(Optional) : If called in non-authenticated mode you must specify the
user to look up
"""
return self._call('GET', 'getTags', auth=False, artist=artist, autocorrect=autocorrect, mbid=mbid, user=user)
def get_top_albums(self, artist, autocorrect=None, page=None, limit=None, mbid=None):
"""
Get the top albums for an artist on Last.fm, ordered by popularity.
Authorization not required.
http://www.last.fm/api/show/artist.getTopAlbums
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopAlbums', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid, page=page)
def get_top_fans(self, artist, autocorrect=None, mbid=None):
"""
Get the top fans for an artist on Last.fm, based on listening data.
Authorization not required.
http://www.last.fm/api/show/artist.getTopFans
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
"""
return self._call('GET', 'getTopFans', auth=False, artist=artist, autocorrect=autocorrect, mbid=mbid)
def get_top_tags(self, artist, autocorrect=None, mbid=None):
"""
Get the top tags for an artist on Last.fm, ordered by popularity.
Authorization not required.
http://www.last.fm/api/show/artist.getTopTags
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
"""
return self._call('GET', 'getTopTags', auth=False, artist=artist, autocorrect=autocorrect, mbid=mbid)
def get_top_tracks(self, artist, autocorrect=None, page=None, limit=None, mbid=None):
"""
Get the top tracks by an artist on Last.fm, ordered by popularity.
Authorization not required.
http://www.last.fm/api/show/artist.getTopTracks
:param artist: required
(Required (unless mbid)) : The artist name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist names into correct
artist names, returning the correct version instead. The corrected
artist name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the artist
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopTracks', auth=False, artist=artist, autocorrect=autocorrect, limit=limit,
mbid=mbid, page=page)
def remove_tag(self, artist, tag):
"""
Remove a user's tag from an artist.
Authorization required.
http://www.last.fm/api/show/artist.removeTag
:param artist: required
(Required) : The artist name
:param tag: required
(Required) : A single user tag to remove from this artist.
"""
return self._call('POST', 'removeTag', auth=True, artist=artist, tag=tag)
def search(self, artist, limit=None, page=None):
"""
Search for an artist by name. Returns artist matches sorted by
relevance.
Authorization not required.
http://www.last.fm/api/show/artist.search
:param artist: required
(Required) : The artist name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 30.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'search', auth=False, artist=artist, limit=limit, page=page)
def share(self, artist, recipient, message=None, public=None):
"""
Share an artist with Last.fm users or other friends.
Authorization required.
http://www.last.fm/api/show/artist.share
:param artist: required
(Required) : The artist to share.
:param recipient: required
(Required): Email Address | Last.fm Username - A comma delimited list
of email addresses or Last.fm usernames. Maximum is 10.
:param message: optional
(Optional): An optional message to send with the recommendation. If
not supplied a default message will be used.
:param public: optional
(Optional): Optionally show in the sharing users activity feed.
Defaults to 0 (false).
"""
return self._call('POST', 'share', auth=True, artist=artist, recipient=recipient, message=message,
public=public)
def shout(self, artist, message):
"""
Shout in this artist's shoutbox
Authorization required.
http://www.last.fm/api/show/artist.shout
:param artist: required
(Required) : The name of the artist to shout on.
:param message: required
(Required) : The message to post to the shoutbox.
"""
return self._call('POST', 'shout', auth=True, artist=artist, message=message)
class Auth(Package):
def get_mobile_session(self, password, username):
"""
Create a web service session for a user. Used for authenticating a
user when the password can be inputted by the user. Accepts email
address as well, so please use the username supplied in the output.
Only suitable for standalone mobile devices. See the authentication
how-to for more. You must use HTTPS and POST in order to use this
method.
Authorization not required.
http://www.last.fm/api/show/auth.getMobileSession
:param password: required
(Required) : The password in plain text.
:param username: required
(Required) : The last.fm username or email address.
"""
return self._call('POST', 'getMobileSession', auth=False, password=password, username=username)
def get_session(self, token):
"""
Fetch a session key for a user. The third step in the authentication
process. See the authentication how-to for more information.
Authorization not required.
http://www.last.fm/api/show/auth.getSession
:param token: required
(Required) : A 32-character ASCII hexadecimal MD5 hash returned by
step 1 of the authentication process (following the granting of
permissions to the application by the user)
"""
return self._call('GET', 'getSession', auth=False, token=token)
def get_token(self):
"""
Fetch an unauthorized request token for an API account. This is step 2
of the authentication process for desktop applications. Web
applications do not need to use this service.
Authorization not required.
http://www.last.fm/api/show/auth.getToken
"""
return self._call('GET', 'getToken', auth=False)
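Both authentication flows above require each signed call to carry an api_sig. Per the Last.fm documentation, the signature is the MD5 hash of the parameter names and values concatenated in alphabetical key order, with the shared secret appended. A minimal sketch (sign_params is an illustrative helper, not part of this wrapper):

```python
import hashlib

def sign_params(params, secret):
    """Build a Last.fm api_sig: concatenate key + value pairs in
    alphabetical key order, append the shared secret, and MD5 the result."""
    raw = ''.join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5((raw + secret).encode('utf-8')).hexdigest()

# sign_params({'api_key': 'k', 'method': 'auth.getSession', 'token': 't'}, 's')
```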
class Chart(Package):
def get_hyped_artists(self, limit=None, page=None):
"""
Get the hyped artists chart
Authorization not required.
http://www.last.fm/api/show/chart.getHypedArtists
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getHypedArtists', auth=False, limit=limit, page=page)
def get_hyped_tracks(self, limit=None, page=None):
"""
Get the hyped tracks chart
Authorization not required.
http://www.last.fm/api/show/chart.getHypedTracks
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getHypedTracks', auth=False, limit=limit, page=page)
def get_loved_tracks(self, limit=None, page=None):
"""
Get the most loved tracks chart
Authorization not required.
http://www.last.fm/api/show/chart.getLovedTracks
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getLovedTracks', auth=False, limit=limit, page=page)
def get_top_artists(self, limit=None, page=None):
"""
Get the top artists chart
Authorization not required.
http://www.last.fm/api/show/chart.getTopArtists
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopArtists', auth=False, limit=limit, page=page)
def get_top_tags(self, limit=None, page=None):
"""
Get the top tags chart
Authorization not required.
http://www.last.fm/api/show/chart.getTopTags
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopTags', auth=False, limit=limit, page=page)
def get_top_tracks(self, limit=None, page=None):
"""
Get the top tracks chart
Authorization not required.
http://www.last.fm/api/show/chart.getTopTracks
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopTracks', auth=False, limit=limit, page=page)
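Every chart method above takes the same limit/page pair. A generic pager shows how the two parameters combine to walk a paginated result set (iter_pages is a hypothetical helper, not part of this wrapper, and assumes the fetch callable returns something falsy when the pages run out):

```python
def iter_pages(fetch, limit=50, max_pages=None):
    """Call fetch(limit=..., page=...) with increasing page numbers,
    yielding each non-empty batch until an empty one is returned."""
    page = 1
    while max_pages is None or page <= max_pages:
        batch = fetch(limit=limit, page=page)
        if not batch:
            return
        yield batch
        page += 1

# e.g. for batch in iter_pages(lambda limit, page: client.chart.get_top_tracks(limit, page)): ...
```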
class Event(Package):
def attend(self, event, status):
"""
Set a user's attendance status for an event.
Authorization required.
http://www.last.fm/api/show/event.attend
:param event: required
(Required) : The numeric last.fm event id
:param status: required
(Required) : The attendance status (0=Attending, 1=Maybe attending,
2=Not attending)
"""
return self._call('POST', 'attend', auth=True, event=event, status=status)
def get_attendees(self, event, limit=None, page=None):
"""
Get a list of attendees for an event.
Authorization not required.
http://www.last.fm/api/show/event.getAttendees
:param event: required
(Required) : The numeric last.fm event id
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getAttendees', auth=False, event=event, limit=limit, page=page)
def get_info(self, event):
"""
Get the metadata for an event on Last.fm. Includes attendance and
lineup information.
Authorization not required.
http://www.last.fm/api/show/event.getInfo
:param event: required
(Required) : The numeric last.fm event id
"""
return self._call('GET', 'getInfo', auth=False, event=event)
def get_shouts(self, event, limit=None, page=None):
"""
Get shouts for this event. Also available as an rss feed.
Authorization not required.
http://www.last.fm/api/show/event.getShouts
:param event: required
(Required) : The numeric last.fm event id
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getShouts', auth=False, event=event, limit=limit, page=page)
def share(self, event, recipient, message=None, public=None):
"""
Share an event with one or more Last.fm users or other friends.
Authorization required.
http://www.last.fm/api/show/event.share
:param event: required
(Required) : An event ID
:param recipient: required
(Required): Email Address | Last.fm Username - A comma delimited list
of email addresses or Last.fm usernames. Maximum is 10.
:param message: optional
(Optional): An optional message to send with the recommendation. If
not supplied a default message will be used.
:param public: optional
(Optional): Optionally show the share in the sharing users recent
activity. Defaults to 0 (false).
"""
return self._call('POST', 'share', auth=True, event=event, recipient=recipient, message=message, public=public)
def shout(self, event, message):
"""
Shout in this event's shoutbox
Authorization required.
http://www.last.fm/api/show/event.shout
:param event: required
(Required) : The id of the event to shout on
:param message: required
(Required) : The message to post to the shoutbox
"""
return self._call('POST', 'shout', auth=True, event=event, message=message)
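event.attend encodes attendance as the numeric codes listed in its docstring (0=Attending, 1=Maybe attending, 2=Not attending). A small mapping keeps call sites readable; the constant and helper names are illustrative, not part of this wrapper:

```python
# Attendance status codes for event.attend, as documented above.
ATTENDING, MAYBE_ATTENDING, NOT_ATTENDING = 0, 1, 2

def attendance_status(name):
    """Map a human-readable status to the numeric code event.attend expects."""
    return {'attending': ATTENDING,
            'maybe': MAYBE_ATTENDING,
            'not attending': NOT_ATTENDING}[name.lower()]
```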
class Geo(Package):
def get_events(self, distance=None, festivalsonly=None, long=None, tag=None, limit=None, location=None, lat=None,
page=None):
"""
Get all events in a specific location by country or city name.
Authorization not required.
http://www.last.fm/api/show/geo.getEvents
:param distance: optional
(Optional) : Find events within a specified radius (in kilometres)
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
:param lat: optional
(Optional) : Specifies a latitude value to retrieve events for
(service returns nearby events by default)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 10.
:param location: optional
(Optional) : Specifies a location to retrieve events for (service
returns nearby events by default)
:param long: optional
(Optional) : Specifies a longitude value to retrieve events for
(service returns nearby events by default)
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param tag: optional
(Optional) : Specifies a tag to filter by.
"""
return self._call('GET', 'getEvents', auth=False, distance=distance, festivalsonly=festivalsonly, lat=lat,
limit=limit, location=location, long=long, page=page, tag=tag)
def get_metro_artist_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of artists for a metro
Authorization not required.
http://www.last.fm/api/show/geo.getMetroArtistChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroArtistChart', auth=False, country=country, metro=metro, end=end, limit=limit,
page=page, start=start)
def get_metro_hype_artist_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of hyped (up and coming) artists for a metro
Authorization not required.
http://www.last.fm/api/show/geo.getMetroHypeArtistChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroHypeArtistChart', auth=False, country=country, metro=metro, end=end,
limit=limit, page=page, start=start)
def get_metro_hype_track_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of hyped (up and coming) tracks for a metro
Authorization not required.
http://www.last.fm/api/show/geo.getMetroHypeTrackChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroHypeTrackChart', auth=False, country=country, metro=metro, end=end,
limit=limit, page=page, start=start)
def get_metro_track_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of tracks for a metro
Authorization not required.
http://www.last.fm/api/show/geo.getMetroTrackChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroTrackChart', auth=False, country=country, metro=metro, end=end, limit=limit,
page=page, start=start)
def get_metro_unique_artist_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of the artists which make that metro unique
Authorization not required.
http://www.last.fm/api/show/geo.getMetroUniqueArtistChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroUniqueArtistChart', auth=False, country=country, metro=metro, end=end,
limit=limit, page=page, start=start)
def get_metro_unique_track_chart(self, country, metro, end=None, start=None, limit=None, page=None):
"""
Get a chart of the tracks which make that metro unique
Authorization not required.
http://www.last.fm/api/show/geo.getMetroUniqueTrackChart
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param metro: required
(Required) : The metro's name
:param end: optional
(Optional) : Ending timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param start: optional
(Optional) : Beginning timestamp of the weekly range requested (c.f.
geo.getWeeklyChartlist)
"""
return self._call('GET', 'getMetroUniqueTrackChart', auth=False, country=country, metro=metro, end=end,
limit=limit, page=page, start=start)
def get_metro_weekly_chartlist(self, metro=None):
"""
Get a list of available chart periods for this metro, expressed as
date ranges which can be sent to the chart services.
Authorization not required.
http://www.last.fm/api/show/geo.getMetroWeeklyChartlist
:param metro: optional
(Optional) : The metro name to fetch the charts list for.
"""
return self._call('GET', 'getMetroWeeklyChartlist', auth=False, metro=metro)
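The metro chart methods above take start/end as unix timestamps delimiting a weekly range, normally copied from geo.getMetroWeeklyChartlist. If a range must be built by hand, a sketch (week_range is an illustrative helper; it assumes chart weeks are seven days long, anchored at midnight UTC):

```python
import calendar
import datetime

def week_range(year, month, day):
    """Convert a chart week starting at the given UTC date into the
    (start, end) unix timestamps expected by the metro chart services."""
    start = datetime.datetime(year, month, day)
    end = start + datetime.timedelta(days=7)
    return calendar.timegm(start.timetuple()), calendar.timegm(end.timetuple())

# start, end = week_range(2014, 1, 5)
```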
def get_metros(self, country=None):
"""
Get a list of valid countries and metros for use in the other
webservices
Authorization not required.
http://www.last.fm/api/show/geo.getMetros
:param country: optional
(Optional) : Optionally restrict the results to those Metros from a
particular country, as defined by the ISO 3166-1 country names
standard
"""
return self._call('GET', 'getMetros', auth=False, country=country)
def get_top_artists(self, country, limit=None, page=None):
"""
Get the most popular artists on Last.fm by country
Authorization not required.
http://www.last.fm/api/show/geo.getTopArtists
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopArtists', auth=False, country=country, limit=limit, page=page)
def get_top_tracks(self, country, limit=None, location=None, page=None):
"""
Get the most popular tracks on Last.fm last week by country
Authorization not required.
http://www.last.fm/api/show/geo.getTopTracks
:param country: required
(Required) : A country name, as defined by the ISO 3166-1 country
names standard
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param location: optional
(Optional) : A metro name, to fetch the charts for (must be within the
country specified)
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopTracks', auth=False, country=country, limit=limit, location=location, page=page)
class Group(Package):
def get_hype(self, group):
"""
Get the hype list for a group
Authorization not required.
http://www.last.fm/api/show/group.getHype
:param group: required
(Required) : The last.fm group name
"""
return self._call('GET', 'getHype', auth=False, group=group)
def get_members(self, group, limit=None, page=None):
"""
Get a list of members for this group.
Authorization not required.
http://www.last.fm/api/show/group.getMembers
:param group: required
(Required) : The group name to fetch the members of.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The results page you would like to fetch
"""
return self._call('GET', 'getMembers', auth=False, group=group, limit=limit, page=page)
def get_weekly_album_chart(self, group, from_=None, to=None):
"""
Get an album chart for a group, for a given date range. If no date
range is supplied, it will return the most recent album chart for this
group.
Authorization not required.
http://www.last.fm/api/show/group.getWeeklyAlbumChart
:param group: required
(Required) : The last.fm group name to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
Group.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
Group.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyAlbumChart', auth=False, group=group, from_=from_, to=to)
def get_weekly_artist_chart(self, group, from_=None, to=None):
"""
Get an artist chart for a group, for a given date range. If no date
range is supplied, it will return the most recent artist chart for this
group.
Authorization not required.
http://www.last.fm/api/show/group.getWeeklyArtistChart
:param group: required
(Required) : The last.fm group name to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
Group.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
Group.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyArtistChart', auth=False, group=group, from_=from_, to=to)
def get_weekly_chart_list(self, group):
"""
Get a list of available charts for this group, expressed as date
ranges which can be sent to the chart services.
Authorization not required.
http://www.last.fm/api/show/group.getWeeklyChartList
:param group: required
(Required) : The last.fm group name to fetch the charts list for.
"""
return self._call('GET', 'getWeeklyChartList', auth=False, group=group)
def get_weekly_track_chart(self, group, from_=None, to=None):
"""
Get a track chart for a group, for a given date range. If no date
range is supplied, it will return the most recent track chart for this
group.
Authorization not required.
http://www.last.fm/api/show/group.getWeeklyTrackChart
:param group: required
(Required) : The last.fm group name to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
Group.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
Group.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyTrackChart', auth=False, group=group, from_=from_, to=to)
class Library(Package):
def add_album(self, album, artist):
"""
Add an album or collection of albums to a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.addAlbum
:param album: required, multiple
[i] (Required) : The album or collection of albums that you wish to
add. The indices of the albums that you pass MUST correspond to those
of the artists.
:param artist: required, multiple
[i] (Required) : The artist or collection of artists that you wish to
add.
"""
return self._call('POST', 'addAlbum', auth=True, album=album, artist=artist)
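library.addAlbum takes its batch arguments in the indexed album[i]/artist[i] form described above, with matching indices between the two lists. A helper for expanding a list of values into that form might look like this (indexed_params is illustrative; how this wrapper actually serialises batch parameters is not shown here):

```python
def indexed_params(name, values):
    """Expand a list of values into the name[i] form used by batch
    methods such as library.addAlbum."""
    return {'%s[%d]' % (name, i): v for i, v in enumerate(values)}

# indexed_params('artist', ['Low', 'Mogwai'])
# → {'artist[0]': 'Low', 'artist[1]': 'Mogwai'}
```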
def add_artist(self, artist):
"""
Add an artist to a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.addArtist
:param artist: required, multiple
[i] (Required) : The artist or collection of artists you wish to add.
"""
return self._call('POST', 'addArtist', auth=True, artist=artist)
def add_track(self, artist, track):
"""
Add a track to a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.addTrack
:param artist: required
(Required) : The artist that composed the track
:param track: required
(Required) : The track name you wish to add
"""
return self._call('POST', 'addTrack', auth=True, artist=artist, track=track)
def get_albums(self, user, limit=None, page=None, artist=None):
"""
A paginated list of all the albums in a user's library, with play
counts and tag counts.
Authorization not required.
http://www.last.fm/api/show/library.getAlbums
:param user: required
(Required) : The user whose library you want to fetch.
:param artist: optional
(Optional) : An artist by which to filter tracks
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number you wish to scan to.
"""
return self._call('GET', 'getAlbums', auth=False, user=user, artist=artist, limit=limit, page=page)
def get_artists(self, user, limit=None, page=None):
"""
A paginated list of all the artists in a user's library, with play
counts and tag counts.
Authorization not required.
http://www.last.fm/api/show/library.getArtists
:param user: required
(Required) : The user whose library you want to fetch.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number you wish to scan to.
"""
return self._call('GET', 'getArtists', auth=False, user=user, limit=limit, page=page)
def get_tracks(self, user, album=None, limit=None, page=None, artist=None):
"""
A paginated list of all the tracks in a user's library, with play
counts and tag counts.
Authorization not required.
http://www.last.fm/api/show/library.getTracks
:param user: required
(Required) : The user whose library you want to fetch.
:param album: optional
(Optional) : An album by which to filter tracks (needs an artist)
:param artist: optional
(Optional) : An artist by which to filter tracks
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number you wish to scan to.
"""
return self._call('GET', 'getTracks', auth=False, user=user, album=album, artist=artist, limit=limit, page=page)
def remove_album(self, album, artist):
"""
Remove an album from a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.removeAlbum
:param album: required
(Required) : The name of the album you wish to remove
:param artist: required
(Required) : The artist that composed the album
"""
return self._call('POST', 'removeAlbum', auth=True, album=album, artist=artist)
def remove_artist(self, artist):
"""
Remove an artist from a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.removeArtist
:param artist: required
(Required) : The artist name you wish to remove
"""
return self._call('POST', 'removeArtist', auth=True, artist=artist)
def remove_scrobble(self, artist, timestamp, track):
"""
Remove a scrobble from a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.removeScrobble
:param artist: required
(Required) : The artist that composed the track
:param timestamp: required
(Required) : The unix timestamp of the scrobble that you wish to
remove
:param track: required
(Required) : The name of the track
"""
return self._call('POST', 'removeScrobble', auth=True, artist=artist, timestamp=timestamp, track=track)
def remove_track(self, artist, track):
"""
Remove a track from a user's Last.fm library
Authorization required.
http://www.last.fm/api/show/library.removeTrack
:param artist: required
(Required) : The artist that composed the track
:param track: required
(Required) : The name of the track that you wish to remove
"""
return self._call('POST', 'removeTrack', auth=True, artist=artist, track=track)
class Playlist(Package):
def add_track(self, artist, playlistID, track):
"""
Add a track to a Last.fm user's playlist
Authorization required.
http://www.last.fm/api/show/playlist.addTrack
:param artist: required
(Required) : The artist name that corresponds to the track to be
added.
:param playlistID: required
(Required) : The ID of the playlist - this is available in
user.getPlaylists.
:param track: required
(Required) : The track name to add to the playlist.
"""
return self._call('POST', 'addTrack', auth=True, artist=artist, playlistID=playlistID, track=track)
def create(self, description=None, title=None):
"""
Create a Last.fm playlist on behalf of a user
Authorization required.
http://www.last.fm/api/show/playlist.create
:param description: optional
(Optional) : Description for the playlist
:param title: optional
(Optional) : Title for the playlist
"""
return self._call('POST', 'create', auth=True, description=description, title=title)
class Radio(Package):
def get_playlist(self, speed_multiplier=None, buylinks=None, bitrate=None, rtp=None, discovery=None):
"""
Fetch new radio content periodically in an XSPF format.
Authorization not required.
http://www.last.fm/api/show/radio.getPlaylist
:param bitrate: optional
(Optional) : What bitrate to stream content at, in kbps (supported
bitrates are 64 and 128)
:param buylinks: optional
(Optional) : Whether the response should contain links for
purchase/download, if available (default false)
:param discovery: optional
(Optional) : Whether to request last.fm content with discovery mode
switched on.
:param rtp: optional
(Optional) : Whether the user is scrobbling or not during this radio
session (helps content generation)
:param speed_multiplier: optional
(Optional) : The rate at which to provide the stream (supported
multipliers are 1.0 and 2.0)
"""
return self._call('GET', 'getPlaylist', auth=False, bitrate=bitrate, buylinks=buylinks, discovery=discovery,
rtp=rtp, speed_multiplier=speed_multiplier)
def search(self, name):
"""
Resolve the name of a resource into a station depending on which
resource it is most likely to represent
Authorization not required.
http://www.last.fm/api/show/radio.search
:param name: required
(Required) : The tag or artist to resolve
"""
return self._call('GET', 'search', auth=False, name=name)
def tune(self, station, lang=None):
"""
Tune in to a Last.fm radio station.
Authorization not required.
http://www.last.fm/api/show/radio.tune
:param station: required
(Required) : A lastfm:// radio URL
:param lang: optional
(Optional) : An ISO language code to determine the language to return
the station name in, expressed as an ISO 639 alpha-2 code.
"""
return self._call('GET', 'tune', auth=False, station=station, lang=lang)
class Tag(Package):
def get_info(self, tag, lang=None):
"""
Get the metadata for a tag
Authorization not required.
http://www.last.fm/api/show/tag.getInfo
:param tag: required
(Required) : The tag name
:param lang: optional
(Optional) : The language to return the wiki in, expressed as an
ISO 639 alpha-2 code.
"""
return self._call('GET', 'getInfo', auth=False, tag=tag, lang=lang)
def get_similar(self, tag):
"""
Search for tags similar to this one. Returns tags ranked by
similarity, based on listening data.
Authorization not required.
http://www.last.fm/api/show/tag.getSimilar
:param tag: required
(Required) : The tag name
"""
return self._call('GET', 'getSimilar', auth=False, tag=tag)
def get_top_albums(self, tag, limit=None, page=None):
"""
Get the top albums tagged by this tag, ordered by tag count.
Authorization not required.
http://www.last.fm/api/show/tag.getTopAlbums
:param tag: required
(Required) : The tag name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopAlbums', auth=False, tag=tag, limit=limit, page=page)
def get_top_artists(self, tag, limit=None, page=None):
"""
Get the top artists tagged by this tag, ordered by tag count.
Authorization not required.
http://www.last.fm/api/show/tag.getTopArtists
:param tag: required
(Required) : The tag name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopArtists', auth=False, tag=tag, limit=limit, page=page)
def get_top_tags(self):
"""
Fetches the top global tags on Last.fm, sorted by popularity (number
of times used).
Authorization not required.
http://www.last.fm/api/show/tag.getTopTags
"""
return self._call('GET', 'getTopTags', auth=False)
def get_top_tracks(self, tag, limit=None, page=None):
"""
Get the top tracks tagged by this tag, ordered by tag count.
Authorization not required.
http://www.last.fm/api/show/tag.getTopTracks
:param tag: required
(Required) : The tag name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getTopTracks', auth=False, tag=tag, limit=limit, page=page)
def get_weekly_artist_chart(self, tag, limit=None, from_=None, to=None):
"""
Get an artist chart for a tag, for a given date range. If no date
range is supplied, it will return the most recent artist chart for
this tag.
Authorization not required.
http://www.last.fm/api/show/tag.getWeeklyArtistChart
:param tag: required
(Required) : The tag name
:param from_: optional
(Optional) : The date at which the chart should start from. See
Tag.getWeeklyChartList for more.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param to: optional
(Optional) : The date at which the chart should end on. See
Tag.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyArtistChart', auth=False, tag=tag, from_=from_, limit=limit, to=to)
def get_weekly_chart_list(self, tag):
"""
Get a list of available charts for this tag, expressed as date ranges
which can be sent to the chart services.
Authorization not required.
http://www.last.fm/api/show/tag.getWeeklyChartList
:param tag: required
(Required) : The tag name
"""
return self._call('GET', 'getWeeklyChartList', auth=False, tag=tag)
def search(self, tag, limit=None, page=None):
"""
Search for a tag by name. Returns matches sorted by relevance.
Authorization not required.
http://www.last.fm/api/show/tag.search
:param tag: required
(Required) : The tag name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 30.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'search', auth=False, tag=tag, limit=limit, page=page)
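Most of the endpoints above share the same `limit`/`page` pagination scheme. A minimal sketch of walking every page, where `fetch_page` is a hypothetical callable standing in for one of these wrapper methods (the short-page stop condition is an assumption about how results are paged):

```python
def iter_all_pages(fetch_page, limit=50):
    """Yield items from every page of a limit/page-paginated endpoint.

    fetch_page(limit=..., page=...) is a stand-in for a wrapper call such
    as Tag.search; it should return the list of items for that page.
    """
    page = 1
    while True:
        items = fetch_page(limit=limit, page=page)
        if not items:
            break
        yield from items
        if len(items) < limit:  # short page: nothing further to fetch
            break
        page += 1

# Demo against a fake 5-item result set, 2 items per page.
data = ['a', 'b', 'c', 'd', 'e']
fake = lambda limit, page: data[(page - 1) * limit:page * limit]
print(list(iter_all_pages(fake, limit=2)))  # ['a', 'b', 'c', 'd', 'e']
```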
class Tasteometer(Package):
def compare(self, type, value, limit=None):
"""
Get a Tasteometer score from two inputs, along with a list of shared
artists. If the input is a User some additional information is
returned.
Authorization not required.
http://www.last.fm/api/show/tasteometer.compare
:param type: required, multiple
[1|2] (Required x 2) : 'user' | 'artists'
:param value: required, multiple
[1|2] (Required x 2) : [Last.fm username] | [Comma-separated artist
names (max. 100)]
:param limit: optional
(Optional, default = 5) : How many shared artists to display
"""
return self._call('GET', 'compare', auth=False, type=type, value=value, limit=limit)
def compare_group(self):
"""
This service has been deprecated and is no longer available.
Authorization not required.
http://www.last.fm/api/show/tasteometer.compareGroup
"""
return self._call('GET', 'compareGroup', auth=False)
class Track(Package):
def add_tags(self, artist, tags, track):
"""
Tag a track using a list of user supplied tags.
Authorization required.
http://www.last.fm/api/show/track.addTags
:param artist: required
(Required) : The artist name
:param tags: required
(Required) : A comma delimited list of user supplied tags to apply to
this track. Accepts a maximum of 10 tags.
:param track: required
(Required) : The track name
"""
return self._call('POST', 'addTags', auth=True, artist=artist, tags=tags, track=track)
def ban(self, artist, track):
"""
Ban a track for a given user profile.
Authorization required.
http://www.last.fm/api/show/track.ban
:param artist: required
(Required) : An artist name (utf8 encoded)
:param track: required
(Required) : A track name (utf8 encoded)
"""
return self._call('POST', 'ban', auth=True, artist=artist, track=track)
def get_buylinks(self, artist, country, track, autocorrect=None, mbid=None):
"""
Get a list of Buy Links for a particular Track. It is required that
you supply either the artist and track params or the mbid param.
Authorization not required.
http://www.last.fm/api/show/track.getBuylinks
:param artist: required
(Required (unless mbid)) : The artist name
:param country: required
(Required) : A country name or two character country code, as defined
by the ISO 3166-1 country names standard.
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the track
"""
return self._call('GET', 'getBuylinks', auth=False, artist=artist, country=country, track=track,
autocorrect=autocorrect, mbid=mbid)
def get_correction(self, artist, track):
"""
Use the last.fm corrections data to check whether the supplied track
has a correction to a canonical track
Authorization not required.
http://www.last.fm/api/show/track.getCorrection
:param artist: required
(Required) : The artist name to correct.
:param track: required
(Required) : The track name to correct.
"""
return self._call('GET', 'getCorrection', auth=False, artist=artist, track=track)
def get_fingerprint_metadata(self, fingerprintid):
"""
Retrieve track metadata associated with a fingerprint id generated by
the Last.fm Fingerprinter. Returns track elements, along with a 'rank'
value between 0 and 1 reflecting the confidence for each match. See
this blog post for more info.
Authorization not required.
http://www.last.fm/api/show/track.getFingerprintMetadata
:param fingerprintid: required
(Required) : The fingerprint id to look up
"""
return self._call('GET', 'getFingerprintMetadata', auth=False, fingerprintid=fingerprintid)
def get_info(self, artist, track, username=None, autocorrect=None, mbid=None):
"""
Get the metadata for a track on Last.fm using the artist/track name or
a musicbrainz id.
Authorization not required.
http://www.last.fm/api/show/track.getInfo
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the track
:param username: optional
(Optional) : The username for the context of the request. If supplied,
the user's playcount for this track and whether they have loved the
track is included in the response.
"""
return self._call('GET', 'getInfo', auth=False, artist=artist, track=track, autocorrect=autocorrect, mbid=mbid,
username=username)
def get_shouts(self, artist, track, autocorrect=None, mbid=None, limit=None, page=None):
"""
Get shouts for this track. Also available as an rss feed.
Authorization not required.
http://www.last.fm/api/show/track.getShouts
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param mbid: optional
(Optional) : The musicbrainz id for the track
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getShouts', auth=False, artist=artist, track=track, autocorrect=autocorrect,
limit=limit, mbid=mbid, page=page)
def get_similar(self, artist, track, autocorrect=None, limit=None, mbid=None):
"""
Get the similar tracks for this track on Last.fm, based on listening
data.
Authorization not required.
http://www.last.fm/api/show/track.getSimilar
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param limit: optional
(Optional) : Maximum number of similar tracks to return
:param mbid: optional
(Optional) : The musicbrainz id for the track
"""
return self._call('GET', 'getSimilar', auth=False, artist=artist, track=track, autocorrect=autocorrect,
limit=limit, mbid=mbid)
def get_tags(self, artist, track, autocorrect=None, user=None, mbid=None):
"""
Get the tags applied by an individual user to a track on Last.fm. To
retrieve the list of top tags applied to a track by all users use
track.getTopTags.
Authorization not required.
http://www.last.fm/api/show/track.getTags
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the track
:param user: optional
(Optional) : If called in non-authenticated mode you must specify the
user to look up
"""
return self._call('GET', 'getTags', auth=False, artist=artist, track=track, autocorrect=autocorrect, mbid=mbid,
user=user)
def get_top_fans(self, artist, track, autocorrect=None, mbid=None):
"""
Get the top fans for this track on Last.fm, based on listening data.
Supply either track & artist name or musicbrainz id.
Authorization not required.
http://www.last.fm/api/show/track.getTopFans
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the track
"""
return self._call('GET', 'getTopFans', auth=False, artist=artist, track=track, autocorrect=autocorrect,
mbid=mbid)
def get_top_tags(self, artist, track, autocorrect=None, mbid=None):
"""
Get the top tags for this track on Last.fm, ordered by tag count.
Supply either track & artist name or mbid.
Authorization not required.
http://www.last.fm/api/show/track.getTopTags
:param artist: required
(Required (unless mbid)) : The artist name
:param track: required
(Required (unless mbid)) : The track name
:param autocorrect: optional, boolean
[0|1] (Optional) : Transform misspelled artist and track names into
correct artist and track names, returning the correct version instead.
The corrected artist and track name will be returned in the response.
:param mbid: optional
(Optional) : The musicbrainz id for the track
"""
return self._call('GET', 'getTopTags', auth=False, artist=artist, track=track, autocorrect=autocorrect,
mbid=mbid)
def love(self, artist, track):
"""
Love a track for a user profile.
Authorization required.
http://www.last.fm/api/show/track.love
:param artist: required
(Required) : An artist name (utf8 encoded)
:param track: required
(Required) : A track name (utf8 encoded)
"""
return self._call('POST', 'love', auth=True, artist=artist, track=track)
def remove_tag(self, artist, tag, track):
"""
Remove a user's tag from a track.
Authorization required.
http://www.last.fm/api/show/track.removeTag
:param artist: required
(Required) : The artist name
:param tag: required
(Required) : A single user tag to remove from this track.
:param track: required
(Required) : The track name
"""
return self._call('POST', 'removeTag', auth=True, artist=artist, tag=tag, track=track)
def scrobble(self, artist, timestamp, track, album=None, mbid=None, albumArtist=None, context=None, streamId=None,
duration=None, trackNumber=None, chosenByUser=None):
"""
Used to add a track-play to a user's profile. Scrobble a track, or a
batch of tracks. Tracks are passed to the service using array notation
for each of the below params, up to a maximum of 50 scrobbles per
batch [0<=i<=49]. If you are only sending a single scrobble the array
notation may be omitted. Note: Extra care should be taken while
calculating the signature when using array notation as the parameter
names MUST be sorted according to the ASCII table (i.e., artist[10]
comes before artist[1]). It is important to not use the corrections
returned by the now playing service as input for the scrobble request,
unless they have been explicitly approved by the user. Parameter names
are case sensitive.
Authorization required.
http://www.last.fm/api/show/track.scrobble
:param artist: required, multiple
[i] (Required) : The artist name.
:param timestamp: required, multiple
[i] (Required) : The time the track started playing, in UNIX timestamp
format (integer number of seconds since 00:00:00, January 1st 1970
UTC). This must be in the UTC time zone.
:param track: required, multiple
[i] (Required) : The track name.
:param album: optional, multiple
[i] (Optional) : The album name.
:param albumArtist: optional, multiple
[i] (Optional) : The album artist - if this differs from the track
artist.
:param chosenByUser: optional, multiple
[i] (Optional) : Set to 1 if the user chose this song, or 0 if the
song was chosen by someone else (such as a radio station or
recommendation service). Assumes 1 if not specified
:param context: optional, multiple
[i] (Optional) : Sub-client version (not public, only enabled for
certain API keys)
:param duration: optional, multiple
[i] (Optional) : The length of the track in seconds.
:param mbid: optional, multiple
[i] (Optional) : The MusicBrainz Track ID.
:param streamId: optional, multiple
[i] (Optional) : The stream id for this track received from the
radio.getPlaylist service, if scrobbling Last.fm radio
:param trackNumber: optional, multiple
[i] (Optional) : The track number of the track on the album.
"""
return self._call('POST', 'scrobble', auth=True, artist=artist, timestamp=timestamp, track=track, album=album,
albumArtist=albumArtist, chosenByUser=chosenByUser, context=context, duration=duration,
mbid=mbid, streamId=streamId, trackNumber=trackNumber)
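The docstring's warning about ASCII ordering can be made concrete: Python's `sorted()` on strings compares code points, which matches the required ASCII ordering for these keys, so `'artist[10]'` sorts before `'artist[1]'` (`'0'` is 0x30, `']'` is 0x5D). A sketch of signing a batch request under the standard Last.fm scheme (concatenate name+value pairs in key order, append the shared secret, MD5 the result); `secret` here is a placeholder, not a real key:

```python
import hashlib

def sign_params(params, secret):
    """Build an api_sig: concatenate key+value pairs in ASCII order of
    the key, append the shared secret, and MD5 the UTF-8 bytes.
    (This mirrors the documented Last.fm signing scheme; verify against
    the current API docs before relying on it.)"""
    ordered = ''.join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5((ordered + secret).encode('utf-8')).hexdigest()

batch = {'artist[0]': 'A', 'artist[1]': 'B', 'artist[10]': 'C'}
# ASCII order puts 'artist[10]' before 'artist[1]', as the docstring warns:
print(sorted(batch))  # ['artist[0]', 'artist[10]', 'artist[1]']
```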
def search(self, track, limit=None, page=None, artist=None):
"""
Search for a track by track name. Returns track matches sorted by
relevance.
Authorization not required.
http://www.last.fm/api/show/track.search
:param track: required
(Required) : The track name
:param artist: optional
(Optional) : Narrow your search by specifying an artist.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 30.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'search', auth=False, track=track, artist=artist, limit=limit, page=page)
def share(self, artist, recipient, track, message=None, public=None):
"""
Share a track with one or more Last.fm users or other friends.
Authorization required.
http://www.last.fm/api/show/track.share
:param artist: required
(Required) : An artist name.
:param recipient: required
(Required): Email Address | Last.fm Username - A comma delimited list
of email addresses or Last.fm usernames. Maximum is 10.
:param track: required
(Required) : A track name.
:param message: optional
(Optional): An optional message to send with the recommendation. If
not supplied a default message will be used.
:param public: optional
(Optional): Optionally show in the sharing users activity feed.
Defaults to 0 (false).
"""
return self._call('POST', 'share', auth=True, artist=artist, recipient=recipient, track=track, message=message,
public=public)
def unban(self, artist, track):
"""
Unban a track for a user profile.
Authorization required.
http://www.last.fm/api/show/track.unban
:param artist: required
(Required) : An artist name (utf8 encoded)
:param track: required
(Required) : A track name (utf8 encoded)
"""
return self._call('POST', 'unban', auth=True, artist=artist, track=track)
def unlove(self, artist, track):
"""
Unlove a track for a user profile.
Authorization required.
http://www.last.fm/api/show/track.unlove
:param artist: required
(Required) : An artist name (utf8 encoded)
:param track: required
(Required) : A track name (utf8 encoded)
"""
return self._call('POST', 'unlove', auth=True, artist=artist, track=track)
def update_now_playing(self, artist, track, album=None, mbid=None, albumArtist=None, context=None, duration=None,
trackNumber=None):
"""
Used to notify Last.fm that a user has started listening to a track.
Parameter names are case sensitive.
Authorization required.
http://www.last.fm/api/show/track.updateNowPlaying
:param artist: required
(Required) : The artist name.
:param track: required
(Required) : The track name.
:param album: optional
(Optional) : The album name.
:param albumArtist: optional
(Optional) : The album artist - if this differs from the track artist.
:param context: optional
(Optional) : Sub-client version (not public, only enabled for certain
API keys)
:param duration: optional
(Optional) : The length of the track in seconds.
:param mbid: optional
(Optional) : The MusicBrainz Track ID.
:param trackNumber: optional
(Optional) : The track number of the track on the album.
"""
return self._call('POST', 'updateNowPlaying', auth=True, artist=artist, track=track, album=album,
albumArtist=albumArtist, context=context, duration=duration, mbid=mbid,
trackNumber=trackNumber)
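The usual flow is to call updateNowPlaying when playback starts and scrobble only once the play qualifies. A sketch of the qualification check, using the thresholds from Last.fm's commonly cited scrobbling guidelines (longer than 30 seconds, played for half its length or 4 minutes, whichever comes first); treat the exact numbers as an assumption to verify against the current docs:

```python
def should_scrobble(track_length, played):
    """Return True if a play qualifies for a scrobble.

    track_length and played are in seconds. Rules assumed here: the
    track must be longer than 30 seconds and must have been played for
    at least half its length or 4 minutes, whichever comes first.
    """
    if track_length <= 30:
        return False
    return played >= min(track_length / 2, 240)

print(should_scrobble(200, 100))  # True  (half of 200s reached)
print(should_scrobble(600, 239))  # False (4-minute floor not reached)
```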
class User(Package):
def get_artist_tracks(self, artist, user, startTimestamp=None, page=None, endTimestamp=None):
"""
Get a list of tracks by a given artist scrobbled by this user,
including scrobble time. Can be limited to specific timeranges,
defaults to all time.
Authorization not required.
http://www.last.fm/api/show/user.getArtistTracks
:param artist: required
(Required) : The artist name you are interested in
:param user: required
(Required) : The last.fm username to fetch the recent tracks of.
:param endTimestamp: optional
(Optional) : A UNIX timestamp to end at.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param startTimestamp: optional
(Optional) : A UNIX timestamp to start at.
"""
return self._call('GET', 'getArtistTracks', auth=False, artist=artist, user=user, endTimestamp=endTimestamp,
page=page, startTimestamp=startTimestamp)
def get_banned_tracks(self, user, limit=None, page=None):
"""
Returns the tracks banned by the user
Authorization not required.
http://www.last.fm/api/show/user.getBannedTracks
:param user: required
(Required) : The user name
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getBannedTracks', auth=False, user=user, limit=limit, page=page)
def get_events(self, user, limit=None, festivalsonly=None, page=None):
"""
Get a list of upcoming events that this user is attending. Easily
integrated into calendars using the iCal standard.
Authorization not required.
http://www.last.fm/api/show/user.getEvents
:param user: required
(Required) : The user to fetch the events for.
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getEvents', auth=False, user=user, festivalsonly=festivalsonly, limit=limit,
page=page)
def get_friends(self, user, limit=None, page=None, recenttracks=None):
"""
Get a list of the user's friends on Last.fm.
Authorization not required.
http://www.last.fm/api/show/user.getFriends
:param user: required
(Required) : The last.fm username to fetch the friends of.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param recenttracks: optional
(Optional) : Whether or not to include information about friends'
recent listening in the response.
"""
return self._call('GET', 'getFriends', auth=False, user=user, limit=limit, page=page, recenttracks=recenttracks)
def get_info(self, user=None):
"""
Get information about a user profile.
Authorization not required.
http://www.last.fm/api/show/user.getInfo
:param user: optional
(Optional) : The user to fetch info for. Defaults to the authenticated
user.
"""
return self._call('GET', 'getInfo', auth=False, user=user)
def get_loved_tracks(self, user, limit=None, page=None):
"""
Get the last 50 tracks loved by a user.
Authorization not required.
http://www.last.fm/api/show/user.getLovedTracks
:param user: required
(Required) : The user name to fetch the loved tracks for.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getLovedTracks', auth=False, user=user, limit=limit, page=page)
def get_neighbours(self, user, limit=None):
"""
Get a list of a user's neighbours on Last.fm.
Authorization not required.
http://www.last.fm/api/show/user.getNeighbours
:param user: required
(Required) : The last.fm username to fetch the neighbours of.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
"""
return self._call('GET', 'getNeighbours', auth=False, user=user, limit=limit)
def get_new_releases(self, user, userecs=None):
"""
Gets a list of forthcoming releases based on a user's musical taste.
Authorization not required.
http://www.last.fm/api/show/user.getNewReleases
:param user: required
(Required) : The Last.fm username.
:param userecs: optional
(Optional) : 0 or 1. If 1, the feed contains new releases based on
Last.fm's artist recommendations for this user. Otherwise, it is based
on their library (the default).
"""
return self._call('GET', 'getNewReleases', auth=False, user=user, userecs=userecs)
def get_past_events(self, user, limit=None, page=None):
"""
Get a paginated list of all events a user has attended in the past.
Authorization not required.
http://www.last.fm/api/show/user.getPastEvents
:param user: required
(Required) : The username to fetch the events for.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getPastEvents', auth=False, user=user, limit=limit, page=page)
def get_personal_tags(self, tag, taggingtype, user, limit=None, page=None):
"""
Get the user's personal tags
Authorization not required.
http://www.last.fm/api/show/user.getPersonalTags
:param tag: required
(Required) : The tag you're interested in.
:param taggingtype: required
[artist|album|track] (Required) : The type of items which have been
tagged
:param user: required
(Required) : The user who performed the taggings.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getPersonalTags', auth=False, tag=tag, taggingtype=taggingtype, user=user,
limit=limit, page=page)
def get_playlists(self, user):
"""
Get a list of a user's playlists on Last.fm.
Authorization not required.
http://www.last.fm/api/show/user.getPlaylists
:param user: required
(Required) : The last.fm username to fetch the playlists of.
"""
return self._call('GET', 'getPlaylists', auth=False, user=user)
def get_recent_stations(self, user, limit=None, page=None):
"""
Get a list of the recent Stations listened to by this user.
Authorization required.
http://www.last.fm/api/show/user.getRecentStations
:param user: required
(Required) : The last.fm username to fetch the recent Stations of.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 10.
Maximum is 25.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getRecentStations', auth=True, user=user, limit=limit, page=page)
def get_recent_tracks(self, user, extended=None, from_=None, to=None, limit=None, page=None):
"""
Get a list of the recent tracks listened to by this user. Also
includes the currently playing track with the nowplaying="true"
attribute if the user is currently listening.
Authorization not required.
http://www.last.fm/api/show/user.getRecentTracks
:param user: required
(Required) : The last.fm username to fetch the recent tracks of.
:param extended: optional
[0|1] (Optional) : Includes extended data in each artist, and whether
or not the user has loved each track
:param from_: optional
(Optional) : Beginning timestamp of a range - only display scrobbles
after this time, in UNIX timestamp format (integer number of seconds
since 00:00:00, January 1st 1970 UTC). This must be in the UTC time
zone.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
Maximum is 200.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param to: optional
(Optional) : End timestamp of a range - only display scrobbles before
this time, in UNIX timestamp format (integer number of seconds since
00:00:00, January 1st 1970 UTC). This must be in the UTC time zone.
"""
return self._call('GET', 'getRecentTracks', auth=False, user=user, extended=extended, from_=from_, limit=limit,
page=page, to=to)
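The from_/to values are UNIX timestamps interpreted in UTC, so they should be built without reference to the machine's local time zone. A standard-library sketch using a timezone-aware datetime:

```python
from datetime import datetime, timezone

def utc_timestamp(year, month, day, hour=0, minute=0, second=0):
    """Integer seconds since 1970-01-01T00:00:00Z for a UTC wall time."""
    dt = datetime(year, month, day, hour, minute, second,
                  tzinfo=timezone.utc)
    return int(dt.timestamp())

print(utc_timestamp(1970, 1, 1))  # 0
print(utc_timestamp(2014, 1, 1))  # 1388534400
```

A pair of these values can then be passed as `from_` and `to` to bound the range of scrobbles returned.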
def get_recommended_artists(self, limit=None, page=None):
"""
Get Last.fm artist recommendations for a user
Authorization required.
http://www.last.fm/api/show/user.getRecommendedArtists
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getRecommendedArtists', auth=True, limit=limit, page=page)
def get_recommended_events(self, country=None, festivalsonly=None, longitude=None, limit=None, latitude=None,
page=None):
"""
Get a paginated list of all events recommended to a user by Last.fm,
based on their listening profile.
Authorization required.
http://www.last.fm/api/show/user.getRecommendedEvents
:param country: optional
(Optional) : Optionally find events in a particular country (use
EITHER lat/long or country)
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
:param latitude: optional
(Optional) : Optionally find events at a particular location (must be
paired with a valid longitude)
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 20.
:param longitude: optional
(Optional) : Optionally find events at a particular location (must be
paired with a valid latitude)
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getRecommendedEvents', auth=True, country=country, festivalsonly=festivalsonly,
latitude=latitude, limit=limit, longitude=longitude, page=page)
def get_shouts(self, user, limit=None, page=None):
"""
Get shouts for this user. Also available as an rss feed.
Authorization not required.
http://www.last.fm/api/show/user.getShouts
:param user: required
(Required) : The username to fetch shouts for
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
"""
return self._call('GET', 'getShouts', auth=False, user=user, limit=limit, page=page)
def get_top_albums(self, user, limit=None, page=None, period=None):
"""
Get the top albums listened to by a user. You can stipulate a time
period. Sends the overall chart by default.
Authorization not required.
http://www.last.fm/api/show/user.getTopAlbums
:param user: required
(Required) : The user name to fetch top albums for.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param period: optional
(Optional) : overall | 7day | 1month | 3month | 6month | 12month - The
time period over which to retrieve top albums for.
"""
return self._call('GET', 'getTopAlbums', auth=False, user=user, limit=limit, page=page, period=period)
def get_top_artists(self, user, limit=None, page=None, period=None):
"""
Get the top artists listened to by a user. You can stipulate a time
period. Sends the overall chart by default.
Authorization not required.
http://www.last.fm/api/show/user.getTopArtists
:param user: required
(Required) : The user name to fetch top artists for.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param period: optional
(Optional) : overall | 7day | 1month | 3month | 6month | 12month - The
time period over which to retrieve top artists for.
"""
return self._call('GET', 'getTopArtists', auth=False, user=user, limit=limit, page=page, period=period)
def get_top_tags(self, user, limit=None):
"""
Get the top tags used by this user.
Authorization not required.
http://www.last.fm/api/show/user.getTopTags
:param user: required
(Required) : The user name
:param limit: optional
(Optional) : Limit the number of tags returned
"""
return self._call('GET', 'getTopTags', auth=False, user=user, limit=limit)
def get_top_tracks(self, user, limit=None, page=None, period=None):
"""
Get the top tracks listened to by a user. You can stipulate a time
period. Sends the overall chart by default.
Authorization not required.
http://www.last.fm/api/show/user.getTopTracks
:param user: required
(Required) : The user name to fetch top tracks for.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The page number to fetch. Defaults to first page.
:param period: optional
(Optional) : overall | 7day | 1month | 3month | 6month | 12month - The
time period over which to retrieve top tracks for.
"""
return self._call('GET', 'getTopTracks', auth=False, user=user, limit=limit, page=page, period=period)
def get_weekly_album_chart(self, user, to=None, from_=None):
"""
Get an album chart for a user profile, for a given date range. If no
date range is supplied, it will return the most recent album chart for
this user.
Authorization not required.
http://www.last.fm/api/show/user.getWeeklyAlbumChart
:param user: required
(Required) : The last.fm username to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
        User.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
        User.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyAlbumChart', auth=False, user=user, from_=from_, to=to)
def get_weekly_artist_chart(self, user, to=None, from_=None):
"""
Get an artist chart for a user profile, for a given date range. If no
date range is supplied, it will return the most recent artist chart
for this user.
Authorization not required.
http://www.last.fm/api/show/user.getWeeklyArtistChart
:param user: required
(Required) : The last.fm username to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
User.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
User.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyArtistChart', auth=False, user=user, from_=from_, to=to)
def get_weekly_chart_list(self, user):
"""
Get a list of available charts for this user, expressed as date ranges
which can be sent to the chart services.
Authorization not required.
http://www.last.fm/api/show/user.getWeeklyChartList
:param user: required
(Required) : The last.fm username to fetch the charts list for.
"""
return self._call('GET', 'getWeeklyChartList', auth=False, user=user)
def get_weekly_track_chart(self, user, to=None, from_=None):
"""
Get a track chart for a user profile, for a given date range. If no
date range is supplied, it will return the most recent track chart for
this user.
Authorization not required.
http://www.last.fm/api/show/user.getWeeklyTrackChart
:param user: required
(Required) : The last.fm username to fetch the charts of.
:param from_: optional
(Optional) : The date at which the chart should start from. See
User.getWeeklyChartList for more.
:param to: optional
(Optional) : The date at which the chart should end on. See
User.getWeeklyChartList for more.
"""
return self._call('GET', 'getWeeklyTrackChart', auth=False, user=user, from_=from_, to=to)
def shout(self, message, user):
"""
Shout on this user's shoutbox
Authorization required.
http://www.last.fm/api/show/user.shout
:param message: required
(Required) : The message to post to the shoutbox.
:param user: required
(Required) : The name of the user to shout on.
"""
return self._call('POST', 'shout', auth=True, message=message, user=user)
def sign_up(self):
"""
Authorization required.
http://www.last.fm/api/show/user.signUp
"""
return self._call('GET', 'signUp', auth=True)
def terms(self):
"""
Authorization required.
http://www.last.fm/api/show/user.terms
"""
return self._call('GET', 'terms', auth=True)
class Venue(Package):
def get_events(self, venue, festivalsonly=None):
"""
Get a list of upcoming events at this venue.
Authorization not required.
http://www.last.fm/api/show/venue.getEvents
:param venue: required
        (Required) : The id for the venue you would like to fetch event
listings for.
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
"""
return self._call('GET', 'getEvents', auth=False, venue=venue, festivalsonly=festivalsonly)
def get_past_events(self, venue, limit=None, festivalsonly=None, page=None):
"""
Get a paginated list of all the events held at this venue in the past.
Authorization not required.
http://www.last.fm/api/show/venue.getPastEvents
:param venue: required
        (Required) : The id for the venue you would like to fetch event
listings for.
:param festivalsonly: optional, boolean
[0|1] (Optional) : Whether only festivals should be returned, or all
events.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
        (Optional) : The page of results to return.
"""
return self._call('GET', 'getPastEvents', auth=False, venue=venue, festivalsonly=festivalsonly, limit=limit,
page=page)
def search(self, venue, country=None, limit=None, page=None):
"""
Search for a venue by venue name
Authorization not required.
http://www.last.fm/api/show/venue.search
:param venue: required
(Required) : The venue name you would like to search for.
:param country: optional
(Optional) : Filter your results by country. Expressed as an ISO
3166-2 code.
:param limit: optional
(Optional) : The number of results to fetch per page. Defaults to 50.
:param page: optional
(Optional) : The results page you would like to fetch
"""
        return self._call('GET', 'search', auth=False, venue=venue, country=country, limit=limit, page=page)
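Every wrapper above forwards all of its arguments — including unset `None` values — straight to `self._call`, and several use trailing-underscore names such as `from_` to sidestep Python keywords. Presumably the transport layer drops the Nones and maps those names back onto the API's parameter names; a minimal stdlib sketch of that normalization (`build_params` is illustrative, not the library's actual helper):

```python
def build_params(**kwargs):
    # Drop unset (None) values and strip the trailing underscore used
    # to avoid Python keywords such as `from`.
    return {key.rstrip('_'): value
            for key, value in kwargs.items()
            if value is not None}

params = build_params(user='rj', limit=None, page=None, from_=1609459200)
# -> {'user': 'rj', 'from': 1609459200}
```

Only the parameters the caller actually supplied end up in the request, which is why every optional argument can safely default to `None`.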
# File: binance_chain/ledger/__init__.py
# Repo: rzchangcheng/python-binance-chain (MIT license)
from btchip.btchip import getDongle # noqa
from binance_chain.ledger.client import LedgerApp # noqa
from binance_chain.ledger.wallet import LedgerWallet # noqa
# File: haystack/file_converter/__init__.py
# Repo: adithyaur99/haystack (Apache-2.0 license)
from haystack.file_converter.base import FileTypeClassifier
from haystack.file_converter.docx import DocxToTextConverter
from haystack.file_converter.markdown import MarkdownConverter
from haystack.file_converter.pdf import PDFToTextConverter
from haystack.file_converter.tika import TikaConverter
from haystack.file_converter.txt import TextConverter
from haystack.file_converter.image import ImageToTextConverter
from haystack.file_converter.pdf import PDFToTextOCRConverter
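This module collects one converter per file type alongside a `FileTypeClassifier`. Routing a file to the right converter is typically keyed on its extension; the sketch below illustrates that dispatch pattern using only the standard library (the mapping and `pick_converter` are illustrative, not haystack's actual implementation):

```python
from pathlib import Path

# Illustrative extension -> converter-name mapping; the real
# FileTypeClassifier in haystack may differ.
CONVERTERS = {
    '.txt': 'TextConverter',
    '.pdf': 'PDFToTextConverter',
    '.docx': 'DocxToTextConverter',
    '.md': 'MarkdownConverter',
}

def pick_converter(path):
    # Normalize case so 'report.PDF' routes the same as 'report.pdf'.
    suffix = Path(path).suffix.lower()
    try:
        return CONVERTERS[suffix]
    except KeyError:
        raise ValueError('unsupported file type: %r' % suffix)

pick_converter('report.PDF')  # -> 'PDFToTextConverter'
```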
# File: tests/zmq_stream_test.py
# Repo: blueyed/aiozmq (BSD-2-Clause license)
import unittest
import asyncio
import aiozmq
import zmq
from unittest import mock
from aiozmq.core import SocketEvent
from aiozmq._test_util import check_errno, find_unused_port
from aiozmq.rpc.base import ensure_future
ZMQ_EVENTS = [
getattr(zmq, attr) for attr in dir(zmq) if attr.startswith('EVENT_')]
class ZmqStreamTests(unittest.TestCase):
def setUp(self):
self.loop = aiozmq.ZmqEventLoop()
asyncio.set_event_loop(None)
def tearDown(self):
self.loop.close()
def test_req_rep(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s2 = yield from aiozmq.create_zmq_stream(
zmq.ROUTER,
connect='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s1.write([b'request'])
req = yield from s2.read()
self.assertEqual([mock.ANY, b'request'], req)
s2.write([req[0], b'answer'])
answer = yield from s1.read()
self.assertEqual([b'answer'], answer)
self.loop.run_until_complete(go())
def test_closed(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s2 = yield from aiozmq.create_zmq_stream(
zmq.ROUTER,
connect='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s2.at_closing())
s2.close()
s1.write([b'request'])
with self.assertRaises(aiozmq.ZmqStreamClosed):
yield from s2.read()
self.assertTrue(s2.at_closing())
self.loop.run_until_complete(go())
def test_transport(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
self.assertIsInstance(s1.transport, aiozmq.ZmqTransport)
s1.close()
with self.assertRaises(aiozmq.ZmqStreamClosed):
yield from s1.read()
self.assertIsNone(s1.transport)
self.loop.run_until_complete(go())
def test_get_extra_info(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
self.assertIsInstance(s1.get_extra_info('zmq_socket'),
zmq.Socket)
self.loop.run_until_complete(go())
def test_exception(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
self.assertIsNone(s1.exception())
self.loop.run_until_complete(go())
def test_default_loop(self):
asyncio.set_event_loop(self.loop)
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*')
s1.close()
self.loop.run_until_complete(go())
def test_set_read_buffer_limits1(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
s1.set_read_buffer_limits(low=10)
self.assertEqual(10, s1._low_water)
self.assertEqual(40, s1._high_water)
s1.close()
self.loop.run_until_complete(go())
def test_set_read_buffer_limits2(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
s1.set_read_buffer_limits(high=60)
self.assertEqual(15, s1._low_water)
self.assertEqual(60, s1._high_water)
s1.close()
self.loop.run_until_complete(go())
def test_set_read_buffer_limits3(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
with self.assertRaises(ValueError):
s1.set_read_buffer_limits(high=1, low=2)
s1.close()
self.loop.run_until_complete(go())
def test_pause_reading(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s2 = yield from aiozmq.create_zmq_stream(
zmq.ROUTER,
connect='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s2.set_read_buffer_limits(high=5)
s1.write([b'request'])
yield from asyncio.sleep(0.01, loop=self.loop)
self.assertTrue(s2._paused)
msg = yield from s2.read()
self.assertEqual([mock.ANY, b'request'], msg)
self.assertFalse(s2._paused)
self.loop.run_until_complete(go())
def test_set_exception(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
exc = RuntimeError('some exc')
s1.set_exception(exc)
self.assertIs(exc, s1.exception())
with self.assertRaisesRegex(RuntimeError, 'some exc'):
yield from s1.read()
self.loop.run_until_complete(go())
def test_set_exception_with_waiter(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
self.assertIsNotNone(s1._waiter)
exc = RuntimeError('some exc')
s1.set_exception(exc)
self.assertIs(exc, s1.exception())
with self.assertRaisesRegex(RuntimeError, 'some exc'):
yield from s1.read()
t1.cancel()
self.loop.run_until_complete(go())
def test_set_exception_with_cancelled_waiter(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
self.assertIsNotNone(s1._waiter)
t1.cancel()
exc = RuntimeError('some exc')
s1.set_exception(exc)
self.assertIs(exc, s1.exception())
with self.assertRaisesRegex(RuntimeError, 'some exc'):
yield from s1.read()
self.loop.run_until_complete(go())
def test_double_reading(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
with self.assertRaises(RuntimeError):
yield from s1.read()
t1.cancel()
self.loop.run_until_complete(go())
def test_close_on_reading(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
s1.close()
yield from asyncio.sleep(0.001, loop=self.loop)
with self.assertRaises(aiozmq.ZmqStreamClosed):
t1.result()
self.loop.run_until_complete(go())
def test_close_on_cancelled_reading(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
t1.cancel()
s1.feed_closing()
yield from asyncio.sleep(0.001, loop=self.loop)
with self.assertRaises(asyncio.CancelledError):
t1.result()
self.loop.run_until_complete(go())
def test_feed_cancelled_msg(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
def f():
yield from s1.read()
t1 = ensure_future(f(), loop=self.loop)
# to run f() up to yield from
yield from asyncio.sleep(0.001, loop=self.loop)
t1.cancel()
s1.feed_msg([b'data'])
yield from asyncio.sleep(0.001, loop=self.loop)
with self.assertRaises(asyncio.CancelledError):
t1.result()
self.assertEqual(4, s1._queue_len)
self.assertEqual((4, [b'data']), s1._queue.popleft())
self.loop.run_until_complete(go())
def test_error_on_read(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.REP,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
handler = mock.Mock()
self.loop.set_exception_handler(handler)
s1.write([b'data'])
with self.assertRaises(OSError) as ctx:
yield from s1.read()
check_errno(zmq.EFSM, ctx.exception)
with self.assertRaises(OSError) as ctx2:
yield from s1.drain()
check_errno(zmq.EFSM, ctx2.exception)
self.loop.run_until_complete(go())
def test_drain(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.REP,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
yield from s1.drain()
self.loop.run_until_complete(go())
def test_pause_resume_connection(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s1._paused)
s1._protocol.pause_writing()
self.assertTrue(s1._protocol._paused)
s1._protocol.resume_writing()
self.assertFalse(s1._protocol._paused)
s1.close()
self.loop.run_until_complete(go())
def test_resume_paused_with_drain(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s1._paused)
s1._protocol.pause_writing()
@asyncio.coroutine
def f():
yield from s1.drain()
fut = ensure_future(f(), loop=self.loop)
yield from asyncio.sleep(0.01, loop=self.loop)
self.assertTrue(s1._protocol._paused)
s1._protocol.resume_writing()
self.assertFalse(s1._protocol._paused)
yield from fut
s1.close()
self.loop.run_until_complete(go())
def test_close_paused_connection(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s1._protocol.pause_writing()
s1.close()
self.loop.run_until_complete(go())
def test_close_paused_with_drain(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s1._paused)
s1._protocol.pause_writing()
@asyncio.coroutine
def f():
yield from s1.drain()
fut = ensure_future(f(), loop=self.loop)
yield from asyncio.sleep(0.01, loop=self.loop)
s1.close()
yield from fut
self.loop.run_until_complete(go())
def test_drain_after_closing(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
s1.close()
yield from asyncio.sleep(0, loop=self.loop)
with self.assertRaises(ConnectionResetError):
yield from s1.drain()
self.loop.run_until_complete(go())
def test_exception_after_drain(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s1._paused)
s1._protocol.pause_writing()
@asyncio.coroutine
def f():
yield from s1.drain()
fut = ensure_future(f(), loop=self.loop)
yield from asyncio.sleep(0.01, loop=self.loop)
exc = RuntimeError("exception")
s1._protocol.connection_lost(exc)
with self.assertRaises(RuntimeError) as cm:
yield from fut
self.assertIs(cm.exception, exc)
self.loop.run_until_complete(go())
def test_double_read_of_closed_stream(self):
port = find_unused_port()
@asyncio.coroutine
def go():
s2 = yield from aiozmq.create_zmq_stream(
zmq.ROUTER,
connect='tcp://127.0.0.1:{}'.format(port),
loop=self.loop)
self.assertFalse(s2.at_closing())
s2.close()
with self.assertRaises(aiozmq.ZmqStreamClosed):
yield from s2.read()
self.assertTrue(s2.at_closing())
with self.assertRaises(aiozmq.ZmqStreamClosed):
yield from s2.read()
self.assertTrue(s2.at_closing())
self.loop.run_until_complete(go())
@unittest.skipIf(
zmq.zmq_version_info() < (4,) or zmq.pyzmq_version_info() < (14, 4,),
"Socket monitor requires libzmq >= 4 and pyzmq >= 14.4")
def test_monitor(self):
port = find_unused_port()
@asyncio.coroutine
def go():
addr = 'tcp://127.0.0.1:{}'.format(port)
s1 = yield from aiozmq.create_zmq_stream(
zmq.ROUTER,
bind=addr,
loop=self.loop)
@asyncio.coroutine
def f(s, events):
try:
while True:
event = yield from s.read_event()
events.append(event)
except aiozmq.ZmqStreamClosed:
pass
s2 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
loop=self.loop)
events = []
t = ensure_future(f(s2, events), loop=self.loop)
yield from s2.transport.enable_monitor()
yield from s2.transport.connect(addr)
yield from s2.transport.disconnect(addr)
yield from s2.transport.connect(addr)
s2.write([b'request'])
req = yield from s1.read()
self.assertEqual([mock.ANY, b'request'], req)
s1.write([req[0], b'answer'])
answer = yield from s2.read()
self.assertEqual([b'answer'], answer)
s2.close()
s1.close()
yield from t
# Confirm that the events received by the monitor were valid.
self.assertGreater(len(events), 0)
while len(events):
event = events.pop()
self.assertIsInstance(event, SocketEvent)
self.assertIn(event.event, ZMQ_EVENTS)
self.loop.run_until_complete(go())
def test_default_events_backlog(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop)
self.assertEqual(100, s1._event_queue.maxlen)
self.loop.run_until_complete(go())
def test_custom_events_backlog(self):
@asyncio.coroutine
def go():
s1 = yield from aiozmq.create_zmq_stream(
zmq.DEALER,
bind='tcp://127.0.0.1:*',
loop=self.loop,
events_backlog=1)
self.assertEqual(1, s1._event_queue.maxlen)
self.loop.run_until_complete(go())
if __name__ == '__main__':
unittest.main()
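The three `set_read_buffer_limits` tests above pin down a concrete relationship between the watermarks: giving only `low=10` yields `high=40`, giving only `high=60` yields `low=15`, and `high < low` raises `ValueError`. A stdlib sketch consistent with those expectations (the default applied when neither bound is given is an assumption, not aiozmq's actual value):

```python
def set_read_buffer_limits(low=None, high=None, default_high=64 * 1024):
    # Infer a missing bound from the given one using a factor-of-four
    # rule, matching the tests: low=10 -> high=40, high=60 -> low=15.
    if high is None:
        high = default_high if low is None else 4 * low
    if low is None:
        low = high // 4
    if low < 0 or high < low:
        raise ValueError('high (%r) must be >= low (%r) >= 0' % (high, low))
    return low, high

set_read_buffer_limits(low=10)   # -> (10, 40)
set_read_buffer_limits(high=60)  # -> (15, 60)
```

Once the unread-message queue grows past the high watermark the transport pauses reading, and it resumes when consumption drains the queue below the low watermark — the behaviour `test_pause_reading` exercises.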
# File: eve/optimizers.py
# Repo: iyaja/eve (MIT license)
from .ralamb import Ralamb
from .lookahead import Lookahead
from .radam import RAdam
from .eve import EVE
# Forward the caller's hyperparameters instead of re-hardcoding the
# defaults inside each constructor call.
def ralamb(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
    return Ralamb(params, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)


def ranger(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
    radam = RAdam(params, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
    return Lookahead(radam)


def rangerlars(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
    ralamb = Ralamb(params, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
    return Lookahead(ralamb)


def eve(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, diffgrad=True):
    eve_base = EVE(params, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, diffgrad=diffgrad)
    return Lookahead(eve_base)
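`ranger` and `rangerlars` wrap an inner optimizer in `Lookahead`, which keeps a second, slow copy of the weights and every `k` inner steps moves it a fraction `alpha` toward the fast weights, then resets the fast weights onto it. A minimal sketch of that update rule on a single scalar weight (not the package's actual optimizer API):

```python
class LookaheadSketch:
    """Illustration of the Lookahead rule on a plain float: every k
    fast steps, slow += alpha * (fast - slow) and fast is synced back
    onto slow."""

    def __init__(self, weight, alpha=0.5, k=6):
        self.fast = weight
        self.slow = weight
        self.alpha = alpha
        self.k = k
        self.step_count = 0

    def step(self, update):
        self.fast += update          # inner ("fast") optimizer step
        self.step_count += 1
        if self.step_count % self.k == 0:
            self.slow += self.alpha * (self.fast - self.slow)
            self.fast = self.slow    # restart fast weights from slow
        return self.fast

opt = LookaheadSketch(0.0, alpha=0.5, k=2)
opt.step(1.0)   # fast = 1.0
opt.step(1.0)   # fast reaches 2.0, then sync: slow = 1.0, fast = 1.0
```

The periodic interpolation damps oscillation in the inner optimizer's trajectory, which is why Lookahead is stacked on top of RAdam, Ralamb, and EVE here rather than replacing them.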
# File: test/operators/test_mul_linear_operator.py
# Repo: Balandat/linear_operator (MIT license)
#!/usr/bin/env python3
from __future__ import annotations
import unittest
import torch
from linear_operator.operators import LinearOperator, RootLinearOperator
from linear_operator.test.linear_operator_test_case import LinearOperatorTestCase
def make_random_mat(size, rank, batch_shape=torch.Size(())):
res = torch.randn(*batch_shape, size, rank)
return res
class TestMulLinearOperator(LinearOperatorTestCase, unittest.TestCase):
seed = 10
def create_linear_operator(self):
mat1 = make_random_mat(6, 6)
mat2 = make_random_mat(6, 6)
res = RootLinearOperator(mat1) * RootLinearOperator(mat2)
return res.add_diag(torch.tensor(2.0))
def evaluate_linear_operator(self, linear_operator):
diag_tensor = linear_operator._diag_tensor.to_dense()
res = torch.mul(
linear_operator._linear_operator.left_linear_operator.to_dense(),
linear_operator._linear_operator.right_linear_operator.to_dense(),
)
res = res + diag_tensor
return res
def test_quad_form_derivative(self):
linear_operator = self.create_linear_operator().requires_grad_(True)
linear_operator._diag_tensor.requires_grad_(False)
linear_operator_clone = linear_operator.clone().detach_().requires_grad_(True)
linear_operator_clone._diag_tensor.requires_grad_(False)
left_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-2), 2)
right_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-1), 2)
deriv_custom = linear_operator._quad_form_derivative(left_vecs, right_vecs)
deriv_auto = LinearOperator._quad_form_derivative(linear_operator_clone, left_vecs, right_vecs)
for dc, da in zip(deriv_custom, deriv_auto):
if dc is not None or da is not None:
self.assertAllClose(dc, da)
class TestMulLinearOperatorBatch(LinearOperatorTestCase, unittest.TestCase):
seed = 2
def create_linear_operator(self):
mat1 = make_random_mat(6, rank=6, batch_shape=torch.Size((2,)))
mat2 = make_random_mat(6, rank=6, batch_shape=torch.Size((2,)))
res = RootLinearOperator(mat1) * RootLinearOperator(mat2)
return res.add_diag(torch.tensor(2.0))
def evaluate_linear_operator(self, linear_operator):
diag_tensor = linear_operator._diag_tensor.to_dense()
res = torch.mul(
linear_operator._linear_operator.left_linear_operator.to_dense(),
linear_operator._linear_operator.right_linear_operator.to_dense(),
)
res = res + diag_tensor
return res
def test_quad_form_derivative(self):
linear_operator = self.create_linear_operator().requires_grad_(True)
linear_operator._diag_tensor.requires_grad_(False)
linear_operator_clone = linear_operator.clone().detach_().requires_grad_(True)
linear_operator_clone._diag_tensor.requires_grad_(False)
left_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-2), 2)
right_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-1), 2)
deriv_custom = linear_operator._quad_form_derivative(left_vecs, right_vecs)
deriv_auto = LinearOperator._quad_form_derivative(linear_operator_clone, left_vecs, right_vecs)
for dc, da in zip(deriv_custom, deriv_auto):
if dc is not None or da is not None:
self.assertAllClose(dc, da)
class TestMulLinearOperatorMultiBatch(LinearOperatorTestCase, unittest.TestCase):
seed = 1
skip_slq_tests = True
def create_linear_operator(self):
mat1 = make_random_mat(6, rank=6, batch_shape=torch.Size((2, 3)))
mat2 = make_random_mat(6, rank=6, batch_shape=torch.Size((2, 3)))
res = RootLinearOperator(mat1) * RootLinearOperator(mat2)
return res.add_diag(torch.tensor(0.5))
def evaluate_linear_operator(self, linear_operator):
diag_tensor = linear_operator._diag_tensor.to_dense()
res = torch.mul(
linear_operator._linear_operator.left_linear_operator.to_dense(),
linear_operator._linear_operator.right_linear_operator.to_dense(),
)
res = res + diag_tensor
return res
def test_inv_quad_logdet(self):
pass
def test_quad_form_derivative(self):
linear_operator = self.create_linear_operator().requires_grad_(True)
linear_operator._diag_tensor.requires_grad_(False)
linear_operator_clone = linear_operator.clone().detach_().requires_grad_(True)
linear_operator_clone._diag_tensor.requires_grad_(False)
left_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-2), 2)
right_vecs = torch.randn(*linear_operator.batch_shape, linear_operator.size(-1), 2)
deriv_custom = linear_operator._quad_form_derivative(left_vecs, right_vecs)
deriv_auto = LinearOperator._quad_form_derivative(linear_operator_clone, left_vecs, right_vecs)
for dc, da in zip(deriv_custom, deriv_auto):
if dc is not None or da is not None:
self.assertAllClose(dc, da)
if __name__ == "__main__":
unittest.main()
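`MulLinearOperator` represents the elementwise (Hadamard) product of two root-decomposed matrices, (R1 R1^T) ⊙ (R2 R2^T), plus a diagonal — exactly the dense reference each `evaluate_linear_operator` above builds with `torch.mul`. A pure-Python sketch of that dense quantity (no torch; matrices as lists of lists):

```python
def matmul_t(r):
    # Return R @ R.T for a list-of-lists matrix R of shape (n, k).
    n, k = len(r), len(r[0])
    return [[sum(r[i][t] * r[j][t] for t in range(k)) for j in range(n)]
            for i in range(n)]

def mul_operator_dense(r1, r2, diag):
    # Hadamard product of the two root decompositions plus diag * I.
    a, b = matmul_t(r1), matmul_t(r2)
    n = len(a)
    return [[a[i][j] * b[i][j] + (diag if i == j else 0.0)
             for j in range(n)] for i in range(n)]

dense = mul_operator_dense([[1.0], [2.0]], [[3.0], [1.0]], 2.0)
# R1 R1^T = [[1, 2], [2, 4]], R2 R2^T = [[9, 3], [3, 1]],
# Hadamard product + 2 I = [[11, 6], [6, 6]]
```

By the Schur product theorem the Hadamard product of two positive semi-definite matrices is again positive semi-definite, which is what makes the product of two root-decomposed operators a valid operator to begin with.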
# File: huaweicloud-sdk-cpts/huaweicloudsdkcpts/v1/cpts_async_client.py
# Repo: huaweicloud/huaweicloud-sdk-python-v3 (Apache-2.0 license)
# coding: utf-8
from __future__ import absolute_import
import datetime
import re
import importlib
import six
from huaweicloudsdkcore.client import Client, ClientBuilder
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkcore.utils import http_utils
from huaweicloudsdkcore.sdk_stream_request import SdkStreamRequest
class CptsAsyncClient(Client):
"""
:param configuration: .Configuration object for this client
:param pool_threads: The number of threads to use for async requests
to the API. More threads means more concurrent API requests.
"""
PRIMITIVE_TYPES = (float, bool, bytes, six.text_type) + six.integer_types
NATIVE_TYPES_MAPPING = {
'int': int,
'long': int if six.PY3 else long,
'float': float,
'str': str,
'bool': bool,
'date': datetime.date,
'datetime': datetime.datetime,
'object': object,
}
def __init__(self):
super(CptsAsyncClient, self).__init__()
self.model_package = importlib.import_module("huaweicloudsdkcpts.v1.model")
self.preset_headers = {'User-Agent': 'HuaweiCloud-SDK-Python'}
@classmethod
def new_builder(cls, clazz=None):
if clazz is None:
return ClientBuilder(cls)
        if clazz.__name__ != "CptsAsyncClient":
            raise TypeError("client type error, supported client type is CptsAsyncClient")
return ClientBuilder(clazz)
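    # Note (illustrative only, not part of the generated client): every
    # *_with_http_info method below marshals its request object the same
    # way, walking request.attribute_map and copying each attribute that
    # is set:
    #
    #     local_var_params = {}
    #     for attr in request.attribute_map:
    #         if hasattr(request, attr):
    #             local_var_params[attr] = getattr(request, attr)
    #
    # Path, query, and body parameters are then picked out of
    # local_var_params by name before the generic self.call_api(...)
    # dispatch.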
    def create_case_async(self, request):
        """Create a test case
:param CreateCaseRequest request
:return: CreateCaseResponse
"""
return self.create_case_with_http_info(request)
    def create_case_with_http_info(self, request):
        """Create a test case
:param CreateCaseRequest request
:return: CreateCaseResponse
"""
all_params = ['create_case_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/task-cases',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='CreateCaseResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def create_task_async(self, request):
        """Create a task
:param CreateTaskRequest request
:return: CreateTaskResponse
"""
return self.create_task_with_http_info(request)
    def create_task_with_http_info(self, request):
        """Create a task
:param CreateTaskRequest request
:return: CreateTaskResponse
"""
all_params = ['create_task_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/tasks',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='CreateTaskResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def create_temp_async(self, request):
        """Create a transaction
:param CreateTempRequest request
:return: CreateTempResponse
"""
return self.create_temp_with_http_info(request)
    def create_temp_with_http_info(self, request):
        """Create a transaction
:param CreateTempRequest request
:return: CreateTempResponse
"""
all_params = ['create_temp_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/templates',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='CreateTempResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def create_variable_async(self, request):
        """Create a variable
:param CreateVariableRequest request
:return: CreateVariableResponse
"""
return self.create_variable_with_http_info(request)
    def create_variable_with_http_info(self, request):
        """Create a variable
:param CreateVariableRequest request
:return: CreateVariableResponse
"""
all_params = ['test_suite_id', 'create_variable_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/variables/{test_suite_id}',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='CreateVariableResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def debug_case_async(self, request):
        """Debug a test case
:param DebugCaseRequest request
:return: DebugCaseResponse
"""
return self.debug_case_with_http_info(request)
    def debug_case_with_http_info(self, request):
        """Debug a test case
:param DebugCaseRequest request
:return: DebugCaseResponse
"""
all_params = ['test_suite_id', 'task_id', 'case_id', 'debug_case_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
if 'case_id' in local_var_params:
path_params['case_id'] = local_var_params['case_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites/{test_suite_id}/tasks/{task_id}/cases/{case_id}/debug',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='DebugCaseResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def delete_case_async(self, request):
        """Delete a test case
:param DeleteCaseRequest request
:return: DeleteCaseResponse
"""
return self.delete_case_with_http_info(request)
    def delete_case_with_http_info(self, request):
        """Delete a test case
:param DeleteCaseRequest request
:return: DeleteCaseResponse
"""
all_params = ['case_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'case_id' in local_var_params:
path_params['case_id'] = local_var_params['case_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/task-cases/{case_id}',
method='DELETE',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='DeleteCaseResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def delete_task_async(self, request):
        """Delete a task
:param DeleteTaskRequest request
:return: DeleteTaskResponse
"""
return self.delete_task_with_http_info(request)
    def delete_task_with_http_info(self, request):
        """Delete a task
:param DeleteTaskRequest request
:return: DeleteTaskResponse
"""
all_params = ['task_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/tasks/{task_id}',
method='DELETE',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='DeleteTaskResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def delete_temp_async(self, request):
        """Delete a transaction
:param DeleteTempRequest request
:return: DeleteTempResponse
"""
return self.delete_temp_with_http_info(request)
    def delete_temp_with_http_info(self, request):
        """Delete a transaction
:param DeleteTempRequest request
:return: DeleteTempResponse
"""
all_params = ['template_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'template_id' in local_var_params:
path_params['template_id'] = local_var_params['template_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/templates/{template_id}',
method='DELETE',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='DeleteTempResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def list_variables_async(self, request):
        """Query global variables
:param ListVariablesRequest request
:return: ListVariablesResponse
"""
return self.list_variables_with_http_info(request)
    def list_variables_with_http_info(self, request):
        """Query global variables
:param ListVariablesRequest request
:return: ListVariablesResponse
"""
all_params = ['variable_type', 'test_suite_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'variable_type' in local_var_params:
path_params['variable_type'] = local_var_params['variable_type']
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/variables/{variable_type}/test-suites/{test_suite_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ListVariablesResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_history_run_info_async(self, request):
        """Query the list of offline reports of a CPTS task
:param ShowHistoryRunInfoRequest request
:return: ShowHistoryRunInfoResponse
"""
return self.show_history_run_info_with_http_info(request)
    def show_history_run_info_with_http_info(self, request):
        """Query the list of offline reports of a CPTS task
:param ShowHistoryRunInfoRequest request
:return: ShowHistoryRunInfoResponse
"""
all_params = ['task_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/tasks/history-run-list/{task_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowHistoryRunInfoResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_report_async(self, request):
        """Query a report
:param ShowReportRequest request
:return: ShowReportResponse
"""
return self.show_report_with_http_info(request)
    def show_report_with_http_info(self, request):
        """Query a report
:param ShowReportRequest request
:return: ShowReportResponse
"""
all_params = ['task_run_id', 'case_run_id', 'brokens_limit_count']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'task_run_id' in local_var_params:
path_params['task_run_id'] = local_var_params['task_run_id']
if 'case_run_id' in local_var_params:
path_params['case_run_id'] = local_var_params['case_run_id']
query_params = []
if 'brokens_limit_count' in local_var_params:
query_params.append(('brokens_limit_count', local_var_params['brokens_limit_count']))
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/task-run-infos/{task_run_id}/case-run-infos/{case_run_id}/reports',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowReportResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_task_async(self, request):
        """Query a task
:param ShowTaskRequest request
:return: ShowTaskResponse
"""
return self.show_task_with_http_info(request)
    def show_task_with_http_info(self, request):
        """Query a task
:param ShowTaskRequest request
:return: ShowTaskResponse
"""
all_params = ['task_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/tasks/{task_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowTaskResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_task_set_async(self, request):
        """Query the task set
:param ShowTaskSetRequest request
:return: ShowTaskSetResponse
"""
return self.show_task_set_with_http_info(request)
    def show_task_set_with_http_info(self, request):
        """Query the task set
:param ShowTaskSetRequest request
:return: ShowTaskSetResponse
"""
all_params = ['test_suite_id', 'offset', 'limit']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
if 'offset' in local_var_params:
query_params.append(('offset', local_var_params['offset']))
if 'limit' in local_var_params:
query_params.append(('limit', local_var_params['limit']))
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/all-tasks/{test_suite_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowTaskSetResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_temp_async(self, request):
        """Query a transaction
:param ShowTempRequest request
:return: ShowTempResponse
"""
return self.show_temp_with_http_info(request)
    def show_temp_with_http_info(self, request):
        """Query a transaction
:param ShowTempRequest request
:return: ShowTempResponse
"""
all_params = ['template_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'template_id' in local_var_params:
path_params['template_id'] = local_var_params['template_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/templates/{template_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowTempResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_temp_set_async(self, request):
        """Query the transaction set
:param ShowTempSetRequest request
:return: ShowTempSetResponse
"""
return self.show_temp_set_with_http_info(request)
    def show_temp_set_with_http_info(self, request):
        """Query the transaction set
:param ShowTempSetRequest request
:return: ShowTempSetResponse
"""
all_params = ['test_suite_id', 'offset', 'limit']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
if 'offset' in local_var_params:
query_params.append(('offset', local_var_params['offset']))
if 'limit' in local_var_params:
query_params.append(('limit', local_var_params['limit']))
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/all-templates/{test_suite_id}',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowTempSetResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def update_case_async(self, request):
        """Update a test case
:param UpdateCaseRequest request
:return: UpdateCaseResponse
"""
return self.update_case_with_http_info(request)
    def update_case_with_http_info(self, request):
        """Update a test case
:param UpdateCaseRequest request
:return: UpdateCaseResponse
"""
all_params = ['case_id', 'target', 'update_case_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'case_id' in local_var_params:
path_params['case_id'] = local_var_params['case_id']
if 'target' in local_var_params:
path_params['target'] = local_var_params['target']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/task-cases/{case_id}/target/{target}',
method='PUT',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='UpdateCaseResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def update_task_async(self, request):
        """Update a task
:param UpdateTaskRequest request
:return: UpdateTaskResponse
"""
return self.update_task_with_http_info(request)
    def update_task_with_http_info(self, request):
        """Update a task
:param UpdateTaskRequest request
:return: UpdateTaskResponse
"""
all_params = ['task_id', 'update_task_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/tasks/{task_id}',
method='PUT',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='UpdateTaskResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def update_task_status_async(self, request):
        """Update the task status
:param UpdateTaskStatusRequest request
:return: UpdateTaskStatusResponse
"""
return self.update_task_status_with_http_info(request)
    def update_task_status_with_http_info(self, request):
        """Update the task status
:param UpdateTaskStatusRequest request
:return: UpdateTaskStatusResponse
"""
all_params = ['test_suite_id', 'task_id', 'update_task_status_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
if 'task_id' in local_var_params:
path_params['task_id'] = local_var_params['task_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites/{test_suite_id}/tasks/{task_id}',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='UpdateTaskStatusResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def update_temp_async(self, request):
        """Update a transaction
:param UpdateTempRequest request
:return: UpdateTempResponse
"""
return self.update_temp_with_http_info(request)
    def update_temp_with_http_info(self, request):
        """Update a transaction
:param UpdateTempRequest request
:return: UpdateTempResponse
"""
all_params = ['template_id', 'update_temp_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'template_id' in local_var_params:
path_params['template_id'] = local_var_params['template_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/templates/{template_id}',
method='PUT',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='UpdateTempResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def update_variable_async(self, request):
        """Update a variable
:param UpdateVariableRequest request
:return: UpdateVariableResponse
"""
return self.update_variable_with_http_info(request)
    def update_variable_with_http_info(self, request):
        """Update a variable
:param UpdateVariableRequest request
:return: UpdateVariableResponse
"""
all_params = ['test_suite_id', 'update_variable_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/variables/{test_suite_id}',
method='PUT',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='UpdateVariableResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def create_project_async(self, request):
        """Create a test project
:param CreateProjectRequest request
:return: CreateProjectResponse
"""
return self.create_project_with_http_info(request)
    def create_project_with_http_info(self, request):
        """Create a test project
:param CreateProjectRequest request
:return: CreateProjectResponse
"""
all_params = ['create_project_request_body']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites',
method='POST',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='CreateProjectResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def delete_project_async(self, request):
        """Delete a test project
:param DeleteProjectRequest request
:return: DeleteProjectResponse
"""
return self.delete_project_with_http_info(request)
    def delete_project_with_http_info(self, request):
        """Delete a test project
:param DeleteProjectRequest request
:return: DeleteProjectResponse
"""
all_params = ['test_suite_id']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
if 'test_suite_id' in local_var_params:
path_params['test_suite_id'] = local_var_params['test_suite_id']
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites/{test_suite_id}',
method='DELETE',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='DeleteProjectResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def list_project_sets_async(self, request):
        """Query the test project set
:param ListProjectSetsRequest request
:return: ListProjectSetsResponse
"""
return self.list_project_sets_with_http_info(request)
    def list_project_sets_with_http_info(self, request):
        """Query the test project set
:param ListProjectSetsRequest request
:return: ListProjectSetsResponse
"""
all_params = ['offset', 'limit']
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
if 'offset' in local_var_params:
query_params.append(('offset', local_var_params['offset']))
if 'limit' in local_var_params:
query_params.append(('limit', local_var_params['limit']))
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ListProjectSetsResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_process_async(self, request):
        """Query the import progress
:param ShowProcessRequest request
:return: ShowProcessResponse
"""
return self.show_process_with_http_info(request)
def show_process_with_http_info(self, request):
"""查询导入进度
查询导入进度
:param ShowProcessRequest request
:return: ShowProcessResponse
"""
all_params = []
local_var_params = {}
for attr in request.attribute_map:
if hasattr(request, attr):
local_var_params[attr] = getattr(request, attr)
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = {}
body_params = None
if isinstance(request, SdkStreamRequest):
body_params = request.get_file_stream()
response_headers = []
header_params['Content-Type'] = http_utils.select_header_content_type(
['application/json'])
auth_settings = []
return self.call_api(
resource_path='/v1/{project_id}/test-suites/upload/processes',
method='GET',
path_params=path_params,
query_params=query_params,
header_params=header_params,
body=body_params,
post_params=form_params,
response_type='ShowProcessResponse',
response_headers=response_headers,
auth_settings=auth_settings,
collection_formats=collection_formats,
request_type=request.__class__.__name__)
    def show_project_async(self, request):
        """Query a project.

        Query a project.

        :param ShowProjectRequest request
        :return: ShowProjectResponse
        """
        return self.show_project_with_http_info(request)

    def show_project_with_http_info(self, request):
        """Query a project.

        Query a project.

        :param ShowProjectRequest request
        :return: ShowProjectResponse
        """
        all_params = ['test_suite_id']

        local_var_params = {}
        for attr in request.attribute_map:
            if hasattr(request, attr):
                local_var_params[attr] = getattr(request, attr)

        collection_formats = {}

        path_params = {}
        if 'test_suite_id' in local_var_params:
            path_params['test_suite_id'] = local_var_params['test_suite_id']

        query_params = []

        header_params = {}

        form_params = {}

        body_params = None
        if isinstance(request, SdkStreamRequest):
            body_params = request.get_file_stream()

        response_headers = []

        header_params['Content-Type'] = http_utils.select_header_content_type(
            ['application/json'])

        auth_settings = []

        return self.call_api(
            resource_path='/v1/{project_id}/test-suites/{test_suite_id}',
            method='GET',
            path_params=path_params,
            query_params=query_params,
            header_params=header_params,
            body=body_params,
            post_params=form_params,
            response_type='ShowProjectResponse',
            response_headers=response_headers,
            auth_settings=auth_settings,
            collection_formats=collection_formats,
            request_type=request.__class__.__name__)
    def update_project_async(self, request):
        """Update a project.

        Update a project.

        :param UpdateProjectRequest request
        :return: UpdateProjectResponse
        """
        return self.update_project_with_http_info(request)

    def update_project_with_http_info(self, request):
        """Update a project.

        Update a project.

        :param UpdateProjectRequest request
        :return: UpdateProjectResponse
        """
        all_params = ['test_suite_id', 'update_project_request_body']

        local_var_params = {}
        for attr in request.attribute_map:
            if hasattr(request, attr):
                local_var_params[attr] = getattr(request, attr)

        collection_formats = {}

        path_params = {}
        if 'test_suite_id' in local_var_params:
            path_params['test_suite_id'] = local_var_params['test_suite_id']

        query_params = []

        header_params = {}

        form_params = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        if isinstance(request, SdkStreamRequest):
            body_params = request.get_file_stream()

        response_headers = []

        header_params['Content-Type'] = http_utils.select_header_content_type(
            ['application/json'])

        auth_settings = []

        return self.call_api(
            resource_path='/v1/{project_id}/test-suites/{test_suite_id}',
            method='PUT',
            path_params=path_params,
            query_params=query_params,
            header_params=header_params,
            body=body_params,
            post_params=form_params,
            response_type='UpdateProjectResponse',
            response_headers=response_headers,
            auth_settings=auth_settings,
            collection_formats=collection_formats,
            request_type=request.__class__.__name__)
    def call_api(self, resource_path, method, path_params=None,
                 query_params=None, header_params=None, body=None,
                 post_params=None, response_type=None, response_headers=None,
                 auth_settings=None, collection_formats=None,
                 request_type=None):
        """Makes the HTTP request and returns deserialized data.

        :param resource_path: Path to the method endpoint.
        :param method: HTTP method to call.
        :param path_params: Path parameters in the URL.
        :param query_params: Query parameters in the URL.
        :param header_params: Header parameters to be placed in the request
            header.
        :param body: Request body.
        :param post_params: Request post form parameters, for
            `application/x-www-form-urlencoded` and `multipart/form-data`.
        :param auth_settings: Auth settings names for the request.
        :param response_type: Response data type.
        :param response_headers: Headers to be added to the response data.
        :param collection_formats: Dict of collection formats for path, query,
            header, and post parameters.
        :param request_type: Request data type.
        :return: The response directly.
        """
        return self.do_http_request(
            method=method,
            resource_path=resource_path,
            path_params=path_params,
            query_params=query_params,
            header_params=header_params,
            body=body,
            post_params=post_params,
            response_type=response_type,
            response_headers=response_headers,
            collection_formats=collection_formats,
            request_type=request_type,
            async_request=True)
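Every `*_with_http_info` method above follows the same pattern: it walks the request object's `attribute_map`, copies the attributes that are actually set into `local_var_params`, and then picks out the ones that belong in the query string. The sketch below isolates that pattern in a self-contained, runnable form; the `ListProjectSetsRequest` stub is a hypothetical stand-in for the SDK's generated request model, not the real class.

```python
# Standalone sketch of the request-to-query-params mapping used by the
# *_with_http_info methods. The request class here is a simplified stub.


class ListProjectSetsRequest(object):
    # Maps Python attribute names to wire-format parameter names.
    attribute_map = {'offset': 'offset', 'limit': 'limit'}

    def __init__(self, offset=None, limit=None):
        self.offset = offset
        self.limit = limit


def build_query_params(request):
    # Copy every attribute listed in attribute_map that exists on the object.
    local_var_params = {}
    for attr in request.attribute_map:
        if hasattr(request, attr):
            local_var_params[attr] = getattr(request, attr)

    # Only the parameters the endpoint knows about become query tuples.
    query_params = []
    if 'offset' in local_var_params:
        query_params.append(('offset', local_var_params['offset']))
    if 'limit' in local_var_params:
        query_params.append(('limit', local_var_params['limit']))
    return query_params


print(build_query_params(ListProjectSetsRequest(offset=0, limit=10)))
# → [('offset', 0), ('limit', 10)]
```

Note that, as in the SDK code above, the `hasattr` check alone does not filter out `None` values; unset attributes initialized to `None` still end up in `local_var_params`.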

# File: test/test_real_estate_agent_data_calls.py
"""
Tests for the calls to the real estate agents' websites, to check that they
still work as intended.

Note: these test cases might fail in case no houses fall within the set
requirements (price / square meters).
"""
import unittest

from data_calls_real_estate_agent import get_arnold_taal_data, get_frisia_makelaars_data, get_bvl_data, get_langezaal_data, \
    get_elzenaar_data, get_oltshoorn_data, get_estata_data, get_nelisse_data, get_doen_data, get_van_aalst_data, \
    get_klap_makelaars_data, get_diva_makelaars_data, get_belderbos_data, get_hekking_data


class Test_data_calls_real_estate_agents(unittest.TestCase):

    def test_arnold(self):
        huizen_data = get_arnold_taal_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_frisiamakelaars_data(self):
        huizen_data = get_frisia_makelaars_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_bvl_data(self):
        huizen_data = get_bvl_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_langezaal_data(self):
        huizen_data = get_langezaal_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_elzenaar_data(self):
        huizen_data = get_elzenaar_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_oltshoorn_data(self):
        huizen_data = get_oltshoorn_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_nelisse_data(self):
        huizen_data = get_nelisse_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_estate_data(self):
        huizen_data = get_estata_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_doen_data(self):
        huizen_data = get_doen_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_belderbos_data(self):
        huizen_data = get_belderbos_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_van_aalst_data(self):
        huizen_data = get_van_aalst_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_hekking_data(self):
        huizen_data = get_hekking_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_klap_makelaars_data(self):
        huizen_data = get_klap_makelaars_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))

    def test_get_diva_makelaars_data(self):
        huizen_data = get_diva_makelaars_data()
        self.assertIsInstance(huizen_data[0], list)
        self.assertGreaterEqual(len(huizen_data[0]), 1)
        # Links
        self.assertIsInstance(huizen_data[1], list)
        self.assertGreaterEqual(len(huizen_data[1]), len(huizen_data[0]))


if __name__ == '__main__':
    unittest.main()

# File: dsl_parser/tests/test_relationships_overloading.py
########
# Copyright (c) 2019 Cloudify Platform Ltd. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dsl_parser import constants, exceptions
from dsl_parser.tests.utils import ResolverWithBlueprintSupport
from dsl_parser.tests.abstract_test_parser import AbstractTestParser


class TestRelationshipsOverloading(AbstractTestParser):

    def verify_relationships(self, relationships, expected_rels):
        source_rels = [(rel['target_id'], rel['type'])
                       for rel in relationships]
        self.assertEqual(set(source_rels), set(expected_rels))

    def validate_expected_relationships(self, main_yaml,
                                        test_node_name,
                                        expected_relationships,
                                        resolver=None):
        parsed = self.parse_1_3(main_yaml, resolver=resolver)
        test_node = self.get_node_by_name(parsed, test_node_name)
        self.assertEqual(len(test_node[constants.RELATIONSHIPS]),
                         len(expected_relationships))
        self.verify_relationships(test_node[constants.RELATIONSHIPS],
                                  expected_relationships)
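The comparison in `verify_relationships` works by reducing each relationship dict to a `(target_id, type)` tuple and comparing sets, so ordering does not matter. The standalone sketch below shows the same idea outside the test class; the node and type names are illustrative, not taken from a real blueprint.

```python
# Order-insensitive relationship check, as done by verify_relationships.
relationships = [
    {'target_id': 'db_node', 'type': 'cloudify.relationships.contained_in'},
    {'target_id': 'other_node', 'type': 'cloudify.relationships.depends_on'},
]

# Reduce each relationship dict to a (target_id, type) tuple.
source_rels = [(rel['target_id'], rel['type']) for rel in relationships]

# The expected list is written in a different order on purpose.
expected_rels = [
    ('other_node', 'cloudify.relationships.depends_on'),
    ('db_node', 'cloudify.relationships.contained_in'),
]

# Set comparison ignores ordering (and collapses duplicates).
matches = set(source_rels) == set(expected_rels)
print(matches)
# → True
```

Because `set()` collapses duplicates, a duplicate relationship would not be caught by this check alone, which is why `validate_expected_relationships` also compares the list lengths first.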
    def test_relationships_overloading_local_import_with_namespace(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    test_node:
        type: test_type
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test--test_type
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
imports:
    - test--{0}
""".format(import_file_name)
        self.validate_expected_relationships(
            main_yaml,
            'test--test_node',
            [('other_node', 'cloudify.relationships.depends_on')])

    def test_relationships_overloading_local_import_without_namespace(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    test_node:
        type: test_type
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
imports:
    - {0}
""".format(import_file_name)
        self.validate_expected_relationships(
            main_yaml,
            'test_node',
            [('other_node', 'cloudify.relationships.depends_on')])
    def test_relationships_overloading_blueprint_import(self):
        node_blueprint = """
tosca_definitions_version: cloudify_dsl_1_3
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    test_node:
        type: test_type
"""
        resolver = ResolverWithBlueprintSupport(
            {'blueprint:node': node_blueprint})

        main_yaml = """
tosca_definitions_version: cloudify_dsl_1_3
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test--test_type
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
imports:
    - test--blueprint:node
"""
        self.validate_expected_relationships(
            main_yaml,
            'test--test_node',
            [('other_node', 'cloudify.relationships.depends_on')],
            resolver)

    def test_extending_relationships(self):
        node_blueprint = """
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--{0}
""".format(import_file_name)
        self.validate_expected_relationships(
            main_yaml,
            'test--test_node',
            [('test--other_node', 'cloudify.relationships.depends_on'),
             ('test--other_node', 'cloudify.relationships.depends_on')])
    def test_not_existing_overloading_target(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    test_node:
        type: test_type
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test--test_type
    test--test_node1:
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
imports:
    - test--{0}
""".format(import_file_name)
        self.assertRaises(exceptions.DSLParsingFormatException,
                          self.parse_1_3,
                          main_yaml)

    def test_multi_levels_extend(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
"""
        node_file_name = self.make_yaml_file(node_blueprint)

        middle_extender_blueprint = """
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--{0}
""".format(node_file_name)
        extender_file_name = self.make_yaml_file(middle_extender_blueprint)

        main_yaml = """
node_templates:
    test--test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--test--other_node
imports:
    - test--{0}
""".format(extender_file_name)
        self.validate_expected_relationships(
            main_yaml,
            'test--test--test_node',
            [('test--test--other_node', 'cloudify.relationships.depends_on'),
             ('test--test--other_node', 'cloudify.relationships.depends_on')])
    def test_multi_levels_extend_with_blueprint_import(self):
        node_blueprint = """
tosca_definitions_version: cloudify_dsl_1_3
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
"""
        middle_extender_blueprint = """
tosca_definitions_version: cloudify_dsl_1_3
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--blueprint:node
"""
        resolver = ResolverWithBlueprintSupport(
            {'blueprint:node': node_blueprint,
             'blueprint:middle': middle_extender_blueprint})

        main_yaml = """
tosca_definitions_version: cloudify_dsl_1_3
node_templates:
    test--test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--test--other_node
imports:
    - test--blueprint:middle
"""
        self.validate_expected_relationships(
            main_yaml,
            'test--test--test_node',
            [('test--test--other_node', 'cloudify.relationships.depends_on'),
             ('test--test--other_node', 'cloudify.relationships.depends_on')],
            resolver)

    def test_middle_layer_disrupt(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
"""
        node_file_name = self.make_yaml_file(node_blueprint)

        extender_blueprint = """
node_templates:
    test--test_node:
        type: test--test_type
imports:
    - test--{0}
""".format(node_file_name)
        extender_file_name = self.make_yaml_file(extender_blueprint)

        main_yaml = """
node_templates:
    test--test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--test--other_node
imports:
    - test--{0}
""".format(extender_file_name)
        self.assertRaises(exceptions.DSLParsingLogicException,
                          self.parse_1_3,
                          main_yaml)
    def test_not_extending_at_main_blueprint_failure(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
"""
        node_file_name = self.make_yaml_file(node_blueprint)

        middle_extender_blueprint = """
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--{0}
""".format(node_file_name)
        extender_file_name = self.make_yaml_file(middle_extender_blueprint)

        main_yaml = """
node_templates:
    test--test--test_node:
        type: test_type
imports:
    - test--{0}
""".format(extender_file_name)
        self.assertRaises(exceptions.DSLParsingLogicException,
                          self.parse_1_3,
                          main_yaml)

    def test_extending_other_fields_failure(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
"""
        node_file_name = self.make_yaml_file(node_blueprint)

        extender_blueprint = """
node_templates:
    test--test_node:
        properties:
            prop1: e
imports:
    - test--{0}
""".format(node_file_name)
        extender_file_name = self.make_yaml_file(extender_blueprint)

        main_yaml = """
node_templates:
    test--test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--test--other_node
imports:
    - test--{0}
""".format(extender_file_name)
        self.assertRaises(exceptions.DSLParsingLogicException,
                          self.parse_1_3,
                          main_yaml)
    def test_extending_only_other_fields_failure(self):
        node_blueprint = """
node_types:
    test_type:
        properties:
            prop1:
                default: value2
relationships:
    cloudify.relationships.depends_on:
        properties:
            connection_type:
                default: 'all_to_all'
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
"""
        node_file_name = self.make_yaml_file(node_blueprint)

        extender_blueprint = """
node_templates:
    test--test_node:
        properties:
            prop1: e
imports:
    - test--{0}
""".format(node_file_name)
        extender_file_name = self.make_yaml_file(extender_blueprint)

        main_yaml = """
node_templates:
    test--test--test_node:
        properties:
            prop1: e
imports:
    - test--{0}
""".format(extender_file_name)
        self.assertRaises(exceptions.DSLParsingLogicException,
                          self.parse_1_3,
                          main_yaml)

    def test_side_import_expending_relationships(self):
        side_blueprint = """
imports:
    - http://local-test-resolver/types.yaml
node_types:
    test_other_type:
        properties:
            prop1:
                default: value2
node_templates:
    some_node:
        type: test_other_type
    test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: some_node
"""
        side_import_file_name = self.make_yaml_file(side_blueprint)

        node_blueprint = """
imports:
    - http://local-test-resolver/types.yaml
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--{0}
    - test--{1}
""".format(import_file_name, side_import_file_name)
        self.validate_expected_relationships(main_yaml, 'test--test_node', [
            ('test--other_node', 'cloudify.relationships.depends_on'),
            ('test--other_node', 'cloudify.relationships.depends_on'),
            ('test--some_node', 'cloudify.relationships.depends_on'),
        ])
    def test_side_import_not_expending_relationships_failure(self):
        side_blueprint = """
imports:
    - http://local-test-resolver/types.yaml
node_types:
    test_other_type:
        properties:
            prop1:
                default: value2
node_templates:
    some_node:
        type: test_other_type
    test_node:
        properties:
            prop1: value2
"""
        side_import_file_name = self.make_yaml_file(side_blueprint)

        node_blueprint = """
imports:
    - http://local-test-resolver/types.yaml
node_types:
    test_type:
        properties:
            prop1:
                default: value2
node_templates:
    other_node:
        type: test_type
    test_node:
        type: test_type
        relationships:
            - type: cloudify.relationships.depends_on
              target: other_node
"""
        import_file_name = self.make_yaml_file(node_blueprint)

        main_yaml = """
node_templates:
    test--test_node:
        relationships:
            - type: cloudify.relationships.depends_on
              target: test--other_node
imports:
    - test--{0}
    - test--{1}
""".format(import_file_name, side_import_file_name)
        self.assertRaises(exceptions.DSLParsingLogicException,
                          self.parse_1_3,
                          main_yaml)

# File: bazel/pyd/py_app/main.py
from __future__ import print_function
import alice

print("alice.qux(3)=%d" % alice.qux(3))

# File: dlkit/json_/grading/default_mdata.py
"""JSON osid metadata configurations for grading service."""
from .. import types
from ..primitives import Type

DEFAULT_LANGUAGE_TYPE = Type(**types.Language().get_type_data("DEFAULT"))
DEFAULT_SCRIPT_TYPE = Type(**types.Script().get_type_data("DEFAULT"))
DEFAULT_FORMAT_TYPE = Type(**types.Format().get_type_data("DEFAULT"))
DEFAULT_GENUS_TYPE = Type(**types.Genus().get_type_data("DEFAULT"))
def get_grade_mdata():
    """Return default mdata map for Grade"""
    return {
        'output_score': {
            'element_label': {
                'text': 'output score',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
        'grade_system': {
            'element_label': {
                'text': 'grade system',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'accepts an osid.id.Id object',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_id_values': [''],
            'syntax': 'ID',
            'id_set': [],
        },
        'input_score_end_range': {
            'element_label': {
                'text': 'input score end range',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
        'input_score_start_range': {
            'element_label': {
                'text': 'input score start range',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
    }
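Each entry in these mdata maps describes one metadata element: the `syntax` key names the value type (`DECIMAL`, `ID`, `BOOLEAN`), and flags such as `required` constrain its use. The sketch below shows a hypothetical consumer of such an entry; `check_value` is illustrative only and is not part of the dlkit API.

```python
# Hypothetical checker that reads the 'syntax' and 'required' keys of an
# mdata entry (as returned by get_grade_mdata and friends) to sanity-check
# a candidate value. Illustrative only, not a dlkit function.


def check_value(mdata_entry, value):
    # A missing value is acceptable only when the element is not required.
    if value is None:
        return not mdata_entry['required']
    if mdata_entry['syntax'] == 'DECIMAL':
        return isinstance(value, float)
    if mdata_entry['syntax'] == 'BOOLEAN':
        return isinstance(value, bool)
    if mdata_entry['syntax'] == 'ID':
        return isinstance(value, str)
    return False


# Shaped like the 'output_score' entry above (subset of its keys).
sample_entry = {'required': False, 'syntax': 'DECIMAL'}
print(check_value(sample_entry, 3.5))   # → True
print(check_value(sample_entry, None))  # → True
```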
def get_grade_system_mdata():
    """Return default mdata map for GradeSystem"""
    return {
        'numeric_score_increment': {
            'element_label': {
                'text': 'numeric score increment',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
        'lowest_numeric_score': {
            'element_label': {
                'text': 'lowest numeric score',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
        'based_on_grades': {
            'element_label': {
                'text': 'based on grades',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter either true or false.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_boolean_values': [None],
            'syntax': 'BOOLEAN',
        },
        'highest_numeric_score': {
            'element_label': {
                'text': 'highest numeric score',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'instructions': {
                'text': 'enter a decimal value.',
                'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
                'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
                'formatTypeId': str(DEFAULT_FORMAT_TYPE),
            },
            'required': False,
            'read_only': False,
            'linked': False,
            'array': False,
            'default_decimal_values': [None],
            'syntax': 'DECIMAL',
            'decimal_scale': None,
            'minimum_decimal': None,
            'maximum_decimal': None,
            'decimal_set': [],
        },
    }
def get_grade_entry_mdata():
"""Return default mdata map for GradeEntry"""
return {
'resource': {
'element_label': {
'text': 'resource',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'accepts an osid.id.Id object',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_id_values': [''],
'syntax': 'ID',
'id_set': [],
},
'grade': {
'element_label': {
'text': 'grade',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'accepts an osid.id.Id object',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_id_values': [''],
'syntax': 'ID',
'id_set': [],
},
'ignored_for_calculations': {
'element_label': {
'text': 'ignored for calculations',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'enter either true or false.',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_boolean_values': [None],
'syntax': 'BOOLEAN',
},
'score': {
'element_label': {
'text': 'score',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'enter a decimal value.',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_decimal_values': [None],
'syntax': 'DECIMAL',
'decimal_scale': None,
'minimum_decimal': None,
'maximum_decimal': None,
'decimal_set': [],
},
'gradebook_column': {
'element_label': {
'text': 'gradebook column',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'accepts an osid.id.Id object',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_id_values': [''],
'syntax': 'ID',
'id_set': [],
},
}
def get_gradebook_column_mdata():
"""Return default mdata map for GradebookColumn"""
return {
'grade_system': {
'element_label': {
'text': 'grade system',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'accepts an osid.id.Id object',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_id_values': [''],
'syntax': 'ID',
'id_set': [],
},
}
def get_gradebook_column_summary_mdata():
"""Return default mdata map for GradebookColumnSummary"""
return {
'gradebook_column': {
'element_label': {
'text': 'gradebook column',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'instructions': {
'text': 'accepts an osid.id.Id object',
'languageTypeId': str(DEFAULT_LANGUAGE_TYPE),
'scriptTypeId': str(DEFAULT_SCRIPT_TYPE),
'formatTypeId': str(DEFAULT_FORMAT_TYPE),
},
'required': False,
'read_only': False,
'linked': False,
'array': False,
'default_id_values': [''],
'syntax': 'ID',
'id_set': [],
},
}
def get_gradebook_mdata():
"""Return default mdata map for Gradebook"""
return {
}
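The mdata maps above are plain dictionaries describing form metadata (syntax, required/read-only flags, decimal bounds); the OSID form code that consumes them is not part of this module. As a hedged sketch of how one `DECIMAL` entry might be checked against a candidate value — `validate_decimal` and the trimmed `entry` literal are hypothetical illustrations, not part of this API:

```python
def validate_decimal(mdata_entry, value):
    """Check a candidate value against one DECIMAL mdata entry."""
    if mdata_entry['syntax'] != 'DECIMAL':
        raise ValueError('not a decimal element')
    if value is None:
        # A missing value is acceptable only for non-required elements.
        return not mdata_entry['required']
    minimum = mdata_entry['minimum_decimal']
    maximum = mdata_entry['maximum_decimal']
    if minimum is not None and value < minimum:
        return False
    if maximum is not None and value > maximum:
        return False
    # An empty decimal_set means any in-range value is allowed.
    allowed = mdata_entry['decimal_set']
    return not allowed or value in allowed

entry = {'syntax': 'DECIMAL', 'required': False,
         'minimum_decimal': 0.0, 'maximum_decimal': 100.0, 'decimal_set': []}
print(validate_decimal(entry, 87.5))   # True
print(validate_decimal(entry, 120.0))  # False
```

The same shape would extend to the `BOOLEAN` and `ID` entries, dispatching on the `syntax` key.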
| 36.488127 | 73 | 0.495914 | 1,104 | 13,829 | 5.914855 | 0.074275 | 0.137825 | 0.090199 | 0.147014 | 0.895253 | 0.875191 | 0.839051 | 0.839051 | 0.839051 | 0.839051 | 0 | 0 | 0.378697 | 13,829 | 378 | 74 | 36.584656 | 0.760009 | 0.022127 | 0 | 0.76257 | 0 | 0 | 0.282049 | 0.022985 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01676 | false | 0 | 0.005587 | 0 | 0.039106 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6040ac6e59fc37687862276584f924b334f0972d | 10,173 | py | Python | tests/device/cli/piv/test_generate_cert_and_csr.py | maxthomas/yubikey-manager | 79bf111093401dbbe18ef7627d45e8c472ba17dd | [
"BSD-2-Clause"
] | null | null | null | tests/device/cli/piv/test_generate_cert_and_csr.py | maxthomas/yubikey-manager | 79bf111093401dbbe18ef7627d45e8c472ba17dd | [
"BSD-2-Clause"
] | null | null | null | tests/device/cli/piv/test_generate_cert_and_csr.py | maxthomas/yubikey-manager | 79bf111093401dbbe18ef7627d45e8c472ba17dd | [
"BSD-2-Clause"
] | null | null | null | from binascii import b2a_hex
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding
from .util import DEFAULT_PIN, DEFAULT_MANAGEMENT_KEY, NON_DEFAULT_MANAGEMENT_KEY
from ... import condition
import pytest
def _verify_cert(cert, pubkey):
cert_signature = cert.signature
cert_bytes = cert.tbs_certificate_bytes
if isinstance(pubkey, rsa.RSAPublicKey):
pubkey.verify(
cert_signature,
cert_bytes,
padding.PKCS1v15(),
cert.signature_hash_algorithm,
)
elif isinstance(pubkey, ec.EllipticCurvePublicKey):
pubkey.verify(
cert_signature, cert_bytes, ec.ECDSA(cert.signature_hash_algorithm)
)
else:
raise ValueError("Unsupported public key value")
def not_roca(version):
return not ((4, 2, 0) <= version < (4, 3, 5))
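`not_roca` gates the RSA test cases: firmware versions in the range excluded here are treated by this suite as carrying the Infineon key-generation flaw known as ROCA (CVE-2017-15361), so on-device RSA generation is skipped for them. Python's lexicographic tuple comparison keeps the range check to one expression:

```python
def not_roca(version):
    """True unless firmware `version` lies in [4.2.0, 4.3.5), the range
    this test suite treats as ROCA-affected."""
    return not ((4, 2, 0) <= version < (4, 3, 5))

# Tuples compare element by element, left to right:
print(not_roca((4, 1, 9)))   # True  - below the range
print(not_roca((4, 2, 0)))   # False - lower bound is inclusive
print(not_roca((4, 3, 4)))   # False - still inside
print(not_roca((4, 3, 5)))   # True  - upper bound is exclusive
```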
class TestNonDefaultMgmKey:
@pytest.fixture(autouse=True)
def set_mgmt_key(self, ykman_cli):
ykman_cli(
"piv",
"access",
"change-management-key",
"-P",
DEFAULT_PIN,
"-m",
DEFAULT_MANAGEMENT_KEY,
"-n",
NON_DEFAULT_MANAGEMENT_KEY,
)
def _test_generate_self_signed(self, ykman_cli, slot, algo):
pubkey_output = ykman_cli(
"piv",
"keys",
"generate",
slot,
"-a",
algo,
"-m",
NON_DEFAULT_MANAGEMENT_KEY,
"-",
).output
ykman_cli(
"piv",
"certificates",
"generate",
slot,
"-m",
NON_DEFAULT_MANAGEMENT_KEY,
"-s",
"subject-" + algo,
"-P",
DEFAULT_PIN,
"-",
input=pubkey_output,
)
output = ykman_cli("piv", "certificates", "export", slot, "-").output
cert = x509.load_pem_x509_certificate(output.encode(), default_backend())
_verify_cert(cert, cert.public_key())
fingerprint = b2a_hex(cert.fingerprint(hashes.SHA256())).decode("ascii")
output = ykman_cli("piv", "info").output
assert "Fingerprint:\t" + fingerprint in output
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9a_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9a", "RSA1024")
def test_generate_self_signed_slot_9a_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9a", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9c_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9c", "RSA1024")
def test_generate_self_signed_slot_9c_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9c", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9d_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9d", "RSA1024")
def test_generate_self_signed_slot_9d_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9d", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9e_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9e", "RSA1024")
def test_generate_self_signed_slot_9e_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9e", "ECCP256")
def _test_generate_csr(self, ykman_cli, slot, algo):
subject_input = "subject-" + algo
pubkey_output = ykman_cli(
"piv",
"keys",
"generate",
slot,
"-a",
algo,
"-m",
NON_DEFAULT_MANAGEMENT_KEY,
"-",
).output
csr_output = ykman_cli(
"piv",
"certificates",
"request",
slot,
"-P",
DEFAULT_PIN,
"-",
"-",
"-s",
subject_input,
input=pubkey_output,
).output
csr = x509.load_pem_x509_csr(csr_output.encode("utf-8"), default_backend())
subject_output = csr.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME)[
0
].value
assert subject_input == subject_output
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9a_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9a", "RSA1024")
def test_generate_csr_slot_9a_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9a", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9c_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9c", "RSA1024")
def test_generate_csr_slot_9c_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9c", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9d_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9d", "RSA1024")
def test_generate_csr_slot_9d_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9d", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9e_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9e", "RSA1024")
def test_generate_csr_slot_9e_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9e", "ECCP256")
class TestProtectedMgmKey:
@pytest.fixture(autouse=True)
def protect_mgmt_key(self, ykman_cli):
ykman_cli(
"piv",
"access",
"change-management-key",
"-p",
"-P",
DEFAULT_PIN,
"-m",
DEFAULT_MANAGEMENT_KEY,
)
def _test_generate_self_signed(self, ykman_cli, slot, algo):
pubkey_output = ykman_cli(
"piv", "keys", "generate", slot, "-a", algo, "-P", DEFAULT_PIN, "-"
).output
ykman_cli(
"piv",
"certificates",
"generate",
slot,
"-P",
DEFAULT_PIN,
"-s",
"subject-" + algo,
"-",
input=pubkey_output,
)
output = ykman_cli("piv", "certificates", "export", slot, "-").output
cert = x509.load_pem_x509_certificate(output.encode(), default_backend())
_verify_cert(cert, cert.public_key())
fingerprint = b2a_hex(cert.fingerprint(hashes.SHA256())).decode("ascii")
output = ykman_cli("piv", "info").output
assert "Fingerprint:\t" + fingerprint in output
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9a_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9a", "RSA1024")
def test_generate_self_signed_slot_9a_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9a", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9c_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9c", "RSA1024")
def test_generate_self_signed_slot_9c_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9c", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9d_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9d", "RSA1024")
def test_generate_self_signed_slot_9d_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9d", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_self_signed_slot_9e_rsa1024(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9e", "RSA1024")
def test_generate_self_signed_slot_9e_eccp256(self, ykman_cli):
self._test_generate_self_signed(ykman_cli, "9e", "ECCP256")
def _test_generate_csr(self, ykman_cli, slot, algo):
subject_input = "subject-" + algo
pubkey_output = ykman_cli(
"piv", "keys", "generate", slot, "-a", algo, "-P", DEFAULT_PIN, "-"
).output
csr_output = ykman_cli(
"piv",
"certificates",
"request",
slot,
"-P",
DEFAULT_PIN,
"-",
"-",
"-s",
subject_input,
input=pubkey_output,
).output
csr = x509.load_pem_x509_csr(csr_output.encode("utf-8"), default_backend())
subject_output = csr.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME)[
0
].value
assert subject_input == subject_output
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9a_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9a", "RSA1024")
def test_generate_csr_slot_9a_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9a", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9c_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9c", "RSA1024")
def test_generate_csr_slot_9c_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9c", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9d_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9d", "RSA1024")
def test_generate_csr_slot_9d_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9d", "ECCP256")
@condition.fips(False)
@condition(not_roca)
def test_generate_csr_slot_9e_rsa1024(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9e", "RSA1024")
def test_generate_csr_slot_9e_eccp256(self, ykman_cli):
self._test_generate_csr(ykman_cli, "9e", "ECCP256")
| 33.136808 | 86 | 0.632852 | 1,212 | 10,173 | 4.893564 | 0.096535 | 0.113303 | 0.076884 | 0.126117 | 0.880796 | 0.867645 | 0.852976 | 0.833923 | 0.833923 | 0.833923 | 0 | 0.045322 | 0.25607 | 10,173 | 306 | 87 | 33.245098 | 0.738372 | 0 | 0 | 0.859922 | 0 | 0 | 0.070776 | 0.004129 | 0 | 0 | 0 | 0 | 0.015564 | 1 | 0.155642 | false | 0 | 0.031128 | 0.003891 | 0.198444 | 0.015564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6064ede76a66a98b325fc12a181bdc94f742fdf2 | 18,029 | py | Python | greenmst/tests.py | biancini/GreenMST-ryu | 170c4b272d4c6a13e672cfb00230daa739213c6e | [
"Apache-2.0"
] | 1 | 2019-12-07T12:08:36.000Z | 2019-12-07T12:08:36.000Z | greenmst/tests.py | biancini/GreenMST-ryu | 170c4b272d4c6a13e672cfb00230daa739213c6e | [
"Apache-2.0"
] | null | null | null | greenmst/tests.py | biancini/GreenMST-ryu | 170c4b272d4c6a13e672cfb00230daa739213c6e | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2014 Andrea Biancini <andrea.biancini@gmail.com>.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__author__ = 'Andrea Biancini <andrea.biancini@gmail.com>'
import random
import ConfigParser
from nose.tools import assert_equals, assert_true
from mock import Mock, call, patch
from link import Link
from controller import Controller
from simple_switch import SimpleSwitch
from topology_costs import TopologyCosts
from ryu.ofproto import ofproto_v1_0, ofproto_v1_0_parser, ether
from ryu.lib.packet.ethernet import ethernet

config = ConfigParser.RawConfigParser()
config.read('tests.cfg')
def random_mac():
mac = [0x00, 0x16, 0x3e, random.randint(0x00, 0x7f), random.randint(0x00, 0xff), random.randint(0x00, 0xff)]
return mac
def verify_port_mod(msg, hw_addr, port_num, config):
assert_equals(msg.hw_addr, hw_addr)
assert_equals(msg.port_no, port_num)
assert_equals(msg.mask, ofproto_v1_0.OFPPC_NO_FLOOD)
assert_equals(msg.config, config)
def test_update_links():
# arrange
topo_edges = []
topo_edges.append(Link(src=1, src_port=1, dst=2, dst_port=1, cost=1))
topo_edges.append(Link(src=1, src_port=2, dst=3, dst_port=1, cost=4))
topo_edges.append(Link(src=1, src_port=3, dst=4, dst_port=1, cost=2))
topo_edges.append(Link(src=2, src_port=2, dst=3, dst_port=2, cost=3))
topo_edges.append(Link(src=2, src_port=3, dst=4, dst_port=2, cost=4))
topo_edges.append(Link(src=3, src_port=3, dst=4, dst_port=3, cost=1))
redundant_expected = []
redundant_expected.append(Link(src=1, src_port=2, dst=3, dst_port=1, cost=4))
redundant_expected.append(Link(src=2, src_port=3, dst=4, dst_port=2, cost=4))
redundant_expected.append(Link(src=2, src_port=2, dst=3, dst_port=2, cost=3))
controller = Controller()
controller.mod_port = Mock()
mod_ports = []
mod_ports.append(call(1, 2, False))
mod_ports.append(call(2, 3, False))
mod_ports.append(call(2, 2, False))
mod_ports.append(call(3, 1, False))
mod_ports.append(call(4, 2, False))
mod_ports.append(call(3, 2, False))
# act
controller.topo_edges = topo_edges
controller.update_links()
result = controller.redundant_edges
# assert
assert_equals(len(redundant_expected), len(result))
for expected_link in redundant_expected:
assert_true(expected_link in result)
for cur_link in result:
if cur_link == expected_link:
assert_equals(cur_link.cost, expected_link.cost)
assert_equals(len(mod_ports), controller.mod_port.call_count)
controller.mod_port.assert_has_calls(mod_ports, any_order=True)
def test_find_redundant_edges():
# arrange
topo_edges = []
topo_edges.append(Link(src=1, src_port=1, dst=2, dst_port=1, cost=1))
topo_edges.append(Link(src=1, src_port=2, dst=3, dst_port=1, cost=4))
topo_edges.append(Link(src=1, src_port=3, dst=4, dst_port=1, cost=2))
topo_edges.append(Link(src=2, src_port=2, dst=3, dst_port=2, cost=3))
topo_edges.append(Link(src=2, src_port=3, dst=4, dst_port=2, cost=4))
topo_edges.append(Link(src=3, src_port=3, dst=4, dst_port=3, cost=1))
mst_edges = []
mst_edges.append(Link(src=1, src_port=1, dst=2, dst_port=1, cost=1))
mst_edges.append(Link(src=1, src_port=3, dst=4, dst_port=1, cost=2))
mst_edges.append(Link(src=3, src_port=3, dst=4, dst_port=3, cost=1))
redundant_expected = []
redundant_expected.append(Link(src=1, src_port=2, dst=3, dst_port=1, cost=4))
redundant_expected.append(Link(src=2, src_port=3, dst=4, dst_port=2, cost=4))
redundant_expected.append(Link(src=2, src_port=2, dst=3, dst_port=2, cost=3))
controller = Controller()
# act
controller.topo_edges = topo_edges
result = controller.find_redundant_edges(mst_edges)
# assert
assert_equals(len(redundant_expected), len(result))
for expected_link in redundant_expected:
assert_true(expected_link in result)
for cur_link in result:
if cur_link == expected_link:
assert_equals(cur_link.cost, expected_link.cost)
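The controller's MST routine itself is not shown in this chunk; what the test encodes is that a minimum spanning tree of the six-edge topology is {1-2, 3-4, 1-4} and the remaining three edges are redundant. A hypothetical Kruskal sketch (the union-find helpers are ours, not the controller's) reproduces exactly that split:

```python
def kruskal(nodes, edges):
    """Minimum spanning tree via Kruskal's algorithm over (src, dst, cost)."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    mst = []
    for src, dst, cost in sorted(edges, key=lambda e: e[2]):
        root_src, root_dst = find(src), find(dst)
        if root_src != root_dst:           # edge connects two components
            parent[root_src] = root_dst
            mst.append((src, dst, cost))
    return mst

topo = [(1, 2, 1), (1, 3, 4), (1, 4, 2), (2, 3, 3), (2, 4, 4), (3, 4, 1)]
mst = kruskal([1, 2, 3, 4], topo)
redundant = [e for e in topo if e not in mst]
print(mst)        # [(1, 2, 1), (3, 4, 1), (1, 4, 2)]
print(redundant)  # [(1, 3, 4), (2, 3, 3), (2, 4, 4)]
```

The redundant edges are the ones `update_links` subsequently closes with `mod_port(..., False)`, which matches the `mod_ports` call list asserted in `test_update_links` above.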
@patch('ryu.topology.api.get_switch')
def test_mod_port_open(mock_get_switch):
# arrange
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
ports = []
for i in range(4):
port = Mock()
port.hw_addr = random_mac()
ports.append(port)
switch = Mock(dp=mock_datapath, ports=ports)
mock_get_switch.return_value = [switch]
controller = Controller()
# act
controller.mod_port(1, 2, True)
# assert
assert_equals(1, mock_datapath.send_msg.call_count)
verify_port_mod(mock_datapath.send_msg.call_args[0][0], ports[1].hw_addr, 2, 0)
@patch('ryu.topology.api.get_switch')
def test_mod_port_close(mock_get_switch):
# arrange
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
ports = []
for i in range(4):
port = Mock()
port.hw_addr = random_mac()
ports.append(port)
switch = Mock(dp=mock_datapath, ports=ports)
mock_get_switch.return_value = [switch]
controller = Controller()
# act
controller.mod_port(1, 2, False)
# assert
assert_equals(1, mock_datapath.send_msg.call_count)
verify_port_mod(mock_datapath.send_msg.call_args[0][0], ports[1].hw_addr, 2, 63)
def test_event_link_add():
# arrange
link = Mock()
link.to_dict.return_value = {'src': { 'dpid': 1, 'port_no': 1 }, 'dst': {'dpid': 2, 'port_no': 1 }}
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
event = Mock(link=link)
controller = Controller()
controller.update_links = Mock()
# act
controller._event_link_add_handler(event)
# assert
assert_equals([Link(src=1, src_port=1, dst=2, dst_port=1)], controller.topo_edges)
assert_equals(1, controller.update_links.call_count)
def test_event_link_add_inverse():
# arrange
link = Mock()
link.to_dict.return_value = {'src': { 'dpid': 1, 'port_no': 1 }, 'dst': {'dpid': 2, 'port_no': 1 }}
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
event = Mock(link=link)
controller = Controller()
controller.update_links = Mock()
controller.topo_edges = [Link(src=2, src_port=1, dst=1, dst_port=1)]
# act
controller._event_link_add_handler(event)
# assert
assert_equals([Link(src=2, src_port=1, dst=1, dst_port=1)], controller.topo_edges)
assert_equals(0, controller.update_links.call_count)
def test_event_link_delete():
# arrange
link = Mock()
link.to_dict.return_value = {'src': { 'dpid': 1, 'port_no': 1 }, 'dst': {'dpid': 2, 'port_no': 1 }}
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
event = Mock(link=link)
controller = Controller()
controller.update_links = Mock()
controller.topo_edges = [Link(src=1, src_port=1, dst=2, dst_port=1)]
# act
controller._event_link_delete_handler(event)
# assert
assert_equals([], controller.topo_edges)
assert_equals(1, controller.update_links.call_count)
def test_event_link_delete_inverse():
# arrange
link = Mock()
link.to_dict.return_value = {'src': { 'dpid': 1, 'port_no': 1 }, 'dst': {'dpid': 2, 'port_no': 1 }}
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
event = Mock(link=link)
controller = Controller()
controller.update_links = Mock()
controller.topo_edges = [Link(src=2, src_port=1, dst=1, dst_port=1)]
# act
controller._event_link_delete_handler(event)
# assert
assert_equals([], controller.topo_edges)
assert_equals(1, controller.update_links.call_count)
def test_event_link_delete_none():
# arrange
link = Mock()
link.to_dict.return_value = {'src': { 'dpid': 1, 'port_no': 1 }, 'dst': {'dpid': 2, 'port_no': 1 }}
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
event = Mock(link=link)
controller = Controller()
controller.update_links = Mock()
controller.topo_edges = [Link(src=1, src_port=2, dst=3, dst_port=1)]
# act
controller._event_link_delete_handler(event)
# assert
assert_equals([Link(src=1, src_port=2, dst=3, dst_port=1)], controller.topo_edges)
assert_equals(0, controller.update_links.call_count)
def test_set_costs():
# arrange
costs = { '1,2': 10, '2,3': 5, '1,3': 1 }
controller = Controller()
controller.update_links = Mock()
# act
controller.set_costs(costs)
# assert
topo_costs = TopologyCosts()
assert_equals(costs, topo_costs.costs)
assert_equals(1, controller.update_links.call_count)
def test_set_cost():
# arrange
source = 1
destination = 2
cost = 10
topo_costs = TopologyCosts()
topo_costs.costs = {}
# act
topo_costs.set_cost(source, destination, cost)
# assert
assert_equals({'%s,%s' % (source,destination): cost}, topo_costs.costs)
def test_get_cost():
# arrange
source = 1
destination = 2
cost = 10
topo_costs = TopologyCosts()
topo_costs.costs = {'%s,%s' % (source,destination): cost}
# act
result = topo_costs.get_cost(source, destination)
# assert
assert_equals(result, cost)
def test_get_cost_default():
# arrange
source = 1
destination = 2
default_cost = 1
topo_costs = TopologyCosts()
topo_costs.costs = {}
# act
result = topo_costs.get_cost(source, destination)
# assert
assert_equals(result, default_cost)
def test_add_flow_singleport():
# arrange
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
in_port = 1
out_port = 3
destination = 2
actions = [ofproto_v1_0_parser.OFPActionOutput(out_port)]
controller = SimpleSwitch()
# act
controller.add_flow(mock_datapath, in_port, destination, actions)
# assert
assert_equals(1, mock_datapath.send_msg.call_count)
mod = mock_datapath.send_msg.call_args[0][0]
assert_equals(in_port, mod.match.in_port)
assert_equals(ofproto_v1_0.OFPFC_ADD, mod.command)
assert_equals(0, mod.idle_timeout)
assert_equals(0, mod.hard_timeout)
assert_equals(ofproto_v1_0.OFP_DEFAULT_PRIORITY, mod.priority)
assert_equals(1, len(mod.actions))
assert_true(isinstance(mod.actions[0], ofproto_v1_0_parser.OFPActionOutput))
assert_equals(out_port, mod.actions[0].port)
def test_add_flow_flood():
# arrange
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
in_port = 1
out_port = ofproto_v1_0.OFPP_FLOOD
destination = 2
actions = [ofproto_v1_0_parser.OFPActionOutput(out_port)]
controller = SimpleSwitch()
# act
controller.add_flow(mock_datapath, in_port, destination, actions)
# assert
assert_equals(1, mock_datapath.send_msg.call_count)
mod = mock_datapath.send_msg.call_args[0][0]
assert_equals(in_port, mod.match.in_port)
assert_equals(ofproto_v1_0.OFPFC_ADD, mod.command)
assert_equals(0, mod.idle_timeout)
assert_equals(0, mod.hard_timeout)
assert_equals(ofproto_v1_0.OFP_DEFAULT_PRIORITY, mod.priority)
assert_equals(1, len(mod.actions))
assert_true(isinstance(mod.actions[0], ofproto_v1_0_parser.OFPActionOutput))
assert_equals(out_port, mod.actions[0].port)
def test_packet_in_lldp():
# arrange
src = config.get('main', 'MAC_ADDR_1')
dst = config.get('main', 'MAC_ADDR_2')
packet = ethernet(src=src, dst=dst, ethertype=ether.ETH_TYPE_LLDP)
mock_datapath = Mock(ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
message = Mock(datapath=mock_datapath, data=packet.serialize(None, None))
event = Mock(msg=message)
controller = SimpleSwitch()
# act
controller._packet_in_handler(event)
# assert
assert_equals(0, mock_datapath.send_msg.call_count)
def test_packet_in_flood():
# arrange
src = config.get('main', 'MAC_ADDR_1')
dst = config.get('main', 'MAC_ADDR_2')
in_port = 1
out_port = ofproto_v1_0.OFPP_FLOOD
packet = ethernet(src=src, dst=dst, ethertype=ether.ETH_TYPE_IP)
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
message = Mock(datapath=mock_datapath, data=packet.serialize(None, None), in_port=in_port)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.add_flow = Mock()
# act
controller._packet_in_handler(event)
# assert
assert_true(1 in controller.mac_to_port)
assert_equals({src: in_port}, controller.mac_to_port[1])
assert_equals(0, controller.add_flow.call_count)
assert_equals(1, mock_datapath.send_msg.call_count)
mod = mock_datapath.send_msg.call_args[0][0]
assert_equals(in_port, mod.in_port)
assert_equals(1, len(mod.actions))
assert_true(isinstance(mod.actions[0], ofproto_v1_0_parser.OFPActionOutput))
assert_equals(out_port, mod.actions[0].port)
def test_packet_in_noflood():
# arrange
src = config.get('main', 'MAC_ADDR_1')
dst = config.get('main', 'MAC_ADDR_2')
in_port = 1
out_port = 3
packet = ethernet(src=src, dst=dst, ethertype=ether.ETH_TYPE_IP)
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0, ofproto_parser=ofproto_v1_0_parser)
message = Mock(datapath=mock_datapath, data=packet.serialize(None, None), in_port=in_port)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.add_flow = Mock()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._packet_in_handler(event)
# assert
assert_true(1 in controller.mac_to_port)
assert_equals({src: in_port, dst: out_port}, controller.mac_to_port[1])
assert_equals(1, controller.add_flow.call_count)
assert_equals(mock_datapath, controller.add_flow.call_args[0][0])
assert_equals(in_port, controller.add_flow.call_args[0][1])
assert_equals(dst, controller.add_flow.call_args[0][2])
assert_equals(1, len(controller.add_flow.call_args[0][3]))
assert_equals(out_port, controller.add_flow.call_args[0][3][0].port)
assert_equals(1, mock_datapath.send_msg.call_count)
mod = mock_datapath.send_msg.call_args[0][0]
assert_equals(in_port, mod.in_port)
assert_equals(1, len(mod.actions))
assert_true(isinstance(mod.actions[0], ofproto_v1_0_parser.OFPActionOutput))
assert_equals(out_port, mod.actions[0].port)
def test_port_status_handler_delete_otherport():
# arrange
dst = config.get('main', 'MAC_ADDR_2')
in_port = 2
out_port = 3
reason = ofproto_v1_0.OFPPR_DELETE
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0)
desc = Mock(port_no=in_port)
message = Mock(datapath=mock_datapath, reason=reason, desc=desc)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._port_status_handler(event)
# assert
assert_true(1 in controller.mac_to_port)
assert_equals({dst: out_port}, controller.mac_to_port[1])
def test_port_status_handler_delete():
# arrange
dst = config.get('main', 'MAC_ADDR_2')
in_port = 3
out_port = 3
reason = ofproto_v1_0.OFPPR_DELETE
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0)
desc = Mock(port_no=in_port)
message = Mock(datapath=mock_datapath, reason=reason, desc=desc)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._port_status_handler(event)
# assert
assert_true(1 not in controller.mac_to_port or controller.mac_to_port[1] == {})
def test_port_status_handler_modify_otherport():
# arrange
dst = config.get('main', 'MAC_ADDR_2')
in_port = 2
out_port = 3
reason = ofproto_v1_0.OFPPR_MODIFY
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0)
desc = Mock(port_no=in_port)
message = Mock(datapath=mock_datapath, reason=reason, desc=desc)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._port_status_handler(event)
# assert
assert_true(1 in controller.mac_to_port)
assert_equals({dst: out_port}, controller.mac_to_port[1])
def test_port_status_handler_modify():
# arrange
dst = config.get('main', 'MAC_ADDR_2')
in_port = 3
out_port = 3
reason = ofproto_v1_0.OFPPR_MODIFY
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0)
desc = Mock(port_no=in_port)
message = Mock(datapath=mock_datapath, reason=reason, desc=desc)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._port_status_handler(event)
# assert
assert_true(1 not in controller.mac_to_port or controller.mac_to_port[1] == {})
def test_port_status_handler_add():
# arrange
dst = config.get('main', 'MAC_ADDR_2')
in_port = 3
out_port = 3
reason = ofproto_v1_0.OFPPR_ADD
mock_datapath = Mock(id=1, ofproto=ofproto_v1_0)
desc = Mock(port_no=in_port)
message = Mock(datapath=mock_datapath, reason=reason, desc=desc)
event = Mock(msg=message)
controller = SimpleSwitch()
controller.mac_to_port[1] = {dst: out_port}
# act
controller._port_status_handler(event)
# assert
assert_true(1 in controller.mac_to_port)
assert_equals({dst: out_port}, controller.mac_to_port[1])
| 32.426259 | 112 | 0.705863 | 2,703 | 18,029 | 4.415464 | 0.076952 | 0.064349 | 0.041056 | 0.031839 | 0.853624 | 0.841642 | 0.814328 | 0.79447 | 0.776288 | 0.776288 | 0 | 0.029376 | 0.174885 | 18,029 | 555 | 113 | 32.484685 | 0.772923 | 0.056631 | 0 | 0.738028 | 0 | 0 | 0.024758 | 0.004786 | 0 | 0 | 0.002127 | 0 | 0.219718 | 1 | 0.070423 | false | 0 | 0.030986 | 0 | 0.104225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
609f2e0a60bb99b5aedcea1906a83d01780b8c96 | 8,012 | py | Python | bin/ADFRsuite/CCSBpckgs/DejaVu2/bitPatterns.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/DejaVu2/bitPatterns.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/DejaVu2/bitPatterns.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | 1 | 2021-11-04T21:48:14.000Z | 2021-11-04T21:48:14.000Z | ################################################################################
##
## This library is free software; you can redistribute it and/or
## modify it under the terms of the GNU Lesser General Public
## License as published by the Free Software Foundation; either
## version 2.1 of the License, or (at your option) any later version.
##
## This library is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public
## License along with this library; if not, write to the Free Software
## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
##
## (C) Copyrights Dr. Michel F. Sanner and TSRI 2016
##
################################################################################
#############################################################################
#
# Author: Michel F. SANNER
#
# Copyright: M. Sanner TSRI 2000
#
#############################################################################
#
# $Header: /mnt/raid/services/cvs/DejaVu2/bitPatterns.py,v 1.1.1.1.4.1 2017/07/13 22:28:33 annao Exp $
#
# $Id: bitPatterns.py,v 1.1.1.1.4.1 2017/07/13 22:28:33 annao Exp $
#
import numpy
#finest grain 50% solid:
pat1 = numpy.array((
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55 ), 'B' )
pat2 = numpy.array((
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA,
0x55, 0x55, 0x55, 0x55, 0xAA, 0xAA, 0xAA, 0xAA ), 'B')
#medium grained 25% solid:
pat3 = numpy.array((
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00,
0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0x00, 0x00, 0x00), 'B')
pat4 = numpy.array((
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00,
0x55, 0x55, 0x55, 0x55, 0x00, 0x00, 0x00, 0x00), 'B')
pat5 = numpy.array((
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA,
0x00, 0x00, 0x00, 0x00, 0xAA, 0xAA, 0xAA, 0xAA ),'B')
pat6 = numpy.array((
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55,
0x00, 0x00, 0x00, 0x00, 0x55, 0x55, 0x55, 0x55), 'B')
# this is 25% solid too, but adjacent pixels are colored, so it reads as stripes
pat7 = numpy.array((
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44,
0x88, 0x88, 0x88, 0x88, 0x44, 0x44, 0x44, 0x44), 'B')
#12.5% faint staggered stripe
pat8 = numpy.array((
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x00,
0x44, 0x44, 0x44, 0x44, 0x00, 0x00, 0x00, 0x00), 'B')
patternList = [pat1, pat2, pat3, pat4, pat5, pat6, pat7, pat8]
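Each array above is 128 bytes, i.e. a 32×32 bit mask in the format expected by OpenGL's `glPolygonStipple`. The density comments can be sanity-checked by unpacking the bits; a minimal sketch, rebuilding the first pattern compactly:

```python
import numpy

# pat1 alternates a row of 0xAA bytes with a row of 0x55 bytes: 4 + 4 bytes
# per pair of 32-pixel rows, 16 pairs -> 128 bytes, matching the literal above.
pat1 = numpy.array(([0xAA] * 4 + [0x55] * 4) * 16, 'B')

def fill_fraction(pattern):
    """Fraction of set bits in a 128-byte (32x32) stipple mask."""
    return numpy.unpackbits(pattern).mean()

print(fill_fraction(pat1))  # 0.5 -> the "50% solid" checkerboard
```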
# File: test/fixtures/python/corpus/if-statement.A.py (matsubara0507/semantic, MIT)
if a:
b
c
elif d:
a
b
else:
x
y
# File: torchlib/__init__.py (pedrodiamel/kaggle-tgs-salt-identification, MIT)
from torchlib import datasets
from torchlib import netmodels
# File: rmp/initial_data.py (CrazyWcY/HandyDelivery-Flask, MIT)
import sys
sys.path.append('..')
import Task
user = {
"root": {
"id": "root",
"name": "WCY",
"signature": "胡博头号粉丝",
"avatar": "https://www.gx8899.com/uploads/allimg/2016062815/yddciyonaq3.jpg",
"friends": ['hjh', 'lxr', 'lh', 'wyx', 'csd'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
},
"hjh": {
"id": "hjh",
"name": "HJH",
"signature": "胡博本人",
"avatar": "https://tse2-mm.cn.bing.net/th/id/OIP.vRY1U0-rSP2pXM-5qIQIuAAAAA?pid=Api&rs=1",
"friends": ['root', 'lxr', 'lh'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
},
"lxr": {
"id": "lxr",
"name": "LXR",
"signature": "胡博二号粉丝",
"avatar": "https://www.keaidian.com/uploads/allimg/180927/co1P92F95035-0-9.jpg",
"friends": ['root', 'hjh', 'lh'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
},
"lh": {
"id": "lh",
"name": "HUGE",
"signature": "情商单位",
"avatar": "http://ist.sjtu.edu.cn/getpic/20200907140708090_lihu.png",
"friends": ['root', 'hjh', 'lxr'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
},
"wyx": {
"id": "wyx",
"name": "真·王博",
"signature": "IST之光",
"avatar": "http://ist.sjtu.edu.cn/getpic/20200907142605254_wangyuxiao.png",
"friends": ['root', 'hjh', 'lxr'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
},
"csd": {
"id": "csd",
"name": "真·蔡少",
"signature": "屏东之光",
"avatar": "http://ist.sjtu.edu.cn/getpic/20200907154122023_caishengdong.png",
"friends": ['root', 'hjh', 'lxr'],
"address": "上海交通大学软件学院",
"receiveNum": 124,
"sendNum": 523,
"star": 608
}
}
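The `user` fixture above doubles as a small social graph: each entry lists its `friends` by id. A minimal, self-contained sketch of querying it — the three-entry dict and the `are_friends` helper are illustrative stand-ins, not part of this module:

```python
# Stand-in for the full fixture: only the fields needed for the query.
users = {
    "root": {"id": "root", "friends": ["hjh", "lxr"]},
    "hjh": {"id": "hjh", "friends": ["root"]},
    "lxr": {"id": "lxr", "friends": ["root", "hjh"]},
}

def are_friends(a, b):
    """True only when each user lists the other in its friends list."""
    return b in users[a]["friends"] and a in users[b]["friends"]

print(are_friends("root", "hjh"))  # True  (mutual)
print(are_friends("hjh", "lxr"))   # False (lxr lists hjh, but not vice versa)
```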
tasks = [
Task.Task(
1,
"求购复旦大学纪念章",
"复旦大学纪念章",
[
"https://pic13.997788.com/_pic_auction/00/08/01/22/8012210c.jpg",
"https://tse2-mm.cn.bing.net/th/id/OIP.J2MgFnLale171Ppo-l0ZUwAAAA?pid=Api&w=460&h=460&rs=1"
],
{
'name': "复旦大学",
'longitude': 121.50904,
'latitude': 31.33541
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-25 11:32:45",
"求复旦纪念章。",
0,
"15619202209",
user['root'],
"2020-12-10 08:15:44"
),
Task.Task(
2,
"2999求购原价RTX3060Ti",
"NVIDIA RTX 3060Ti",
[
"https://3c.3dmgame.com/uploadfile/2020/1202/20201202115919476.jpg",
"https://tse1-mm.cn.bing.net/th/id/OIP.PIlOjgDn1rEkZJznZ5thjQHaJd?pid=Api&rs=1",
"https://tse4-mm.cn.bing.net/th/id/OIP.O4_--i7Wb6Gq9R6U3Ii5pAHaEK?pid=Api&rs=1"
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-13 11:32:45",
"2999收3060ti。",
2,
"15619202209",
user['root'],
"2020-12-14 15:30:31",
puchaser=user['lxr'],
p_location='x36',
p_details='易摔坏',
p_finish_time='2020-12-17 11:32:45',
p_send_location={
'name': "x36阿姨处",
'longitude': 121.43101,
'latitude': 31.0176
},
p_money='100'
),
Task.Task(
3,
"求上海交大马克杯",
"上海交大马克杯",
[
"https://gd2.alicdn.com/imgextra/i2/2200771700801/O1CN01jJjQjq1HmtsGaaGCl_!!2200771700801.jpg",
"https://gd1.alicdn.com/imgextra/i2/475407752/O1CN01D5fHF1278SstKxHbC_!!475407752.jpg",
"https://gd3.alicdn.com/imgextra/i3/475407752/O1CN0113fcY9278Sslva6px_!!475407752.jpg"
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "莘庄地铁站",
'longitude': 121.392186,
'latitude': 31.116872
},
"30",
"2020-12-20 11:32:45",
"求上海交大LOGO纪念款马克杯。",
1,
"15619202209",
user['hjh'],
"2020-12-13 19:23:10",
puchaser=user['root'],
p_location='上海交大纪念品店',
p_details='易碎品,已放至软件大楼圆厅。',
p_finish_time='2020-12-15 11:32:45',
p_send_location={
'name': "莘庄地铁站",
'longitude': 121.392186,
'latitude': 31.116872
},
p_money='15'
),
Task.Task(
4,
"求同济大学书签",
"同济大学书签",
[
"https://sh.eastday.com/eastday/shnews/slideshow/20070520_3/images/00048523.jpg",
"https://sh.eastday.com/images/thumbnailimg/month_1706/201706200654469656.jpg"
],
{
'name': "同济大学四平路校区",
'longitude': 121.508532,
'latitude': 31.289027
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"10",
"2020-12-19 09:00:02",
"求同济大学限定书签QAQ。",
3,
"15619202209",
user['root'],
"2020-12-13 21:30:11",
puchaser=user['lh'],
p_location='同济大学纪念品店',
p_details='限定款书签',
p_finish_time='2020-12-16 11:32:45',
p_send_location={
'name': "同济大学四平路校区邮政室",
'longitude': 121.508532,
'latitude': 31.289027
},
p_money='5',
deliver=user['lh'],
d_current_location={
'name': "同济大学四平路校区邮政室",
'longitude': 121.508532,
'latitude': 31.289027
},
# d_finish_time='2020-12-18 12:30:15',
# d_money='15'
),
Task.Task(
5,
"上海交大钥匙扣",
"上海交大钥匙扣",
[
"https://img.alicdn.com/imgextra/i2/67428829/TB24AZudpXXXXaFXXXXXXXXXXXX_!!67428829.jpg",
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
"25",
"2020-12-11 22:35:43",
"交大青花瓷钥匙扣。",
3,
"15619202209",
user['lxr'],
"2020-12-13 21:30:11",
puchaser=user['hjh'],
p_location='上海交大纪念品店',
p_details='限定款钥匙扣',
p_finish_time='2020-12-13 11:45:15',
p_send_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
p_money='20',
deliver=user['root'],
d_current_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
# d_finish_time='2020-12-15 14:30:11',
# d_money='10'
),
Task.Task(
6,
"测试:待付款ITEM",
"上海交大钥匙扣",
[
"https://img.alicdn.com/imgextra/i2/67428829/TB24AZudpXXXXaFXXXXXXXXXXXX_!!67428829.jpg",
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
"25",
"2020-12-11 22:35:43",
"交大青花瓷钥匙扣。",
4,
"15619202209",
user['root'],
"2020-12-13 21:30:11",
puchaser=user['hjh'],
p_location='上海交大纪念品店',
p_details='限定款钥匙扣',
p_finish_time='2020-12-13 11:45:15',
p_send_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
p_money='20',
deliver=user['lxr'],
d_current_location={
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
d_finish_time='2020-12-15 14:30:11',
d_money='10'
),
Task.Task(
7,
"测试:待评价ITEM",
"上海交大钥匙扣",
[
"https://img.alicdn.com/imgextra/i2/67428829/TB24AZudpXXXXaFXXXXXXXXXXXX_!!67428829.jpg",
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
"25",
"2020-12-11 22:35:43",
"交大青花瓷钥匙扣。",
5,
"15619202209",
user['root'],
"2020-12-13 21:30:11",
puchaser=user['hjh'],
p_location='上海交大纪念品店',
p_details='限定款钥匙扣',
p_finish_time='2020-12-13 11:45:15',
p_send_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
p_money='20',
deliver=user['lxr'],
d_current_location={
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
d_finish_time='2020-12-15 14:30:11',
d_money='10'
),
Task.Task(
8,
"测试:已结束ITEM",
"上海交大钥匙扣",
[
"https://img.alicdn.com/imgextra/i2/67428829/TB24AZudpXXXXaFXXXXXXXXXXXX_!!67428829.jpg",
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "虹桥机场T2",
'longitude': 121.333972,
'latitude': 31.200914
},
"25",
"2020-12-11 22:35:43",
"交大青花瓷钥匙扣。",
6,
"15619202209",
user['root'],
"2020-12-13 21:30:11",
puchaser=user['hjh'],
p_location='上海交大纪念品店',
p_details='限定款钥匙扣',
p_finish_time='2020-12-13 11:45:15',
p_send_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
p_money='20',
deliver=user['lxr'],
d_current_location={
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
d_finish_time='2020-12-15 14:30:11',
d_money='10',
appraise=5,
p_appraise=5,
d_appraise=5
),
Task.Task(
9,
"求购复旦大学纪念章",
"复旦大学纪念章",
[
"https://pic13.997788.com/_pic_auction/00/08/01/22/8012210c.jpg",
"https://tse2-mm.cn.bing.net/th/id/OIP.J2MgFnLale171Ppo-l0ZUwAAAA?pid=Api&w=460&h=460&rs=1"
],
{
'name': "复旦大学",
'longitude': 121.50904,
'latitude': 31.33541
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-25 11:32:45",
"求复旦纪念章。",
0,
"15619202209",
user['lh'],
"2020-12-10 08:15:44"
),
Task.Task(
10,
"2999求购原价RTX3060Ti",
"NVIDIA RTX 3060Ti",
[
"https://3c.3dmgame.com/uploadfile/2020/1202/20201202115919476.jpg",
"https://tse1-mm.cn.bing.net/th/id/OIP.PIlOjgDn1rEkZJznZ5thjQHaJd?pid=Api&rs=1",
"https://tse4-mm.cn.bing.net/th/id/OIP.O4_--i7Wb6Gq9R6U3Ii5pAHaEK?pid=Api&rs=1"
],
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-13 11:32:45",
"2999收3060ti。",
2,
"15619202209",
user['lh'],
"2020-12-14 15:30:31",
puchaser=user['lxr'],
p_location='x36',
p_details='易摔坏',
p_finish_time='2020-12-17 11:32:45',
p_send_location={
'name': "x36阿姨处",
'longitude': 121.43101,
'latitude': 31.0176
},
p_money='100'
),
Task.Task(
11,
"求购复旦大学纪念章(NOTE专属任务)",
"复旦大学纪念章",
[
"https://pic13.997788.com/_pic_auction/00/08/01/22/8012210c.jpg",
"https://tse2-mm.cn.bing.net/th/id/OIP.J2MgFnLale171Ppo-l0ZUwAAAA?pid=Api&w=460&h=460&rs=1"
],
{
'name': "复旦大学",
'longitude': 121.50904,
'latitude': 31.33541
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-25 11:32:45",
"求复旦纪念章。",
0,
"15619202209",
user['root'],
"2020-12-10 08:15:44",
special_user=user['hjh']
),
Task.Task(
12,
"求购复旦大学纪念章(NOTE专属任务)",
"复旦大学纪念章",
[
"https://pic13.997788.com/_pic_auction/00/08/01/22/8012210c.jpg",
"https://tse2-mm.cn.bing.net/th/id/OIP.J2MgFnLale171Ppo-l0ZUwAAAA?pid=Api&w=460&h=460&rs=1"
],
{
'name': "复旦大学",
'longitude': 121.50904,
'latitude': 31.33541
},
{
'name': "交通大学闵行校区",
'longitude': 121.43102,
'latitude': 31.0176
},
"1000",
"2020-12-25 11:32:45",
"求复旦纪念章。",
0,
"15619202209",
user['hjh'],
"2020-12-10 08:15:44",
special_user=user['root']
),
]

# File: src/main.py (hyren/PathCon, MIT)
import argparse
from data_loader import load_data
from train import train
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
def print_setting(args):
assert args.use_context or args.use_path
print()
print('=============================================')
print('dataset: ' + args.dataset)
print('epoch: ' + str(args.epoch))
print('batch_size: ' + str(args.batch_size))
print('dim: ' + str(args.dim))
print('l2: ' + str(args.l2))
print('lr: ' + str(args.lr))
print('feature_type: ' + args.feature_type)
print('use relational context: ' + str(args.use_context))
if args.use_context:
print('context_hops: ' + str(args.context_hops))
print('neighbor_samples: ' + str(args.neighbor_samples))
print('neighbor_agg: ' + args.neighbor_agg)
print('use relational path: ' + str(args.use_path))
if args.use_path:
print('max_path_len: ' + str(args.max_path_len))
print('path_type: ' + args.path_type)
if args.path_type == 'rnn':
print('path_samples: ' + str(args.path_samples))
print('path_agg: ' + args.path_agg)
print('=============================================')
print()
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cuda', default=False, help='use gpu', action='store_true')
'''
# ===== FB15k ===== #
parser.add_argument('--dataset', type=str, default='FB15k', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=2, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=32, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='concat', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=2, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
'''
# ===== FB15k-237 ===== #
parser.add_argument('--dataset', type=str, default='FB15k-237', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=2, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=32, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='concat', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=3, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
'''
# ===== wn18 ===== #
parser.add_argument('--dataset', type=str, default='wn18', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=3, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=16, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='cross', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=3, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
# ===== wn18rr ===== #
parser.add_argument('--dataset', type=str, default='wn18rr', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=3, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=8, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='cross', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=4, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
# ===== NELL995 ===== #
parser.add_argument('--dataset', type=str, default='NELL995', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=2, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=8, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='concat', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=3, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
'''
# ===== DDB14 ===== #
parser.add_argument('--dataset', type=str, default='DDB14', help='dataset name')
parser.add_argument('--epoch', type=int, default=20, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--dim', type=int, default=64, help='hidden dimension')
parser.add_argument('--l2', type=float, default=1e-7, help='l2 regularization weight')
parser.add_argument('--lr', type=float, default=5e-3, help='learning rate')
parser.add_argument('--feature_type', type=str, default='id', help='type of relation features: id, bow, bert')
# settings for relational context
parser.add_argument('--use_context', type=bool, default=True, help='whether use relational context')
parser.add_argument('--context_hops', type=int, default=3, help='number of context hops')
parser.add_argument('--neighbor_samples', type=int, default=8, help='number of sampled neighbors for one hop')
parser.add_argument('--neighbor_agg', type=str, default='cross', help='neighbor aggregator: mean, concat, cross')
# settings for relational path
parser.add_argument('--use_path', type=bool, default=True, help='whether use relational path')
parser.add_argument('--max_path_len', type=int, default=4, help='max length of a path')
parser.add_argument('--path_type', type=str, default='embedding', help='path representation type: embedding, rnn')
parser.add_argument('--path_samples', type=int, default=8, help='number of sampled paths if using rnn')
parser.add_argument('--path_agg', type=str, default='att', help='path aggregator if using rnn: mean, att')
'''
args = parser.parse_args()
print_setting(args)
data = load_data(args)
train(args, data)
if __name__ == '__main__':
main()
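A side note on the commented-out configurations above: they pass `type=bool` to `argparse.add_argument`, which is a well-known pitfall — argparse applies `bool()` to the raw command-line string, and any non-empty string (including `"False"`) is truthy. A minimal sketch of the usual workaround; the `str2bool` helper is illustrative, not part of the original script:

```python
import argparse

def str2bool(value):
    # Interpret common textual spellings of booleans; argparse's type=bool
    # would treat any non-empty string (even "False") as True.
    if isinstance(value, bool):
        return value
    if value.lower() in ("yes", "true", "t", "1"):
        return True
    if value.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument('--use_path', type=str2bool, default=True)
parser.add_argument('--naive_bool', type=bool, default=True)

args = parser.parse_args(['--use_path', 'False', '--naive_bool', 'False'])
print(args.use_path)    # False, parsed correctly
print(args.naive_bool)  # True, because bool("False") is True
```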
# --- Source: rlrd/memory.py, repo yannbouteiller/rlrd (MIT license) ---
from collections import deque
from random import randint
from rlrd.util import collate
class Memory:
keep_reset_transitions: int = 0
def __init__(self, memory_size, batchsize, device, remove_size=100):
self.device = device
self.batchsize = batchsize
self.capacity = memory_size
self.memory = [] # list is much faster to index than deque for big sizes
self.remove_size = remove_size
self.last_observation = None
self.last_action = None
def append(self, r, done, info, obs, action):
if self.last_observation is not None:
if self.keep_reset_transitions:
store = True
else:
# info["reset"] = True means the episode reset shouldn't be treated as a true terminal state
store = not info.get('TimeLimit.truncated', False) and not info.get('reset', False)
if store:
self.memory.append((self.last_observation, self.last_action, r, obs, done))
self.last_observation = obs
self.last_action = action
# remove old entries if necessary (delete generously so we don't have to do it often)
if len(self.memory) > self.capacity:
del self.memory[:self.capacity // self.remove_size + 1]
return self
def __len__(self):
return len(self.memory)
def __getitem__(self, item):
return self.memory[item]
def sample_indices(self):
return (randint(0, len(self.memory) - 1) for _ in range(self.batchsize))
def sample(self, indices=None):
indices = self.sample_indices() if indices is None else indices
batch = [self.memory[idx] for idx in indices]
batch = collate(batch, self.device)
return batch
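The trimming logic in `Memory.append` deletes a chunk of the oldest transitions in one slice rather than popping one entry per append, so the amortized cost of staying under `capacity` is low. A self-contained sketch of that arithmetic with the default `remove_size=100`:

```python
# Sketch of Memory's batched trimming: once the buffer exceeds `capacity`,
# a single slice deletion drops capacity // remove_size + 1 of the oldest
# transitions, instead of removing one entry on every append.
capacity, remove_size = 1000, 100
memory = list(range(capacity + 1))  # one entry past capacity triggers a trim
if len(memory) > capacity:
    del memory[:capacity // remove_size + 1]  # removes the 11 oldest entries
print(len(memory))  # 990
```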
class TrajMemory:
keep_reset_transitions: int = 0
def __init__(self, memory_size, batchsize, device, history=1, remove_size=100):
self.device = device
self.batchsize = batchsize
self.capacity = memory_size
self.memory = [] # list is much faster to index than deque for big sizes
self.history = deque(maxlen=history + 1)
self.remove_size = remove_size
def append(self, r, done, info, obs, h, action):
self.history.append((r, obs, h, action))
if not self.keep_reset_transitions and (info.get('TimeLimit.truncated', False) or info.get('reset', False)):
self.history.clear()
if len(self.history) == self.history.maxlen:
(_, *r), m, h, a = zip(*self.history)
self.memory.append((m, h, a, r, done))
if done:
self.history.clear()
# remove old entries if necessary (delete generously so we don't have to do it often)
if len(self.memory) > self.capacity:
del self.memory[:self.capacity // self.remove_size + 1]
return self
def __len__(self):
return len(self.memory)
def __getitem__(self, item):
return self.memory[item]
def sample_indices(self):
return (randint(0, len(self.memory) - 1) for _ in range(self.batchsize))
def sample(self, indices=None):
indices = self.sample_indices() if indices is None else indices
batch = [self.memory[idx] for idx in indices]
batch = collate(batch, self.device)
return batch
class TrajMemoryNoHidden:
keep_reset_transitions: int = 0
def __init__(self, memory_size, batchsize, device, history=1, remove_size=100):
self.device = device
self.batchsize = batchsize
self.capacity = memory_size
self.memory = [] # list is much faster to index than deque for big sizes
self.history = deque(maxlen=history + 1)
self.remove_size = remove_size
def append(self, r, done, info, obs, action):
self.history.append((r, obs, action))
if not self.keep_reset_transitions and (info.get('TimeLimit.truncated', False) or info.get('reset', False)):
self.history.clear()
if len(self.history) == self.history.maxlen:
(_, *r), m, a = zip(*self.history)
self.memory.append((m, a, r, done))
if done:
self.history.clear()
# remove old entries if necessary (delete generously so we don't have to do it often)
if len(self.memory) > self.capacity:
del self.memory[:self.capacity // self.remove_size + 1]
return self
def __len__(self):
return len(self.memory)
def __getitem__(self, item):
return self.memory[item]
def sample_indices(self):
return (randint(0, len(self.memory) - 1) for _ in range(self.batchsize))
def sample(self, indices=None):
indices = self.sample_indices() if indices is None else indices
batch = [self.memory[idx] for idx in indices]
batch = collate(batch, self.device)
return batch
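`TrajMemoryNoHidden.append` relies on transposing the full history window with `zip(*...)` and then star-unpacking to drop the reward that precedes the first observation. A self-contained sketch of that idiom, assuming history entries of `(reward, obs, action)`:

```python
from collections import deque

history = deque(maxlen=3)  # history=2 in the class means maxlen = history + 1
for step in range(3):
    history.append((float(step), f"obs{step}", f"act{step}"))

# Transpose the window: zip(*history) yields one tuple per field; the
# star-unpack discards the reward recorded before the first observation.
(_, *r), m, a = zip(*history)
print(r)  # [1.0, 2.0]
print(m)  # ('obs0', 'obs1', 'obs2')
print(a)  # ('act0', 'act1', 'act2')
```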
# --- Source: tests/functional/test_upgrade_cluster_imports_data/config_generator_strict_import.py,
#     repo arenadata/adcm (Apache-2.0 license) ---
import os
SERVICE_VERSIONS = (
("service-less-equal", "2.2", "3.0"),
("service-greater-equal", "1.5", "2.2"),
("service-equal", "2.2", '2.2'),
('service-less', '2.3', '2.4'),
("service-greater", '1', '2'),
)
CLUSTER_VERSIONS = (
("cluster-less-equal", "1.6", "2.0"),
("cluster-greater-equal", "1.0", "1.6"),
("cluster-equal", "1.6", '1.6'),
('cluster-less', '1.7', '2.4'),
("cluster-greater", '0.5', '0.9'),
)
TEMPLATE_SERVICE = """
-
type: cluster
name: ADH
version: 1.6
upgrade:
- versions:
min: 0.4
max: 1.5
name: upgrade to 1.6
description: New cool upgrade
states:
available: any
on_success: upgradable
- versions:
min: 1.0
max: 1.8
description: Super new upgrade
name: upgrade 2
states:
available: [created, installed, upgradable]
on_success: upgradated
import:
hadoop:
versions:
min_strict: {0}
max_strict: {1}
ADH:
versions:
min_strict: 0.1
max_strict: 4.0
- type: service
name: hadoop
version: 2.2
config:
core-site:
param1:
type: string
required: false
param2:
type: integer
required: false
quorum:
type: integer
default: 3
"""
TEMPLATE_CLUSTER = """
-
type: cluster
name: ADH
version: 1.6
upgrade:
- versions:
min: 0.4
max: 1.5
name: upgrade to 1.6
description: New cool upgrade
states:
available: any
on_success: upgradable
- versions:
min: 1.0
max: 1.8
description: Super new upgrade
name: upgrade 2
states:
available: [created, installed, upgradable]
on_success: upgradated
import:
hadoop:
versions:
min_strict: 1.5
max_strict: 2.5
ADH:
versions:
min_strict: {0}
max_strict: {1}
- type: service
name: hadoop
version: 2.2
config:
core-site:
param1:
type: string
required: false
param2:
type: integer
required: false
quorum:
type: integer
default: 3
"""
for t in SERVICE_VERSIONS:
d_name = f"upgradable_cluster_with_strict_incorrect_version/{t[0]}"
os.makedirs(d_name)
with open(f"{d_name}/config.yaml", "w+", encoding='utf_8') as f:
f.write(TEMPLATE_SERVICE.format(t[1], t[2]))
for t in CLUSTER_VERSIONS:
d_name = f"upgradable_cluster_with_strict_incorrect_version/{t[0]}"
os.makedirs(d_name)
with open(f"{d_name}/config.yaml", "w+", encoding='utf_8') as f:
f.write(TEMPLATE_CLUSTER.format(t[1], t[2]))
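Each generated `config.yaml` is just the template with the version bounds substituted via positional `str.format`. A minimal sketch of that substitution (the `TEMPLATE` fragment below is illustrative, not the full bundle config):

```python
# Illustrative fragment of the import-versions block; {0}/{1} are the
# min_strict/max_strict bounds filled in per test case.
TEMPLATE = """\
hadoop:
  versions:
    min_strict: {0}
    max_strict: {1}
"""

rendered = TEMPLATE.format("2.2", "3.0")
print(rendered)
```

Note that the script calls `os.makedirs` without `exist_ok=True`, so rerunning it against existing output directories fails fast rather than overwriting generated bundles.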
# --- Source: tensorflow/python/ops/gen_parsing_ops.py (machine-generated
#     TensorFlow op wrappers, vendored copy from repo rexliu3/StockTradingBotCloud) ---
"""Python wrappers around TensorFlow ops.
This file is MACHINE GENERATED! Do not edit.
Original C++ source file: parsing_ops.cc
"""
import collections
from tensorflow.python import pywrap_tfe as pywrap_tfe
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.eager import execute as _execute
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
from tensorflow.python.util.deprecation import deprecated_endpoints
from tensorflow.python.util import dispatch as _dispatch
from tensorflow.python.util.tf_export import tf_export
def decode_csv(records, record_defaults, field_delim=",", use_quote_delim=True, na_value="", select_cols=[], name=None):
r"""Convert CSV records to tensors. Each column maps to one tensor.
RFC 4180 format is expected for the CSV records.
(https://tools.ietf.org/html/rfc4180)
Note that we allow leading and trailing spaces with int or float field.
Args:
records: A `Tensor` of type `string`.
Each string is a record/row in the csv and all records should have
the same format.
record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.
One tensor per column of the input record, with either a
scalar default value for that column or an empty vector if the column is
required.
field_delim: An optional `string`. Defaults to `","`.
char delimiter to separate fields in a record.
use_quote_delim: An optional `bool`. Defaults to `True`.
If false, treats double quotation marks as regular
characters inside of the string fields (ignoring RFC 4180, Section 2,
Bullet 5).
na_value: An optional `string`. Defaults to `""`.
Additional string to recognize as NA/NaN.
select_cols: An optional list of `ints`. Defaults to `[]`.
name: A name for the operation (optional).
Returns:
A list of `Tensor` objects. Has the same type as `record_defaults`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DecodeCSV", name,
tld.op_callbacks, records, record_defaults, "field_delim",
field_delim, "use_quote_delim", use_quote_delim, "na_value", na_value,
"select_cols", select_cols)
return _result
except _core._FallbackException:
try:
return decode_csv_eager_fallback(
records, record_defaults, field_delim=field_delim,
use_quote_delim=use_quote_delim, na_value=na_value,
select_cols=select_cols, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
if field_delim is None:
field_delim = ","
field_delim = _execute.make_str(field_delim, "field_delim")
if use_quote_delim is None:
use_quote_delim = True
use_quote_delim = _execute.make_bool(use_quote_delim, "use_quote_delim")
if na_value is None:
na_value = ""
na_value = _execute.make_str(na_value, "na_value")
if select_cols is None:
select_cols = []
if not isinstance(select_cols, (list, tuple)):
raise TypeError(
"Expected list for 'select_cols' argument to "
"'decode_csv' Op, not %r." % select_cols)
select_cols = [_execute.make_int(_i, "select_cols") for _i in select_cols]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DecodeCSV", records=records, record_defaults=record_defaults,
field_delim=field_delim, use_quote_delim=use_quote_delim,
na_value=na_value, select_cols=select_cols, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("OUT_TYPE", _op.get_attr("OUT_TYPE"), "field_delim",
_op.get_attr("field_delim"), "use_quote_delim",
_op._get_attr_bool("use_quote_delim"), "na_value",
_op.get_attr("na_value"), "select_cols",
_op.get_attr("select_cols"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DecodeCSV", _inputs_flat, _attrs, _result)
return _result
DecodeCSV = tf_export("raw_ops.DecodeCSV")(_ops.to_raw_op(decode_csv))
def decode_csv_eager_fallback(records, record_defaults, field_delim, use_quote_delim, na_value, select_cols, name, ctx):
if field_delim is None:
field_delim = ","
field_delim = _execute.make_str(field_delim, "field_delim")
if use_quote_delim is None:
use_quote_delim = True
use_quote_delim = _execute.make_bool(use_quote_delim, "use_quote_delim")
if na_value is None:
na_value = ""
na_value = _execute.make_str(na_value, "na_value")
if select_cols is None:
select_cols = []
if not isinstance(select_cols, (list, tuple)):
raise TypeError(
"Expected list for 'select_cols' argument to "
"'decode_csv' Op, not %r." % select_cols)
select_cols = [_execute.make_int(_i, "select_cols") for _i in select_cols]
_attr_OUT_TYPE, record_defaults = _execute.convert_to_mixed_eager_tensors(record_defaults, ctx)
records = _ops.convert_to_tensor(records, _dtypes.string)
_inputs_flat = [records] + list(record_defaults)
_attrs = ("OUT_TYPE", _attr_OUT_TYPE, "field_delim", field_delim,
"use_quote_delim", use_quote_delim, "na_value", na_value, "select_cols",
select_cols)
_result = _execute.execute(b"DecodeCSV", len(record_defaults),
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DecodeCSV", _inputs_flat, _attrs, _result)
return _result
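The `DecodeCSV` op parses RFC 4180 records column-wise, substituting each column's scalar default when a field is empty. A rough pure-Python analogue using the standard `csv` module — this only mimics the observable behavior for scalar defaults, it is not how the TensorFlow kernel is implemented:

```python
import csv
import io

def decode_csv_like(records, record_defaults, field_delim=","):
    """Rough analogue of DecodeCSV: split records into typed columns,
    substituting the scalar default when a field is empty. The output
    type of column j is inferred from record_defaults[j]."""
    columns = [[] for _ in record_defaults]
    for record in records:
        row = next(csv.reader(io.StringIO(record), delimiter=field_delim))
        for j, default in enumerate(record_defaults):
            field = row[j]
            value = default if field == "" else type(default)(field)
            columns[j].append(value)
    return columns

cols = decode_csv_like(["1,2.5,hello", ",3.5,world"], [0, 0.0, "n/a"])
print(cols)  # [[1, 0], [2.5, 3.5], ['hello', 'world']]
```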
@_dispatch.add_dispatch_list
@tf_export('io.decode_compressed', v1=['io.decode_compressed', 'decode_compressed'])
@deprecated_endpoints('decode_compressed')
def decode_compressed(bytes, compression_type="", name=None):
r"""Decompress strings.
This op decompresses each element of the `bytes` input `Tensor`, which
is assumed to be compressed using the given `compression_type`.
The `output` is a string `Tensor` of the same shape as `bytes`,
each element containing the decompressed data from the corresponding
element in `bytes`.
Args:
bytes: A `Tensor` of type `string`.
A Tensor of string which is compressed.
compression_type: An optional `string`. Defaults to `""`.
A scalar containing either (i) the empty string (no
compression), (ii) "ZLIB", or (iii) "GZIP".
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DecodeCompressed", name,
tld.op_callbacks, bytes, "compression_type", compression_type)
return _result
except _core._FallbackException:
try:
return decode_compressed_eager_fallback(
bytes, compression_type=compression_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
decode_compressed, bytes=bytes,
compression_type=compression_type, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
if compression_type is None:
compression_type = ""
compression_type = _execute.make_str(compression_type, "compression_type")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DecodeCompressed", bytes=bytes, compression_type=compression_type,
name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
decode_compressed, bytes=bytes, compression_type=compression_type,
name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("compression_type", _op.get_attr("compression_type"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DecodeCompressed", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DecodeCompressed = tf_export("raw_ops.DecodeCompressed")(_ops.to_raw_op(decode_compressed))
def decode_compressed_eager_fallback(bytes, compression_type, name, ctx):
if compression_type is None:
compression_type = ""
compression_type = _execute.make_str(compression_type, "compression_type")
bytes = _ops.convert_to_tensor(bytes, _dtypes.string)
_inputs_flat = [bytes]
_attrs = ("compression_type", compression_type)
_result = _execute.execute(b"DecodeCompressed", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DecodeCompressed", _inputs_flat, _attrs, _result)
_result, = _result
return _result
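`DecodeCompressed` decompresses each string element according to `compression_type` — the empty string (pass-through), `"ZLIB"`, or `"GZIP"`. The per-element behavior can be sketched with the standard library; this is a behavioral analogue of a single element, not the kernel:

```python
import gzip
import zlib

def decode_compressed_like(data: bytes, compression_type: str = "") -> bytes:
    # "" means no compression: the data is passed through unchanged.
    if compression_type == "":
        return data
    if compression_type == "ZLIB":
        return zlib.decompress(data)
    if compression_type == "GZIP":
        return gzip.decompress(data)
    raise ValueError(f"unsupported compression_type: {compression_type!r}")

payload = b"hello tensorflow"
assert decode_compressed_like(zlib.compress(payload), "ZLIB") == payload
assert decode_compressed_like(gzip.compress(payload), "GZIP") == payload
assert decode_compressed_like(payload, "") == payload
```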
@_dispatch.add_dispatch_list
@tf_export('io.decode_json_example', v1=['io.decode_json_example', 'decode_json_example'])
@deprecated_endpoints('decode_json_example')
def decode_json_example(json_examples, name=None):
r"""Convert JSON-encoded Example records to binary protocol buffer strings.
This op translates a tensor containing Example records, encoded using
the [standard JSON
mapping](https://developers.google.com/protocol-buffers/docs/proto3#json),
into a tensor containing the same records encoded as binary protocol
buffers. The resulting tensor can then be fed to any of the other
Example-parsing ops.
Args:
json_examples: A `Tensor` of type `string`.
Each string is a JSON object serialized according to the JSON
mapping of the Example proto.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DecodeJSONExample", name,
tld.op_callbacks, json_examples)
return _result
except _core._FallbackException:
try:
return decode_json_example_eager_fallback(
json_examples, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
decode_json_example, json_examples=json_examples, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DecodeJSONExample", json_examples=json_examples, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
decode_json_example, json_examples=json_examples, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"DecodeJSONExample", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DecodeJSONExample = tf_export("raw_ops.DecodeJSONExample")(_ops.to_raw_op(decode_json_example))
def decode_json_example_eager_fallback(json_examples, name, ctx):
json_examples = _ops.convert_to_tensor(json_examples, _dtypes.string)
_inputs_flat = [json_examples]
_attrs = None
_result = _execute.execute(b"DecodeJSONExample", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DecodeJSONExample", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def decode_padded_raw(input_bytes, fixed_length, out_type, little_endian=True, name=None):
r"""Reinterpret the bytes of a string as a vector of numbers.
Args:
input_bytes: A `Tensor` of type `string`. Tensor of string to be decoded.
fixed_length: A `Tensor` of type `int32`.
Length in bytes for each element of the decoded output. Must be a multiple
of the size of the output type.
out_type: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64`.
little_endian: An optional `bool`. Defaults to `True`.
Whether the input `input_bytes` is in little-endian order. Ignored for
`out_type` values that are stored in a single byte, like `uint8`
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DecodePaddedRaw", name,
tld.op_callbacks, input_bytes, fixed_length, "out_type", out_type,
"little_endian", little_endian)
return _result
except _core._FallbackException:
try:
return decode_padded_raw_eager_fallback(
input_bytes, fixed_length, out_type=out_type,
little_endian=little_endian, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
out_type = _execute.make_type(out_type, "out_type")
if little_endian is None:
little_endian = True
little_endian = _execute.make_bool(little_endian, "little_endian")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DecodePaddedRaw", input_bytes=input_bytes, fixed_length=fixed_length,
out_type=out_type, little_endian=little_endian,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("out_type", _op._get_attr_type("out_type"), "little_endian",
_op._get_attr_bool("little_endian"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DecodePaddedRaw", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DecodePaddedRaw = tf_export("raw_ops.DecodePaddedRaw")(_ops.to_raw_op(decode_padded_raw))
def decode_padded_raw_eager_fallback(input_bytes, fixed_length, out_type, little_endian, name, ctx):
out_type = _execute.make_type(out_type, "out_type")
if little_endian is None:
little_endian = True
little_endian = _execute.make_bool(little_endian, "little_endian")
input_bytes = _ops.convert_to_tensor(input_bytes, _dtypes.string)
fixed_length = _ops.convert_to_tensor(fixed_length, _dtypes.int32)
_inputs_flat = [input_bytes, fixed_length]
_attrs = ("out_type", out_type, "little_endian", little_endian)
_result = _execute.execute(b"DecodePaddedRaw", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DecodePaddedRaw", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def decode_raw(bytes, out_type, little_endian=True, name=None):
r"""Reinterpret the bytes of a string as a vector of numbers.
Args:
bytes: A `Tensor` of type `string`.
All the elements must have the same length.
out_type: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64, tf.complex64, tf.complex128, tf.bool`.
little_endian: An optional `bool`. Defaults to `True`.
Whether the input `bytes` are in little-endian order.
Ignored for `out_type` values that are stored in a single byte like
`uint8`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DecodeRaw", name,
tld.op_callbacks, bytes, "out_type", out_type, "little_endian",
little_endian)
return _result
except _core._FallbackException:
try:
return decode_raw_eager_fallback(
bytes, out_type=out_type, little_endian=little_endian, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
out_type = _execute.make_type(out_type, "out_type")
if little_endian is None:
little_endian = True
little_endian = _execute.make_bool(little_endian, "little_endian")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DecodeRaw", bytes=bytes, out_type=out_type,
little_endian=little_endian, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("out_type", _op._get_attr_type("out_type"), "little_endian",
_op._get_attr_bool("little_endian"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DecodeRaw", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DecodeRaw = tf_export("raw_ops.DecodeRaw")(_ops.to_raw_op(decode_raw))
def decode_raw_eager_fallback(bytes, out_type, little_endian, name, ctx):
out_type = _execute.make_type(out_type, "out_type")
if little_endian is None:
little_endian = True
little_endian = _execute.make_bool(little_endian, "little_endian")
bytes = _ops.convert_to_tensor(bytes, _dtypes.string)
_inputs_flat = [bytes]
_attrs = ("out_type", out_type, "little_endian", little_endian)
_result = _execute.execute(b"DecodeRaw", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DecodeRaw", _inputs_flat, _attrs, _result)
_result, = _result
return _result
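`DecodeRaw` reinterprets the bytes of a string as a vector of fixed-width numbers, honoring `little_endian` for multi-byte types. The endianness handling can be sketched with the standard `struct` module (an illustrative analogue, not the TensorFlow kernel):

```python
import struct

def decode_raw_like(data: bytes, fmt_char: str, little_endian: bool = True):
    """Reinterpret `data` as a vector of numbers of struct code `fmt_char`
    (e.g. 'i' for int32, 'f' for float32)."""
    item_size = struct.calcsize(fmt_char)
    if len(data) % item_size != 0:
        raise ValueError("input length must be a multiple of the element size")
    byte_order = "<" if little_endian else ">"
    count = len(data) // item_size
    return list(struct.unpack(f"{byte_order}{count}{fmt_char}", data))

data = (1).to_bytes(4, "little") + (256).to_bytes(4, "little")
print(decode_raw_like(data, "i"))                       # [1, 256]
print(decode_raw_like(data, "i", little_endian=False))  # [16777216, 65536]
```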
_ParseExampleOutput = collections.namedtuple(
"ParseExample",
["sparse_indices", "sparse_values", "sparse_shapes", "dense_values"])
def parse_example(serialized, names, sparse_keys, dense_keys, dense_defaults, sparse_types, dense_shapes, name=None):
r"""Transforms a vector of brain.Example protos (as strings) into typed tensors.
Args:
serialized: A `Tensor` of type `string`.
A vector containing a batch of binary serialized Example protos.
names: A `Tensor` of type `string`.
A vector containing the names of the serialized protos.
May contain, for example, table key (descriptive) names for the
corresponding serialized protos. These are purely useful for debugging
purposes, and the presence of values here has no effect on the output.
May also be an empty vector if no names are available.
If non-empty, this vector must be the same length as "serialized".
sparse_keys: A list of `Tensor` objects with type `string`.
A list of Nsparse string Tensors (scalars).
The keys expected in the Examples' features associated with sparse values.
dense_keys: A list of `Tensor` objects with type `string`.
A list of Ndense string Tensors (scalars).
The keys expected in the Examples' features associated with dense values.
dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A list of Ndense Tensors (some may be empty).
dense_defaults[j] provides default values
when the example's feature_map lacks dense_key[j]. If an empty Tensor is
provided for dense_defaults[j], then the Feature dense_keys[j] is required.
The input type is inferred from dense_defaults[j], even when it's empty.
If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,
then the shape of dense_defaults[j] must match that of dense_shapes[j].
If dense_shapes[j] has an undefined major dimension (variable strides dense
feature), dense_defaults[j] must contain a single element:
the padding element.
sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
A list of Nsparse types; the data types of data in each Feature
given in sparse_keys.
Currently the ParseExample supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
A list of Ndense shapes; the shapes of data in each Feature
given in dense_keys.
The number of elements in the Feature corresponding to dense_key[j]
must always equal dense_shapes[j].NumEntries().
If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output
Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):
The dense outputs are just the inputs row-stacked by batch.
This works for dense_shapes[j] = (-1, D1, ..., DN). In this case
the shape of the output Tensor dense_values[j] will be
(|serialized|, M, D1, .., DN), where M is the maximum number of blocks
of elements of length D1 * .... * DN, across all minibatch entries
in the input. Any minibatch entry with less than M blocks of elements of
length D1 * ... * DN will be padded with the corresponding default_value
scalar element along the second dimension.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values).
sparse_indices: A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`.
    sparse_values: A list of `Tensor` objects of type `sparse_types`.
    sparse_shapes: A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`.
    dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.
  """
  _ctx = _context._context or _context.context()
  tld = _ctx._thread_local_data
  if tld.is_eager:
    try:
      _result = pywrap_tfe.TFE_Py_FastPathExecute(
        _ctx._context_handle, tld.device_name, "ParseExample", name,
        tld.op_callbacks, serialized, names, sparse_keys, dense_keys,
        dense_defaults, "sparse_types", sparse_types, "dense_shapes",
        dense_shapes)
      _result = _ParseExampleOutput._make(_result)
      return _result
    except _core._FallbackException:
      try:
        return parse_example_eager_fallback(
            serialized, names, sparse_keys, dense_keys, dense_defaults,
            sparse_types=sparse_types, dense_shapes=dense_shapes, name=name,
            ctx=_ctx)
      except _core._SymbolicException:
        pass  # Add nodes to the TensorFlow graph.
    except _core._NotOkStatusException as e:
      _ops.raise_from_not_ok_status(e, name)
  # Add nodes to the TensorFlow graph.
  if not isinstance(sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_keys' argument to "
        "'parse_example' Op, not %r." % sparse_keys)
  _attr_Nsparse = len(sparse_keys)
  if not isinstance(dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_keys' argument to "
        "'parse_example' Op, not %r." % dense_keys)
  _attr_Ndense = len(dense_keys)
  if not isinstance(sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_types' argument to "
        "'parse_example' Op, not %r." % sparse_types)
  sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
  if not isinstance(dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_shapes' argument to "
        "'parse_example' Op, not %r." % dense_shapes)
  dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
  _, _, _op, _outputs = _op_def_library._apply_op_helper(
        "ParseExample", serialized=serialized, names=names,
                        sparse_keys=sparse_keys, dense_keys=dense_keys,
                        dense_defaults=dense_defaults,
                        sparse_types=sparse_types, dense_shapes=dense_shapes,
                        name=name)
  _result = _outputs[:]
  if _execute.must_record_gradient():
    _attrs = ("Nsparse", _op._get_attr_int("Nsparse"), "Ndense",
              _op._get_attr_int("Ndense"), "sparse_types",
              _op.get_attr("sparse_types"), "Tdense", _op.get_attr("Tdense"),
              "dense_shapes", _op.get_attr("dense_shapes"))
    _inputs_flat = _op.inputs
    _execute.record_gradient(
        "ParseExample", _inputs_flat, _attrs, _result)
  _result = [_result[:_attr_Nsparse]] + _result[_attr_Nsparse:]
  _result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
  _result = _result[:2] + [_result[2:2 + _attr_Nsparse]] + _result[2 + _attr_Nsparse:]
  _result = _result[:3] + [_result[3:]]
  _result = _ParseExampleOutput._make(_result)
  return _result

ParseExample = tf_export("raw_ops.ParseExample")(_ops.to_raw_op(parse_example))
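# The tail of parse_example above regroups the op's flat output list into the
# namedtuple fields: the first Nsparse tensors are sparse_indices, the next
# len(sparse_types) are sparse_values, then Nsparse shapes, and the remainder
# are dense_values. A standalone sketch of that regrouping (plain Python, with
# hypothetical placeholder values standing in for tensors):

```python
def regroup_parse_example_outputs(flat, n_sparse, n_types):
    # Fold slices of the flat list into nested lists, in the same order and
    # with the same slice bounds as the generated code above.
    result = [flat[:n_sparse]] + flat[n_sparse:]
    result = result[:1] + [result[1:1 + n_types]] + result[1 + n_types:]
    result = result[:2] + [result[2:2 + n_sparse]] + result[2 + n_sparse:]
    result = result[:3] + [result[3:]]
    return result

# Two sparse features (so 2 indices + 2 values + 2 shapes) and 1 dense value:
flat = ["i0", "i1", "v0", "v1", "s0", "s1", "d0"]
grouped = regroup_parse_example_outputs(flat, n_sparse=2, n_types=2)
# grouped == [["i0", "i1"], ["v0", "v1"], ["s0", "s1"], ["d0"]]
```

# The grouped list is then passed to _ParseExampleOutput._make, which maps the
# four sub-lists onto the namedtuple fields positionally.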

def parse_example_eager_fallback(serialized, names, sparse_keys, dense_keys, dense_defaults, sparse_types, dense_shapes, name, ctx):
  if not isinstance(sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_keys' argument to "
        "'parse_example' Op, not %r." % sparse_keys)
  _attr_Nsparse = len(sparse_keys)
  if not isinstance(dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_keys' argument to "
        "'parse_example' Op, not %r." % dense_keys)
  _attr_Ndense = len(dense_keys)
  if not isinstance(sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_types' argument to "
        "'parse_example' Op, not %r." % sparse_types)
  sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
  if not isinstance(dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_shapes' argument to "
        "'parse_example' Op, not %r." % dense_shapes)
  dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
  _attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
  serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
  names = _ops.convert_to_tensor(names, _dtypes.string)
  sparse_keys = _ops.convert_n_to_tensor(sparse_keys, _dtypes.string)
  dense_keys = _ops.convert_n_to_tensor(dense_keys, _dtypes.string)
  _inputs_flat = [serialized, names] + list(sparse_keys) + list(dense_keys) + list(dense_defaults)
  _attrs = ("Nsparse", _attr_Nsparse, "Ndense", _attr_Ndense, "sparse_types",
            sparse_types, "Tdense", _attr_Tdense, "dense_shapes", dense_shapes)
  _result = _execute.execute(b"ParseExample", _attr_Nsparse +
                             len(sparse_types) + _attr_Nsparse +
                             len(dense_defaults), inputs=_inputs_flat,
                             attrs=_attrs, ctx=ctx, name=name)
  if _execute.must_record_gradient():
    _execute.record_gradient(
        "ParseExample", _inputs_flat, _attrs, _result)
  _result = [_result[:_attr_Nsparse]] + _result[_attr_Nsparse:]
  _result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
  _result = _result[:2] + [_result[2:2 + _attr_Nsparse]] + _result[2 + _attr_Nsparse:]
  _result = _result[:3] + [_result[3:]]
  _result = _ParseExampleOutput._make(_result)
  return _result

_ParseExampleV2Output = collections.namedtuple(
    "ParseExampleV2",
    ["sparse_indices", "sparse_values", "sparse_shapes", "dense_values", "ragged_values", "ragged_row_splits"])

def parse_example_v2(serialized, names, sparse_keys, dense_keys, ragged_keys, dense_defaults, num_sparse, sparse_types, ragged_value_types, ragged_split_types, dense_shapes, name=None):
  r"""Transforms a vector of tf.Example protos (as strings) into typed tensors.

  Args:
    serialized: A `Tensor` of type `string`.
      A scalar or vector containing binary serialized Example protos.
    names: A `Tensor` of type `string`.
      A tensor containing the names of the serialized protos.
      Corresponds 1:1 with the `serialized` tensor.
      May contain, for example, table key (descriptive) names for the
      corresponding serialized protos. These are purely useful for debugging
      purposes, and the presence of values here has no effect on the output.
      May also be an empty vector if no names are available.
      If non-empty, this tensor must have the same shape as "serialized".
    sparse_keys: A `Tensor` of type `string`. Vector of strings.
      The keys expected in the Examples' features associated with sparse values.
    dense_keys: A `Tensor` of type `string`. Vector of strings.
      The keys expected in the Examples' features associated with dense values.
    ragged_keys: A `Tensor` of type `string`. Vector of strings.
      The keys expected in the Examples' features associated with ragged values.
    dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
      A list of Tensors (some may be empty). Corresponds 1:1 with `dense_keys`.
      dense_defaults[j] provides default values
      when the example's feature_map lacks dense_key[j]. If an empty Tensor is
      provided for dense_defaults[j], then the Feature dense_keys[j] is required.
      The input type is inferred from dense_defaults[j], even when it's empty.
      If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,
      then the shape of dense_defaults[j] must match that of dense_shapes[j].
      If dense_shapes[j] has an undefined major dimension (variable strides dense
      feature), dense_defaults[j] must contain a single element:
      the padding element.
    num_sparse: An `int` that is `>= 0`. The number of sparse keys.
    sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
      A list of `num_sparse` types; the data types of data in each Feature
      given in sparse_keys.
      Currently the ParseExample supports DT_FLOAT (FloatList),
      DT_INT64 (Int64List), and DT_STRING (BytesList).
    ragged_value_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
      A list of `num_ragged` types; the data types of data in each Feature
      given in ragged_keys (where `num_ragged = ragged_keys.size()`).
      Currently the ParseExample supports DT_FLOAT (FloatList),
      DT_INT64 (Int64List), and DT_STRING (BytesList).
    ragged_split_types: A list of `tf.DTypes` from: `tf.int32, tf.int64`.
      A list of `num_ragged` types; the data types of row_splits in each Feature
      given in ragged_keys (where `num_ragged = ragged_keys.size()`).
      May be DT_INT32 or DT_INT64.
    dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
      A list of `num_dense` shapes; the shapes of data in each Feature
      given in dense_keys (where `num_dense = dense_keys.size()`).
      The number of elements in the Feature corresponding to dense_key[j]
      must always equal dense_shapes[j].NumEntries().
      If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output
      Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):
      The dense outputs are just the inputs row-stacked by batch.
      This works for dense_shapes[j] = (-1, D1, ..., DN). In this case
      the shape of the output Tensor dense_values[j] will be
      (|serialized|, M, D1, .., DN), where M is the maximum number of blocks
      of elements of length D1 * ... * DN, across all minibatch entries
      in the input. Any minibatch entry with less than M blocks of elements of
      length D1 * ... * DN will be padded with the corresponding default_value
      scalar element along the second dimension.
    name: A name for the operation (optional).

  Returns:
    A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values, ragged_values, ragged_row_splits).

    sparse_indices: A list of `num_sparse` `Tensor` objects with type `int64`.
    sparse_values: A list of `Tensor` objects of type `sparse_types`.
    sparse_shapes: A list of `num_sparse` `Tensor` objects with type `int64`.
    dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.
    ragged_values: A list of `Tensor` objects of type `ragged_value_types`.
    ragged_row_splits: A list of `Tensor` objects of type `ragged_split_types`.
  """
  _ctx = _context._context or _context.context()
  tld = _ctx._thread_local_data
  if tld.is_eager:
    try:
      _result = pywrap_tfe.TFE_Py_FastPathExecute(
        _ctx._context_handle, tld.device_name, "ParseExampleV2", name,
        tld.op_callbacks, serialized, names, sparse_keys, dense_keys,
        ragged_keys, dense_defaults, "num_sparse", num_sparse, "sparse_types",
        sparse_types, "ragged_value_types", ragged_value_types,
        "ragged_split_types", ragged_split_types, "dense_shapes",
        dense_shapes)
      _result = _ParseExampleV2Output._make(_result)
      return _result
    except _core._FallbackException:
      try:
        return parse_example_v2_eager_fallback(
            serialized, names, sparse_keys, dense_keys, ragged_keys,
            dense_defaults, num_sparse=num_sparse, sparse_types=sparse_types,
            ragged_value_types=ragged_value_types,
            ragged_split_types=ragged_split_types, dense_shapes=dense_shapes,
            name=name, ctx=_ctx)
      except _core._SymbolicException:
        pass  # Add nodes to the TensorFlow graph.
    except _core._NotOkStatusException as e:
      _ops.raise_from_not_ok_status(e, name)
  # Add nodes to the TensorFlow graph.
  num_sparse = _execute.make_int(num_sparse, "num_sparse")
  if not isinstance(sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_types' argument to "
        "'parse_example_v2' Op, not %r." % sparse_types)
  sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
  if not isinstance(ragged_value_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'ragged_value_types' argument to "
        "'parse_example_v2' Op, not %r." % ragged_value_types)
  ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
  if not isinstance(ragged_split_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'ragged_split_types' argument to "
        "'parse_example_v2' Op, not %r." % ragged_split_types)
  ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
  if not isinstance(dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_shapes' argument to "
        "'parse_example_v2' Op, not %r." % dense_shapes)
  dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
  _, _, _op, _outputs = _op_def_library._apply_op_helper(
        "ParseExampleV2", serialized=serialized, names=names,
                          sparse_keys=sparse_keys, dense_keys=dense_keys,
                          ragged_keys=ragged_keys,
                          dense_defaults=dense_defaults,
                          num_sparse=num_sparse, sparse_types=sparse_types,
                          ragged_value_types=ragged_value_types,
                          ragged_split_types=ragged_split_types,
                          dense_shapes=dense_shapes, name=name)
  _result = _outputs[:]
  if _execute.must_record_gradient():
    _attrs = ("Tdense", _op.get_attr("Tdense"), "num_sparse",
              _op._get_attr_int("num_sparse"), "sparse_types",
              _op.get_attr("sparse_types"), "ragged_value_types",
              _op.get_attr("ragged_value_types"), "ragged_split_types",
              _op.get_attr("ragged_split_types"), "dense_shapes",
              _op.get_attr("dense_shapes"))
    _inputs_flat = _op.inputs
    _execute.record_gradient(
        "ParseExampleV2", _inputs_flat, _attrs, _result)
  _result = [_result[:num_sparse]] + _result[num_sparse:]
  _result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
  _result = _result[:2] + [_result[2:2 + num_sparse]] + _result[2 + num_sparse:]
  _result = _result[:3] + [_result[3:3 + len(dense_defaults)]] + _result[3 + len(dense_defaults):]
  _result = _result[:4] + [_result[4:4 + len(ragged_value_types)]] + _result[4 + len(ragged_value_types):]
  _result = _result[:5] + [_result[5:]]
  _result = _ParseExampleV2Output._make(_result)
  return _result

ParseExampleV2 = tf_export("raw_ops.ParseExampleV2")(_ops.to_raw_op(parse_example_v2))

def parse_example_v2_eager_fallback(serialized, names, sparse_keys, dense_keys, ragged_keys, dense_defaults, num_sparse, sparse_types, ragged_value_types, ragged_split_types, dense_shapes, name, ctx):
  num_sparse = _execute.make_int(num_sparse, "num_sparse")
  if not isinstance(sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'sparse_types' argument to "
        "'parse_example_v2' Op, not %r." % sparse_types)
  sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
  if not isinstance(ragged_value_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'ragged_value_types' argument to "
        "'parse_example_v2' Op, not %r." % ragged_value_types)
  ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
  if not isinstance(ragged_split_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'ragged_split_types' argument to "
        "'parse_example_v2' Op, not %r." % ragged_split_types)
  ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
  if not isinstance(dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'dense_shapes' argument to "
        "'parse_example_v2' Op, not %r." % dense_shapes)
  dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
  _attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
  serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
  names = _ops.convert_to_tensor(names, _dtypes.string)
  sparse_keys = _ops.convert_to_tensor(sparse_keys, _dtypes.string)
  dense_keys = _ops.convert_to_tensor(dense_keys, _dtypes.string)
  ragged_keys = _ops.convert_to_tensor(ragged_keys, _dtypes.string)
  _inputs_flat = [serialized, names, sparse_keys, dense_keys, ragged_keys] + list(dense_defaults)
  _attrs = ("Tdense", _attr_Tdense, "num_sparse", num_sparse, "sparse_types",
            sparse_types, "ragged_value_types", ragged_value_types,
            "ragged_split_types", ragged_split_types, "dense_shapes", dense_shapes)
  _result = _execute.execute(b"ParseExampleV2", num_sparse + len(sparse_types)
                             + num_sparse + len(dense_defaults) +
                             len(ragged_value_types) +
                             len(ragged_split_types), inputs=_inputs_flat,
                             attrs=_attrs, ctx=ctx, name=name)
  if _execute.must_record_gradient():
    _execute.record_gradient(
        "ParseExampleV2", _inputs_flat, _attrs, _result)
  _result = [_result[:num_sparse]] + _result[num_sparse:]
  _result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
  _result = _result[:2] + [_result[2:2 + num_sparse]] + _result[2 + num_sparse:]
  _result = _result[:3] + [_result[3:3 + len(dense_defaults)]] + _result[3 + len(dense_defaults):]
  _result = _result[:4] + [_result[4:4 + len(ragged_value_types)]] + _result[4 + len(ragged_value_types):]
  _result = _result[:5] + [_result[5:]]
  _result = _ParseExampleV2Output._make(_result)
  return _result
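# The eager fallback must tell _execute.execute exactly how many output
# tensors the op produces; for ParseExampleV2 that count is the sum of the six
# output groups, mirroring the expression passed to _execute.execute above.
# A small sketch (plain Python; the feature counts below are hypothetical):

```python
def parse_example_v2_num_outputs(num_sparse, sparse_types, dense_defaults,
                                 ragged_value_types, ragged_split_types):
    # Indices and shapes each contribute num_sparse tensors; every other
    # group contributes one tensor per declared type or default.
    return (num_sparse                  # sparse_indices
            + len(sparse_types)         # sparse_values
            + num_sparse                # sparse_shapes
            + len(dense_defaults)       # dense_values
            + len(ragged_value_types)   # ragged_values
            + len(ragged_split_types))  # ragged_row_splits

# e.g. 2 sparse features, 1 dense default, 1 ragged feature:
n = parse_example_v2_num_outputs(2, ["float32", "int64"], ["d"],
                                 ["int64"], ["int64"])
# n == 2 + 2 + 2 + 1 + 1 + 1 == 9
```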

_ParseSequenceExampleOutput = collections.namedtuple(
    "ParseSequenceExample",
    ["context_sparse_indices", "context_sparse_values", "context_sparse_shapes", "context_dense_values", "feature_list_sparse_indices", "feature_list_sparse_values", "feature_list_sparse_shapes", "feature_list_dense_values", "feature_list_dense_lengths"])

def parse_sequence_example(serialized, debug_name, context_dense_defaults, feature_list_dense_missing_assumed_empty, context_sparse_keys, context_dense_keys, feature_list_sparse_keys, feature_list_dense_keys, Ncontext_sparse=0, Ncontext_dense=0, Nfeature_list_sparse=0, Nfeature_list_dense=0, context_sparse_types=[], feature_list_dense_types=[], context_dense_shapes=[], feature_list_sparse_types=[], feature_list_dense_shapes=[], name=None):
  r"""Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors.

  Args:
    serialized: A `Tensor` of type `string`.
      A vector containing binary serialized SequenceExample protos.
    debug_name: A `Tensor` of type `string`.
      A vector containing the names of the serialized protos.
      May contain, for example, table key (descriptive) name for the
      corresponding serialized proto. This is purely useful for debugging
      purposes, and the presence of values here has no effect on the output.
      May also be an empty vector if no name is available.
    context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
      A list of Ncontext_dense Tensors (some may be empty).
      context_dense_defaults[j] provides default values
      when the SequenceExample's context map lacks context_dense_key[j].
      If an empty Tensor is provided for context_dense_defaults[j],
      then the Feature context_dense_keys[j] is required.
      The input type is inferred from context_dense_defaults[j], even when it's
      empty. If context_dense_defaults[j] is not empty, its shape must match
      context_dense_shapes[j].
    feature_list_dense_missing_assumed_empty: A list of `strings`.
      A vector listing the
      FeatureList keys which may be missing from the SequenceExamples. If the
      associated FeatureList is missing, it is treated as empty. By default,
      any FeatureList not listed in this vector must exist in the SequenceExamples.
    context_sparse_keys: A list of `strings`.
      A list of Ncontext_sparse string Tensors (scalars).
      The keys expected in the Examples' features associated with context_sparse
      values.
    context_dense_keys: A list of `strings`.
      A list of Ncontext_dense string Tensors (scalars).
      The keys expected in the SequenceExamples' context features associated with
      dense values.
    feature_list_sparse_keys: A list of `strings`.
      A list of Nfeature_list_sparse string Tensors
      (scalars). The keys expected in the FeatureLists associated with sparse
      values.
    feature_list_dense_keys: A list of `strings`.
      A list of Nfeature_list_dense string Tensors (scalars).
      The keys expected in the SequenceExamples' feature_lists associated
      with lists of dense values.
    Ncontext_sparse: An optional `int` that is `>= 0`. Defaults to `0`.
    Ncontext_dense: An optional `int` that is `>= 0`. Defaults to `0`.
    Nfeature_list_sparse: An optional `int` that is `>= 0`. Defaults to `0`.
    Nfeature_list_dense: An optional `int` that is `>= 0`. Defaults to `0`.
    context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
      A list of Ncontext_sparse types; the data types of data in
      each context Feature given in context_sparse_keys.
      Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),
      DT_INT64 (Int64List), and DT_STRING (BytesList).
    feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
    context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
      A list of Ncontext_dense shapes; the shapes of data in
      each context Feature given in context_dense_keys.
      The number of elements in the Feature corresponding to context_dense_key[j]
      must always equal context_dense_shapes[j].NumEntries().
      The shape of context_dense_values[j] will match context_dense_shapes[j].
    feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
      A list of Nfeature_list_sparse types; the data types
      of data in each FeatureList given in feature_list_sparse_keys.
      Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),
      DT_INT64 (Int64List), and DT_STRING (BytesList).
    feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
      A list of Nfeature_list_dense shapes; the shapes of
      data in each FeatureList given in feature_list_dense_keys.
      The shape of each Feature in the FeatureList corresponding to
      feature_list_dense_key[j] must always equal
      feature_list_dense_shapes[j].NumEntries().
    name: A name for the operation (optional).

  Returns:
    A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths).

    context_sparse_indices: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.
    context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.
    context_sparse_shapes: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.
    context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.
    feature_list_sparse_indices: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.
    feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.
    feature_list_sparse_shapes: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.
    feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.
    feature_list_dense_lengths: A list of `Nfeature_list_dense` `Tensor` objects with type `int64`.
  """
  _ctx = _context._context or _context.context()
  tld = _ctx._thread_local_data
  if tld.is_eager:
    try:
      _result = pywrap_tfe.TFE_Py_FastPathExecute(
        _ctx._context_handle, tld.device_name, "ParseSequenceExample", name,
        tld.op_callbacks, serialized, debug_name, context_dense_defaults,
        "feature_list_dense_missing_assumed_empty",
        feature_list_dense_missing_assumed_empty, "context_sparse_keys",
        context_sparse_keys, "context_dense_keys", context_dense_keys,
        "feature_list_sparse_keys", feature_list_sparse_keys,
        "feature_list_dense_keys", feature_list_dense_keys, "Ncontext_sparse",
        Ncontext_sparse, "Ncontext_dense", Ncontext_dense,
        "Nfeature_list_sparse", Nfeature_list_sparse, "Nfeature_list_dense",
        Nfeature_list_dense, "context_sparse_types", context_sparse_types,
        "feature_list_dense_types", feature_list_dense_types,
        "context_dense_shapes", context_dense_shapes,
        "feature_list_sparse_types", feature_list_sparse_types,
        "feature_list_dense_shapes", feature_list_dense_shapes)
      _result = _ParseSequenceExampleOutput._make(_result)
      return _result
    except _core._FallbackException:
      try:
        return parse_sequence_example_eager_fallback(
            serialized, debug_name, context_dense_defaults,
            feature_list_dense_missing_assumed_empty=feature_list_dense_missing_assumed_empty,
            context_sparse_keys=context_sparse_keys,
            context_dense_keys=context_dense_keys,
            feature_list_sparse_keys=feature_list_sparse_keys,
            feature_list_dense_keys=feature_list_dense_keys,
            Ncontext_sparse=Ncontext_sparse, Ncontext_dense=Ncontext_dense,
            Nfeature_list_sparse=Nfeature_list_sparse,
            Nfeature_list_dense=Nfeature_list_dense,
            context_sparse_types=context_sparse_types,
            feature_list_dense_types=feature_list_dense_types,
            context_dense_shapes=context_dense_shapes,
            feature_list_sparse_types=feature_list_sparse_types,
            feature_list_dense_shapes=feature_list_dense_shapes, name=name,
            ctx=_ctx)
      except _core._SymbolicException:
        pass  # Add nodes to the TensorFlow graph.
    except _core._NotOkStatusException as e:
      _ops.raise_from_not_ok_status(e, name)
  # Add nodes to the TensorFlow graph.
  if not isinstance(feature_list_dense_missing_assumed_empty, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_missing_assumed_empty' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_missing_assumed_empty)
  feature_list_dense_missing_assumed_empty = [_execute.make_str(_s, "feature_list_dense_missing_assumed_empty") for _s in feature_list_dense_missing_assumed_empty]
  if not isinstance(context_sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_sparse_keys' argument to "
        "'parse_sequence_example' Op, not %r." % context_sparse_keys)
  context_sparse_keys = [_execute.make_str(_s, "context_sparse_keys") for _s in context_sparse_keys]
  if not isinstance(context_dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_dense_keys' argument to "
        "'parse_sequence_example' Op, not %r." % context_dense_keys)
  context_dense_keys = [_execute.make_str(_s, "context_dense_keys") for _s in context_dense_keys]
  if not isinstance(feature_list_sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_sparse_keys' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_sparse_keys)
  feature_list_sparse_keys = [_execute.make_str(_s, "feature_list_sparse_keys") for _s in feature_list_sparse_keys]
  if not isinstance(feature_list_dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_keys' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_keys)
  feature_list_dense_keys = [_execute.make_str(_s, "feature_list_dense_keys") for _s in feature_list_dense_keys]
  if Ncontext_sparse is None:
    Ncontext_sparse = 0
  Ncontext_sparse = _execute.make_int(Ncontext_sparse, "Ncontext_sparse")
  if Ncontext_dense is None:
    Ncontext_dense = 0
  Ncontext_dense = _execute.make_int(Ncontext_dense, "Ncontext_dense")
  if Nfeature_list_sparse is None:
    Nfeature_list_sparse = 0
  Nfeature_list_sparse = _execute.make_int(Nfeature_list_sparse, "Nfeature_list_sparse")
  if Nfeature_list_dense is None:
    Nfeature_list_dense = 0
  Nfeature_list_dense = _execute.make_int(Nfeature_list_dense, "Nfeature_list_dense")
  if context_sparse_types is None:
    context_sparse_types = []
  if not isinstance(context_sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_sparse_types' argument to "
        "'parse_sequence_example' Op, not %r." % context_sparse_types)
  context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
  if feature_list_dense_types is None:
    feature_list_dense_types = []
  if not isinstance(feature_list_dense_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_types' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_types)
  feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
  if context_dense_shapes is None:
    context_dense_shapes = []
  if not isinstance(context_dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_dense_shapes' argument to "
        "'parse_sequence_example' Op, not %r." % context_dense_shapes)
  context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
  if feature_list_sparse_types is None:
    feature_list_sparse_types = []
  if not isinstance(feature_list_sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_sparse_types' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_sparse_types)
  feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
  if feature_list_dense_shapes is None:
    feature_list_dense_shapes = []
  if not isinstance(feature_list_dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_shapes' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_shapes)
  feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
  _, _, _op, _outputs = _op_def_library._apply_op_helper(
        "ParseSequenceExample", serialized=serialized, debug_name=debug_name,
                                context_dense_defaults=context_dense_defaults,
                                feature_list_dense_missing_assumed_empty=feature_list_dense_missing_assumed_empty,
                                context_sparse_keys=context_sparse_keys,
                                context_dense_keys=context_dense_keys,
                                feature_list_sparse_keys=feature_list_sparse_keys,
                                feature_list_dense_keys=feature_list_dense_keys,
                                Ncontext_sparse=Ncontext_sparse,
                                Ncontext_dense=Ncontext_dense,
                                Nfeature_list_sparse=Nfeature_list_sparse,
                                Nfeature_list_dense=Nfeature_list_dense,
                                context_sparse_types=context_sparse_types,
                                feature_list_dense_types=feature_list_dense_types,
                                context_dense_shapes=context_dense_shapes,
                                feature_list_sparse_types=feature_list_sparse_types,
                                feature_list_dense_shapes=feature_list_dense_shapes,
                                name=name)
  _result = _outputs[:]
  if _execute.must_record_gradient():
    _attrs = ("feature_list_dense_missing_assumed_empty",
              _op.get_attr("feature_list_dense_missing_assumed_empty"),
              "context_sparse_keys", _op.get_attr("context_sparse_keys"),
              "context_dense_keys", _op.get_attr("context_dense_keys"),
              "feature_list_sparse_keys",
              _op.get_attr("feature_list_sparse_keys"),
              "feature_list_dense_keys",
              _op.get_attr("feature_list_dense_keys"), "Ncontext_sparse",
              _op._get_attr_int("Ncontext_sparse"), "Ncontext_dense",
              _op._get_attr_int("Ncontext_dense"), "Nfeature_list_sparse",
              _op._get_attr_int("Nfeature_list_sparse"),
              "Nfeature_list_dense", _op._get_attr_int("Nfeature_list_dense"),
              "context_sparse_types", _op.get_attr("context_sparse_types"),
              "Tcontext_dense", _op.get_attr("Tcontext_dense"),
              "feature_list_dense_types",
              _op.get_attr("feature_list_dense_types"),
              "context_dense_shapes", _op.get_attr("context_dense_shapes"),
              "feature_list_sparse_types",
              _op.get_attr("feature_list_sparse_types"),
              "feature_list_dense_shapes",
              _op.get_attr("feature_list_dense_shapes"))
    _inputs_flat = _op.inputs
    _execute.record_gradient(
        "ParseSequenceExample", _inputs_flat, _attrs, _result)
  _result = [_result[:Ncontext_sparse]] + _result[Ncontext_sparse:]
  _result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
  _result = _result[:2] + [_result[2:2 + Ncontext_sparse]] + _result[2 + Ncontext_sparse:]
  _result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
  _result = _result[:4] + [_result[4:4 + Nfeature_list_sparse]] + _result[4 + Nfeature_list_sparse:]
  _result = _result[:5] + [_result[5:5 + len(feature_list_sparse_types)]] + _result[5 + len(feature_list_sparse_types):]
  _result = _result[:6] + [_result[6:6 + Nfeature_list_sparse]] + _result[6 + Nfeature_list_sparse:]
  _result = _result[:7] + [_result[7:7 + len(feature_list_dense_types)]] + _result[7 + len(feature_list_dense_types):]
  _result = _result[:8] + [_result[8:]]
  _result = _ParseSequenceExampleOutput._make(_result)
  return _result

ParseSequenceExample = tf_export("raw_ops.ParseSequenceExample")(_ops.to_raw_op(parse_sequence_example))

def parse_sequence_example_eager_fallback(serialized, debug_name, context_dense_defaults, feature_list_dense_missing_assumed_empty, context_sparse_keys, context_dense_keys, feature_list_sparse_keys, feature_list_dense_keys, Ncontext_sparse, Ncontext_dense, Nfeature_list_sparse, Nfeature_list_dense, context_sparse_types, feature_list_dense_types, context_dense_shapes, feature_list_sparse_types, feature_list_dense_shapes, name, ctx):
  if not isinstance(feature_list_dense_missing_assumed_empty, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_missing_assumed_empty' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_missing_assumed_empty)
  feature_list_dense_missing_assumed_empty = [_execute.make_str(_s, "feature_list_dense_missing_assumed_empty") for _s in feature_list_dense_missing_assumed_empty]
  if not isinstance(context_sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_sparse_keys' argument to "
        "'parse_sequence_example' Op, not %r." % context_sparse_keys)
  context_sparse_keys = [_execute.make_str(_s, "context_sparse_keys") for _s in context_sparse_keys]
  if not isinstance(context_dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_dense_keys' argument to "
        "'parse_sequence_example' Op, not %r." % context_dense_keys)
  context_dense_keys = [_execute.make_str(_s, "context_dense_keys") for _s in context_dense_keys]
  if not isinstance(feature_list_sparse_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_sparse_keys' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_sparse_keys)
  feature_list_sparse_keys = [_execute.make_str(_s, "feature_list_sparse_keys") for _s in feature_list_sparse_keys]
  if not isinstance(feature_list_dense_keys, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_keys' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_keys)
  feature_list_dense_keys = [_execute.make_str(_s, "feature_list_dense_keys") for _s in feature_list_dense_keys]
  if Ncontext_sparse is None:
    Ncontext_sparse = 0
  Ncontext_sparse = _execute.make_int(Ncontext_sparse, "Ncontext_sparse")
  if Ncontext_dense is None:
    Ncontext_dense = 0
  Ncontext_dense = _execute.make_int(Ncontext_dense, "Ncontext_dense")
  if Nfeature_list_sparse is None:
    Nfeature_list_sparse = 0
  Nfeature_list_sparse = _execute.make_int(Nfeature_list_sparse, "Nfeature_list_sparse")
  if Nfeature_list_dense is None:
    Nfeature_list_dense = 0
  Nfeature_list_dense = _execute.make_int(Nfeature_list_dense, "Nfeature_list_dense")
  if context_sparse_types is None:
    context_sparse_types = []
  if not isinstance(context_sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_sparse_types' argument to "
        "'parse_sequence_example' Op, not %r." % context_sparse_types)
  context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
  if feature_list_dense_types is None:
    feature_list_dense_types = []
  if not isinstance(feature_list_dense_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_types' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_types)
  feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
  if context_dense_shapes is None:
    context_dense_shapes = []
  if not isinstance(context_dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'context_dense_shapes' argument to "
        "'parse_sequence_example' Op, not %r." % context_dense_shapes)
  context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
  if feature_list_sparse_types is None:
    feature_list_sparse_types = []
  if not isinstance(feature_list_sparse_types, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_sparse_types' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_sparse_types)
  feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
  if feature_list_dense_shapes is None:
    feature_list_dense_shapes = []
  if not isinstance(feature_list_dense_shapes, (list, tuple)):
    raise TypeError(
        "Expected list for 'feature_list_dense_shapes' argument to "
        "'parse_sequence_example' Op, not %r." % feature_list_dense_shapes)
  feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
  _attr_Tcontext_dense, context_dense_defaults = _execute.convert_to_mixed_eager_tensors(context_dense_defaults, ctx)
  serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
  debug_name = _ops.convert_to_tensor(debug_name, _dtypes.string)
  _inputs_flat = [serialized, debug_name] + list(context_dense_defaults)
_attrs = ("feature_list_dense_missing_assumed_empty",
feature_list_dense_missing_assumed_empty, "context_sparse_keys",
context_sparse_keys, "context_dense_keys", context_dense_keys,
"feature_list_sparse_keys", feature_list_sparse_keys,
"feature_list_dense_keys", feature_list_dense_keys, "Ncontext_sparse",
Ncontext_sparse, "Ncontext_dense", Ncontext_dense, "Nfeature_list_sparse",
Nfeature_list_sparse, "Nfeature_list_dense", Nfeature_list_dense,
"context_sparse_types", context_sparse_types, "Tcontext_dense",
_attr_Tcontext_dense, "feature_list_dense_types", feature_list_dense_types,
"context_dense_shapes", context_dense_shapes, "feature_list_sparse_types",
feature_list_sparse_types, "feature_list_dense_shapes",
feature_list_dense_shapes)
_result = _execute.execute(b"ParseSequenceExample", Ncontext_sparse +
len(context_sparse_types) + Ncontext_sparse +
len(context_dense_defaults) +
Nfeature_list_sparse +
len(feature_list_sparse_types) +
Nfeature_list_sparse +
len(feature_list_dense_types) +
Nfeature_list_dense, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseSequenceExample", _inputs_flat, _attrs, _result)
_result = [_result[:Ncontext_sparse]] + _result[Ncontext_sparse:]
_result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
_result = _result[:2] + [_result[2:2 + Ncontext_sparse]] + _result[2 + Ncontext_sparse:]
_result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
_result = _result[:4] + [_result[4:4 + Nfeature_list_sparse]] + _result[4 + Nfeature_list_sparse:]
_result = _result[:5] + [_result[5:5 + len(feature_list_sparse_types)]] + _result[5 + len(feature_list_sparse_types):]
_result = _result[:6] + [_result[6:6 + Nfeature_list_sparse]] + _result[6 + Nfeature_list_sparse:]
_result = _result[:7] + [_result[7:7 + len(feature_list_dense_types)]] + _result[7 + len(feature_list_dense_types):]
_result = _result[:8] + [_result[8:]]
_result = _ParseSequenceExampleOutput._make(_result)
return _result
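# The cascade of slice-and-wrap assignments above converts the flat list of
# op outputs into grouped namedtuple fields. A minimal standalone sketch of
# that pattern (the names `Pair` and `regroup` are illustrative only, not
# part of the generated API):

```python
import collections

Pair = collections.namedtuple("Pair", ["indices", "values"])

def regroup(flat, n_indices):
    # Wrap the first n_indices outputs into one list, leaving the rest flat.
    result = [flat[:n_indices]] + flat[n_indices:]
    # Wrap everything after position 1 into the second grouped field.
    result = result[:1] + [result[1:]]
    return Pair._make(result)

grouped = regroup([0, 1, 2, "a", "b"], 3)
assert grouped.indices == [0, 1, 2]
assert grouped.values == ["a", "b"]
```

# Each `_result = _result[:k] + [_result[k:k + n]] + _result[k + n:]` line in
# the generated code performs one such wrap step, positioned by the sizes of
# the groups already wrapped before it.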
_ParseSequenceExampleV2Output = collections.namedtuple(
"ParseSequenceExampleV2",
["context_sparse_indices", "context_sparse_values", "context_sparse_shapes", "context_dense_values", "context_ragged_values", "context_ragged_row_splits", "feature_list_sparse_indices", "feature_list_sparse_values", "feature_list_sparse_shapes", "feature_list_dense_values", "feature_list_dense_lengths", "feature_list_ragged_values", "feature_list_ragged_outer_splits", "feature_list_ragged_inner_splits"])
def parse_sequence_example_v2(serialized, debug_name, context_sparse_keys, context_dense_keys, context_ragged_keys, feature_list_sparse_keys, feature_list_dense_keys, feature_list_ragged_keys, feature_list_dense_missing_assumed_empty, context_dense_defaults, Ncontext_sparse=0, context_sparse_types=[], context_ragged_value_types=[], context_ragged_split_types=[], context_dense_shapes=[], Nfeature_list_sparse=0, Nfeature_list_dense=0, feature_list_dense_types=[], feature_list_sparse_types=[], feature_list_ragged_value_types=[], feature_list_ragged_split_types=[], feature_list_dense_shapes=[], name=None):
r"""Transforms a vector of tf.io.SequenceExample protos (as strings) into
typed tensors.
Args:
serialized: A `Tensor` of type `string`.
A scalar or vector containing binary serialized SequenceExample protos.
debug_name: A `Tensor` of type `string`.
A scalar or vector containing the names of the serialized protos.
May contain, for example, table key (descriptive) name for the
corresponding serialized proto. This is purely useful for debugging
purposes, and the presence of values here has no effect on the output.
May also be an empty vector if no name is available.
context_sparse_keys: A `Tensor` of type `string`.
The keys expected in the SequenceExamples' context features associated with
sparse values.
context_dense_keys: A `Tensor` of type `string`.
The keys expected in the SequenceExamples' context features associated with
dense values.
context_ragged_keys: A `Tensor` of type `string`.
The keys expected in the SequenceExamples' context features associated with
ragged values.
feature_list_sparse_keys: A `Tensor` of type `string`.
The keys expected in the FeatureLists associated with sparse values.
feature_list_dense_keys: A `Tensor` of type `string`.
The keys expected in the SequenceExamples' feature_lists associated
with lists of dense values.
feature_list_ragged_keys: A `Tensor` of type `string`.
The keys expected in the FeatureLists associated with ragged values.
feature_list_dense_missing_assumed_empty: A `Tensor` of type `bool`.
A vector corresponding 1:1 with feature_list_dense_keys, indicating which
features may be missing from the SequenceExamples. If the associated
FeatureList is missing, it is treated as empty.
context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A list of Ncontext_dense Tensors (some may be empty).
context_dense_defaults[j] provides default values
when the SequenceExample's context map lacks context_dense_key[j].
If an empty Tensor is provided for context_dense_defaults[j],
then the Feature context_dense_keys[j] is required.
The input type is inferred from context_dense_defaults[j], even when it's
empty. If context_dense_defaults[j] is not empty, its shape must match
context_dense_shapes[j].
Ncontext_sparse: An optional `int` that is `>= 0`. Defaults to `0`.
context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
A list of Ncontext_sparse types; the data types of data in
each context Feature given in context_sparse_keys.
Currently the ParseSequenceExampleV2 Op supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
context_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
RaggedTensor.value dtypes for the ragged context features.
context_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.
RaggedTensor.row_split dtypes for the ragged context features.
context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
A list of Ncontext_dense shapes; the shapes of data in
each context Feature given in context_dense_keys.
The number of elements in the Feature corresponding to context_dense_key[j]
must always equal context_dense_shapes[j].NumEntries().
The shape of context_dense_values[j] will match context_dense_shapes[j].
Nfeature_list_sparse: An optional `int` that is `>= 0`. Defaults to `0`.
Nfeature_list_dense: An optional `int` that is `>= 0`. Defaults to `0`.
feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
A list of Nfeature_list_sparse types; the data types
of data in each FeatureList given in feature_list_sparse_keys.
Currently the ParseSequenceExampleV2 Op supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
feature_list_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
RaggedTensor.value dtypes for the ragged FeatureList features.
feature_list_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.
RaggedTensor.row_split dtypes for the ragged FeatureList features.
feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
A list of Nfeature_list_dense shapes; the shapes of
data in each FeatureList given in feature_list_dense_keys.
The shape of each Feature in the FeatureList corresponding to
feature_list_dense_key[j] must always equal
feature_list_dense_shapes[j].NumEntries().
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, context_ragged_values, context_ragged_row_splits, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths, feature_list_ragged_values, feature_list_ragged_outer_splits, feature_list_ragged_inner_splits).
context_sparse_indices: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.
context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.
context_sparse_shapes: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.
context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.
context_ragged_values: A list of `Tensor` objects of type `context_ragged_value_types`.
context_ragged_row_splits: A list of `Tensor` objects of type `context_ragged_split_types`.
feature_list_sparse_indices: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.
feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.
feature_list_sparse_shapes: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.
feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.
feature_list_dense_lengths: A list of `Nfeature_list_dense` `Tensor` objects with type `int64`.
feature_list_ragged_values: A list of `Tensor` objects of type `feature_list_ragged_value_types`.
feature_list_ragged_outer_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`.
feature_list_ragged_inner_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseSequenceExampleV2", name,
tld.op_callbacks, serialized, debug_name, context_sparse_keys,
context_dense_keys, context_ragged_keys, feature_list_sparse_keys,
feature_list_dense_keys, feature_list_ragged_keys,
feature_list_dense_missing_assumed_empty, context_dense_defaults,
"Ncontext_sparse", Ncontext_sparse, "context_sparse_types",
context_sparse_types, "context_ragged_value_types",
context_ragged_value_types, "context_ragged_split_types",
context_ragged_split_types, "context_dense_shapes",
context_dense_shapes, "Nfeature_list_sparse", Nfeature_list_sparse,
"Nfeature_list_dense", Nfeature_list_dense,
"feature_list_dense_types", feature_list_dense_types,
"feature_list_sparse_types", feature_list_sparse_types,
"feature_list_ragged_value_types", feature_list_ragged_value_types,
"feature_list_ragged_split_types", feature_list_ragged_split_types,
"feature_list_dense_shapes", feature_list_dense_shapes)
_result = _ParseSequenceExampleV2Output._make(_result)
return _result
except _core._FallbackException:
try:
return parse_sequence_example_v2_eager_fallback(
serialized, debug_name, context_sparse_keys, context_dense_keys,
context_ragged_keys, feature_list_sparse_keys,
feature_list_dense_keys, feature_list_ragged_keys,
feature_list_dense_missing_assumed_empty, context_dense_defaults,
Ncontext_sparse=Ncontext_sparse,
context_sparse_types=context_sparse_types,
context_ragged_value_types=context_ragged_value_types,
context_ragged_split_types=context_ragged_split_types,
context_dense_shapes=context_dense_shapes,
Nfeature_list_sparse=Nfeature_list_sparse,
Nfeature_list_dense=Nfeature_list_dense,
feature_list_dense_types=feature_list_dense_types,
feature_list_sparse_types=feature_list_sparse_types,
feature_list_ragged_value_types=feature_list_ragged_value_types,
feature_list_ragged_split_types=feature_list_ragged_split_types,
feature_list_dense_shapes=feature_list_dense_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
if Ncontext_sparse is None:
Ncontext_sparse = 0
Ncontext_sparse = _execute.make_int(Ncontext_sparse, "Ncontext_sparse")
if context_sparse_types is None:
context_sparse_types = []
if not isinstance(context_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_sparse_types)
context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
if context_ragged_value_types is None:
context_ragged_value_types = []
if not isinstance(context_ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_ragged_value_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_ragged_value_types)
context_ragged_value_types = [_execute.make_type(_t, "context_ragged_value_types") for _t in context_ragged_value_types]
if context_ragged_split_types is None:
context_ragged_split_types = []
if not isinstance(context_ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_ragged_split_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_ragged_split_types)
context_ragged_split_types = [_execute.make_type(_t, "context_ragged_split_types") for _t in context_ragged_split_types]
if context_dense_shapes is None:
context_dense_shapes = []
if not isinstance(context_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_shapes' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_dense_shapes)
context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
if Nfeature_list_sparse is None:
Nfeature_list_sparse = 0
Nfeature_list_sparse = _execute.make_int(Nfeature_list_sparse, "Nfeature_list_sparse")
if Nfeature_list_dense is None:
Nfeature_list_dense = 0
Nfeature_list_dense = _execute.make_int(Nfeature_list_dense, "Nfeature_list_dense")
if feature_list_dense_types is None:
feature_list_dense_types = []
if not isinstance(feature_list_dense_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_dense_types)
feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
if feature_list_sparse_types is None:
feature_list_sparse_types = []
if not isinstance(feature_list_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_sparse_types)
feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
if feature_list_ragged_value_types is None:
feature_list_ragged_value_types = []
if not isinstance(feature_list_ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_ragged_value_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_ragged_value_types)
feature_list_ragged_value_types = [_execute.make_type(_t, "feature_list_ragged_value_types") for _t in feature_list_ragged_value_types]
if feature_list_ragged_split_types is None:
feature_list_ragged_split_types = []
if not isinstance(feature_list_ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_ragged_split_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_ragged_split_types)
feature_list_ragged_split_types = [_execute.make_type(_t, "feature_list_ragged_split_types") for _t in feature_list_ragged_split_types]
if feature_list_dense_shapes is None:
feature_list_dense_shapes = []
if not isinstance(feature_list_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_shapes' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_dense_shapes)
feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseSequenceExampleV2", serialized=serialized,
debug_name=debug_name,
context_sparse_keys=context_sparse_keys,
context_dense_keys=context_dense_keys,
context_ragged_keys=context_ragged_keys,
feature_list_sparse_keys=feature_list_sparse_keys,
feature_list_dense_keys=feature_list_dense_keys,
feature_list_ragged_keys=feature_list_ragged_keys,
feature_list_dense_missing_assumed_empty=feature_list_dense_missing_assumed_empty,
context_dense_defaults=context_dense_defaults,
Ncontext_sparse=Ncontext_sparse,
context_sparse_types=context_sparse_types,
context_ragged_value_types=context_ragged_value_types,
context_ragged_split_types=context_ragged_split_types,
context_dense_shapes=context_dense_shapes,
Nfeature_list_sparse=Nfeature_list_sparse,
Nfeature_list_dense=Nfeature_list_dense,
feature_list_dense_types=feature_list_dense_types,
feature_list_sparse_types=feature_list_sparse_types,
feature_list_ragged_value_types=feature_list_ragged_value_types,
feature_list_ragged_split_types=feature_list_ragged_split_types,
feature_list_dense_shapes=feature_list_dense_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("Ncontext_sparse", _op._get_attr_int("Ncontext_sparse"),
"Tcontext_dense", _op.get_attr("Tcontext_dense"),
"context_sparse_types", _op.get_attr("context_sparse_types"),
"context_ragged_value_types",
_op.get_attr("context_ragged_value_types"),
"context_ragged_split_types",
_op.get_attr("context_ragged_split_types"),
"context_dense_shapes", _op.get_attr("context_dense_shapes"),
"Nfeature_list_sparse",
_op._get_attr_int("Nfeature_list_sparse"),
"Nfeature_list_dense", _op._get_attr_int("Nfeature_list_dense"),
"feature_list_dense_types",
_op.get_attr("feature_list_dense_types"),
"feature_list_sparse_types",
_op.get_attr("feature_list_sparse_types"),
"feature_list_ragged_value_types",
_op.get_attr("feature_list_ragged_value_types"),
"feature_list_ragged_split_types",
_op.get_attr("feature_list_ragged_split_types"),
"feature_list_dense_shapes",
_op.get_attr("feature_list_dense_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseSequenceExampleV2", _inputs_flat, _attrs, _result)
_result = [_result[:Ncontext_sparse]] + _result[Ncontext_sparse:]
_result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
_result = _result[:2] + [_result[2:2 + Ncontext_sparse]] + _result[2 + Ncontext_sparse:]
_result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
_result = _result[:4] + [_result[4:4 + len(context_ragged_value_types)]] + _result[4 + len(context_ragged_value_types):]
_result = _result[:5] + [_result[5:5 + len(context_ragged_split_types)]] + _result[5 + len(context_ragged_split_types):]
_result = _result[:6] + [_result[6:6 + Nfeature_list_sparse]] + _result[6 + Nfeature_list_sparse:]
_result = _result[:7] + [_result[7:7 + len(feature_list_sparse_types)]] + _result[7 + len(feature_list_sparse_types):]
_result = _result[:8] + [_result[8:8 + Nfeature_list_sparse]] + _result[8 + Nfeature_list_sparse:]
_result = _result[:9] + [_result[9:9 + len(feature_list_dense_types)]] + _result[9 + len(feature_list_dense_types):]
_result = _result[:10] + [_result[10:10 + Nfeature_list_dense]] + _result[10 + Nfeature_list_dense:]
_result = _result[:11] + [_result[11:11 + len(feature_list_ragged_value_types)]] + _result[11 + len(feature_list_ragged_value_types):]
_result = _result[:12] + [_result[12:12 + len(feature_list_ragged_split_types)]] + _result[12 + len(feature_list_ragged_split_types):]
_result = _result[:13] + [_result[13:]]
_result = _ParseSequenceExampleV2Output._make(_result)
return _result
ParseSequenceExampleV2 = tf_export("raw_ops.ParseSequenceExampleV2")(_ops.to_raw_op(parse_sequence_example_v2))
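# The docstring above describes the context_dense_defaults contract: an empty
# default tensor marks a context feature as required, while a non-empty
# default must match the declared dense shape and is used when the key is
# absent. A hypothetical standalone restatement of that check (this is an
# illustration, not the op's actual validation code):

```python
def check_dense_default(default_values, dense_shape):
    """Return True if the feature is required (i.e. the default is empty)."""
    if len(default_values) == 0:
        return True  # required: no fallback value supplied
    expected = 1
    for dim in dense_shape:
        expected *= dim  # number of elements implied by the dense shape
    if len(default_values) != expected:
        raise ValueError(
            "default does not match dense shape %r" % (dense_shape,))
    return False  # optional: the default is used when the key is missing

assert check_dense_default([], [2]) is True
assert check_dense_default([0.0, 0.0], [2]) is False
```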
def parse_sequence_example_v2_eager_fallback(serialized, debug_name, context_sparse_keys, context_dense_keys, context_ragged_keys, feature_list_sparse_keys, feature_list_dense_keys, feature_list_ragged_keys, feature_list_dense_missing_assumed_empty, context_dense_defaults, Ncontext_sparse, context_sparse_types, context_ragged_value_types, context_ragged_split_types, context_dense_shapes, Nfeature_list_sparse, Nfeature_list_dense, feature_list_dense_types, feature_list_sparse_types, feature_list_ragged_value_types, feature_list_ragged_split_types, feature_list_dense_shapes, name, ctx):
if Ncontext_sparse is None:
Ncontext_sparse = 0
Ncontext_sparse = _execute.make_int(Ncontext_sparse, "Ncontext_sparse")
if context_sparse_types is None:
context_sparse_types = []
if not isinstance(context_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_sparse_types)
context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
if context_ragged_value_types is None:
context_ragged_value_types = []
if not isinstance(context_ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_ragged_value_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_ragged_value_types)
context_ragged_value_types = [_execute.make_type(_t, "context_ragged_value_types") for _t in context_ragged_value_types]
if context_ragged_split_types is None:
context_ragged_split_types = []
if not isinstance(context_ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_ragged_split_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_ragged_split_types)
context_ragged_split_types = [_execute.make_type(_t, "context_ragged_split_types") for _t in context_ragged_split_types]
if context_dense_shapes is None:
context_dense_shapes = []
if not isinstance(context_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_shapes' argument to "
"'parse_sequence_example_v2' Op, not %r." % context_dense_shapes)
context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
if Nfeature_list_sparse is None:
Nfeature_list_sparse = 0
Nfeature_list_sparse = _execute.make_int(Nfeature_list_sparse, "Nfeature_list_sparse")
if Nfeature_list_dense is None:
Nfeature_list_dense = 0
Nfeature_list_dense = _execute.make_int(Nfeature_list_dense, "Nfeature_list_dense")
if feature_list_dense_types is None:
feature_list_dense_types = []
if not isinstance(feature_list_dense_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_dense_types)
feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
if feature_list_sparse_types is None:
feature_list_sparse_types = []
if not isinstance(feature_list_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_sparse_types)
feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
if feature_list_ragged_value_types is None:
feature_list_ragged_value_types = []
if not isinstance(feature_list_ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_ragged_value_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_ragged_value_types)
feature_list_ragged_value_types = [_execute.make_type(_t, "feature_list_ragged_value_types") for _t in feature_list_ragged_value_types]
if feature_list_ragged_split_types is None:
feature_list_ragged_split_types = []
if not isinstance(feature_list_ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_ragged_split_types' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_ragged_split_types)
feature_list_ragged_split_types = [_execute.make_type(_t, "feature_list_ragged_split_types") for _t in feature_list_ragged_split_types]
if feature_list_dense_shapes is None:
feature_list_dense_shapes = []
if not isinstance(feature_list_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_shapes' argument to "
"'parse_sequence_example_v2' Op, not %r." % feature_list_dense_shapes)
feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
_attr_Tcontext_dense, context_dense_defaults = _execute.convert_to_mixed_eager_tensors(context_dense_defaults, ctx)
serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
debug_name = _ops.convert_to_tensor(debug_name, _dtypes.string)
context_sparse_keys = _ops.convert_to_tensor(context_sparse_keys, _dtypes.string)
context_dense_keys = _ops.convert_to_tensor(context_dense_keys, _dtypes.string)
context_ragged_keys = _ops.convert_to_tensor(context_ragged_keys, _dtypes.string)
feature_list_sparse_keys = _ops.convert_to_tensor(feature_list_sparse_keys, _dtypes.string)
feature_list_dense_keys = _ops.convert_to_tensor(feature_list_dense_keys, _dtypes.string)
feature_list_ragged_keys = _ops.convert_to_tensor(feature_list_ragged_keys, _dtypes.string)
feature_list_dense_missing_assumed_empty = _ops.convert_to_tensor(feature_list_dense_missing_assumed_empty, _dtypes.bool)
_inputs_flat = [serialized, debug_name, context_sparse_keys, context_dense_keys, context_ragged_keys, feature_list_sparse_keys, feature_list_dense_keys, feature_list_ragged_keys, feature_list_dense_missing_assumed_empty] + list(context_dense_defaults)
_attrs = ("Ncontext_sparse", Ncontext_sparse, "Tcontext_dense",
_attr_Tcontext_dense, "context_sparse_types", context_sparse_types,
"context_ragged_value_types", context_ragged_value_types,
"context_ragged_split_types", context_ragged_split_types,
"context_dense_shapes", context_dense_shapes, "Nfeature_list_sparse",
Nfeature_list_sparse, "Nfeature_list_dense", Nfeature_list_dense,
"feature_list_dense_types", feature_list_dense_types,
"feature_list_sparse_types", feature_list_sparse_types,
"feature_list_ragged_value_types", feature_list_ragged_value_types,
"feature_list_ragged_split_types", feature_list_ragged_split_types,
"feature_list_dense_shapes", feature_list_dense_shapes)
_result = _execute.execute(b"ParseSequenceExampleV2", Ncontext_sparse +
len(context_sparse_types) + Ncontext_sparse +
len(context_dense_defaults) +
len(context_ragged_value_types) +
len(context_ragged_split_types) +
Nfeature_list_sparse +
len(feature_list_sparse_types) +
Nfeature_list_sparse +
len(feature_list_dense_types) +
Nfeature_list_dense +
len(feature_list_ragged_value_types) +
len(feature_list_ragged_split_types) +
len(feature_list_ragged_split_types),
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseSequenceExampleV2", _inputs_flat, _attrs, _result)
_result = [_result[:Ncontext_sparse]] + _result[Ncontext_sparse:]
_result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
_result = _result[:2] + [_result[2:2 + Ncontext_sparse]] + _result[2 + Ncontext_sparse:]
_result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
_result = _result[:4] + [_result[4:4 + len(context_ragged_value_types)]] + _result[4 + len(context_ragged_value_types):]
_result = _result[:5] + [_result[5:5 + len(context_ragged_split_types)]] + _result[5 + len(context_ragged_split_types):]
_result = _result[:6] + [_result[6:6 + Nfeature_list_sparse]] + _result[6 + Nfeature_list_sparse:]
_result = _result[:7] + [_result[7:7 + len(feature_list_sparse_types)]] + _result[7 + len(feature_list_sparse_types):]
_result = _result[:8] + [_result[8:8 + Nfeature_list_sparse]] + _result[8 + Nfeature_list_sparse:]
_result = _result[:9] + [_result[9:9 + len(feature_list_dense_types)]] + _result[9 + len(feature_list_dense_types):]
_result = _result[:10] + [_result[10:10 + Nfeature_list_dense]] + _result[10 + Nfeature_list_dense:]
_result = _result[:11] + [_result[11:11 + len(feature_list_ragged_value_types)]] + _result[11 + len(feature_list_ragged_value_types):]
_result = _result[:12] + [_result[12:12 + len(feature_list_ragged_split_types)]] + _result[12 + len(feature_list_ragged_split_types):]
_result = _result[:13] + [_result[13:]]
_result = _ParseSequenceExampleV2Output._make(_result)
return _result
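# In the fallback above, the second positional argument to _execute.execute
# is the total number of flat outputs: the sum of the fourteen output group
# sizes, where each sparse feature contributes indices, values, and shapes,
# and the two ragged split groups share the feature_list_ragged_split_types
# attr (hence that term appears twice). A hypothetical restatement of the
# arithmetic, assuming the N counts match their corresponding type lists:

```python
def v2_num_outputs(n_ctx_sparse, n_ctx_dense, n_ctx_ragged,
                   n_fl_sparse, n_fl_dense, n_fl_ragged):
    context = 3 * n_ctx_sparse       # sparse indices + values + shapes
    context += n_ctx_dense           # dense values
    context += 2 * n_ctx_ragged      # ragged values + row splits
    feature_list = 3 * n_fl_sparse   # sparse indices + values + shapes
    feature_list += 2 * n_fl_dense   # dense values + dense lengths
    feature_list += 3 * n_fl_ragged  # ragged values + outer + inner splits
    return context + feature_list

assert v2_num_outputs(1, 0, 0, 0, 0, 0) == 3
assert v2_num_outputs(0, 0, 0, 0, 1, 0) == 2
```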
_ParseSingleExampleOutput = collections.namedtuple(
"ParseSingleExample",
["sparse_indices", "sparse_values", "sparse_shapes", "dense_values"])
def parse_single_example(serialized, dense_defaults, num_sparse, sparse_keys, dense_keys, sparse_types, dense_shapes, name=None):
r"""Transforms a tf.Example proto (as a string) into typed tensors.
Args:
serialized: A `Tensor` of type `string`.
A vector containing a batch of binary serialized Example protos.
dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A list of Tensors (some may be empty), whose length matches
the length of `dense_keys`. dense_defaults[j] provides default values
when the example's feature_map lacks dense_key[j]. If an empty Tensor is
provided for dense_defaults[j], then the Feature dense_keys[j] is required.
The input type is inferred from dense_defaults[j], even when it's empty.
If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,
then the shape of dense_defaults[j] must match that of dense_shapes[j].
If dense_shapes[j] has an undefined major dimension (variable strides dense
feature), dense_defaults[j] must contain a single element:
the padding element.
num_sparse: An `int` that is `>= 0`.
The number of sparse features to be parsed from the example. This
must match the lengths of `sparse_keys` and `sparse_types`.
sparse_keys: A list of `strings`. A list of `num_sparse` strings.
The keys expected in the Examples' features associated with sparse values.
dense_keys: A list of `strings`.
The keys expected in the Examples' features associated with dense
values.
sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
A list of `num_sparse` types; the data types of data in each
Feature given in sparse_keys.
Currently the ParseSingleExample op supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
The shapes of data in each Feature given in dense_keys.
The length of this list must match the length of `dense_keys`. The
number of elements in the Feature corresponding to dense_keys[j] must
always equal dense_shapes[j].NumEntries(). If dense_shapes[j] ==
(D0, D1, ..., DN) then the shape of output Tensor dense_values[j]
will be (D0, D1, ..., DN). In the case dense_shapes[j] = (-1, D1,
..., DN), the shape of the output Tensor dense_values[j] will be (M,
D1, ..., DN), where M is the number of blocks of elements of length
D1 * ... * DN in the input.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values).
sparse_indices: A list of `num_sparse` `Tensor` objects with type `int64`.
sparse_values: A list of `Tensor` objects of type `sparse_types`.
sparse_shapes: A list of `num_sparse` `Tensor` objects with type `int64`.
dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseSingleExample", name,
tld.op_callbacks, serialized, dense_defaults, "num_sparse",
num_sparse, "sparse_keys", sparse_keys, "dense_keys", dense_keys,
"sparse_types", sparse_types, "dense_shapes", dense_shapes)
_result = _ParseSingleExampleOutput._make(_result)
return _result
except _core._FallbackException:
try:
return parse_single_example_eager_fallback(
serialized, dense_defaults, num_sparse=num_sparse,
sparse_keys=sparse_keys, dense_keys=dense_keys,
sparse_types=sparse_types, dense_shapes=dense_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
num_sparse = _execute.make_int(num_sparse, "num_sparse")
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_single_example' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_single_example' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_single_example' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_single_example' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseSingleExample", serialized=serialized,
dense_defaults=dense_defaults,
num_sparse=num_sparse, sparse_keys=sparse_keys,
dense_keys=dense_keys,
sparse_types=sparse_types,
dense_shapes=dense_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("num_sparse", _op._get_attr_int("num_sparse"), "sparse_keys",
_op.get_attr("sparse_keys"), "dense_keys",
_op.get_attr("dense_keys"), "sparse_types",
_op.get_attr("sparse_types"), "Tdense", _op.get_attr("Tdense"),
"dense_shapes", _op.get_attr("dense_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseSingleExample", _inputs_flat, _attrs, _result)
_result = [_result[:num_sparse]] + _result[num_sparse:]
_result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
_result = _result[:2] + [_result[2:2 + num_sparse]] + _result[2 + num_sparse:]
_result = _result[:3] + [_result[3:]]
_result = _ParseSingleExampleOutput._make(_result)
return _result
ParseSingleExample = tf_export("raw_ops.ParseSingleExample")(_ops.to_raw_op(parse_single_example))
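The four slicing steps above (duplicated in the eager fallback below) regroup the op's flat output list into one sublist per namedtuple field: `num_sparse` index tensors, then `len(sparse_types)` value tensors, then `num_sparse` shape tensors, then the dense values. A stand-alone sketch of that pattern, using placeholder strings instead of tensors (`regroup` is a hypothetical helper, not part of this module):

```python
import collections

_Out = collections.namedtuple(
    "ParseSingleExample",
    ["sparse_indices", "sparse_values", "sparse_shapes", "dense_values"])

def regroup(flat, num_sparse, num_sparse_types):
    """Split a flat op output list into the four namedtuple fields."""
    result = [flat[:num_sparse]] + flat[num_sparse:]
    result = result[:1] + [result[1:1 + num_sparse_types]] + result[1 + num_sparse_types:]
    result = result[:2] + [result[2:2 + num_sparse]] + result[2 + num_sparse:]
    result = result[:3] + [result[3:]]
    return _Out._make(result)

# Stand-in "tensors": 2 sparse features (indices, values, shapes) + 1 dense value.
out = regroup(["i0", "i1", "v0", "v1", "s0", "s1", "d0"],
              num_sparse=2, num_sparse_types=2)
```

After each step one more field is collapsed into a sublist, so the final list has exactly four entries for `_make`.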
def parse_single_example_eager_fallback(serialized, dense_defaults, num_sparse, sparse_keys, dense_keys, sparse_types, dense_shapes, name, ctx):
num_sparse = _execute.make_int(num_sparse, "num_sparse")
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_single_example' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_single_example' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_single_example' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_single_example' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
_attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
_inputs_flat = [serialized] + list(dense_defaults)
_attrs = ("num_sparse", num_sparse, "sparse_keys", sparse_keys,
"dense_keys", dense_keys, "sparse_types", sparse_types, "Tdense",
_attr_Tdense, "dense_shapes", dense_shapes)
_result = _execute.execute(b"ParseSingleExample", num_sparse +
len(sparse_types) + num_sparse +
len(dense_defaults), inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseSingleExample", _inputs_flat, _attrs, _result)
_result = [_result[:num_sparse]] + _result[num_sparse:]
_result = _result[:1] + [_result[1:1 + len(sparse_types)]] + _result[1 + len(sparse_types):]
_result = _result[:2] + [_result[2:2 + num_sparse]] + _result[2 + num_sparse:]
_result = _result[:3] + [_result[3:]]
_result = _ParseSingleExampleOutput._make(_result)
return _result
_ParseSingleSequenceExampleOutput = collections.namedtuple(
"ParseSingleSequenceExample",
["context_sparse_indices", "context_sparse_values", "context_sparse_shapes", "context_dense_values", "feature_list_sparse_indices", "feature_list_sparse_values", "feature_list_sparse_shapes", "feature_list_dense_values"])
def parse_single_sequence_example(serialized, feature_list_dense_missing_assumed_empty, context_sparse_keys, context_dense_keys, feature_list_sparse_keys, feature_list_dense_keys, context_dense_defaults, debug_name, context_sparse_types=[], feature_list_dense_types=[], context_dense_shapes=[], feature_list_sparse_types=[], feature_list_dense_shapes=[], name=None):
r"""Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors.
Args:
serialized: A `Tensor` of type `string`.
A scalar containing a binary serialized SequenceExample proto.
feature_list_dense_missing_assumed_empty: A `Tensor` of type `string`.
A vector listing the
FeatureList keys which may be missing from the SequenceExample. If the
associated FeatureList is missing, it is treated as empty. By default,
any FeatureList not listed in this vector must exist in the SequenceExample.
context_sparse_keys: A list of `Tensor` objects with type `string`.
A list of Ncontext_sparse string Tensors (scalars).
The keys expected in the Examples' features associated with context_sparse
values.
context_dense_keys: A list of `Tensor` objects with type `string`.
A list of Ncontext_dense string Tensors (scalars).
The keys expected in the SequenceExamples' context features associated with
dense values.
feature_list_sparse_keys: A list of `Tensor` objects with type `string`.
A list of Nfeature_list_sparse string Tensors
(scalars). The keys expected in the FeatureLists associated with sparse
values.
feature_list_dense_keys: A list of `Tensor` objects with type `string`.
A list of Nfeature_list_dense string Tensors (scalars).
The keys expected in the SequenceExamples' feature_lists associated
with lists of dense values.
context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A list of Ncontext_dense Tensors (some may be empty).
context_dense_defaults[j] provides default values
when the SequenceExample's context map lacks context_dense_keys[j].
If an empty Tensor is provided for context_dense_defaults[j],
then the Feature context_dense_keys[j] is required.
The input type is inferred from context_dense_defaults[j], even when it's
empty. If context_dense_defaults[j] is not empty, its shape must match
context_dense_shapes[j].
debug_name: A `Tensor` of type `string`.
A scalar containing the name of the serialized proto.
May contain, for example, table key (descriptive) name for the
corresponding serialized proto. This is purely useful for debugging
purposes, and the presence of values here has no effect on the output.
May also be an empty scalar if no name is available.
context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
A list of Ncontext_sparse types; the data types of data in
each context Feature given in context_sparse_keys.
Currently the ParseSingleSequenceExample op supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
A list of Ncontext_dense shapes; the shapes of data in
each context Feature given in context_dense_keys.
The number of elements in the Feature corresponding to context_dense_keys[j]
must always equal context_dense_shapes[j].NumEntries().
The shape of context_dense_values[j] will match context_dense_shapes[j].
feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
A list of Nfeature_list_sparse types; the data types
of data in each FeatureList given in feature_list_sparse_keys.
Currently the ParseSingleSequenceExample op supports DT_FLOAT (FloatList),
DT_INT64 (Int64List), and DT_STRING (BytesList).
feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.
A list of Nfeature_list_dense shapes; the shapes of
data in each FeatureList given in feature_list_dense_keys.
The shape of each Feature in the FeatureList corresponding to
feature_list_dense_keys[j] must always equal
feature_list_dense_shapes[j].NumEntries().
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values).
context_sparse_indices: A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`.
context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.
context_sparse_shapes: A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`.
context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.
feature_list_sparse_indices: A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`.
feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.
feature_list_sparse_shapes: A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`.
feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseSingleSequenceExample",
name, tld.op_callbacks, serialized,
feature_list_dense_missing_assumed_empty, context_sparse_keys,
context_dense_keys, feature_list_sparse_keys, feature_list_dense_keys,
context_dense_defaults, debug_name, "context_sparse_types",
context_sparse_types, "feature_list_dense_types",
feature_list_dense_types, "context_dense_shapes",
context_dense_shapes, "feature_list_sparse_types",
feature_list_sparse_types, "feature_list_dense_shapes",
feature_list_dense_shapes)
_result = _ParseSingleSequenceExampleOutput._make(_result)
return _result
except _core._FallbackException:
try:
return parse_single_sequence_example_eager_fallback(
serialized, feature_list_dense_missing_assumed_empty,
context_sparse_keys, context_dense_keys, feature_list_sparse_keys,
feature_list_dense_keys, context_dense_defaults, debug_name,
context_sparse_types=context_sparse_types,
feature_list_dense_types=feature_list_dense_types,
context_dense_shapes=context_dense_shapes,
feature_list_sparse_types=feature_list_sparse_types,
feature_list_dense_shapes=feature_list_dense_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
if not isinstance(context_sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % context_sparse_keys)
_attr_Ncontext_sparse = len(context_sparse_keys)
if not isinstance(context_dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % context_dense_keys)
_attr_Ncontext_dense = len(context_dense_keys)
if not isinstance(feature_list_sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_sparse_keys)
_attr_Nfeature_list_sparse = len(feature_list_sparse_keys)
if not isinstance(feature_list_dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_keys)
_attr_Nfeature_list_dense = len(feature_list_dense_keys)
if context_sparse_types is None:
context_sparse_types = []
if not isinstance(context_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_types' argument to "
"'parse_single_sequence_example' Op, not %r." % context_sparse_types)
context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
if feature_list_dense_types is None:
feature_list_dense_types = []
if not isinstance(feature_list_dense_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_types' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_types)
feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
if context_dense_shapes is None:
context_dense_shapes = []
if not isinstance(context_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_shapes' argument to "
"'parse_single_sequence_example' Op, not %r." % context_dense_shapes)
context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
if feature_list_sparse_types is None:
feature_list_sparse_types = []
if not isinstance(feature_list_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_types' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_sparse_types)
feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
if feature_list_dense_shapes is None:
feature_list_dense_shapes = []
if not isinstance(feature_list_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_shapes' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_shapes)
feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseSingleSequenceExample", serialized=serialized,
feature_list_dense_missing_assumed_empty=feature_list_dense_missing_assumed_empty,
context_sparse_keys=context_sparse_keys,
context_dense_keys=context_dense_keys,
feature_list_sparse_keys=feature_list_sparse_keys,
feature_list_dense_keys=feature_list_dense_keys,
context_dense_defaults=context_dense_defaults,
debug_name=debug_name,
context_sparse_types=context_sparse_types,
feature_list_dense_types=feature_list_dense_types,
context_dense_shapes=context_dense_shapes,
feature_list_sparse_types=feature_list_sparse_types,
feature_list_dense_shapes=feature_list_dense_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("Ncontext_sparse", _op._get_attr_int("Ncontext_sparse"),
"Ncontext_dense", _op._get_attr_int("Ncontext_dense"),
"Nfeature_list_sparse",
_op._get_attr_int("Nfeature_list_sparse"),
"Nfeature_list_dense", _op._get_attr_int("Nfeature_list_dense"),
"context_sparse_types", _op.get_attr("context_sparse_types"),
"Tcontext_dense", _op.get_attr("Tcontext_dense"),
"feature_list_dense_types",
_op.get_attr("feature_list_dense_types"),
"context_dense_shapes", _op.get_attr("context_dense_shapes"),
"feature_list_sparse_types",
_op.get_attr("feature_list_sparse_types"),
"feature_list_dense_shapes",
_op.get_attr("feature_list_dense_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseSingleSequenceExample", _inputs_flat, _attrs, _result)
_result = [_result[:_attr_Ncontext_sparse]] + _result[_attr_Ncontext_sparse:]
_result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
_result = _result[:2] + [_result[2:2 + _attr_Ncontext_sparse]] + _result[2 + _attr_Ncontext_sparse:]
_result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
_result = _result[:4] + [_result[4:4 + _attr_Nfeature_list_sparse]] + _result[4 + _attr_Nfeature_list_sparse:]
_result = _result[:5] + [_result[5:5 + len(feature_list_sparse_types)]] + _result[5 + len(feature_list_sparse_types):]
_result = _result[:6] + [_result[6:6 + _attr_Nfeature_list_sparse]] + _result[6 + _attr_Nfeature_list_sparse:]
_result = _result[:7] + [_result[7:]]
_result = _ParseSingleSequenceExampleOutput._make(_result)
return _result
ParseSingleSequenceExample = tf_export("raw_ops.ParseSingleSequenceExample")(_ops.to_raw_op(parse_single_sequence_example))
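In the eager fallback below, the first positional argument to `_execute.execute` is the total number of flat outputs: each sparse group contributes index, value, and shape tensors, and each dense default/type contributes one tensor. A hypothetical helper (for illustration only) makes that sum explicit:

```python
def parse_single_sequence_example_num_outputs(
        n_context_sparse, n_context_sparse_types, n_context_dense_defaults,
        n_feature_list_sparse, n_feature_list_sparse_types,
        n_feature_list_dense_types):
    # indices + values + shapes per sparse group; one tensor per dense entry.
    return (n_context_sparse + n_context_sparse_types + n_context_sparse
            + n_context_dense_defaults
            + n_feature_list_sparse + n_feature_list_sparse_types
            + n_feature_list_sparse + n_feature_list_dense_types)

# 2 context sparse features, 1 context dense, 3 feature-list sparse, 2 dense types.
total = parse_single_sequence_example_num_outputs(2, 2, 1, 3, 3, 2)
```

Note the two sparse-group sizes each appear twice (indices and shapes) plus once via their type lists (values).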
def parse_single_sequence_example_eager_fallback(serialized, feature_list_dense_missing_assumed_empty, context_sparse_keys, context_dense_keys, feature_list_sparse_keys, feature_list_dense_keys, context_dense_defaults, debug_name, context_sparse_types, feature_list_dense_types, context_dense_shapes, feature_list_sparse_types, feature_list_dense_shapes, name, ctx):
if not isinstance(context_sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % context_sparse_keys)
_attr_Ncontext_sparse = len(context_sparse_keys)
if not isinstance(context_dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % context_dense_keys)
_attr_Ncontext_dense = len(context_dense_keys)
if not isinstance(feature_list_sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_sparse_keys)
_attr_Nfeature_list_sparse = len(feature_list_sparse_keys)
if not isinstance(feature_list_dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_keys' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_keys)
_attr_Nfeature_list_dense = len(feature_list_dense_keys)
if context_sparse_types is None:
context_sparse_types = []
if not isinstance(context_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'context_sparse_types' argument to "
"'parse_single_sequence_example' Op, not %r." % context_sparse_types)
context_sparse_types = [_execute.make_type(_t, "context_sparse_types") for _t in context_sparse_types]
if feature_list_dense_types is None:
feature_list_dense_types = []
if not isinstance(feature_list_dense_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_types' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_types)
feature_list_dense_types = [_execute.make_type(_t, "feature_list_dense_types") for _t in feature_list_dense_types]
if context_dense_shapes is None:
context_dense_shapes = []
if not isinstance(context_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'context_dense_shapes' argument to "
"'parse_single_sequence_example' Op, not %r." % context_dense_shapes)
context_dense_shapes = [_execute.make_shape(_s, "context_dense_shapes") for _s in context_dense_shapes]
if feature_list_sparse_types is None:
feature_list_sparse_types = []
if not isinstance(feature_list_sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_sparse_types' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_sparse_types)
feature_list_sparse_types = [_execute.make_type(_t, "feature_list_sparse_types") for _t in feature_list_sparse_types]
if feature_list_dense_shapes is None:
feature_list_dense_shapes = []
if not isinstance(feature_list_dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'feature_list_dense_shapes' argument to "
"'parse_single_sequence_example' Op, not %r." % feature_list_dense_shapes)
feature_list_dense_shapes = [_execute.make_shape(_s, "feature_list_dense_shapes") for _s in feature_list_dense_shapes]
_attr_Tcontext_dense, context_dense_defaults = _execute.convert_to_mixed_eager_tensors(context_dense_defaults, ctx)
serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
feature_list_dense_missing_assumed_empty = _ops.convert_to_tensor(feature_list_dense_missing_assumed_empty, _dtypes.string)
context_sparse_keys = _ops.convert_n_to_tensor(context_sparse_keys, _dtypes.string)
context_dense_keys = _ops.convert_n_to_tensor(context_dense_keys, _dtypes.string)
feature_list_sparse_keys = _ops.convert_n_to_tensor(feature_list_sparse_keys, _dtypes.string)
feature_list_dense_keys = _ops.convert_n_to_tensor(feature_list_dense_keys, _dtypes.string)
debug_name = _ops.convert_to_tensor(debug_name, _dtypes.string)
_inputs_flat = [serialized, feature_list_dense_missing_assumed_empty] + list(context_sparse_keys) + list(context_dense_keys) + list(feature_list_sparse_keys) + list(feature_list_dense_keys) + list(context_dense_defaults) + [debug_name]
_attrs = ("Ncontext_sparse", _attr_Ncontext_sparse, "Ncontext_dense",
_attr_Ncontext_dense, "Nfeature_list_sparse", _attr_Nfeature_list_sparse,
"Nfeature_list_dense", _attr_Nfeature_list_dense, "context_sparse_types",
context_sparse_types, "Tcontext_dense", _attr_Tcontext_dense,
"feature_list_dense_types", feature_list_dense_types,
"context_dense_shapes", context_dense_shapes, "feature_list_sparse_types",
feature_list_sparse_types, "feature_list_dense_shapes",
feature_list_dense_shapes)
_result = _execute.execute(b"ParseSingleSequenceExample",
_attr_Ncontext_sparse + len(context_sparse_types)
+ _attr_Ncontext_sparse +
len(context_dense_defaults) +
_attr_Nfeature_list_sparse +
len(feature_list_sparse_types) +
_attr_Nfeature_list_sparse +
len(feature_list_dense_types),
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseSingleSequenceExample", _inputs_flat, _attrs, _result)
_result = [_result[:_attr_Ncontext_sparse]] + _result[_attr_Ncontext_sparse:]
_result = _result[:1] + [_result[1:1 + len(context_sparse_types)]] + _result[1 + len(context_sparse_types):]
_result = _result[:2] + [_result[2:2 + _attr_Ncontext_sparse]] + _result[2 + _attr_Ncontext_sparse:]
_result = _result[:3] + [_result[3:3 + len(context_dense_defaults)]] + _result[3 + len(context_dense_defaults):]
_result = _result[:4] + [_result[4:4 + _attr_Nfeature_list_sparse]] + _result[4 + _attr_Nfeature_list_sparse:]
_result = _result[:5] + [_result[5:5 + len(feature_list_sparse_types)]] + _result[5 + len(feature_list_sparse_types):]
_result = _result[:6] + [_result[6:6 + _attr_Nfeature_list_sparse]] + _result[6 + _attr_Nfeature_list_sparse:]
_result = _result[:7] + [_result[7:]]
_result = _ParseSingleSequenceExampleOutput._make(_result)
return _result
@_dispatch.add_dispatch_list
@tf_export('io.parse_tensor', v1=['io.parse_tensor', 'parse_tensor'])
@deprecated_endpoints('parse_tensor')
def parse_tensor(serialized, out_type, name=None):
r"""Transforms a serialized tensorflow.TensorProto proto into a Tensor.
Args:
serialized: A `Tensor` of type `string`.
A scalar string containing a serialized TensorProto proto.
out_type: A `tf.DType`.
The type of the serialized tensor. The provided type must match the
type of the serialized tensor and no implicit conversion will take place.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseTensor", name,
tld.op_callbacks, serialized, "out_type", out_type)
return _result
except _core._FallbackException:
try:
return parse_tensor_eager_fallback(
serialized, out_type=out_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
parse_tensor, serialized=serialized, out_type=out_type,
name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
out_type = _execute.make_type(out_type, "out_type")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseTensor", serialized=serialized, out_type=out_type, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
parse_tensor, serialized=serialized, out_type=out_type, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("out_type", _op._get_attr_type("out_type"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseTensor", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ParseTensor = tf_export("raw_ops.ParseTensor")(_ops.to_raw_op(parse_tensor))
def parse_tensor_eager_fallback(serialized, out_type, name, ctx):
out_type = _execute.make_type(out_type, "out_type")
serialized = _ops.convert_to_tensor(serialized, _dtypes.string)
_inputs_flat = [serialized]
_attrs = ("out_type", out_type)
_result = _execute.execute(b"ParseTensor", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseTensor", _inputs_flat, _attrs, _result)
_result, = _result
return _result
@_dispatch.add_dispatch_list
@tf_export('io.serialize_tensor', v1=['io.serialize_tensor', 'serialize_tensor'])
@deprecated_endpoints('serialize_tensor')
def serialize_tensor(tensor, name=None):
r"""Transforms a Tensor into a serialized TensorProto proto.
Args:
tensor: A `Tensor`. A Tensor of type `T`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SerializeTensor", name,
tld.op_callbacks, tensor)
return _result
except _core._FallbackException:
try:
return serialize_tensor_eager_fallback(
tensor, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
serialize_tensor, tensor=tensor, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SerializeTensor", tensor=tensor, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
serialize_tensor, tensor=tensor, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SerializeTensor", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SerializeTensor = tf_export("raw_ops.SerializeTensor")(_ops.to_raw_op(serialize_tensor))
def serialize_tensor_eager_fallback(tensor, name, ctx):
_attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx)
_inputs_flat = [tensor]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"SerializeTensor", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SerializeTensor", _inputs_flat, _attrs, _result)
_result, = _result
return _result
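Assuming a working TensorFlow installation, `SerializeTensor` and `ParseTensor` form an exact round trip through the public `tf.io` endpoints exported above; the `out_type` passed to `parse_tensor` must match the serialized dtype exactly, since no implicit conversion is performed. A minimal sketch:

```python
import tensorflow as tf  # assumed available in the environment

t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)
blob = tf.io.serialize_tensor(t)  # scalar string holding TensorProto bytes
restored = tf.io.parse_tensor(blob, out_type=tf.int32)  # dtype must match exactly
```

Passing a mismatched `out_type` (e.g. `tf.float32` here) raises an error rather than casting.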
def string_to_number(string_tensor, out_type=_dtypes.float32, name=None):
r"""Converts each string in the input Tensor to the specified numeric type.
(Note that int32 overflow results in an error while float overflow
results in a rounded value.)
Example:
>>> strings = ["5.0", "3.0", "7.0"]
>>> tf.strings.to_number(strings)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([5., 3., 7.], dtype=float32)>
Args:
string_tensor: A `Tensor` of type `string`.
out_type: An optional `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.float32`.
The numeric type to interpret each string in `string_tensor` as.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `out_type`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "StringToNumber", name,
tld.op_callbacks, string_tensor, "out_type", out_type)
return _result
except _core._FallbackException:
try:
return string_to_number_eager_fallback(
string_tensor, out_type=out_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
# Add nodes to the TensorFlow graph.
if out_type is None:
out_type = _dtypes.float32
out_type = _execute.make_type(out_type, "out_type")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"StringToNumber", string_tensor=string_tensor, out_type=out_type,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("out_type", _op._get_attr_type("out_type"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"StringToNumber", _inputs_flat, _attrs, _result)
_result, = _result
return _result
StringToNumber = tf_export("raw_ops.StringToNumber")(_ops.to_raw_op(string_to_number))
def string_to_number_eager_fallback(string_tensor, out_type, name, ctx):
if out_type is None:
out_type = _dtypes.float32
out_type = _execute.make_type(out_type, "out_type")
string_tensor = _ops.convert_to_tensor(string_tensor, _dtypes.string)
_inputs_flat = [string_tensor]
_attrs = ("out_type", out_type)
_result = _execute.execute(b"StringToNumber", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"StringToNumber", _inputs_flat, _attrs, _result)
_result, = _result
return _result
# -*- coding: utf-8 -*-
# Resource object code
#
# Created by: The Resource Compiler for PyQt5 (Qt v5.11.2)
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore
qt_resource_data = b"\
\x00\x00\x26\xf8\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x18\x00\x00\x00\x18\x08\x06\x00\x00\x00\xe0\x77\x3d\xf8\
\x00\x00\x11\xef\x7a\x54\x58\x74\x52\x61\x77\x20\x70\x72\x6f\x66\
\x69\x6c\x65\x20\x74\x79\x70\x65\x20\x65\x78\x69\x66\x00\x00\x78\
\xda\xad\x99\x59\x72\xde\xb8\x92\x46\xdf\xb1\x8a\x5e\x02\xc6\x44\
\x62\x39\x18\x23\x7a\x07\xbd\xfc\x3e\x09\x52\xb2\x6c\xcb\x55\x75\
\xa3\xae\x65\xe9\x1f\x48\x82\x40\x0e\xdf\x00\xba\xfd\x7f\xff\x7b\
\xdc\xff\xf0\x2f\xb7\x98\x5d\x2e\x55\xa5\x89\x78\xfe\xe5\xc6\x57\
\x9d\x37\xea\x9f\x7f\xfd\xfe\x0d\x3e\xdf\xbf\xcf\x87\x8f\x63\xe1\
\xe7\xef\xdd\xe7\x81\xc8\x57\x89\xd7\xf4\x7c\x54\x79\xcf\xff\xf8\
\x3e\x7c\x0e\xf0\xbc\x74\xde\x95\x2f\x03\xe9\x7c\x0f\x8c\x9f\x0f\
\xb4\xfc\x8e\xaf\xbf\x0c\xf4\xde\x28\xd9\x8c\x22\x6f\xd6\x3b\x50\
\x7b\x07\x4a\xf1\x39\x10\xde\x01\xfa\xb3\x2c\x2f\x4d\xeb\xd7\x25\
\x8c\xfd\xbc\xbe\xd7\x3f\x61\xe0\xd7\xd9\x1f\xad\xe1\x7e\x1f\xda\
\x7b\xec\x97\xcf\xb9\x12\xbd\x55\xb8\x4f\x8a\x71\xa7\x90\x3c\x7f\
\x63\x8a\xcf\x04\x92\xfd\x06\x97\x3a\x6f\x32\x7f\x43\x12\x4e\xe4\
\x2f\xef\x33\xaf\x3d\xa5\xf4\x11\x13\x02\xf2\x5d\x9c\xfc\x97\x59\
\xb9\x5f\xb3\xf2\xf9\x2e\xfc\xe1\xfb\x5f\x92\x92\xe4\xf9\xde\xf1\
\xc5\xcf\xc1\x94\xcf\xd7\x6f\xbf\x0f\xe5\xfb\xe0\xbb\x1b\xe2\x2f\
\x77\x4e\xf3\xf3\xce\x3f\x7d\x7f\x7a\x08\xbf\x2e\xe7\xe3\xf7\x9c\
\xa5\xee\x9c\xfd\xac\xae\x67\x21\xa4\xf2\x2e\xea\x33\x3a\x77\x90\
\xb3\x06\x21\x4f\xf7\x32\xe1\xa7\xf2\x5b\x78\x5f\xef\x4f\xe3\x47\
\x1d\xd5\x3b\x49\xf9\xf2\xd3\x0f\x7e\x66\x68\x21\x92\x96\x13\x72\
\x58\xa1\x87\x13\xf6\x7d\x9d\x61\x32\xc5\x1c\x77\xac\xbc\xc6\x38\
\x49\x94\x7d\xa7\xa9\xc6\x16\x67\xf2\x8e\xfc\x64\xfb\x09\x27\xd6\
\xd4\xd2\x4a\x4a\x32\x27\xe9\x4d\x7c\x1b\x3f\xe7\x12\xee\x7d\xdb\
\xbd\xdd\xa4\x1b\x96\x5f\x81\x33\x63\x60\xb0\x60\xa5\xe0\xec\xcf\
\x7f\xe3\xe7\x8f\x03\x9d\x63\x25\x4f\x80\xf5\x33\x56\xcc\x2b\x5a\
\x11\x32\x0d\xcb\x9c\xfd\xe5\x2c\x12\x12\xce\x47\x1d\x95\x1b\xe0\
\x8f\x9f\x5f\xff\x59\x5e\x13\x19\x2c\x37\xcc\xca\x02\xbb\x1f\xcf\
\x10\xa3\x84\xb7\xb6\xac\x8e\xd2\x4d\x74\xe2\xc4\xc2\xeb\xd3\x6b\
\xa1\xae\x77\x00\x42\xc4\xbd\x0b\x93\x09\x89\x0c\x78\x09\xa9\x04\
\x09\xbe\xc6\x58\x43\x20\x8e\x4a\x7e\x3a\x03\x69\x4c\x39\x0e\x52\
\x10\x4a\x89\x8b\x59\xc6\x9c\xe8\x96\x1a\x35\xda\xbd\xb9\xa6\x86\
\x7b\x6e\x2c\xf1\xf9\x1a\xcc\x22\x11\x85\x66\xaa\xa4\xa6\xa5\x4e\
\xae\x0c\xd8\xa8\x9f\x9a\x95\x1a\xea\x25\x95\x5c\x4a\x91\x52\x8b\
\x96\x56\xba\x24\xc9\x52\x44\xa4\x8a\x81\x5f\xaf\xa9\xe6\x5a\xaa\
\xd4\x5a\xb5\xb6\xda\x35\x69\xd6\xa2\xa2\x55\xd5\x69\xd3\xde\x62\
\x4b\x80\x63\x69\xd2\x6a\xd3\xd6\x5a\xef\xdc\xb4\x33\x72\xe7\xea\
\xce\x09\xbd\x8f\x38\xd2\xc8\xa3\x0c\x19\x75\xe8\x68\xa3\x4f\xca\
\x67\xe6\x59\xa6\xcc\x3a\xd5\xcd\x36\xfb\x8a\x2b\x2d\x70\x62\xc9\
\xaa\x4b\x57\x5b\x7d\x87\x4d\x29\xed\xbc\xcb\x96\x5d\xb7\xee\xb6\
\xfb\xa1\xd4\x4e\x3a\xf9\x94\x23\xa7\x1e\x3d\xed\xf4\xcf\xac\x05\
\xf7\xa4\xf5\xb7\x9f\x7f\x9e\xb5\xf0\x91\xb5\x78\x33\x65\x27\xd6\
\xcf\xac\x71\x69\xad\x1f\x43\x04\x83\x93\x62\x39\x23\x63\x31\x07\
\x32\x5e\x2d\x03\x14\x74\xb4\x9c\x79\x0d\x39\x47\x67\xa9\xb3\x9c\
\xf9\x16\xe9\x8a\x12\x99\x65\xb1\xe4\xac\x60\x19\x23\x83\x79\x87\
\x58\x4e\xf8\xcc\xdd\x8f\xcc\xfd\x94\x37\x97\xf3\xbf\xca\x5b\xfc\
\xc8\x9c\xb3\xd4\xfd\x37\x32\xe7\x2c\x75\x7f\xc8\xdc\xef\x79\xfb\
\x26\x6b\xcb\xd8\x66\xfa\xe4\x6e\x86\xac\x0d\x2d\xa8\x3e\xd1\x7e\
\x9c\xb0\xb5\x47\xed\x46\x6a\xff\xf8\xd5\x1d\xe2\x7b\xb4\xc8\xea\
\x89\x8c\x9e\xb9\xce\x61\xb0\x45\x72\xed\x75\x47\xe9\x45\xcf\x0e\
\x4d\xb4\xef\xd1\x12\x01\x3a\xab\x9e\x5d\x7b\xac\x87\x14\x94\x59\
\x4f\xd9\xa4\xc0\x95\xbd\xf8\x2e\x31\x95\x35\xce\xce\x12\x87\x1e\
\xa6\x9c\x85\x59\xfb\x15\xc7\xca\x7d\xa4\x34\xd3\xda\xbe\x2e\xc0\
\x2e\x8f\xd3\xe2\xe6\xbe\x75\x8c\x4a\x0a\x74\x4a\xce\xba\xbb\x03\
\xa5\x9b\xaf\x23\xea\xc9\x7d\xb7\x7a\x46\xda\xf1\x6c\x62\xb1\x25\
\x9d\x19\xc7\xdc\x33\xaf\xac\x8c\xd7\xe6\x0a\x89\x48\xa5\x2d\xad\
\xac\xd0\x46\x2e\xd2\xce\x91\x9a\xd3\x59\xdb\xd5\x33\x99\x9d\x52\
\x70\x8b\x21\x7c\xfd\x32\x46\x9b\x3a\xc2\xae\xed\x04\x96\xb1\x73\
\x59\xa3\xda\x72\xfb\x19\xe5\x2e\xdf\x87\x3d\x26\x19\x5c\xa7\xce\
\xcc\xd2\x18\x3b\x2e\x19\xf9\x28\xab\xdc\x52\x86\x52\x5a\x8d\xf9\
\x7a\xda\xd1\x87\xa9\x50\x36\x64\xc1\x29\x4d\x6d\x00\x02\x5a\x7a\
\xb4\x77\xf3\xac\x3b\xe4\x21\xa4\x8e\x98\xee\x48\x28\x00\x71\x22\
\xdb\x56\xea\x75\x8e\x98\x6a\xaf\x05\xa2\xa0\x0e\xda\x2e\x09\x11\
\xd5\x59\x36\xb3\x14\x10\xcb\x92\x50\xb7\xdc\x21\x36\x57\xdf\x79\
\x3a\xf0\x6e\x4a\xd5\x31\x08\xa3\x2c\x9f\x4a\xd1\x4e\x2d\x95\x8d\
\x00\x23\xc0\xc6\x4d\xb1\xf9\xbc\xf3\x48\x7e\x4f\x5f\xbb\xef\x7b\
\xf6\xaa\xcd\xee\x54\xe6\xe0\x7a\x02\xa8\xd3\xb5\xd5\x68\xa2\x5c\
\xf8\x5f\x33\x62\xa1\xdc\xf7\xfe\x3f\x7e\x75\xef\x9b\xb0\xee\xfa\
\xef\xe4\x09\xb5\x4d\x7d\x26\x9b\x78\xba\x75\x05\xf9\xbd\x2b\xfb\
\xf5\x20\xcb\x63\x52\xa4\xbf\x98\x88\x1c\x56\x54\xb3\x86\x33\xc4\
\x62\x02\xd8\xf8\xbe\x06\x95\x48\x95\xb7\xe1\xed\xf4\x41\x4f\xdc\
\x5b\xe5\x27\x4a\x1f\x27\xa0\xed\xee\x29\xee\xdc\x33\x18\x97\x33\
\x88\x28\xb3\xab\xbc\x0e\x4a\x8b\x31\x53\xa5\xe9\x48\x8d\xf4\xb5\
\xfa\xd2\x3c\x22\x60\xa1\x92\xcf\x58\xba\x73\xeb\x4c\xb0\x6d\xce\
\xb8\xc1\xde\x7b\x31\x4b\x1b\x0a\xcc\x9a\x2a\x14\xab\x20\x28\x19\
\x72\x42\xea\xa4\x91\x78\xd2\x23\x85\xaa\x43\x05\x9f\x52\xce\xdc\
\x0c\xba\x81\x81\x4c\xab\x75\xbe\x6e\xdc\xcb\x89\x8d\x5b\x26\x15\
\x4b\xed\xf3\x86\x2e\xf4\xd0\xd1\x46\x26\x14\xb0\xce\xb4\x5d\x2d\
\xf3\x44\x80\x81\xaa\x9c\x83\x5a\x1e\xa0\xf7\x96\x27\x66\x65\x80\
\x61\xc0\x64\x71\x4a\x19\xcf\x26\xa7\x30\x1d\x6a\x12\x78\x09\xa7\
\xab\x8c\xd1\x69\x81\x5a\x64\xa4\x02\x32\xc5\xe9\x81\xd5\x08\xc4\
\x41\x8c\x81\xc2\x94\x92\xf6\x28\x6b\x17\x53\x1a\x6b\x87\xe3\x32\
\x55\xd3\x0a\x88\x96\x5a\xa5\x25\x91\x49\x3b\xf5\x9e\x66\x39\xcc\
\xc3\x2b\xcb\xd6\x80\x70\xa2\xeb\x41\xcf\x91\xa6\x9c\xe8\x91\xd4\
\x40\x9f\xb6\x49\x81\x9d\x5a\xc2\xe6\xc6\x80\x3f\x0d\x3e\x76\x64\
\xe2\xb4\x56\x7b\xea\x15\xe8\x22\x73\xdd\xa7\x55\x1b\x40\x37\x92\
\x0a\x10\xda\x5b\x3e\xad\xe7\x2a\xe5\xa2\xa9\xec\xf9\xe0\xd1\xd6\
\x94\xbd\xeb\xf7\x03\xd9\xfc\xbb\x57\x03\x2e\x95\x9b\x21\x34\xf2\
\x9a\x86\x4b\xd6\xc6\x1b\x7e\xa2\x0f\xdc\xa0\x82\x2a\x60\xe4\xa5\
\x4e\x56\x77\x6b\x84\x09\xdd\x0b\xba\x67\xa2\x61\x27\x2a\x24\x4a\
\x29\x68\xf5\x34\x62\x21\x5f\x64\x26\xc0\x0b\x93\xf3\x13\x1a\xa8\
\x21\x18\x5d\xdc\x55\x89\x2e\x39\x23\x8f\xc5\xd7\x53\x77\x7e\x91\
\xa5\x6a\xaf\x11\x36\xe9\x8d\x60\x0e\xad\x44\x37\x5a\x0d\xa4\x53\
\x84\x1a\x68\x73\x78\x2c\x10\x69\x10\x88\xca\x15\xa1\x88\x50\xf5\
\x36\x83\xda\xa9\x48\x51\xe6\x08\x92\x53\x29\xa7\x8e\x33\xa5\x03\
\x2f\xd6\x44\x1c\x43\xc9\x2c\x20\x43\x74\x29\x90\x89\x5d\x99\xcc\
\x5a\xe0\xc0\x5c\xdc\x37\x0d\x18\x4a\x1c\x14\x0f\x00\x03\x53\x00\
\x38\x80\xd1\x81\x3b\x47\xdb\xb9\x1f\xa3\x25\x58\x4d\xd0\x4a\x2b\
\xaa\x31\x9e\x47\xa2\x26\x75\x91\x6a\xc2\x2b\x80\xed\xab\xa4\x2e\
\x17\x36\x5b\x3a\x89\x9a\xa3\x5b\x8e\x58\x7b\x7e\xf9\xc6\xf7\x49\
\xb4\xc8\x25\x2d\xd5\x6e\xa7\x05\xae\x04\xd3\x1c\x90\x76\x81\x29\
\x83\x65\x47\xfc\x81\xbe\xc9\xc7\xb6\x20\x0c\x6a\x1f\xaa\x56\x20\
\x7f\x83\x8f\x08\xeb\x0d\xc7\x89\x0c\x4a\xa0\xa8\x82\xf6\x59\x66\
\x37\x8b\x10\xeb\x70\xbd\x9d\x02\xea\xcf\x68\x08\x9f\x0c\xe1\x49\
\x2e\xa5\x06\x53\xc3\xcc\xda\xa9\xf8\xdd\xa5\xee\x40\xb9\x80\x6b\
\x07\x42\x5d\x1f\xf0\x7a\xd1\x55\xda\x6d\x45\x37\xe4\x81\x07\xe6\
\x7e\x67\x06\x87\xd2\xa4\x07\x04\x3c\x99\x3c\xad\x9c\x37\x0d\x01\
\x74\x02\xf3\x4f\xf9\xcd\x54\xbe\xa9\x36\x77\xdf\xd0\xe1\x2f\xac\
\x2f\x60\xe9\xf2\xa4\x56\xbb\x67\xbd\xf7\xa4\xf1\x0e\xc5\xa5\x55\
\x0e\xf0\xbd\xc2\x16\xb8\xa3\x35\x38\xa7\xca\x40\x5c\x60\x74\x17\
\xd2\x2f\x68\xa4\xb9\x86\x86\xdd\xca\x08\xa5\xd2\xe2\x67\x59\x24\
\x49\x3e\x08\x0d\xc6\x75\x31\xc2\xcd\x7a\xe2\x54\xb0\x1e\xac\xb6\
\xf6\x05\xab\x93\xac\xb5\xe2\x0e\xda\xa2\xa3\x58\x40\x22\x2b\x49\
\x24\x84\xd6\xd5\x7b\x5f\x10\x46\x07\x46\x41\x2a\xe2\xb3\x25\x96\
\x8f\xc2\x00\xae\x20\xc8\x14\xa7\x04\xc6\x9d\xbb\xa6\xb9\x28\x13\
\x12\xd4\xdc\x83\x43\x82\x39\xc9\x5d\xca\xb5\xec\x76\x95\x99\xdb\
\x5f\x5f\x91\x3a\xdc\x13\xa5\x43\x44\xab\xe7\xd6\x39\x20\x6d\x62\
\x5a\xb1\x9d\x8c\x3e\xea\xf8\x57\x28\xd1\xd3\x53\x25\x21\xea\x8e\
\x58\x9b\x3f\x17\xd0\x86\x88\x2a\x60\x91\x79\xd6\x20\x15\xbf\xa3\
\x38\xf9\x02\xbe\xce\x0b\x3f\x17\x2d\x88\x01\xdd\x4f\x31\xe4\xcc\
\xc5\xc7\xaa\x5d\x22\xb6\xad\x86\x49\xcd\xad\x0e\xfb\xee\x47\x9b\
\x64\x88\x15\x66\x46\x55\x48\x45\x67\xb6\xc3\x92\x6c\xa6\xe5\xe4\
\x89\x2e\xb1\xb7\xee\xbb\x55\x60\x1f\x6a\x41\xa6\x55\x2b\x1b\x44\
\xc7\x19\xbb\x1e\xb4\xec\x44\x8f\xf9\x11\x6a\x1c\xa6\x84\xf5\x19\
\xbb\x50\x58\x28\x04\x87\xa6\x14\x04\xcd\x62\x49\xdd\x56\x30\x49\
\x8c\x79\xbd\x81\xbf\x3b\xcc\xc8\x1a\x7e\xc0\x1b\x1b\xb4\x41\xf5\
\xe0\xd8\xa8\xad\x52\x8c\xb9\x26\xa0\xbf\x80\x0a\xb2\x97\x97\x23\
\x18\xfe\xe3\x33\x0e\x58\xcb\xb5\x24\x1b\xa5\x57\x98\xc3\x40\x31\
\xa7\xd0\x76\xec\x73\x6f\x70\x70\xd6\xaf\x0d\x1e\xa1\x1c\xe2\x88\
\xf2\x15\x71\xd8\xc2\x6e\x1b\x1a\x90\x81\x68\xa4\x27\xf2\x64\x40\
\x2a\x05\xcd\x92\xdb\xd0\x15\x81\xb0\x2f\xa7\x88\x24\xed\x72\x47\
\x45\x24\x83\xca\x3d\xaf\xc9\x44\x1d\x35\x60\x59\x03\x09\x8f\x91\
\xed\x92\x3e\x4a\x69\x6d\x6c\xc9\x66\xad\x57\x4e\x46\x96\x0b\x12\
\x49\xa8\xae\x51\x37\xdc\xc6\xec\x67\x79\xd1\x7d\xab\xa1\x7b\x18\
\xee\xd2\xeb\x37\xb0\xf0\x15\x15\xf6\x9c\x17\xf8\xe8\x04\x94\xdd\
\xcc\x35\xac\x47\x75\x0c\x98\xe0\x2d\x56\xf7\x03\xce\x0e\x00\xb5\
\x12\x3d\x0f\xcb\xa0\x13\x49\xda\xda\x06\x99\xb5\x7f\xf4\x8c\x61\
\x28\x53\xd5\x93\x56\xb0\x25\xb2\x2c\x31\xc2\x03\x1d\x1a\x30\x42\
\x73\x55\x2e\xeb\xa8\x8c\xbc\x4d\xd5\x91\x79\xc2\xdf\x07\x6e\x00\
\x1e\xa4\x01\x18\x64\x5d\x24\xb7\xf4\xe8\xec\xf9\xe3\xa3\xe5\xe6\
\x22\x2a\x1a\x92\x76\x22\x66\x23\x85\x2b\xd0\x6e\x7a\x38\x1d\xc1\
\xab\x81\xb3\x5a\x7f\x02\x6a\x1b\x4a\x16\x55\xfb\xf4\xa3\x49\x32\
\xba\x3f\xd1\x53\xd6\x1d\x66\xb3\x70\x12\x30\x9e\x1d\x10\x00\x1d\
\x29\x4f\x0f\x99\x3a\x15\xc6\xe1\xaa\xb4\x2c\xc9\x22\xe1\x66\x0b\
\x17\x6b\x5d\xb9\x2c\x8d\x5f\x4f\x72\x76\x16\xd5\x97\x4c\x66\xa3\
\xe1\xf0\x18\x40\x37\x0d\xdd\x8d\xdb\x43\xb1\x6c\xf5\xbf\xca\xd6\
\x47\xb2\xdc\xbf\xcd\xd6\xd5\x7a\x33\x31\xd0\x3b\x88\xe1\x0f\x83\
\x8c\x67\x90\x6c\x0c\xdd\x4f\xf2\xb0\x74\xee\xc1\xee\x56\xbd\x7f\
\x24\x71\x4f\xc6\xd3\xb4\x3e\x32\xe9\x25\xa8\x13\xaa\x0b\x46\x78\
\xa9\xc5\x81\xe8\x46\x28\x95\x66\xaa\xbc\x80\xf6\xdc\x15\xb9\x96\
\x96\x11\xad\xe2\x13\x05\x52\x3f\x50\x40\x02\x61\x7f\x55\xa2\xa1\
\x67\x60\xa4\x3d\x5d\x01\xe1\x09\x57\x0b\xc5\xbf\xe4\xe9\x8a\x27\
\x43\xe4\xa6\x86\xdb\x2d\x7f\x75\x92\xfb\x71\x56\xf4\x37\x43\xe4\
\x2f\x00\x45\xe9\xb6\x0e\xfd\x1c\x1b\x99\x6c\x1d\xb9\x6f\x7e\x37\
\xd0\x43\x90\x1f\x3c\xda\x23\x97\xc4\x65\xaa\xa9\x13\x12\x47\xa6\
\x4a\xcc\x1d\xa4\x21\xc7\xd5\x2a\x84\xe0\xef\x6d\x6d\x46\x4d\xd7\
\x0c\x59\xcb\x0c\xd3\x93\x43\xd4\xe3\x12\x94\x26\x6b\x99\x98\xc0\
\x05\x67\x41\x5c\x94\x9f\xc9\x60\xea\x88\x04\x5e\x67\x41\x00\xcf\
\x55\xd3\x28\xd0\xab\xa6\xa1\x26\x53\xd3\x6b\x3c\x6a\x1a\x4f\x10\
\x08\x2f\xe6\x2a\xa1\x40\x77\x2f\xd8\x4d\x54\x6e\x8b\x11\x73\x86\
\x83\x8c\xe6\xd7\xe9\x18\x44\x1c\x5a\xe3\x6a\x64\x2d\x90\x60\xc6\
\x23\x24\x13\x68\x99\xc2\xc2\x58\x4f\xab\x86\x75\x19\x71\x2c\xfa\
\xeb\xd3\x3e\x9a\x2a\x0e\x62\xbd\x66\xd2\x7d\xea\xa5\x48\xbc\xd8\
\x23\xb1\x42\x67\x61\x8b\xde\xe5\xd6\xc0\x07\xb0\x12\xae\x38\xa4\
\xca\x58\x4e\xd9\x79\x59\x39\x72\x12\xd0\x87\xe0\xd7\xed\xf0\x01\
\xb0\x12\x6a\x2c\x26\x79\x39\x99\x04\x7f\xdc\x8d\x7c\xe2\x6a\xf2\
\x03\x73\x24\xcd\xfa\xe4\xb7\x96\x23\x33\x7e\xba\xf5\x66\xf4\x9b\
\xb3\x6e\xcb\xa5\xd2\x4c\xc4\x73\x76\xb5\xde\x45\x1a\x5b\x0d\xac\
\x6d\x42\xcb\x60\xfe\xee\x45\xac\x33\x90\xc7\xd5\xf6\x14\x50\x24\
\x25\x01\x33\x98\xfb\x75\x13\xca\x6c\x46\x9c\x58\x7d\xd3\x08\xcc\
\x87\x25\x06\x9b\x25\x5a\xc6\xac\xb1\xad\xcb\x5c\x84\x89\xfb\x8c\
\xfa\x8d\xae\xdf\x7d\x24\xcd\x94\x40\x8d\xb7\x53\xa1\xb1\xf6\x4a\
\x3c\x0e\xa2\xd0\xe7\x79\xd8\xb7\x52\x53\x14\xf9\xaf\xfa\x0e\x81\
\x4b\xa6\xdf\x5e\x6b\x1b\xe6\xc2\xa1\xae\x2f\xbd\xb6\x6c\xa3\xc7\
\x34\x8a\x44\x66\x3b\xf3\xa8\xe4\x99\x7a\x80\x1c\x51\x27\x70\x9d\
\x75\x1b\xe6\x18\xdd\x92\x0b\xbe\x3f\xd4\xaf\xed\x56\xcd\x52\x4b\
\x1b\x63\x62\x81\x69\xfb\x76\x63\x5f\x8c\xdf\xe3\xb7\x24\x93\x81\
\x36\xe3\xc2\xbb\x7f\x04\xa5\x76\xdb\x53\xbf\x31\x06\xad\x90\x6a\
\x49\xed\xf2\x83\xc5\x16\x8b\xfa\x05\xc1\x68\x2d\xa6\x05\xf7\x42\
\xd8\xee\xd0\x40\x2b\xf4\x7e\xbf\x37\xf0\xff\x13\x85\xfd\x05\xcb\
\x65\x68\xc0\x64\x52\x30\x96\x33\x62\x53\x87\x92\xfc\x42\x73\x0f\
\xcb\xa5\x28\x08\xf4\x37\xf0\x5c\x4b\x67\x6d\xdb\x33\x24\xd3\x01\
\xfd\x70\x65\x65\x79\x5c\x26\x3d\x75\x5d\xe6\x71\xaf\xcd\x6c\xaf\
\xcd\xac\xc6\xcf\x08\xf4\xc8\x39\x1e\xeb\x18\x47\x19\x20\x69\xd9\
\xa9\xe2\xff\x35\x43\x21\x81\x70\xce\x09\x6f\xe2\x8a\xa2\x95\xc5\
\xb5\xdc\xee\xf5\xdc\x43\x12\x34\x42\x74\xf2\x8b\xa1\x05\x77\xa2\
\x9f\xf2\x44\x8a\xd9\x15\x12\x96\xab\xd9\x15\x10\xfb\xa3\x45\xd0\
\x85\x74\x0f\xa6\xa6\xfc\xb4\xbb\x62\x2e\xe8\x11\x4f\x9f\x3b\x2c\
\xb3\x4c\xe4\x0d\x4c\x95\xe8\xbd\xd0\x50\x3c\x64\x3c\x17\x98\xcf\
\x7a\xaf\x1a\x63\x70\xee\x74\xd1\xf2\xd4\x6c\xa3\xb9\xcd\x9a\xf0\
\xcd\x5f\xd4\xe8\xd5\x62\xb6\x63\x40\x05\xd0\x2d\x68\xa5\x75\xbb\
\xe6\xb6\xe1\x55\x7f\x11\xfb\xdb\x66\x40\xc7\xb2\xb4\x65\x53\x30\
\xf5\xc6\x51\x8c\xb4\xaa\x47\xde\x43\xdb\x15\xba\x24\x8a\xf2\xec\
\x08\x58\x0d\x7a\xdb\x39\xcc\xe3\x16\x6d\xb9\x66\xa3\x23\x76\xbd\
\x11\x93\x38\x16\x64\xdc\x61\xdd\x01\xe7\x60\x16\xcb\x85\x1f\x93\
\x4d\x7e\x20\x8b\x0b\x25\xc5\x42\x23\xd4\x58\x20\xe5\x33\x66\xa5\
\xa4\xd1\xec\x30\xa1\xf5\xf7\xf0\x00\x04\x75\xe0\xac\xf1\xd5\x3e\
\x5a\x5d\xdd\xc2\x23\xfc\x94\xce\xfd\x00\x07\x4b\xfb\x24\xed\x1f\
\x27\x21\xae\x9b\x01\xc6\x97\x93\xdc\x7b\x56\x46\x32\x17\xdb\x9e\
\xdf\x46\xdb\x13\x89\x1e\xcc\x60\x8e\x77\xef\x88\x54\xbc\x7b\x47\
\xf9\xdd\x3b\x5a\xfb\x49\xc5\x07\x07\xbb\xef\x48\xb8\x6c\x52\xa3\
\x13\x55\xcb\x59\x49\xf7\x8b\xaf\xd8\x83\xdf\xb6\xe9\xc8\x05\x2a\
\x11\xec\x75\x03\xcc\x31\xcc\x45\xd1\xbc\x60\x6b\xba\x14\x43\xfa\
\x78\x20\x25\x9c\x81\x69\x9d\x02\x8c\x9d\xf8\x1c\x24\x9b\xfb\xd9\
\xd4\xe3\x30\x5d\x8f\x1b\xb0\x0d\x84\x74\x84\x35\xc1\x77\xc1\x64\
\xb0\x45\x12\x93\xce\x22\xa6\xb9\xb4\xe6\x37\x18\xf1\x58\x9c\x71\
\x2d\xce\x6e\x77\x08\xcf\x74\xef\x6e\x8c\xc7\xa7\xad\x44\x8b\x3c\
\x7c\x63\x52\xa1\x42\xd2\x59\x06\xa4\xa8\x93\x7a\xc2\xdf\x91\xc2\
\xb3\x97\x87\xea\x31\x05\x33\xa2\xa8\xa1\x5d\x2a\x91\xe5\xc6\x81\
\x52\x56\xd2\x5b\x2e\xdb\x44\x97\x6c\x33\xa1\xb7\x34\x69\xc3\x02\
\x4a\xa0\xd6\x6f\xab\x93\x09\xdf\x00\x8c\xa0\x6f\x22\xf3\x31\xac\
\x9e\x57\x05\xe5\x42\xf4\xae\xd9\x20\x8e\x26\x59\x32\xca\xdf\x6b\
\xbb\x3b\x25\xd2\x83\xb4\x35\x02\x5a\x3e\xd6\xb6\x17\x1d\xa9\xb0\
\x9d\x01\xf1\x34\x59\x82\x57\x57\x63\xbe\xd8\x40\xdb\x5f\xd1\xc8\
\x76\xb4\x5e\xee\x46\xbe\xc3\x17\x17\xb3\x5e\xdd\xf6\x8a\x8b\xbf\
\x81\xec\x8f\x57\xf7\x9b\x57\x7f\xa0\x86\x60\xc6\x9a\xeb\x13\x4c\
\x79\x48\x9b\x76\x21\xd0\x66\x78\xf2\xc2\x65\x1c\x68\xc8\x58\xaa\
\xd7\xd9\x6a\x72\xb2\xc2\xe4\x3e\x83\x19\x4f\xdc\x24\xd4\xd2\x8c\
\x7e\x30\xbe\x18\x2b\x4d\xc6\xe0\x08\x32\xc6\x6e\x98\x99\xab\xc6\
\xb6\x25\x07\x65\x62\x9b\xc9\xc1\x36\x2a\x10\xc3\xed\xb8\x84\xfb\
\x35\x83\xb6\x6d\xd3\x11\x9b\xa3\xb7\x87\x92\x0d\x67\xaa\xb3\x8d\
\x39\x31\x53\x48\x81\xf8\xd5\xfa\xe0\x6d\xa7\x6f\xd5\x68\xb6\xc9\
\xd5\xbe\xae\xaf\x2b\x47\xb3\x35\x52\xb2\x23\xf9\x61\x0c\x54\xe8\
\x24\x37\xb1\x42\xa3\x5a\x32\xa8\x0b\xb8\x54\xbc\xaa\x3d\x6f\xd5\
\x59\x5f\x89\xdb\xd7\x15\x50\x33\x3b\x33\x98\x9c\x44\x1b\x89\xed\
\x80\x78\xaa\xd7\xd8\xf8\x72\x7a\xbb\xf4\x5e\xac\xcb\x11\xba\x57\
\x02\xf0\xf9\x16\x12\xb0\x8a\x44\x06\x6d\x22\xae\xb7\x45\xa9\x2e\
\x53\xc5\xb6\xa7\x40\x98\x29\x2b\x4a\x67\xe7\x0a\x86\x9a\x9c\x05\
\xf0\xd2\xb3\xcb\x6d\x7e\xe0\xee\x72\x8f\x77\x97\xbb\xeb\xa3\xb2\
\x9e\xbd\x24\xdb\xe9\x71\xc8\xa9\x63\x9b\x49\xc7\x7a\x9d\xc6\xdd\
\x81\xc5\xa0\xfc\x8b\xed\x36\x63\x00\x09\xe4\x94\x12\xc6\xf0\x6f\
\xeb\x66\x38\xfe\xbb\x1d\xf6\xef\xb7\xde\x07\x52\xfa\xee\xd6\xd8\
\x8e\x6e\xdf\xfa\x38\xd1\x80\x3f\x91\xd1\xd5\x7c\x71\x30\x14\x27\
\x05\xa0\xdf\x3e\x4d\xc3\xa5\xa3\x60\x0f\x19\xc8\xae\xae\x16\x52\
\xa4\x4c\x6c\x23\x5e\xb3\x01\x51\x5c\x57\x16\xae\xea\xd3\x23\xc0\
\xe2\xaa\x26\xb1\x91\x2f\x15\x81\xc5\xc1\x3c\x6d\x33\xa2\xbb\xa0\
\xb2\x1a\xff\x38\xc7\xe3\xfc\x90\x09\xd1\x1f\x31\x33\xff\xc6\xe3\
\x03\x61\xd2\x77\x10\x42\x63\xed\x80\x11\xea\xc5\x55\xc3\x46\xea\
\x08\xdd\x00\x40\x91\x4f\x84\x5e\xb7\xfd\x17\x11\x79\x76\xd2\x61\
\xdc\x9f\x10\x6a\xd8\x96\xbe\x3d\x04\x88\xc6\x36\xa5\xcf\x98\x1b\
\x2e\xdb\x26\x54\x9a\xd0\x91\x24\x8f\x64\x49\xb2\x1d\x9b\x68\x1b\
\xea\x2c\x2f\xf6\xae\x28\x39\x94\x25\xab\x5e\x3a\xae\x31\x92\xd4\
\xd1\x33\xd5\xef\x4a\x7f\x47\xc8\xed\xee\xd5\xd2\xac\x68\x1d\xaa\
\x1b\x71\x3d\x89\x29\xf7\x0e\x74\x45\xc7\xcd\x46\xd6\x12\xc1\x84\
\x8d\xcd\xe5\x10\xc1\x86\x5b\x90\x04\xeb\xdc\xb8\x01\x43\x88\x6c\
\x2b\xd4\x5a\x91\xc7\xc9\x1e\xe3\x9e\x7c\x9f\x41\x98\xd3\x8b\x15\
\x51\x93\x81\x1d\xec\xe2\x18\xa9\xd1\x9b\x77\x4b\x78\x2a\xbc\x67\
\x0f\x3a\x6c\xd7\x0d\xa8\x01\xf7\xb9\xc1\xd8\x27\xd0\x44\x28\x1e\
\xa7\x03\xdb\xfd\xec\x54\x81\x12\x90\x1e\x1e\x61\x7b\xe1\x14\x94\
\x57\x36\x45\x38\xa7\x21\xbc\x5e\xad\xff\x14\xe9\xdd\x62\x1a\xe5\
\x13\x7f\xad\x44\x5d\x7f\x9f\x2c\x7c\x73\x9c\x02\x46\x61\x9f\xae\
\x44\x85\x26\x53\x46\xc4\x7b\x5d\xd3\xfe\x3b\x5d\xb8\xaf\x7c\x91\
\xce\x5f\x26\x3b\x19\x5b\x80\x2c\xef\x88\x3f\xbb\x11\xf7\x94\xdb\
\xbf\x77\x23\x0e\x37\xfb\x9b\x1b\xc9\x62\x0d\x57\x62\xa3\xe9\x68\
\x86\xfe\x37\x66\xe4\xbe\xba\xff\xec\xe1\xd7\xdb\x89\x28\x0f\xf2\
\xe1\xef\xbe\x9d\xb7\x67\x5b\xdb\xbb\x67\x49\x28\x3c\x6a\xb5\xca\
\xc2\xa8\x3d\xf2\x85\x16\x42\x99\xad\xf9\x79\x1c\x0e\x55\xdb\x04\
\x33\x09\x7d\xc2\x54\x50\xd0\x30\x10\x84\x4f\xf0\x82\x77\xd8\x58\
\x7b\xa8\x60\x28\xd8\x88\x64\x82\xa1\x61\x23\x1d\xc4\x0b\x17\xdc\
\x30\xb2\xa0\x13\xf5\x9c\x60\x42\xdb\x25\x66\x64\x9d\x4d\x11\xbd\
\xe7\x3e\x5d\xbf\x0f\xf7\xd2\x35\x7e\x39\x58\xe7\xed\x31\x04\x81\
\x67\x72\x2e\xca\x0d\x7d\xdf\xcf\x03\x94\x7b\x3c\xca\x10\x15\xd3\
\x74\xf9\xa8\x3e\x8b\xec\x29\xfd\xd8\xd1\x74\xff\x74\x83\xfd\xef\
\x5e\x1d\xe5\x5b\x8d\xf0\x61\x77\x8a\xbd\xde\x8d\xe5\x52\x42\x68\
\x32\x01\x16\x1a\xd3\xf6\x3f\x8f\x3d\x97\x7a\x1e\x09\xc5\x23\x0f\
\x26\x60\xce\xac\x34\x5e\x16\xec\xae\x61\x55\x14\xf8\x97\x73\x39\
\xa3\xac\x06\xa4\x99\xcc\x0d\xe5\x85\xb4\xfa\x40\x9a\xb7\x4d\xa9\
\x7b\x91\x0e\xdb\xf2\xa4\x28\x1f\x1d\x3c\xec\x41\x21\xaa\x96\x7c\
\x50\xcc\x49\x4f\x5e\xc3\x1e\xeb\x1a\x10\x15\x81\x0c\x33\x7a\xb5\
\xcd\x7d\x4d\x2b\x04\x07\x64\x25\xdb\xac\x53\x4e\xdf\x73\x85\x67\
\xef\x1f\xf0\xe9\x26\xde\x3b\x3a\x3b\xb5\x41\x37\x8b\x40\x33\x98\
\xdb\x84\x11\x59\x29\xcd\x7d\x8d\xb9\x4a\x8e\xd8\x56\x16\x99\x52\
\x80\xe8\xa0\xec\x88\x2a\x66\xc6\x56\x4b\x0f\x0a\xa0\xe8\x20\x5e\
\xb7\x6c\x73\x3b\x2d\x63\x3a\x28\xb3\x35\xca\x81\x92\xa2\xf8\xaa\
\x37\x8b\x94\x8c\x5e\xc7\xd3\x40\x21\xd4\xe7\x1d\x50\x6a\xfa\xbf\
\x3e\x78\xae\x85\xea\x6f\x08\x76\x45\x41\x79\xdb\x06\xc6\x24\x33\
\xe3\x75\x9f\x0a\x3d\x31\xfd\x29\xa2\x00\x3a\xb2\x8e\x43\x8b\x4e\
\x0b\x31\x80\x18\x0b\x8e\x45\xb2\x13\x86\xe0\x12\x81\x08\xcb\xec\
\x20\x52\x88\x19\xe1\x8a\xb2\x35\xd5\xa3\x40\xbb\xb6\x7f\x58\x01\
\xee\x6f\x4b\x24\xc9\x98\xc9\x84\x28\x7e\xcc\xc4\x08\x46\x6a\x43\
\x0b\xcc\x55\x46\x8b\xc7\x1b\x8a\x69\xee\xf6\x54\xb4\xdb\x83\xcd\
\x86\x8a\x29\xd0\x00\xd6\xd5\x1e\x89\x1e\x92\x8c\xc1\xb3\xee\x88\
\xf4\x3f\x99\x9b\xd0\xde\x34\x9d\x5e\x22\xed\x3a\xab\x89\x75\xfb\
\x35\x3c\xe0\x0b\x67\xda\x4d\xcd\x05\x04\x84\x4a\x99\xab\xc6\xf2\
\x3e\x30\x45\x43\x9a\xd2\x2a\x74\xe9\xe5\xaf\x81\xfe\x4c\xf7\x81\
\x80\x21\x78\x36\x55\xc0\x14\x12\xae\x8f\x76\xad\xae\xc6\x88\xe4\
\x8a\xcf\xe3\x96\x50\x3a\xb2\x2b\xfc\x71\x6b\xfc\x75\x6f\xf6\x74\
\xc8\xfd\x3f\x66\x17\x3e\xc1\x9e\x45\xef\xd4\x00\x00\x01\x85\x69\
\x43\x43\x50\x49\x43\x43\x20\x70\x72\x6f\x66\x69\x6c\x65\x00\x00\
\x78\x9c\x7d\x91\x3d\x48\xc3\x40\x1c\xc5\x5f\xd3\x4a\x45\xea\x07\
\xda\x41\xc4\x21\x43\xd5\xc5\x82\xa8\x88\xa3\x54\xb1\x08\x16\x4a\
\x5b\xa1\x55\x07\x93\x4b\xbf\xa0\x49\x43\x92\xe2\xe2\x28\xb8\x16\
\x1c\xfc\x58\xac\x3a\xb8\x38\xeb\xea\xe0\x2a\x08\x82\x1f\x20\x6e\
\x6e\x4e\x8a\x2e\x52\xe2\xff\x92\x42\x8b\x58\x0f\x8e\xfb\xf1\xee\
\xde\xe3\xee\x1d\x20\xd4\x4a\x4c\x35\x7d\x13\x80\xaa\x59\x46\x22\
\x1a\x11\xd3\x99\x55\xd1\xff\x0a\x1f\xfa\xd0\x83\x31\xf4\x4b\xcc\
\xd4\x63\xc9\xc5\x14\xda\x8e\xaf\x7b\x78\xf8\x7a\x17\xe6\x59\xed\
\xcf\xfd\x39\xba\x95\xac\xc9\x00\x8f\x48\x3c\xc7\x74\xc3\x22\xde\
\x20\x9e\xd9\xb4\x74\xce\xfb\xc4\x41\x56\x90\x14\xe2\x73\xe2\x71\
\x83\x2e\x48\xfc\xc8\x75\xd9\xe5\x37\xce\x79\x87\x05\x9e\x19\x34\
\x52\x89\x79\xe2\x20\xb1\x98\x6f\x61\xb9\x85\x59\xc1\x50\x89\xa7\
\x89\x43\x8a\xaa\x51\xbe\x90\x76\x59\xe1\xbc\xc5\x59\x2d\x55\x58\
\xe3\x9e\xfc\x85\x81\xac\xb6\x92\xe4\x3a\xcd\x61\x44\xb1\x84\x18\
\xe2\x10\x21\xa3\x82\x22\x4a\xb0\x10\xa6\x55\x23\xc5\x44\x82\xf6\
\x23\x6d\xfc\x43\x8e\x3f\x4e\x2e\x99\x5c\x45\x30\x72\x2c\xa0\x0c\
\x15\x92\xe3\x07\xff\x83\xdf\xdd\x9a\xb9\xa9\x49\x37\x29\x10\x01\
\x3a\x5e\x6c\xfb\x63\x04\xf0\xef\x02\xf5\xaa\x6d\x7f\x1f\xdb\x76\
\xfd\x04\xf0\x3e\x03\x57\x5a\xd3\x5f\xae\x01\xb3\x9f\xa4\x57\x9b\
\x5a\xe8\x08\xe8\xdd\x06\x2e\xae\x9b\x9a\xbc\x07\x5c\xee\x00\x83\
\x4f\xba\x64\x48\x8e\xe4\xa5\x29\xe4\x72\xc0\xfb\x19\x7d\x53\x06\
\x18\xb8\x05\xba\xd6\xdc\xde\x1a\xfb\x38\x7d\x00\x52\xd4\xd5\xf2\
\x0d\x70\x70\x08\x8c\xe6\x29\x7b\xbd\xcd\xbb\x3b\x5b\x7b\xfb\xf7\
\x4c\xa3\xbf\x1f\x70\x2c\x72\xa6\x01\xd0\xf6\xf4\x00\x00\x0f\x9c\
\x69\x54\x58\x74\x58\x4d\x4c\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\
\x65\x2e\x78\x6d\x70\x00\x00\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\
\x6b\x65\x74\x20\x62\x65\x67\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\
\x69\x64\x3d\x22\x57\x35\x4d\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\
\x72\x65\x53\x7a\x4e\x54\x63\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x0a\
\x3c\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\
\x3a\x78\x3d\x22\x61\x64\x6f\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\
\x61\x2f\x22\x20\x78\x3a\x78\x6d\x70\x74\x6b\x3d\x22\x58\x4d\x50\
\x20\x43\x6f\x72\x65\x20\x34\x2e\x34\x2e\x30\x2d\x45\x78\x69\x76\
\x32\x22\x3e\x0a\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\x78\x6d\
\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\
\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\x39\x2f\
\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\x61\x78\
\x2d\x6e\x73\x23\x22\x3e\x0a\x20\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x0a\x20\x20\x20\x20\x78\x6d\x6c\x6e\x73\
\x3a\x69\x70\x74\x63\x45\x78\x74\x3d\x22\x68\x74\x74\x70\x3a\x2f\
\x2f\x69\x70\x74\x63\x2e\x6f\x72\x67\x2f\x73\x74\x64\x2f\x49\x70\
\x74\x63\x34\x78\x6d\x70\x45\x78\x74\x2f\x32\x30\x30\x38\x2d\x30\
\x32\x2d\x32\x39\x2f\x22\x0a\x20\x20\x20\x20\x78\x6d\x6c\x6e\x73\
\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\
\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\
\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x0a\x20\x20\x20\x20\x78\x6d\x6c\
\x6e\x73\x3a\x73\x74\x45\x76\x74\x3d\x22\x68\x74\x74\x70\x3a\x2f\
\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\x61\
\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\x6f\
\x75\x72\x63\x65\x45\x76\x65\x6e\x74\x23\x22\x0a\x20\x20\x20\x20\
\x78\x6d\x6c\x6e\x73\x3a\x70\x6c\x75\x73\x3d\x22\x68\x74\x74\x70\
\x3a\x2f\x2f\x6e\x73\x2e\x75\x73\x65\x70\x6c\x75\x73\x2e\x6f\x72\
\x67\x2f\x6c\x64\x66\x2f\x78\x6d\x70\x2f\x31\x2e\x30\x2f\x22\x0a\
\x20\x20\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x47\x49\x4d\x50\x3d\x22\
\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\x2e\x67\x69\x6d\x70\x2e\
\x6f\x72\x67\x2f\x78\x6d\x70\x2f\x22\x0a\x20\x20\x20\x20\x78\x6d\
\x6c\x6e\x73\x3a\x64\x63\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x70\
\x75\x72\x6c\x2e\x6f\x72\x67\x2f\x64\x63\x2f\x65\x6c\x65\x6d\x65\
\x6e\x74\x73\x2f\x31\x2e\x31\x2f\x22\x0a\x20\x20\x20\x20\x78\x6d\
\x6c\x6e\x73\x3a\x74\x69\x66\x66\x3d\x22\x68\x74\x74\x70\x3a\x2f\
\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x74\x69\
\x66\x66\x2f\x31\x2e\x30\x2f\x22\x0a\x20\x20\x20\x20\x78\x6d\x6c\
\x6e\x73\x3a\x78\x6d\x70\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\
\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\
\x31\x2e\x30\x2f\x22\x0a\x20\x20\x20\x78\x6d\x70\x4d\x4d\x3a\x44\
\x6f\x63\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x67\x69\x6d\x70\x3a\
\x64\x6f\x63\x69\x64\x3a\x67\x69\x6d\x70\x3a\x33\x64\x62\x32\x63\
\x64\x61\x34\x2d\x34\x34\x32\x63\x2d\x34\x38\x33\x64\x2d\x38\x31\
\x63\x34\x2d\x30\x31\x62\x34\x65\x36\x64\x32\x33\x35\x65\x38\x22\
\x0a\x20\x20\x20\x78\x6d\x70\x4d\x4d\x3a\x49\x6e\x73\x74\x61\x6e\
\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\x3a\x31\x37\
\x31\x32\x36\x31\x35\x36\x2d\x33\x64\x62\x32\x2d\x34\x64\x64\x65\
\x2d\x39\x34\x64\x31\x2d\x64\x30\x61\x37\x30\x37\x34\x30\x39\x63\
\x64\x64\x22\x0a\x20\x20\x20\x78\x6d\x70\x4d\x4d\x3a\x4f\x72\x69\
\x67\x69\x6e\x61\x6c\x44\x6f\x63\x75\x6d\x65\x6e\x74\x49\x44\x3d\
\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x63\x66\x36\x34\x62\x66\x64\
\x61\x2d\x35\x38\x33\x37\x2d\x34\x37\x63\x64\x2d\x62\x33\x62\x33\
\x2d\x61\x34\x61\x33\x35\x30\x62\x65\x31\x61\x34\x61\x22\x0a\x20\
\x20\x20\x47\x49\x4d\x50\x3a\x41\x50\x49\x3d\x22\x32\x2e\x30\x22\
\x0a\x20\x20\x20\x47\x49\x4d\x50\x3a\x50\x6c\x61\x74\x66\x6f\x72\
\x6d\x3d\x22\x57\x69\x6e\x64\x6f\x77\x73\x22\x0a\x20\x20\x20\x47\
\x49\x4d\x50\x3a\x54\x69\x6d\x65\x53\x74\x61\x6d\x70\x3d\x22\x31\
\x36\x31\x38\x35\x38\x34\x30\x39\x32\x30\x33\x37\x39\x33\x34\x22\
\x0a\x20\x20\x20\x47\x49\x4d\x50\x3a\x56\x65\x72\x73\x69\x6f\x6e\
\x3d\x22\x32\x2e\x31\x30\x2e\x32\x32\x22\x0a\x20\x20\x20\x64\x63\
\x3a\x46\x6f\x72\x6d\x61\x74\x3d\x22\x69\x6d\x61\x67\x65\x2f\x70\
\x6e\x67\x22\x0a\x20\x20\x20\x74\x69\x66\x66\x3a\x4f\x72\x69\x65\
\x6e\x74\x61\x74\x69\x6f\x6e\x3d\x22\x31\x22\x0a\x20\x20\x20\x78\
\x6d\x70\x3a\x43\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\
\x47\x49\x4d\x50\x20\x32\x2e\x31\x30\x22\x3e\x0a\x20\x20\x20\x3c\
\x69\x70\x74\x63\x45\x78\x74\x3a\x4c\x6f\x63\x61\x74\x69\x6f\x6e\
\x43\x72\x65\x61\x74\x65\x64\x3e\x0a\x20\x20\x20\x20\x3c\x72\x64\
\x66\x3a\x42\x61\x67\x2f\x3e\x0a\x20\x20\x20\x3c\x2f\x69\x70\x74\
\x63\x45\x78\x74\x3a\x4c\x6f\x63\x61\x74\x69\x6f\x6e\x43\x72\x65\
\x61\x74\x65\x64\x3e\x0a\x20\x20\x20\x3c\x69\x70\x74\x63\x45\x78\
\x74\x3a\x4c\x6f\x63\x61\x74\x69\x6f\x6e\x53\x68\x6f\x77\x6e\x3e\
\x0a\x20\x20\x20\x20\x3c\x72\x64\x66\x3a\x42\x61\x67\x2f\x3e\x0a\
\x20\x20\x20\x3c\x2f\x69\x70\x74\x63\x45\x78\x74\x3a\x4c\x6f\x63\
\x61\x74\x69\x6f\x6e\x53\x68\x6f\x77\x6e\x3e\x0a\x20\x20\x20\x3c\
\x69\x70\x74\x63\x45\x78\x74\x3a\x41\x72\x74\x77\x6f\x72\x6b\x4f\
\x72\x4f\x62\x6a\x65\x63\x74\x3e\x0a\x20\x20\x20\x20\x3c\x72\x64\
\x66\x3a\x42\x61\x67\x2f\x3e\x0a\x20\x20\x20\x3c\x2f\x69\x70\x74\
\x63\x45\x78\x74\x3a\x41\x72\x74\x77\x6f\x72\x6b\x4f\x72\x4f\x62\
\x6a\x65\x63\x74\x3e\x0a\x20\x20\x20\x3c\x69\x70\x74\x63\x45\x78\
\x74\x3a\x52\x65\x67\x69\x73\x74\x72\x79\x49\x64\x3e\x0a\x20\x20\
\x20\x20\x3c\x72\x64\x66\x3a\x42\x61\x67\x2f\x3e\x0a\x20\x20\x20\
\x3c\x2f\x69\x70\x74\x63\x45\x78\x74\x3a\x52\x65\x67\x69\x73\x74\
\x72\x79\x49\x64\x3e\x0a\x20\x20\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\
\x48\x69\x73\x74\x6f\x72\x79\x3e\x0a\x20\x20\x20\x20\x3c\x72\x64\
\x66\x3a\x53\x65\x71\x3e\x0a\x20\x20\x20\x20\x20\x3c\x72\x64\x66\
\x3a\x6c\x69\x0a\x20\x20\x20\x20\x20\x20\x73\x74\x45\x76\x74\x3a\
\x61\x63\x74\x69\x6f\x6e\x3d\x22\x73\x61\x76\x65\x64\x22\x0a\x20\
\x20\x20\x20\x20\x20\x73\x74\x45\x76\x74\x3a\x63\x68\x61\x6e\x67\
\x65\x64\x3d\x22\x2f\x22\x0a\x20\x20\x20\x20\x20\x20\x73\x74\x45\
\x76\x74\x3a\x69\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\
\x6d\x70\x2e\x69\x69\x64\x3a\x64\x34\x36\x39\x61\x63\x33\x36\x2d\
\x63\x64\x31\x65\x2d\x34\x62\x32\x31\x2d\x61\x38\x62\x63\x2d\x61\
\x37\x65\x31\x66\x32\x63\x31\x65\x65\x30\x31\x22\x0a\x20\x20\x20\
\x20\x20\x20\x73\x74\x45\x76\x74\x3a\x73\x6f\x66\x74\x77\x61\x72\
\x65\x41\x67\x65\x6e\x74\x3d\x22\x47\x69\x6d\x70\x20\x32\x2e\x31\
\x30\x20\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x0a\x20\x20\x20\
\x20\x20\x20\x73\x74\x45\x76\x74\x3a\x77\x68\x65\x6e\x3d\x22\x32\
\x30\x32\x31\x2d\x30\x34\x2d\x31\x36\x54\x31\x36\x3a\x34\x31\x3a\
\x33\x32\x22\x2f\x3e\x0a\x20\x20\x20\x20\x3c\x2f\x72\x64\x66\x3a\
\x53\x65\x71\x3e\x0a\x20\x20\x20\x3c\x2f\x78\x6d\x70\x4d\x4d\x3a\
\x48\x69\x73\x74\x6f\x72\x79\x3e\x0a\x20\x20\x20\x3c\x70\x6c\x75\
\x73\x3a\x49\x6d\x61\x67\x65\x53\x75\x70\x70\x6c\x69\x65\x72\x3e\
\x0a\x20\x20\x20\x20\x3c\x72\x64\x66\x3a\x53\x65\x71\x2f\x3e\x0a\
\x20\x20\x20\x3c\x2f\x70\x6c\x75\x73\x3a\x49\x6d\x61\x67\x65\x53\
\x75\x70\x70\x6c\x69\x65\x72\x3e\x0a\x20\x20\x20\x3c\x70\x6c\x75\
\x73\x3a\x49\x6d\x61\x67\x65\x43\x72\x65\x61\x74\x6f\x72\x3e\x0a\
\x20\x20\x20\x20\x3c\x72\x64\x66\x3a\x53\x65\x71\x2f\x3e\x0a\x20\
\x20\x20\x3c\x2f\x70\x6c\x75\x73\x3a\x49\x6d\x61\x67\x65\x43\x72\
\x65\x61\x74\x6f\x72\x3e\x0a\x20\x20\x20\x3c\x70\x6c\x75\x73\x3a\
\x43\x6f\x70\x79\x72\x69\x67\x68\x74\x4f\x77\x6e\x65\x72\x3e\x0a\
\x20\x20\x20\x20\x3c\x72\x64\x66\x3a\x53\x65\x71\x2f\x3e\x0a\x20\
\x20\x20\x3c\x2f\x70\x6c\x75\x73\x3a\x43\x6f\x70\x79\x72\x69\x67\
\x68\x74\x4f\x77\x6e\x65\x72\x3e\x0a\x20\x20\x20\x3c\x70\x6c\x75\
\x73\x3a\x4c\x69\x63\x65\x6e\x73\x6f\x72\x3e\x0a\x20\x20\x20\x20\
\x3c\x72\x64\x66\x3a\x53\x65\x71\x2f\x3e\x0a\x20\x20\x20\x3c\x2f\
\x70\x6c\x75\x73\x3a\x4c\x69\x63\x65\x6e\x73\x6f\x72\x3e\x0a\x20\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x0a\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x0a\
\x3c\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x0a\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x3c\x3f\x78\
\x70\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x77\x22\x3f\x3e\
\x8d\x53\x5a\xef\x00\x00\x00\x06\x62\x4b\x47\x44\x00\x00\x00\x00\
\x00\x00\xf9\x43\xbb\x7f\x00\x00\x00\x09\x70\x48\x59\x73\x00\x00\
\x76\x1c\x00\x00\x76\x1c\x01\xa7\xc2\x78\xea\x00\x00\x00\x07\x74\
\x49\x4d\x45\x07\xe5\x04\x10\x0e\x29\x20\x5a\x72\x43\xe7\x00\x00\
\x03\x51\x49\x44\x41\x54\x48\xc7\xa5\x95\xbd\x6f\x5b\x55\x18\x87\
\x9f\xfb\xe9\xd8\xb1\xf3\xe1\x24\x4e\x69\x8b\x12\xda\x0a\x01\x62\
\x02\x86\x5b\x89\x0c\x2c\x48\x4c\x20\x18\xca\xc0\x80\x89\xd4\x5e\
\xa9\x85\x88\xb9\x02\x51\x06\x58\x40\x80\x00\xd5\x44\x42\x1e\x2a\
\x84\x18\x18\x22\xfe\x00\xa4\x2e\xf5\x80\xc4\x86\x90\x10\x08\x97\
\x24\x50\xa7\x49\x1b\xc7\x5f\xd7\xe7\x8b\x21\xb6\x6b\xe7\xba\x89\
\x0b\x57\x3a\x3a\xd2\x3d\xf7\xfd\xfd\xde\xf7\x79\xef\x39\xc7\xe2\
\x01\x9e\xb0\x18\x9c\x02\xa6\x81\x4a\x21\x5f\xfa\x6b\x94\x18\x6b\
\x44\xe1\x85\xcc\x78\xfa\xda\xcc\xe4\xd4\x92\xe3\x38\x08\x29\xa9\
\x6c\xdf\xfe\xbe\xd5\x8e\x96\x0b\xf9\xd2\xee\x61\xb1\xee\x21\xa2\
\x8b\xc0\xf3\x40\x7b\x7a\x62\xea\xbd\x47\x1e\x3e\xb9\x60\xdb\xf7\
\xf2\xc9\x4e\x4d\xbc\xf2\xdb\x9f\xe5\x63\xc0\xb3\x0f\x54\x41\x58\
\x0c\x92\x9e\xeb\x7d\x7e\x3c\x37\xbf\x9c\x1e\x4f\xb1\xb5\xbd\xc3\
\xfc\xdc\x2c\x09\xdf\x8b\x05\xd7\xea\x0d\x7e\xfd\xe3\xf7\x2b\xc0\
\x2f\x40\xa9\x90\x2f\xdd\x3c\xd2\xe0\xcd\x6b\x4b\x9f\x3e\x76\xfa\
\xd4\x4a\xc2\xf7\x01\xd8\xf8\xa7\xc2\x89\x63\xb9\xa1\xd9\x19\x63\
\xa8\xd5\x1b\x68\x63\xa8\xd5\xeb\x6c\xed\x6c\x7f\x2d\x95\x7a\xab\
\x90\x2f\x35\x86\x22\x0a\x8b\xc1\x4c\x2e\x3b\x73\xa9\x2b\x0e\x30\
\x3f\x9b\x1d\x2a\xae\xb5\x46\x6b\x8d\xef\x7b\x68\xad\x99\x9c\xc8\
\x90\x1c\x1b\x5b\x2e\x6f\x6e\x58\xc0\x72\xf7\x3b\xfb\x40\xdc\x62\
\x22\x91\x70\x06\x9a\xe4\xba\xb1\xac\x95\xd6\x28\xa5\x90\x4a\x0d\
\xcc\x96\x05\x93\xe9\xcc\x1b\x61\x31\x58\xb8\x5f\x93\x9b\xcd\x56\
\x93\xbf\x6f\x45\x00\x28\xad\x79\x28\x37\x8b\xe3\x38\x03\xe2\xdd\
\xec\x95\xd6\x68\xa5\x7a\xef\x94\xd6\xf8\x9e\x47\xa7\xf1\xe5\x98\
\x81\xe7\x7a\xaf\xfa\xae\xc7\x4c\x76\x9a\xfe\x3f\xa6\x1f\x89\x1a\
\x36\xf7\x99\x08\x25\x01\x76\x62\x15\x84\xc5\xe0\xe5\xc5\x13\x27\
\xdf\x99\x9c\x48\xc7\x90\x68\x63\x62\x99\xf6\x1b\x0a\x21\xc0\xb2\
\xd0\x4a\xd1\x6c\xb6\x76\x81\xeb\xb1\x1e\x64\x52\xe9\x4b\xc3\xc4\
\x7b\xbc\x3b\x73\xff\x90\x4a\x21\x84\xe0\x4e\xb5\x8a\x94\x92\xbd\
\x46\x83\x46\xab\x79\xb1\x90\x2f\xd5\x63\x15\xa4\x53\xa9\xb3\x07\
\x91\xc4\x78\xf7\x21\x89\x84\xa0\x56\xaf\x23\x95\x02\x03\xd5\x5a\
\x0d\x63\x8c\x00\x36\x87\xee\x64\xa9\xd4\x3a\x70\xc6\x18\x73\x38\
\xef\xce\xb0\x80\xf1\x64\xf2\xe0\x9a\x27\xa5\xfc\x12\x78\x22\x86\
\xe8\xee\xde\xee\x0f\x6d\x21\xf6\x91\x68\x8d\x54\x8a\x3b\xbb\x55\
\x84\x94\x43\xd1\x28\xad\x87\x62\xb3\x6c\xfb\xf1\xce\x31\x33\x68\
\x20\xa4\x7c\xbf\xbc\xb1\x59\x6e\xb7\x45\x4f\xc0\xf7\x3d\x36\x6f\
\x55\xd8\xda\xde\xe1\x6e\x75\x6f\xe0\x9f\xef\x8d\x4e\x32\xdd\x35\
\xa3\x35\x40\x72\xe8\x51\x11\x16\x83\x9c\xe7\xba\x1f\xa5\x53\xe9\
\x97\xc0\x64\xb4\x31\xb8\x8e\x43\x2a\x39\xd6\x43\x23\xa4\xec\xed\
\x11\xa9\x14\x03\x48\xf7\x4d\x2a\x57\x5f\xbf\x31\x7f\xe4\x71\x1d\
\x16\x83\xa5\xb9\x6c\xf6\xfa\x58\x22\x71\x8f\xb3\x52\x34\x5b\x11\
\x42\x4a\xa4\x92\x38\xb6\x8d\xe3\x38\x03\xcd\xd7\xc6\xbc\x56\xc8\
\x97\xbe\xe9\xea\x38\xf7\x33\xf8\x69\x6d\xfd\xe6\x93\x2f\xcc\x1d\
\xf7\x5c\xf7\xe9\x6e\xc6\x7a\xbf\x7c\x22\xd1\x2e\x2b\xa5\xae\x60\
\x59\xcf\x68\xad\x53\x9d\xb5\xdb\xc6\x98\x0b\xfd\xe2\x23\x5d\x38\
\x61\x31\x58\x49\xf8\xfe\xdb\x96\x65\x2d\x18\x09\xf6\x56\x02\x7b\
\x3d\x79\xe1\xe3\x4f\xd6\x56\xc3\x62\x60\x01\x4f\x75\x84\x7e\xbe\
\x9a\x2f\xe9\xff\x74\xa3\x01\xac\x9c\x7b\xf1\xab\xb4\xef\x9c\xd7\
\x1a\x1a\x52\x7d\xf0\xd9\x77\x6b\x97\x47\x89\x73\x47\x35\x88\xa2\
\xe8\x51\x47\xd9\x18\x03\x91\xd2\xb3\xa3\xc6\x8d\x6c\xd0\x8e\xa2\
\x33\x88\x8e\x99\x26\x18\x35\xee\x48\x44\xe7\x9e\x3b\xeb\x59\x58\
\x5f\x38\x16\xe7\xbb\x9b\x46\x03\xca\xf0\xa1\x85\x79\xf7\xdb\x1f\
\x6f\xc8\xff\x57\x81\xc1\x33\x98\x55\x69\x58\x8d\x2f\xe1\x03\x87\
\x1a\xfc\x0b\xe0\xf9\x76\xd0\xc6\x66\xe5\x22\x00\x00\x00\x00\x49\
\x45\x4e\x44\xae\x42\x60\x82\
"
qt_resource_name = b"\
\x00\x07\
\x07\x3b\xe0\xb3\
\x00\x70\
\x00\x6c\x00\x75\x00\x67\x00\x69\x00\x6e\x00\x73\
\x00\x09\
\x0d\x38\x29\x05\
\x00\x49\
\x00\x6e\x00\x66\x00\x6f\x00\x41\x00\x72\x00\x62\x00\x72\x00\x65\
\x00\x08\
\x0a\x61\x5a\xa7\
\x00\x69\
\x00\x63\x00\x6f\x00\x6e\x00\x2e\x00\x70\x00\x6e\x00\x67\
"
qt_resource_struct_v1 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x14\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\
\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
"
qt_resource_struct_v2 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x14\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x01\x78\xdb\x21\x3d\xcd\
"
qt_version = [int(v) for v in QtCore.qVersion().split('.')]
if qt_version < [5, 8, 0]:
rcc_version = 1
qt_resource_struct = qt_resource_struct_v1
else:
rcc_version = 2
qt_resource_struct = qt_resource_struct_v2
def qInitResources():
QtCore.qRegisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
def qCleanupResources():
QtCore.qUnregisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
qInitResources()
| 63.305233 | 104 | 0.715663 | 10,307 | 43,554 | 3.020472 | 0.029786 | 0.413786 | 0.598998 | 0.770526 | 0.350861 | 0.34161 | 0.337049 | 0.330303 | 0.326256 | 0.32298 | 0 | 0.362634 | 0.033568 | 43,554 | 687 | 105 | 63.39738 | 0.376984 | 0.00349 | 0 | 0.196721 | 0 | 0.944858 | 0.000023 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.002981 | false | 0 | 0.00149 | 0 | 0.004471 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1cb93ad949aef3423451bd46ad686f2354159e78 | 68 | py | Python | examples/celery/worker.py | Workable/honeybadger-extensions | 235be26f37667386cd0fcf77b4df5d6ac267199c | [
"MIT"
] | 4 | 2017-10-10T11:37:41.000Z | 2019-09-05T14:27:50.000Z | examples/celery/worker.py | Workable/honeybadger-extensions | 235be26f37667386cd0fcf77b4df5d6ac267199c | [
"MIT"
] | 2 | 2017-10-19T14:47:17.000Z | 2018-01-18T13:06:51.000Z | examples/celery/worker.py | Workable/honeybadger-extensions | 235be26f37667386cd0fcf77b4df5d6ac267199c | [
"MIT"
] | null | null | null | from example_app.celery import celery
from example_app import tasks
| 22.666667 | 37 | 0.867647 | 11 | 68 | 5.181818 | 0.545455 | 0.385965 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 68 | 2 | 38 | 34 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1cfdb6c118d80167c1145b24bb09c0e44b5d9927 | 3,001 | py | Python | tests/test_validate_config.py | userdel/wings | 05d746d9738581548a893e3be1e6ea3c9e2ad4c9 | [
"MIT"
] | null | null | null | tests/test_validate_config.py | userdel/wings | 05d746d9738581548a893e3be1e6ea3c9e2ad4c9 | [
"MIT"
] | 1 | 2021-06-01T23:06:39.000Z | 2021-06-01T23:06:39.000Z | tests/test_validate_config.py | irlrobot/wings | 05d746d9738581548a893e3be1e6ea3c9e2ad4c9 | [
"MIT"
] | null | null | null | import pytest
import toml
from toml.decoder import TomlDecodeError
from unittest.mock import patch
from wings.validate_config import ValidateConfig
@patch('toml.load')
def test_validate_config_with_valid_data(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
[runtime]
service = "lambda"
language = "python36"
""")
config = ValidateConfig('/fake_path_yo/wings.toml').config
assert config == {
'description': 'Weeeee',
'name': 'wings',
'runtime': {
'language': 'python36',
'service': 'lambda'
}
}
@patch('toml.load')
def test_validate_config_with_missing_name_key(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
description = "Weeeee"
[runtime]
service = "lambda"
language = "python36"
""")
with pytest.raises(KeyError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_missing_runtime_key(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
service = "lambda"
language = "python36"
""")
with pytest.raises(KeyError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_missing_service_key(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
[runtime]
language = "python36"
""")
with pytest.raises(KeyError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_invalid_service_value(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
[runtime]
service = "blah"
language = "python36"
""")
with pytest.raises(ValueError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_missing_language_key(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
[runtime]
service = "lambda"
""")
with pytest.raises(KeyError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_invalid_language(mock_toml_load):
mock_toml_load.return_value = toml.loads("""
name = "wings"
description = "Weeeee"
[runtime]
service = "lambda"
language = "python69420"
""")
with pytest.raises(ValueError):
ValidateConfig('/fake_path_yo/wings.toml').config
@patch('toml.load')
def test_validate_config_with_invalid_toml(mock_toml_load):
with pytest.raises(TomlDecodeError):
mock_toml_load.return_value = toml.loads("""
not toml
""")
| 24.398374 | 68 | 0.644119 | 332 | 3,001 | 5.521084 | 0.129518 | 0.104746 | 0.104746 | 0.069831 | 0.818876 | 0.810147 | 0.810147 | 0.753955 | 0.733224 | 0.708674 | 0 | 0.007391 | 0.233589 | 3,001 | 122 | 69 | 24.598361 | 0.789565 | 0 | 0 | 0.714286 | 0 | 0 | 0.393202 | 0.055981 | 0 | 0 | 0 | 0 | 0.010989 | 1 | 0.087912 | false | 0 | 0.054945 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e809ed7e4ec7af1cc79b7658d6c0a12e3918972d | 76 | py | Python | topsis/__init__.py | duttrohan0302/TOPSIS-Rohan-101803151 | 39a925f977f6d753a9d9ae728bd1ef7e9d3a5ceb | [
"MIT"
] | null | null | null | topsis/__init__.py | duttrohan0302/TOPSIS-Rohan-101803151 | 39a925f977f6d753a9d9ae728bd1ef7e9d3a5ceb | [
"MIT"
] | null | null | null | topsis/__init__.py | duttrohan0302/TOPSIS-Rohan-101803151 | 39a925f977f6d753a9d9ae728bd1ef7e9d3a5ceb | [
"MIT"
] | null | null | null |
name = "TOPSIS-Rohan-101803151/TOPSIS-Rohan-101803151"
__version__ = "1.0.0"
| 19 | 52 | 0.763158 | 11 | 76 | 4.909091 | 0.636364 | 0.407407 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295775 | 0.065789 | 76 | 3 | 53 | 25.333333 | 0.464789 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0.6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e80fa923d58cad255392248b28dd003deaaa9c7d | 15,179 | py | Python | conduction/unused/onlat_2d_variable_flux.py | tab10/conduction | b75c76cf57e10147433e85276c707317e4ad8d93 | [
"CNRI-Python"
] | 1 | 2021-01-24T20:34:59.000Z | 2021-01-24T20:34:59.000Z | conduction/unused/onlat_2d_variable_flux.py | tab10/PRTCNT | b75c76cf57e10147433e85276c707317e4ad8d93 | [
"CNRI-Python"
] | 18 | 2016-06-02T15:52:55.000Z | 2017-11-16T01:55:47.000Z | conduction/unused/onlat_2d_variable_flux.py | tab10/PRTCNT | b75c76cf57e10147433e85276c707317e4ad8d93 | [
"CNRI-Python"
] | 1 | 2016-06-03T04:13:17.000Z | 2016-06-03T04:13:17.000Z | from __future__ import division
from builtins import range
from past.utils import old_div
import logging
import numpy as np
import time
from mpi4py import MPI
from conduction import *
def serial_method(grid_size, tube_length, tube_radius, num_tubes, orientation, timesteps, save_loc_data,
quiet, save_loc_plots, save_dir, k_convergence_tolerance, begin_cov_check,
k_conv_error_buffer, plot_save_dir, gen_plots, kapitza, prob_m_cn, run_to_convergence,
num_walkers):
walker_data_save_dir = plot_save_dir + "/walker_locations"
walker_plot_save_dir = plot_save_dir + "/walker_plots"
grid = creation.Grid2D_onlat(grid_size, tube_length, num_tubes, orientation, tube_radius)
if gen_plots:
plots.plot_two_d_random_walk_setup(grid, quiet, plot_save_dir)
# plots.plot_check_array_2d(grid, quiet, plot_save_dir, gen_plots)
grid_range = [[0, grid.size + 1], [0, grid.size + 1]]
bins = grid.size + 1
H = np.zeros((grid.size + 1, grid.size + 1), dtype=int)
i = 0
k_list = []
dt_dx_list = []
heat_flux_list = []
k_convergence_err_list = []
k_convergence_err = 1.0
xedges = list(range(0, bins))
yedges = list(range(0, bins))
    start = time.perf_counter()  # time.clock() was deprecated and removed in Python 3.8
if run_to_convergence:
while k_convergence_err > k_convergence_tolerance:
H = randomwalk_routine_2d_serial(grid, timesteps, save_loc_data, quiet, save_loc_plots,
plot_save_dir, walker_plot_save_dir, walker_data_save_dir,
gen_plots, kapitza, prob_m_cn, i, H)
i += 1
dt_dx, heat_flux, dt_dx_err, k, k_err, r2 = analysis.check_convergence_2d_onlat(H, i * 2,
grid.size, timesteps)
k_list.append(k)
dt_dx_list.append(dt_dx)
heat_flux_list.append(heat_flux)
logging.info("%d: R squared: %.4f, k: %.4E, dT/dx: %.4E" % (i, r2, k, dt_dx))
if i > begin_cov_check:
k_convergence_err = np.std(np.array(k_list[-k_conv_error_buffer:]), ddof=1)
k_convergence_val = np.mean(np.array(k_list[-k_conv_error_buffer:]))
k_convergence_err_list.append(k_convergence_err)
logging.info("k: %.4E" % k_convergence_val)
logging.info("k error: %.4E" % k_convergence_err)
else:
for i in range(old_div(num_walkers, 2)):
H = randomwalk_routine_2d_serial(grid, timesteps, save_loc_data, quiet, save_loc_plots, plot_save_dir,
walker_plot_save_dir, walker_data_save_dir,
gen_plots, kapitza, prob_m_cn, i, H)
dt_dx, heat_flux, dt_dx_err, k, k_err, r2 = analysis.check_convergence_2d_onlat(H, i * 2,
grid.size, timesteps)
k_list.append(k)
dt_dx_list.append(dt_dx)
heat_flux_list.append(heat_flux)
logging.info("%d: R squared: %.4f, k: %.4E, dT/dx: %.4E" % (i, r2, k, dt_dx))
if i > begin_cov_check:
k_convergence_err = np.std(np.array(k_list[-k_conv_error_buffer:]), ddof=1)
k_convergence_val = np.mean(np.array(k_list[-k_conv_error_buffer:]))
k_convergence_err_list.append(k_convergence_err)
logging.info("k: %.4E" % k_convergence_val)
logging.info("k error: %.4E" % k_convergence_err)
    end = time.perf_counter()
logging.info("Simulation has converged with %d total walkers" % (i * 2))
logging.info("Finished random walks")
logging.info("Serial simulation time was %.4f s" % (end - start))
walk_sec = old_div((i * 2), (end - start))
logging.info("Crunched %.4f walkers/second" % walk_sec)
temp_profile = plots.plot_colormap_2d(grid, H, quiet, plot_save_dir, gen_plots)
    if gen_plots:
plots.plot_k_convergence(k_list, quiet, plot_save_dir)
plots.plot_k_convergence_err(k_convergence_err_list, quiet, plot_save_dir, begin_cov_check)
plots.plot_dt_dx(dt_dx_list, quiet, plot_save_dir)
plots.plot_heat_flux(heat_flux_list, quiet, plot_save_dir)
temp_gradient_x = plots.plot_temp_gradient_2d_onlat(grid, temp_profile, xedges, yedges, quiet,
plot_save_dir, gradient_cutoff=0)
gradient_avg, gradient_std = plots.plot_linear_temp(temp_profile, grid_size, quiet, plot_save_dir,
gen_plots)
analysis.final_conductivity_2d_onlat(i * 2, grid.size, timesteps, gradient_avg, gradient_std,
k_convergence_err, num_tubes, plot_save_dir, k_convergence_val,
prob_m_cn, gradient_cutoff=0)
max_temp = np.max(temp_profile)
min_temp = np.min(temp_profile)
diff_temp = np.abs(max_temp) - np.abs(min_temp)
logging.info("Max temp is %d, min temp is %d, with difference %d" % (max_temp, min_temp, diff_temp))
logging.info("Complete")
def sim_2d_onlat_MPI(grid_size, tube_length, tube_radius, num_tubes, orientation, timesteps, save_loc_data,
quiet, save_loc_plots, save_dir, k_convergence_tolerance, begin_cov_check,
k_conv_error_buffer, plot_save_dir, gen_plots, kapitza, prob_m_cn, rank, size,
run_to_convergence, num_walkers):
comm = MPI.COMM_WORLD
walker_data_save_dir = plot_save_dir + "/walker_locations"
walker_plot_save_dir = plot_save_dir + "/walker_plots"
if rank == 0:
grid = creation.Grid2D_onlat(grid_size, tube_length, num_tubes, orientation, tube_radius)
if gen_plots:
plots.plot_two_d_random_walk_setup(grid, quiet, plot_save_dir)
            # plots.plot_check_array_2d(grid, quiet, plot_save_dir, gen_plots)
else:
grid = None
grid = comm.bcast(grid, root=0)
grid_range = [[0, grid.size + 1], [0, grid.size + 1]]
bins = grid.size + 1
H = np.zeros((grid.size + 1, grid.size + 1), dtype=int)
tot_H = np.zeros((grid.size + 1, grid.size + 1), dtype=int)
i = 0
k_list = []
dt_dx_list = []
heat_flux_list = []
k_convergence_err_list = []
k_convergence_err = 1.0
xedges = list(range(0, bins))
yedges = list(range(0, bins))
start = MPI.Wtime()
if run_to_convergence:
while k_convergence_err > k_convergence_tolerance:
tot_H = randomwalk_routine_2d_MPI(grid, timesteps, save_loc_data, quiet, save_loc_plots, plot_save_dir,
walker_plot_save_dir, walker_data_save_dir, gen_plots, kapitza,
prob_m_cn, i, H, rank, comm, tot_H)
i += 1
if rank == 0:
dt_dx, heat_flux, dt_dx_err, k, k_err, r2 = analysis.check_convergence_2d_onlat(tot_H, i * 2 * size,
grid.size,
timesteps)
k_list.append(k)
dt_dx_list.append(dt_dx)
heat_flux_list.append(heat_flux)
logging.info("%d: R squared: %.4f, k: %.4E" % (i * size, r2, k))
comm.Barrier()
if (i * size) > begin_cov_check:
if rank == 0:
k_convergence_err = np.std(np.array(k_list[-k_conv_error_buffer:]), ddof=1)
k_convergence_val = np.mean(np.array(k_list[-k_conv_error_buffer:]))
k_convergence_err_list.append(k_convergence_err)
logging.info("k: %.4E" % k_convergence_val)
logging.info("k error: %.4E" % k_convergence_err)
else:
k_convergence_err = None
k_convergence_val = None
k_convergence_err = comm.bcast(k_convergence_err, root=0)
k_convergence_val = comm.bcast(k_convergence_val, root=0)
comm.Barrier()
else:
for i in range(old_div(num_walkers, (2 * size))): # rounds down total walkers slightly
tot_H = randomwalk_routine_2d_MPI(grid, timesteps, save_loc_data, quiet, save_loc_plots, plot_save_dir,
walker_plot_save_dir, walker_data_save_dir, gen_plots, kapitza,
prob_m_cn, i, H, rank, comm, tot_H)
if rank == 0:
dt_dx, heat_flux, dt_dx_err, k, k_err, r2 = analysis.check_convergence_2d_onlat(tot_H, i * 2 * size,
grid.size,
timesteps)
k_list.append(k)
dt_dx_list.append(dt_dx)
heat_flux_list.append(heat_flux)
logging.info("%d: R squared: %.4f, k: %.4E" % (i * size, r2, k))
comm.Barrier()
if (i * size) > begin_cov_check:
if rank == 0:
k_convergence_err = np.std(np.array(k_list[-k_conv_error_buffer:]), ddof=1)
k_convergence_val = np.mean(np.array(k_list[-k_conv_error_buffer:]))
k_convergence_err_list.append(k_convergence_err)
logging.info("k: %.4E" % k_convergence_val)
logging.info("k error: %.4E" % k_convergence_err)
else:
k_convergence_err = None
k_convergence_val = None
k_convergence_err = comm.bcast(k_convergence_err, root=0)
k_convergence_val = comm.bcast(k_convergence_val, root=0)
comm.Barrier()
if rank == 0:
end = MPI.Wtime()
logging.info("Simulation has converged with %d total walkers" % (i * 2 * size))
logging.info("Finished random walks")
logging.info("Using %d cores, parallel simulation time was %.4f s" % (size, end - start))
walk_sec = old_div((i * 2 * size), (end - start))
logging.info("Crunched %.4f walkers/second" % walk_sec)
temp_profile = plots.plot_colormap_2d(grid, tot_H, quiet, plot_save_dir, gen_plots)
if gen_plots:
plots.plot_k_convergence(k_list, quiet, plot_save_dir)
plots.plot_k_convergence_err(k_convergence_err_list, quiet, plot_save_dir, begin_cov_check)
plots.plot_dt_dx(dt_dx_list, quiet, plot_save_dir)
plots.plot_heat_flux(heat_flux_list, quiet, plot_save_dir)
temp_gradient_x = plots.plot_temp_gradient_2d_onlat(grid, temp_profile, xedges, yedges, quiet,
plot_save_dir, gradient_cutoff=0)
gradient_avg, gradient_std = plots.plot_linear_temp(temp_profile, grid_size, quiet, plot_save_dir,
gen_plots)
analysis.final_conductivity_2d_onlat(i * 2 * size, grid.size, timesteps, gradient_avg, gradient_std,
k_convergence_err, num_tubes, plot_save_dir, k_convergence_val,
prob_m_cn, gradient_cutoff=0)
max_temp = np.max(temp_profile)
min_temp = np.min(temp_profile)
diff_temp = np.abs(max_temp) - np.abs(min_temp)
logging.info("Max temp is %d, min temp is %d, with difference %d" % (max_temp, min_temp, diff_temp))
logging.info("Complete")
def randomwalk_routine_2d_serial(grid, timesteps, save_loc_data, quiet, save_loc_plots, plot_save_dir,
walker_plot_save_dir, walker_data_save_dir, gen_plots, kapitza, prob_m_cn,
i, H):
# run hot walker
# logging.info("Start hot walker %d" % (i+1))
walker = randomwalk.runrandomwalk_2d_onlat(grid, timesteps, 'hot', kapitza, prob_m_cn)
if save_loc_data:
run.save_walker_loc(walker, walker_data_save_dir, i, 'hot')
    # note: the original used bitwise '&' with chained comparisons, which parses as
    # i == (0 & save_loc_plots) == (False & gen_plots) == True and is always False
    if i == 0 and not save_loc_plots and gen_plots:  # always save one example trajectory plot
        plots.plot_walker_path_2d_onlat(walker, grid.size, 'hot', quiet, i + 1, plot_save_dir)
    elif save_loc_plots and gen_plots:
        plots.plot_walker_path_2d_onlat(walker, grid.size, 'hot', quiet, i + 1, walker_plot_save_dir)
H[walker.pos[-1][0], walker.pos[-1][1]] += 1
# run cold walker
# logging.info("Start cold walker %d" % (i+1))
walker = randomwalk.runrandomwalk_2d_onlat(grid, timesteps, 'cold', kapitza, prob_m_cn)
if save_loc_data:
run.save_walker_loc(walker, walker_data_save_dir, i, 'cold')
    if i == 0 and not save_loc_plots and gen_plots:
        plots.plot_walker_path_2d_onlat(walker, grid.size, 'cold', quiet, i + 1, plot_save_dir)
    elif save_loc_plots and gen_plots:
        plots.plot_walker_path_2d_onlat(walker, grid.size, 'cold', quiet, i + 1, walker_plot_save_dir)
H[walker.pos[-1][0], walker.pos[-1][1]] -= 1
return H
def randomwalk_routine_2d_MPI(grid, timesteps, save_loc_data, quiet, save_loc_plots, plot_save_dir,
walker_plot_save_dir, walker_data_save_dir, gen_plots, kapitza, prob_m_cn,
i, H, rank, comm, tot_H):
# run hot walker
# logging.info("Start hot walker %d" % (i+1))
walker = randomwalk.runrandomwalk_2d_onlat(grid, timesteps, 'hot', kapitza, prob_m_cn)
if rank == 0:
if save_loc_data:
run.save_walker_loc(walker, walker_data_save_dir, i, 'hot')
        # note: the original used bitwise '&' with chained comparisons, which parses as
        # i == (0 & save_loc_plots) == (False & gen_plots) == True and is always False
        if i == 0 and not save_loc_plots and gen_plots:  # always save one example trajectory plot
            plots.plot_walker_path_2d_onlat(walker, grid.size, 'hot', quiet, i + 1, plot_save_dir)
        elif save_loc_plots and gen_plots:
            plots.plot_walker_path_2d_onlat(walker, grid.size, 'hot', quiet, i + 1, walker_plot_save_dir)
H[walker.pos[-1][0], walker.pos[-1][1]] += 1
# run cold walker
# logging.info("Start cold walker %d" % (i+1))
walker = randomwalk.runrandomwalk_2d_onlat(grid, timesteps, 'cold', kapitza, prob_m_cn)
if rank == 0:
if save_loc_data:
run.save_walker_loc(walker, walker_data_save_dir, i, 'cold')
        if i == 0 and not save_loc_plots and gen_plots:
            plots.plot_walker_path_2d_onlat(walker, grid.size, 'cold', quiet, i + 1, plot_save_dir)
        elif save_loc_plots and gen_plots:
            plots.plot_walker_path_2d_onlat(walker, grid.size, 'cold', quiet, i + 1, walker_plot_save_dir)
H[walker.pos[-1][0], walker.pos[-1][1]] -= 1
comm.Reduce(H, tot_H, op=MPI.SUM, root=0)
# H is updated on every core for every i independently
# tot_H is the total across all cores
comm.Barrier()
return tot_H
| 50.936242 | 116 | 0.592463 | 2,068 | 15,179 | 3.99178 | 0.084139 | 0.052574 | 0.063961 | 0.034888 | 0.939431 | 0.927801 | 0.927317 | 0.917383 | 0.911811 | 0.911811 | 0 | 0.015454 | 0.309375 | 15,179 | 297 | 117 | 51.107744 | 0.772012 | 0.037684 | 0 | 0.807531 | 0 | 0 | 0.04962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016736 | false | 0 | 0.033473 | 0 | 0.058577 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e82d1bf093363dd2bb6f8bdb887fec9848242359 | 7,367 | py | Python | homeworks/HW5/chemkin.py | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
] | null | null | null | homeworks/HW5/chemkin.py | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
] | null | null | null | homeworks/HW5/chemkin.py | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
] | null | null | null | import numpy as np
import reaction_coeffs
def progress_rate_1(nu, x, k):
'''Returns the progress rate for a reaction of the form: nu_A A + nu_B B -> nu_C C
INPUTS
=======
nu: 3-element list or array
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of
species A, B and C
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: float, except the following cases:
If nu or x is not a 3-element list or array, a TypeError will be raised;
if nu contains non-positive element(s), a ValueError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_1([2, 1, 1], [1, 2, 3], 10)
20
>>> progress_rate_1([1, 1, 1], [4, 2, 3], 10)
80
'''
try:
if len(nu) != 3:
raise TypeError('nu must be a 3 element list or array.')
    except Exception:
raise TypeError('nu must be a 3 element list or array.')
if not all([nu_ > 0 for nu_ in nu]):
raise ValueError('All elements in nu must be positive.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
    except Exception:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = k*x[0]**nu[0]*x[1]**nu[1]
return omega
def progress_rate_2(nu_1, nu_2, x, k):
'''Returns the progress rate for a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_12 A + nu'_32 C -> nu''_22 B + nu''_32 C
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: 2-element tuple
Has the form (float, float) which corresponds to the progress rates of 2 reactions, except
the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_2([[1, 2], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
(40, 10)
>>> progress_rate_2([[1, 1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 3], 10)
(40, 90)
'''
    try:
        nu_1 = np.array(nu_1)
        if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be convertible to a 3 X 2 array.')
    except Exception:
        raise TypeError('nu_1 must be convertible to a 3 X 2 array.')
    try:
        nu_2 = np.array(nu_2)
        if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be convertible to a 3 X 2 array.')
    except Exception:
        raise TypeError('nu_2 must be convertible to a 3 X 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
    except Exception:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = (k*np.prod([x[i]**nu_1[i, 0] for i in range(3)]), k*np.prod([x[i]**nu_1[i, 1] for i in range(3)]))
return omega
def reaction_rate_1(nu_1, nu_2, x, k):
'''Returns the reaction rates for species in a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_32 C -> nu''_12 A + nu''_22 B
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
f: 3-element array
Has the form (float, float, float) which corresponds to the reaction rates of species A, B, C,
except the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> reaction_rate_1([[1, 0], [2, 0], [0, 2]], [[0, 1], [0, 2], [1, 0]], [1, 2, 1], 10)
array([-30, -60, 20])
>>> reaction_rate_1([[1, 0], [2, 0], [0, 1]], [[0, 1], [0, 2], [1, 0]], [1, 2, 3], 10)
array([-10, -20, 10])
'''
    try:
        nu_1 = np.array(nu_1)
        if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be convertible to a 3 X 2 array.')
    except Exception:
        raise TypeError('nu_1 must be convertible to a 3 X 2 array.')
    try:
        nu_2 = np.array(nu_2)
        if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be convertible to a 3 X 2 array.')
    except Exception:
        raise TypeError('nu_2 must be convertible to a 3 X 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
    except Exception:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = progress_rate_2(nu_1, nu_2, x, k)
omega = np.array(omega).reshape((len(omega), 1))
f = np.sum((np.dot(nu_2, omega) - np.dot(nu_1, omega)), axis=1)
    return f
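The reaction-rate formula used by `reaction_rate_1`, f_i = sum_j (nu''_ij - nu'_ij) * omega_j with omega_j = k * prod_i x_i ** nu'_ij, can be checked with a standalone pure-Python sketch (independent of this module, no NumPy needed):

```python
# Standalone check of the reaction-rate formula: progress rate of each
# reaction from the reactant coefficients, then species rates from the
# net stoichiometric change.
def reaction_rates(nu_react, nu_prod, x, k):
    n_species = len(nu_react)
    n_reactions = len(nu_react[0])
    # omega_j = k * prod_i x_i ** nu'_ij
    omega = []
    for j in range(n_reactions):
        w = k
        for i in range(n_species):
            w *= x[i] ** nu_react[i][j]
        omega.append(w)
    # f_i = sum_j (nu''_ij - nu'_ij) * omega_j
    return [sum((nu_prod[i][j] - nu_react[i][j]) * omega[j]
                for j in range(n_reactions))
            for i in range(n_species)]

# Same system as the first reaction_rate_1 doctest above:
rates = reaction_rates([[1, 0], [2, 0], [0, 2]], [[0, 1], [0, 2], [1, 0]], [1, 2, 1], 10)
# rates == [-30, -60, 20]
```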
# File: passbase/api/identity_api.py (repo passbase/passbase-python, MIT license)
# coding: utf-8
"""
Verification API
# Introduction

<span class=\"subtext\"> Welcome to the Passbase Verifications API docs. This documentation will help you understand our models and the Verification API with its endpoints. Based on this you can build your own system (i.e. verification) and hook it up to Passbase. In case of feedback or questions you can reach us under this email address: [developer@passbase.com](mailto:developer@passbase.com). </span>

A User submits a video selfie and valid identifying __Resources__ during a __Verification__ guided by the Passbase client-side integration. Once all the necessary __Resources__ are submitted, __Data points__ are extracted, digitized, and authenticated. These Data points then become part of the User's __Identity__. The User then consents to share __Resources__ and/or __Data points__ from their Identity with you. This information is passed to you and can be used to make decisions about a User (e.g. activate account). The table below explains our terminology further.

| Term | Description |
|------|-------------|
| [Identity](#tag/identity_model) | A set of Data points and Resources related to and owned by one single User. This data can be accessed by you through a Verification. |
| Data points | Any data about a User extracted from a Resource (E.g. Passport Number, or Age). |
| [Resource](#tag/resource_model) | A source document used to generate the Data points for a User (E.g. Passport). |
| [User](#tag/user_model) | The owner of an email address associated with an Identity. |
| Verification | A transaction through which a User consents to share Data points with you. If the Data points you request are not already available in the User's Identity, the Passbase client will ask the User to submit the necessary Resource required to extract them. |
| Re-authentication (login) | A transaction through which a User can certify the ownership of Personal data previously shared through an Authentication. |

# Authentication

<span class=\"subtext\"> There are two forms of authentication for the API: <br/>• API Key <br/>• Bearer JWT Token </span>
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from passbase.api_client import ApiClient
class IdentityApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_identity_by_id(self, id, **kwargs): # noqa: E501
"""Get identity # noqa: E501
Retrieve an identity by providing the identity ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_by_id(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Unique ID of the identity to return (required)
:return: Identity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_identity_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.get_identity_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def get_identity_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Get identity # noqa: E501
Retrieve an identity by providing the identity ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_by_id_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Unique ID of the identity to return (required)
:return: Identity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_identity_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_identity_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/identities/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Identity', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_identity_resource_by_id(self, id, resource_id, **kwargs): # noqa: E501
"""Get resource # noqa: E501
Get a resource attached to an identity by providing the resource ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_resource_by_id(id, resource_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param str resource_id: Resource id (required)
:return: Resource
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_identity_resource_by_id_with_http_info(id, resource_id, **kwargs) # noqa: E501
else:
(data) = self.get_identity_resource_by_id_with_http_info(id, resource_id, **kwargs) # noqa: E501
return data
def get_identity_resource_by_id_with_http_info(self, id, resource_id, **kwargs): # noqa: E501
"""Get resource # noqa: E501
Get a resource attached to an identity by providing the resource ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_resource_by_id_with_http_info(id, resource_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param str resource_id: Resource id (required)
:return: Resource
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'resource_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_identity_resource_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_identity_resource_by_id`") # noqa: E501
# verify the required parameter 'resource_id' is set
if ('resource_id' not in params or
params['resource_id'] is None):
raise ValueError("Missing the required parameter `resource_id` when calling `get_identity_resource_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
if 'resource_id' in params:
path_params['resource_id'] = params['resource_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/identity/{id}/resources/{resource_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Resource', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_identity_resource_file_by_id(self, id, resource_id, resource_file_id, **kwargs): # noqa: E501
"""Get resource file # noqa: E501
Get a raw resource file attached to an identity by providing the resource ID and the resource file ID. This is a protected route and you'll need a specific government authorization to access it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_resource_file_by_id(id, resource_id, resource_file_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param str resource_id: Resource id (required)
:param str resource_file_id: Resource file id (required)
:return: ResourceFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_identity_resource_file_by_id_with_http_info(id, resource_id, resource_file_id, **kwargs) # noqa: E501
else:
(data) = self.get_identity_resource_file_by_id_with_http_info(id, resource_id, resource_file_id, **kwargs) # noqa: E501
return data
def get_identity_resource_file_by_id_with_http_info(self, id, resource_id, resource_file_id, **kwargs): # noqa: E501
"""Get resource file # noqa: E501
Get a raw resource file attached to an identity by providing the resource ID and the resource file ID. This is a protected route and you'll need a specific government authorization to access it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_resource_file_by_id_with_http_info(id, resource_id, resource_file_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param str resource_id: Resource id (required)
:param str resource_file_id: Resource file id (required)
:return: ResourceFile
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'resource_id', 'resource_file_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_identity_resource_file_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_identity_resource_file_by_id`") # noqa: E501
# verify the required parameter 'resource_id' is set
if ('resource_id' not in params or
params['resource_id'] is None):
raise ValueError("Missing the required parameter `resource_id` when calling `get_identity_resource_file_by_id`") # noqa: E501
# verify the required parameter 'resource_file_id' is set
if ('resource_file_id' not in params or
params['resource_file_id'] is None):
raise ValueError("Missing the required parameter `resource_file_id` when calling `get_identity_resource_file_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
if 'resource_id' in params:
path_params['resource_id'] = params['resource_id'] # noqa: E501
if 'resource_file_id' in params:
path_params['resource_file_id'] = params['resource_file_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/identity/{id}/resources/{resource_id}/resource_files/{resource_file_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ResourceFile', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_identities(self, **kwargs): # noqa: E501
"""List identities # noqa: E501
List all the identities retrievable by the provided API Secret Key. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_identities(async_req=True)
>>> result = thread.get()
:param async_req bool
:param int limit:
:param str cursor:
:return: PaginatedIdentities
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_identities_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_identities_with_http_info(**kwargs) # noqa: E501
return data
def list_identities_with_http_info(self, **kwargs): # noqa: E501
"""List identities # noqa: E501
List all the identities retrievable by the provided API Secret Key. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_identities_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param int limit:
:param str cursor:
:return: PaginatedIdentities
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['limit', 'cursor'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_identities" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'cursor' in params:
query_params.append(('cursor', params['cursor'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/identities', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PaginatedIdentities', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
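`list_identities` above exposes cursor pagination through its `limit` and `cursor` parameters. A hedged sketch of how a caller might drain such an endpoint follows; the response shape (`{'data': [...], 'cursor': ...}`) and both function names are assumptions for illustration, not the documented Passbase schema:

```python
# Hypothetical consumer of a cursor-paginated endpoint: keep requesting pages
# with the returned cursor until the server signals exhaustion with None.
def collect_all(fetch_page, limit=2):
    cursor, items = None, []
    while True:
        page = fetch_page(limit=limit, cursor=cursor)
        items.extend(page['data'])
        cursor = page.get('cursor')
        if cursor is None:
            return items

# Stub standing in for the HTTP call, for demonstration only:
def fake_fetch(limit, cursor):
    data = ['id_a', 'id_b', 'id_c', 'id_d', 'id_e']
    start = cursor or 0
    nxt = start + limit if start + limit < len(data) else None
    return {'data': data[start:start + limit], 'cursor': nxt}
```

With `limit=2` this walks the stub in three requests and concatenates the pages in order.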
def list_identity_resources(self, id, **kwargs): # noqa: E501
"""List resources # noqa: E501
List resources attached to an identity by providing the identity ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_identity_resources(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param int limit:
:param str cursor:
:return: PaginatedResources
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_identity_resources_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.list_identity_resources_with_http_info(id, **kwargs) # noqa: E501
return data
def list_identity_resources_with_http_info(self, id, **kwargs): # noqa: E501
"""List resources # noqa: E501
List resources attached to an identity by providing the identity ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_identity_resources_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: Identity id (required)
:param int limit:
:param str cursor:
:return: PaginatedResources
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'limit', 'cursor'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_identity_resources" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `list_identity_resources`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'cursor' in params:
query_params.append(('cursor', params['cursor'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/identity/{id}/resources', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PaginatedResources', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
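Every generated method above shares the same keyword-argument validation pattern: snapshot `locals()`, reject unknown keyword arguments, fold the accepted ones into `params`, and require mandatory parameters before building the request. A standalone sketch of that pattern (the `demo_call` name is a hypothetical stand-in, not part of the Passbase SDK):

```python
# Standalone sketch of the swagger-codegen kwargs-validation idiom used by the
# IdentityApi methods above.
def demo_call(id, **kwargs):
    all_params = ['id', 'async_req', '_return_http_data_only',
                  '_preload_content', '_request_timeout']
    params = locals()
    # Reject anything the method does not recognise, accept the rest.
    for key, val in params['kwargs'].items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method demo_call" % key)
        params[key] = val
    del params['kwargs']
    # Enforce required parameters.
    if params.get('id') is None:
        raise ValueError('Missing the required parameter `id` when calling `demo_call`')
    return params
```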
# File: django-covid/dashboard/views.py (repo Falcao5/COVID-19, CC-BY-4.0 license)
from django.shortcuts import render
# ref: https://docs.djangoproject.com/en/3.0/intro/tutorial03/
from django.http import HttpResponse
from django.core import serializers
from dashboard.models import Regione, Nazione, Provincia
def calculateDetailsForState():
labels = set()
ricoverati_con_sintomi = []
terapia_intensiva = []
totale_ospedalizzati = []
isolamento_domiciliare = []
attualmente_positivi = []
dimessi_guariti = []
deceduti = []
totale_casi = []
tamponi = []
perc_casi_conclusi = []
perc_guariti = []
perc_successo = []
nuovi_tamponi = []
nuovi_positivi = []
nuovi_guariti = []
nuovi_deceduti = []
perc_nuovi_positivi_per_tamponi = []
# regions = Regione.objects.all()
ita = Nazione.objects.all().order_by('data')
# init
first = ita.first()
positivi_ieri = first.totale_casi
guariti_ieri = first.dimessi_guariti
deceduti_ieri = first.deceduti
tamponi_ieri = first.tamponi
for r in ita:
labels.add(r.data) # store all distinct date values
casi_conclusi_oggi = r.dimessi_guariti + r.deceduti
if (r.totale_casi > 0):
perc_casi_conclusi_oggi = casi_conclusi_oggi / r.totale_casi
perc_guariti_oggi = r.dimessi_guariti / r.totale_casi
else:
perc_casi_conclusi_oggi = 0
perc_guariti_oggi = 0
if (casi_conclusi_oggi > 0):
perc_successo_oggi = r.dimessi_guariti / casi_conclusi_oggi
else:
perc_successo_oggi = 0
nuovi_tamponi_oggi = r.tamponi - tamponi_ieri
nuovi_positivi_oggi = r.totale_casi - positivi_ieri
nuovi_guariti_oggi = r.dimessi_guariti - guariti_ieri
nuovi_deceduti_oggi = r.deceduti - deceduti_ieri
if (nuovi_tamponi_oggi > 0):
perc_nuovi_positivi_per_tamponi_oggi = nuovi_positivi_oggi / nuovi_tamponi_oggi
else:
perc_nuovi_positivi_per_tamponi_oggi = 0
ricoverati_con_sintomi.append(r.ricoverati_con_sintomi)
terapia_intensiva.append(r.terapia_intensiva)
totale_ospedalizzati.append(r.totale_ospedalizzati)
isolamento_domiciliare.append(r.isolamento_domiciliare)
attualmente_positivi.append(r.totale_positivi)
dimessi_guariti.append(r.dimessi_guariti)
deceduti.append(r.deceduti)
totale_casi.append(r.totale_casi)
tamponi.append(r.tamponi)
perc_casi_conclusi.append(perc_casi_conclusi_oggi * 100)
perc_guariti.append(perc_guariti_oggi * 100)
perc_successo.append(perc_successo_oggi * 100)
nuovi_tamponi.append(nuovi_tamponi_oggi)
nuovi_positivi.append(nuovi_positivi_oggi)
nuovi_guariti.append(nuovi_guariti_oggi)
nuovi_deceduti.append(nuovi_deceduti_oggi)
perc_nuovi_positivi_per_tamponi.append(perc_nuovi_positivi_per_tamponi_oggi * 100)
positivi_ieri = r.totale_casi
guariti_ieri = r.dimessi_guariti
deceduti_ieri = r.deceduti
tamponi_ieri = r.tamponi
labels = sorted(labels)
result = {
"labels": labels,
"ricoverati_con_sintomi": ricoverati_con_sintomi,
"terapia_intensiva": terapia_intensiva,
"totale_ospedalizzati": totale_ospedalizzati,
"isolamento_domiciliare": isolamento_domiciliare,
"attualmente_positivi": attualmente_positivi,
"dimessi_guariti": dimessi_guariti,
"deceduti": deceduti,
"totale_casi": totale_casi,
"tamponi": tamponi,
"perc_casi_conclusi": perc_casi_conclusi,
"perc_guariti": perc_guariti,
"perc_successo": perc_successo,
"nuovi_tamponi": nuovi_tamponi,
"nuovi_positivi": nuovi_positivi,
"nuovi_guariti": nuovi_guariti,
"nuovi_deceduti": nuovi_deceduti,
"perc_nuovi_positivi_per_tamponi": perc_nuovi_positivi_per_tamponi,
}
return result


def calculateDetailsForRegion(regionCode):
    labels = set()
    ricoverati_con_sintomi = []
    terapia_intensiva = []
    totale_ospedalizzati = []
    isolamento_domiciliare = []
    attualmente_positivi = []
    dimessi_guariti = []
    deceduti = []
    totale_casi = []
    tamponi = []
    perc_casi_conclusi = []
    perc_guariti = []
    perc_successo = []
    nuovi_tamponi = []
    nuovi_positivi = []
    nuovi_guariti = []
    nuovi_deceduti = []
    perc_nuovi_positivi_per_tamponi = []
    region = Regione.objects.filter(codice_regione=regionCode).order_by('data')
    # initialise yesterday's counters from the earliest record
    first = region.first()
    regionName = first.denominazione_regione
    positivi_ieri = first.totale_casi
    guariti_ieri = first.dimessi_guariti
    deceduti_ieri = first.deceduti
    tamponi_ieri = first.tamponi
    for r in region:
        labels.add(r.data)  # store all distinct date values
        casi_conclusi_oggi = r.dimessi_guariti + r.deceduti
        if r.totale_casi > 0:
            perc_casi_conclusi_oggi = casi_conclusi_oggi / r.totale_casi
            perc_guariti_oggi = r.dimessi_guariti / r.totale_casi
        else:
            perc_casi_conclusi_oggi = 0
            perc_guariti_oggi = 0
        if casi_conclusi_oggi > 0:
            perc_successo_oggi = r.dimessi_guariti / casi_conclusi_oggi
        else:
            perc_successo_oggi = 0
        nuovi_tamponi_oggi = r.tamponi - tamponi_ieri
        nuovi_positivi_oggi = r.totale_casi - positivi_ieri
        nuovi_guariti_oggi = r.dimessi_guariti - guariti_ieri
        nuovi_deceduti_oggi = r.deceduti - deceduti_ieri
        if nuovi_tamponi_oggi > 0:
            perc_nuovi_positivi_per_tamponi_oggi = nuovi_positivi_oggi / nuovi_tamponi_oggi
        else:
            perc_nuovi_positivi_per_tamponi_oggi = 0
        ricoverati_con_sintomi.append(r.ricoverati_con_sintomi)
        terapia_intensiva.append(r.terapia_intensiva)
        totale_ospedalizzati.append(r.totale_ospedalizzati)
        isolamento_domiciliare.append(r.isolamento_domiciliare)
        attualmente_positivi.append(r.totale_positivi)
        dimessi_guariti.append(r.dimessi_guariti)
        deceduti.append(r.deceduti)
        totale_casi.append(r.totale_casi)
        tamponi.append(r.tamponi)
        perc_casi_conclusi.append(perc_casi_conclusi_oggi * 100)
        perc_guariti.append(perc_guariti_oggi * 100)
        perc_successo.append(perc_successo_oggi * 100)
        nuovi_tamponi.append(nuovi_tamponi_oggi)
        nuovi_positivi.append(nuovi_positivi_oggi)
        nuovi_guariti.append(nuovi_guariti_oggi)
        nuovi_deceduti.append(nuovi_deceduti_oggi)
        perc_nuovi_positivi_per_tamponi.append(perc_nuovi_positivi_per_tamponi_oggi * 100)
        positivi_ieri = r.totale_casi
        guariti_ieri = r.dimessi_guariti
        deceduti_ieri = r.deceduti
        tamponi_ieri = r.tamponi
    labels = sorted(labels)
    result = {
        "regionName": regionName,
        "labels": labels,
        "ricoverati_con_sintomi": ricoverati_con_sintomi,
        "terapia_intensiva": terapia_intensiva,
        "totale_ospedalizzati": totale_ospedalizzati,
        "isolamento_domiciliare": isolamento_domiciliare,
        "attualmente_positivi": attualmente_positivi,
        "dimessi_guariti": dimessi_guariti,
        "deceduti": deceduti,
        "totale_casi": totale_casi,
        "tamponi": tamponi,
        "perc_casi_conclusi": perc_casi_conclusi,
        "perc_guariti": perc_guariti,
        "perc_successo": perc_successo,
        "nuovi_tamponi": nuovi_tamponi,
        "nuovi_positivi": nuovi_positivi,
        "nuovi_guariti": nuovi_guariti,
        "nuovi_deceduti": nuovi_deceduti,
        "perc_nuovi_positivi_per_tamponi": perc_nuovi_positivi_per_tamponi,
    }
    # NOTE: this query is never used before returning; kept from the original source
    province = Provincia.objects.filter(codice_regione=regionCode).order_by('data').values()  # list of Provincia dicts
    return result


def graph(request):
    if request.method == 'POST' and 'regionCode' in request.POST:
        # reject requests whose region code is not an integer
        try:
            int(request.POST['regionCode'])
        except ValueError:
            return  # NOTE: returns None here; Django expects an HttpResponse object
        regionCode = request.POST['regionCode']
        result = calculateDetailsForRegion(regionCode)
        return render(request, "dashboard/graph.html", result)


def home(request):
    if request.method == 'POST' and 'regionCode' in request.POST:
        return graph(request)
    regions = []
    regionsWithDifferentCode = Regione.objects.all().distinct('codice_regione')
    for region in regionsWithDifferentCode:
        regions.append({'code': region.codice_regione, 'name': region.denominazione_regione})

    def sort_key(e):
        return e['name']
    regions.sort(key=sort_key)
    ita = calculateDetailsForState()
    ita["regions"] = regions
    return render(request, "dashboard/home.html", ita)


def thanh_guong(request):
    return render(request, "thanh-guong/thanh-guong.html")
# --- File: tests/ManualTableau/TestIplManualTableau.py (repo: oIi123/TableauxProver, license: MIT) ---
import unittest
from src.builder_factory import LogicType
from src.Parser.PropParser import PropParser
from src.TableauxBuilder.BaseManualTableau import BaseManualTableau, BaseTableauxBuilder
from src.TableauxBuilder.IpcTableauxBuilder import IpcTableauxBuilder


def parse(expr: str):
    return PropParser.parse(expr).expr


class TestPlManualTableau(unittest.TestCase):
    def test_incorrect_1(self):
        expr = parse('(a->b)&(b->c)->(a->c)')
        l_expr = [
            parse('(a->b)&(b->c)'),
            parse('(a->c)'),
        ]
        tableau = IpcTableauxBuilder(false_exprs=[expr])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((None, expr, None), [l_expr], [[]], [[]], [])
        self.assertFalse(success)
    def test_incorrect_2(self):
        expr = parse('(a->b)&(b->c)->(a->c)')
        l_expr = [
            parse('(a->b)&(b->c)'),
            parse('(a->c)'),
        ]
        r_expr = [
            parse('(a->c)'),
        ]
        tableau = IpcTableauxBuilder(false_exprs=[expr])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((None, expr, None), [l_expr], [r_expr], [[]], [])
        self.assertFalse(success)

    def test_incorrect_3(self):
        expr = parse('(a->b)&(b->c)->(a->c)')
        l_expr = [
            parse('(a->b)&(b->c)'),
        ]
        r_expr = [
            parse('(b->c)'),
        ]
        tableau = IpcTableauxBuilder(false_exprs=[expr])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((None, expr, None), [l_expr], [r_expr], [[]], [])
        self.assertFalse(success)

    def test_incorrect_4(self):
        expr_t = parse('(a->b)&(b->c)->(a->c)')
        expr_f = parse('(a->b)&(b->c)->(a->c)')
        l_expr = [
            parse('(a->b)&(b->c)'),
        ]
        r_expr = [
            parse('(a->c)'),
        ]
        tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((expr_t, None, None), [l_expr], [r_expr], [[]], [])
        self.assertFalse(success)

    def test_incorrect_5(self):
        expr_t = parse('!!a')
        expr_f = parse('a|b')
        l_expr = []
        r_expr = [
            parse('a'),
        ]
        tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((None, expr_f, None), [l_expr], [r_expr], [[]], [])
        self.assertFalse(success)

    def test_incorrect_6(self):
        expr_t = parse('!!a')
        expr_f = parse('a|b')
        l_expr = [
            parse('a'),
        ]
        r_expr = []
        tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((expr_t, None, None), [l_expr], [r_expr], [[]], [])
        self.assertFalse(success)

    def test_correct_1(self):
        expr_t = parse('!!a')
        expr_f = parse('a|b')
        l_expr = []
        r_expr = []
        cf_expr = [
            parse('!a')
        ]
        tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
        manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
        success = manual_tableau.merge((expr_t, None, None), [l_expr], [r_expr], [cf_expr], [])
        self.assertTrue(success)
def test_correct_2(self):
expr_t = parse('!!a')
expr_f = parse('a|b')
l_expr = []
r_expr = [
parse('a'),
parse('b')
]
tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, expr_f, None), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_3(self):
expr_t = parse('!!a')
expr_f = parse('(a->b)&(b->c)->(a->c)')
l_expr = [
parse('(a->b)&(b->c)')
]
r_expr = [
parse('(a->c)')
]
tableau = IpcTableauxBuilder(true_exprs=[expr_t], false_exprs=[expr_f])
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, expr_f, None), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_4(self):
expr_t = [
parse('p->q'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
parse('!s')
]
r_expr = [
parse('!p')
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, expr_f[0], None), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_5(self):
expr_t = [
parse('p->q'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
[parse('q')], []
]
r_expr = [
[], [parse('p')]
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, [[],[]], [])
self.assertTrue(success)
def test_correct_6(self):
expr_t = [
parse('p->q'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
[], [parse('q')]
]
r_expr = [
[parse('p')], []
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, [[],[]], [])
self.assertTrue(success)
def test_correct_7(self):
expr_t = [
parse('a|b'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
[parse('b')], [parse('a')]
]
r_expr = [
[], []
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, [[],[]], [])
self.assertTrue(success)
def test_correct_8(self):
expr_t = [
parse('a|b'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
[parse('a')], [parse('b')]
]
r_expr = [
[], []
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, [[],[]], [])
self.assertTrue(success)
def test_correct_9(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
parse('a'),
parse('b&c')
]
r_expr = []
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_10(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
parse('a&b'),
parse('c')
]
r_expr = []
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((expr_t[0], None, None), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_11(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
expr_cf = [
parse('!a')
]
l_expr = [
parse('a')
]
r_expr = []
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs=expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, None, expr_cf[0]), [l_expr], [r_expr], [[]], [])
self.assertTrue(success)
def test_correct_12(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
expr_cf = [
parse('a&b')
]
l_expr = [[],[]]
r_expr = [[],[]]
cf_expr = [
[parse('a')],
[parse('b')]
]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs=expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertTrue(success)
def test_correct_13(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
expr_cf = [
parse('a|b')
]
l_expr = [[]]
r_expr = [[]]
cf_expr = [[
parse('a'),
parse('b')
]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs=expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertTrue(success)
def test_correct_14(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
expr_cf = [
parse('a->b')
]
l_expr = [[
parse('a')
]]
r_expr = [[
parse('b')
]]
cf_expr = [[]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs=expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertTrue(success)
def test_correct_15(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
expr_cf = [
parse('a<->b')
]
l_expr = [[]]
r_expr = [[]]
cf_expr = [[
parse('(a->b)&(b->a)')
]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs=expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
success = manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertTrue(success)
def test_merge_true_and_perm_1(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
parse('a&b'),
parse('c')
]
r_expr = []
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((expr_t[0], None, None), [l_expr], [r_expr], [[]], [])
sequent = tableau.sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('c'), sequent[BaseTableauxBuilder.true_atoms])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!s->!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_processed])
self.assertEqual(0, len(tableau.children))
def test_merge_true_and_perm_2(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
parse('b&c'),
parse('a')
]
r_expr = []
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((expr_t[0], None, None), [l_expr], [r_expr], [[]], [])
sequent = tableau.sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a'), sequent[BaseTableauxBuilder.true_atoms])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!s->!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_processed])
self.assertEqual(0, len(tableau.children))
def test_merge_false_impl(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
l_expr = [
parse('!s')
]
r_expr = [
parse('!p')
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, expr_f[0], None), [l_expr], [r_expr], [[]], [])
self.assertEqual(1, len(tableau.children))
sequent = tableau.children[0].sequent
self.assertEqual(4, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('!s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
def test_merge_true_impl(self):
expr_t = [
parse('q->r'),
parse('a&b&c'),
parse('r->s'),
]
expr_f = [
parse('!s->!p')
]
l_expr = [
[], [parse('r')],
]
r_expr = [
[parse('q')], [],
]
tableau = IpcTableauxBuilder(true_exprs=expr_t, false_exprs=expr_f)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, [[],[]], [])
sequent = tableau.sequent
self.assertEqual(2, len(tableau.children))
self.assertEqual(2, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_processed])
c_s_1 = tableau.children[0].sequent
c_s_2 = tableau.children[1].sequent
self.assertEqual(2, len(c_s_1[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('r->s'), c_s_1[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), c_s_1[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(c_s_1[BaseTableauxBuilder.false_atoms]))
self.assertIn(parse('q'), c_s_1[BaseTableauxBuilder.false_atoms])
self.assertEqual(0, len(c_s_1[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(c_s_1[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(c_s_1[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('q->r'), c_s_1[BaseTableauxBuilder.true_processed])
self.assertEqual(2, len(c_s_2[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('r->s'), c_s_2[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), c_s_2[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(c_s_2[BaseTableauxBuilder.true_atoms]))
self.assertIn(parse('r'), c_s_2[BaseTableauxBuilder.true_atoms])
self.assertEqual(0, len(c_s_2[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(c_s_2[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(c_s_2[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('q->r'), c_s_2[BaseTableauxBuilder.true_processed])
def test_merge_cf_or(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('s|q')
]
l_expr = []
r_expr = []
cf_expr = [
parse('s'),
parse('q')
]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, None, expr_cf[0]), [l_expr], [r_expr], [cf_expr], [])
self.assertEqual(0, len(tableau.children))
sequent = tableau.sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(2, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!s->!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertIn(parse('t&f'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertEqual(2, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertIn(parse('s'), sequent[BaseTableauxBuilder.certain_falsehood_atoms])
self.assertIn(parse('q'), sequent[BaseTableauxBuilder.certain_falsehood_atoms])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
def test_merge_cf_and(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('s&q')
]
l_expr = [[],[]]
r_expr = [[],[]]
cf_expr = [
[parse('s')],
[parse('q')]
]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertEqual(2, len(tableau.children))
sequent_1 = tableau.children[0].sequent
sequent_2 = tableau.children[1].sequent
self.assertEqual(3, len(sequent_1[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent_1[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent_1[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent_1[BaseTableauxBuilder.true_exprs])
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.false_exprs]))
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertEqual(1, len(sequent_1[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertIn(parse('s'), sequent_1[BaseTableauxBuilder.certain_falsehood_atoms])
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent_1[BaseTableauxBuilder.false_processed]))
self.assertEqual(3, len(sequent_2[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent_2[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent_2[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent_2[BaseTableauxBuilder.true_exprs])
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.false_exprs]))
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertEqual(1, len(sequent_2[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertIn(parse('q'), sequent_2[BaseTableauxBuilder.certain_falsehood_atoms])
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.true_atoms]))
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent_2[BaseTableauxBuilder.false_processed]))
def test_merge_cf_impl(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('s->q')
]
l_expr = [[
parse('s')
]]
r_expr = [[
parse('q')
]]
cf_expr = [[]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertEqual(1, len(tableau.children))
sequent = tableau.children[0].sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertIn(parse('s'), sequent[BaseTableauxBuilder.true_atoms])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertIn(parse('q'), sequent[BaseTableauxBuilder.false_atoms])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
def test_merge_cf_eq(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('s<->q')
]
l_expr = [[]]
r_expr = [[]]
cf_expr = [[
parse('(s->q)&(q->s)')
]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertEqual(0, len(tableau.children))
sequent = tableau.sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertEqual(2, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!s->!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertIn(parse('t&f'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertIn(parse('(s->q)&(q->s)'), sequent[BaseTableauxBuilder.certain_falsehood_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_processed]))
self.assertIn(parse('s<->q'), sequent[BaseTableauxBuilder.certain_falsehood_processed])
def test_merge_cf_not(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('!s')
]
l_expr = [[
parse('s')
]]
r_expr = [[]]
cf_expr = [[]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, None, expr_cf[0]), l_expr, r_expr, cf_expr, [])
self.assertEqual(1, len(tableau.children))
sequent = tableau.children[0].sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertIn(parse('s'), sequent[BaseTableauxBuilder.true_atoms])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_processed]))
self.assertIn(parse('!s'), sequent[BaseTableauxBuilder.certain_falsehood_processed])
def test_merge_true_not(self):
expr_t = [
parse('!s'),
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s->!p'),
parse('t&f')
]
expr_cf = [
parse('!s')
]
l_expr = [[]]
r_expr = [[]]
cf_expr = [[
parse('s')
]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((expr_t[0], None, None), l_expr, r_expr, cf_expr, [])
self.assertEqual(0, len(tableau.children))
sequent = tableau.sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertEqual(2, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertIn(parse('!s->!p'), sequent[BaseTableauxBuilder.false_exprs])
self.assertIn(parse('t&f'), sequent[BaseTableauxBuilder.false_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertIn(parse('!s'), sequent[BaseTableauxBuilder.certain_falsehood_exprs])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertIn(parse('s'), sequent[BaseTableauxBuilder.certain_falsehood_atoms])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertIn(parse('!s'), sequent[BaseTableauxBuilder.true_processed])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_processed]))
def test_merge_false_not(self):
expr_t = [
parse('a&b&c'),
parse('q->r'),
parse('r->s'),
]
expr_f = [
parse('!s'),
parse('!s->!p'),
parse('t&f'),
]
expr_cf = [
parse('!s')
]
l_expr = [[
parse('s')
]]
r_expr = [[]]
cf_expr = [[]]
tableau = IpcTableauxBuilder(true_exprs=expr_t,
false_exprs=expr_f,
cf_exprs = expr_cf)
manual_tableau = BaseManualTableau(LogicType.IPROPOSITIONAL, tableau)
manual_tableau.merge((None, expr_f[0], None), l_expr, r_expr, cf_expr, [])
self.assertEqual(1, len(tableau.children))
sequent = tableau.children[0].sequent
self.assertEqual(3, len(sequent[BaseTableauxBuilder.true_exprs]))
self.assertIn(parse('q->r'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('r->s'), sequent[BaseTableauxBuilder.true_exprs])
self.assertIn(parse('a&b&c'), sequent[BaseTableauxBuilder.true_exprs])
self.assertEqual(1, len(sequent[BaseTableauxBuilder.true_atoms]))
self.assertIn(parse('s'), sequent[BaseTableauxBuilder.true_atoms])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_exprs]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_atoms]))
self.assertEqual(1, len(sequent[BaseTableauxBuilder.certain_falsehood_exprs]))
self.assertIn(parse('!s'), sequent[BaseTableauxBuilder.certain_falsehood_exprs])
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_atoms]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.true_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.false_processed]))
self.assertEqual(0, len(sequent[BaseTableauxBuilder.certain_falsehood_processed]))
# --- File: dexy/reporters/nodegraph/__init__.py (repo: dsoto/dexy, license: MIT) ---
import dexy.reporters.nodegraph.d3
import dexy.reporters.nodegraph.text
import dexy.reporters.nodegraph.graphviz
# --- File: volcengine_ml_platform/util/metric.py (repo: Berumotto1/ml-platform-sdk-python, license: MIT) ---
import time
def current_ts():
return int(round(time.time() * 1000.0))
def cost_time(start_time):
return int(round(time.time() * 1000.0 - start_time))
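A minimal usage sketch of the two helpers above (the sleep is illustrative; both values are in milliseconds):

```python
import time

def current_ts():
    # current wall-clock time in milliseconds
    return int(round(time.time() * 1000.0))

def cost_time(start_time):
    # elapsed milliseconds since a current_ts() reading
    return int(round(time.time() * 1000.0 - start_time))

start = current_ts()
time.sleep(0.05)            # simulated work
elapsed = cost_time(start)
# elapsed is roughly 50 (milliseconds)
```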
try:
import tensorflow as tf
except ImportError:
    tf = None  # note: the layer classes below still reference tf at class-creation time
class NegBinOutput(tf.keras.layers.Layer):
"""Negative binomial output layer"""
def __init__(
self,
original_dim=None,
name='neg_bin_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var = tf.keras.layers.Dense(original_dim, activation='linear')
def call(self, inputs, **kwargs):
activation, sf = inputs
mean, var = self.means(activation), self.var(activation)
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, -bound, bound, "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = tf.exp(mean_clip + sf)
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
class NegBinSharedDispOutput(tf.keras.layers.Layer):
"""Negative binomial output layer with a single dispersion estimate per features"""
def __init__(
self,
original_dim=None,
name='neg_bin_shared_disp_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var = self.add_weight(
"var_bias",
shape=[1, original_dim]
)
def call(self, inputs, **kwargs):
activation, sf = inputs
mean = self.means(activation)
var = self.var
var = tf.broadcast_to(var, tf.shape(mean))
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, -bound, bound, "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = tf.exp(mean_clip + sf)
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
class NegBinConstDispOutput(tf.keras.layers.Layer):
"""Negative binomial output layer with dispersion set as constant (=1)."""
def __init__(
self,
original_dim=None,
name='neg_bin_const_disp_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var_constant = 1.
def call(self, inputs, **kwargs):
activation, sf = inputs
mean = self.means(activation)
var = tf.constant([[self.var_constant]], dtype=activation.dtype)
var = tf.broadcast_to(var, tf.shape(mean))
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, -bound, bound, "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = tf.exp(mean_clip + sf)
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
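All three negative-binomial layers share the same inverse-link step: clip the linear predictors to ±60 so that `exp` cannot overflow, then exponentiate, adding the log size factor to the mean. A dependency-free scalar sketch of that step (the function name here is illustrative, not part of the sfaira API):

```python
import math

BOUND = 60.0  # log of the largest value the exp inverse link may produce

def _clip(x, lo, hi):
    return max(lo, min(hi, x))

def neg_bin_invlink(mean_logit, var_logit, log_sf):
    """Scalar version of the clipped inverse link used by the output layers."""
    mean_clip = _clip(mean_logit, -BOUND, BOUND)
    var_clip = _clip(var_logit, -BOUND, BOUND)
    return math.exp(mean_clip + log_sf), math.exp(var_clip)

# without clipping, exp(1000.0) would overflow; here it saturates at exp(60)
mean, var = neg_bin_invlink(1000.0, 0.0, 0.0)
```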
class GaussianOutput(tf.keras.layers.Layer):
"""
Gaussian output layer.
Size factor only makes sense if logged and data is positive and logged.
"""
def __init__(
self,
original_dim=None,
name='gaussian_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var = tf.keras.layers.Dense(original_dim, activation='linear')
def call(self, inputs, **kwargs):
activation, sf = inputs
mean, var = self.means(activation), self.var(activation)
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, tf.exp(-bound), tf.exp(bound), "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = mean_clip + sf
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
class GaussianSharedStdOutput(tf.keras.layers.Layer):
"""
Gaussian output layer with a single standard deviation estimate per features.
Size factor only makes sense if logged and data is positive and logged.
"""
def __init__(
self,
original_dim=None,
name='gaussian_shared_disp_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var = self.add_weight(
"var_bias",
shape=[1, original_dim]
)
def call(self, inputs, **kwargs):
activation, sf = inputs
mean = self.means(activation)
var = self.var
var = tf.broadcast_to(var, tf.shape(mean))
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, tf.exp(-bound), tf.exp(bound), "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = mean_clip + sf
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
class GaussianConstStdOutput(tf.keras.layers.Layer):
"""
Gaussian output layer with standard deviation set as constant (=1).
Size factor only makes sense if logged and data is positive and logged.
"""
def __init__(
self,
original_dim=None,
name='gaussian_const_disp_output',
**kwargs
):
super().__init__(name=name, **kwargs)
self.means = tf.keras.layers.Dense(original_dim, activation='linear')
self.var_constant = 1.
def call(self, inputs, **kwargs):
activation, sf = inputs
mean = self.means(activation)
var = tf.constant([[self.var_constant]], dtype=activation.dtype)
var = tf.broadcast_to(var, tf.shape(mean))
# clip to log of largest values supported by log operation
bound = 60.
mean_clip = tf.clip_by_value(mean, tf.exp(-bound), tf.exp(bound), "decoder_clip")
var_clip = tf.clip_by_value(var, -bound, bound, "decoder_clip")
invlinker_mean = mean_clip + sf
invlinker_var = tf.exp(var_clip)
return [invlinker_mean, invlinker_var]
import os
os.environ['DJANGO_TOUCHDB_SECRET_KEY'] = "example_django_secret_key"
os.environ['POSTGRES_USER'] = "example_database_user"
os.environ['POSTGRES_PASSWORD'] = "example_database_password"
from amaranth_boards.upduino_v2 import *
from amaranth_boards.upduino_v2 import __all__
import warnings
warnings.warn("instead of nmigen_boards.upduino_v2, use amaranth_boards.upduino_v2",
DeprecationWarning, stacklevel=2)
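The module above is a deprecation shim: it re-exports the new package and warns once at import time. The same pattern can be reproduced in isolation (module names below are placeholders):

```python
import warnings

def warn_deprecated_module(old_name, new_name):
    # stacklevel=2 attributes the warning to the importer, not this helper
    warnings.warn("instead of %s, use %s" % (old_name, new_name),
                  DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_deprecated_module("old_pkg.board", "new_pkg.board")

# exactly one DeprecationWarning was recorded
```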
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# =============================================================================
## @file ostap/fitting/fit3d.py
# Set of useful basic utilities to build various fit models
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2011-07-25
# =============================================================================
"""Set of useful basic utilities to build various 2D-fit models"""
# =============================================================================
__version__ = "$Revision:"
__author__ = "Vanya BELYAEV Ivan.Belyaev@itep.ru"
__date__ = "2011-07-25"
__all__ = (
##
'PDF3' , ## useful base class for 3D-models
'Fit3D' , ## the model for 3D-fit
'Fit3DSym' , ## the model for symmetric 3D-fit
'Fit3DMix' , ## the model for half-symmetric 3D-fit
##
    'Generic3D_pdf' , ## wrapper over imported RooFit (3D)-pdf
'Flat3D' , ## the most trivial 3D-pdf - constant
'Model3D' , ## trivial class to build 3D model from 1D-components
'Sum3D' , ## non-extended sum two PDFs
'H3D_pdf' , ## convertor of 1D-histo to RooDataPdf
)
# =============================================================================
import ROOT, random
from ostap.core.core import dsID , hID , VE , Ostap
from ostap.core.ostap_types import integer_types
from ostap.math.base import iszero
from ostap.logger.utils import roo_silent , rooSilent
from ostap.fitting.utils import H3D_dset , component_similar , component_clone , RangeVar
from ostap.fitting.basic import PDF , Flat1D
from ostap.fitting.fit2d import PDF2 , Model2D
from ostap.fitting.roofit import SETVAR
from builtins import range
# =============================================================================
from ostap.logger.logger import getLogger
if '__main__' == __name__ : logger = getLogger ( 'ostap.fitting.fit3d' )
else : logger = getLogger ( __name__ )
# =============================================================================
# @class PDF3
# The helper base class for implementation of 3D-pdfs
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2017-11-11
class PDF3 (PDF2) :
""" Useful helper base class for implementation of PDFs for 3D-fit
"""
def __init__ ( self , name , xvar = None , yvar = None , zvar = None , special = False ) :
PDF2.__init__ ( self , name , xvar , yvar , special = special )
## create the variable
if isinstance ( zvar , tuple ) and 2 == len(zvar) :
self.__zvar = self.make_var ( zvar , ## var
'z' , ## name
'z-variable(mass)' , ## title/comment
None , ## fix ?
*zvar ) ## min/max
elif isinstance ( zvar , ROOT.RooAbsReal ) :
self.__zvar = self.make_var ( zvar , ## var
'z' , ## name
'z-variable/mass' , ## title/comment
fix = None ) ## fix ?
else :
            self.warning ( "``z-variable'' is not specified properly %s/%s" % ( zvar , type ( zvar ) ) )
self.__zvar = self.make_var( zvar , 'z' , 'z-variable' )
self.vars.add ( self.__zvar )
## save the configuration
self.config = {
'name' : self.name ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
}
def zminmax ( self ) :
"""Min/max values for z-variable"""
return self.__zvar.minmax()
@property
def zvar ( self ) :
"""``z''-variable for the fit (same as ``z'')"""
return self.__zvar
@property
def z ( self ) :
"""``z''-variable for the fit (same as ``zvar'')"""
return self.__zvar
# =========================================================================
## make the actual fit
# @code
# r,f = model.fitTo ( dataset )
# r,f = model.fitTo ( dataset , weighted = True )
# r,f = model.fitTo ( dataset , ncpu = 10 )
# r,f = model.fitTo ( dataset )
# @endcode
def fitTo ( self ,
dataset ,
silent = False ,
refit = False ,
timer = False ,
args = () , **kwargs ) :
"""
Perform the actual fit (and draw it)
>>> r,f = model.fitTo ( dataset )
>>> r,f = model.fitTo ( dataset , weighted = True )
>>> r,f = model.fitTo ( dataset , ncpu = 10 )
>>> r,f = model.fitTo ( dataset )
"""
if isinstance ( dataset , H3D_dset ) : dataset = dataset.dset
elif isinstance ( dataset , ROOT.TH3 ) :
density = kwargs.pop ( 'density' , False )
chi2 = kwargs.pop ( 'chi2' , False )
            return self.fitHisto ( dataset ,
                                   draw    = kwargs.pop ( 'draw' , False ) ,
                                   silent  = silent  ,
                                   density = density ,
                                   chi2    = chi2    , args = args , **kwargs )
## play a bit with binning cache for convolutions
if self.zvar.hasBinning ( 'cache' ) :
nb1 = self.zvar.getBins( 'cache' )
zv = getattr ( dataset , self.zvar.name , None )
if zv and zv.hasBinning ( 'cache' ) :
                nb2 = zv.getBins('cache')
if nb1 != nb2 :
zv.setBins ( max ( nb1 , nb2 ) , 'cache' )
self.info ('Adjust binning cache %s->%s for variable %s in dataset' % ( nb2 , nb1 , zv.name ) )
elif zv :
zv.setBins ( nb1 , 'cache' )
self .info ('Set binning cache %s for variable %s in dataset' % ( nb1 , zv.name ) )
result,f = PDF2.fitTo ( self ,
dataset = dataset ,
draw = False , ## False here!
nbins = 50 , ## fake here!
ybins = 20 , ## fake here!
silent = silent ,
refit = refit ,
timer = timer ,
args = args , **kwargs )
return result
# =========================================================================
## draw the projection over 1st variable
#
# @code
# r,f = model.fitTo ( dataset ) ## fit dataset
# fx = model.draw1 ( dataset , nbins = 100 ) ## draw results
#
# fx = model.draw1 ( dataset , nbins = 100 , in_range2 = (2,3) ) ## draw results
#
# model.yvar.setRange ( 'QUQU2' , 2 , 3 )
# fx = model.draw1 ( dataset , nbins = 100 , in_range2 = 'QUQU2') ## draw results
#
# @endcode
def draw1 ( self ,
dataset = None ,
nbins = 100 ,
silent = True ,
in_range2 = None ,
in_range3 = None , **kwargs ) :
""" Draw the projection over 3rd variable
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> fx = model.draw1 ( dataset , nbins = 100 ) ## draw results
>>> fx = model.draw1 ( dataset , nbins = 100 , in_range2 = (2,3) ) ## draw results
>>> model.yvar.setRange ( 'QUQU2' , 2 , 3 )
>>> fx = model.draw1 ( dataset , nbins = 100 , in_range2 = 'QUQU2') ## draw results
"""
if in_range2 and isinstance ( in_range2 , tuple ) and 2 == len ( in_range2 ) :
with rooSilent ( 3 ) : self.yvar.setRange ( 'aux_rng2' , in_range2[0] , in_range2[1] )
in_range2 = 'aux_rng2'
if in_range3 and isinstance ( in_range3 , tuple ) and 2 == len ( in_range3 ) :
with rooSilent ( 3 ) : self.zvar.setRange ( 'aux_rng3' , in_range3[0] , in_range3[1] )
in_range3 = 'aux_rng3'
in_range = []
if in_range2 : in_range.append( in_range2 )
if in_range3 : in_range.append( in_range3 )
        in_range = tuple ( in_range )
return self.draw ( drawvar = self.xvar ,
dataset = dataset ,
nbins = nbins ,
ybins = 20 , ## fake
silent = silent ,
in_range = in_range , **kwargs )
# =========================================================================
## draw the projection over 2nd variable
#
# @code
# r,f = model.fitTo ( dataset ) ## fit dataset
# fy = model.draw1 ( dataset , nbins = 100 ) ## draw results
#
# fy = model.draw1 ( dataset , nbins = 100 , in_range1 = (2,3) ) ## draw results
#
# model.xvar.setRange ( 'QUQU1' , 2 , 3 )
# fy = model.draw1 ( dataset , nbins = 100 , in_range1 = 'QUQU1') ## draw results
#
# @endcode
def draw2 ( self ,
dataset = None ,
nbins = 100 ,
silent = True ,
in_range1 = None ,
in_range3 = None , **kwargs ) :
""" Draw the projection over 2nd variable
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> fy = model.draw2 ( dataset , nbins = 100 ) ## draw results
>>> fx = model.draw2 ( dataset , nbins = 100 , in_range1 = (2,3) ) ## draw results
>>> model.xvar.setRange ( 'QUQU1' , 2 , 3 )
>>> fx = model.draw2 ( dataset , nbins = 100 , in_range1 = 'QUQU1') ## draw results
"""
if in_range1 and isinstance ( in_range1 , tuple ) and 2 == len ( in_range1 ) :
with rooSilent ( 3 ) : self.xvar.setRange ( 'aux_rng1' , in_range1[0] , in_range1[1] )
in_range1 = 'aux_rng1'
if in_range3 and isinstance ( in_range3 , tuple ) and 2 == len ( in_range3 ) :
with rooSilent ( 3 ) : self.zvar.setRange ( 'aux_rng3' , in_range3[0] , in_range3[1] )
in_range3 = 'aux_rng3'
in_range = []
if in_range1 : in_range.append( in_range1 )
if in_range3 : in_range.append( in_range3 )
        in_range = tuple ( in_range )
return self.draw ( drawvar = self.yvar ,
dataset = dataset ,
nbins = nbins ,
ybins = 20 , ## fake
silent = silent ,
in_range = in_range , **kwargs )
# =========================================================================
## draw the projection over 3rd variable
#
# @code
# r,f = model.fitTo ( dataset ) ## fit dataset
# fz = model.draw3 ( dataset , nbins = 100 ) ## draw results
#
# fz = model.draw3 ( dataset , nbins = 100 , in_range2 = (2,3) ) ## draw results
#
# model.yvar.setRange ( 'QUQU2' , 2 , 3 )
# f = model.draw3 ( dataset , nbins = 100 , in_range2 = 'QUQU2') ## draw results
# @endcode
def draw3 ( self ,
dataset = None ,
nbins = 100 ,
silent = True ,
in_range1 = None ,
in_range2 = None , **kwargs ) :
""" Draw the projection over 3rd variable
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> fx = model.draw3 ( dataset , nbins = 100 ) ## draw results
>>> fx = model.draw3 ( dataset , nbins = 100 , in_range2 = (2,3) ) ## draw results
>>> model.yvar.setRange ( 'QUQU2' , 2 , 3 )
>>> fx = model.draw3 ( dataset , nbins = 100 , in_range2 = 'QUQU2') ## draw results
"""
if in_range1 and isinstance ( in_range1 , tuple ) and 2 == len ( in_range1 ) :
with rooSilent ( 3 ) : self.xvar.setRange ( 'aux_rng1' , in_range1[0] , in_range1[1] )
in_range1 = 'aux_rng1'
if in_range2 and isinstance ( in_range2 , tuple ) and 2 == len ( in_range2 ) :
with rooSilent ( 3 ) : self.yvar.setRange ( 'aux_rng2' , in_range2[0] , in_range2[1] )
in_range2 = 'aux_rng2'
in_range = []
if in_range1 : in_range.append( in_range1 )
if in_range2 : in_range.append( in_range2 )
        in_range = tuple ( in_range )
return self.draw ( drawvar = self.zvar ,
dataset = dataset ,
nbins = nbins ,
ybins = 20 , ## fake
silent = silent ,
in_range = in_range , **kwargs )
# =========================================================================
## make 1D-plot
def draw ( self ,
drawvar = None ,
dataset = None ,
nbins = 100 ,
silent = True ,
in_range = None ,
**kwargs ) :
"""
Make 1D-plot:
"""
if drawvar in ( 'z' , 'Z' , '3' , 3 , self.zvar.name ) :
drawvar = self.zvar
return PDF2.draw ( self ,
drawvar = drawvar ,
dataset = dataset ,
nbins = nbins ,
silent = silent ,
in_range = in_range , **kwargs )
# =========================================================================
## fit the 3D-histogram (and draw it)
#
# @code
#
# histo = ...
# r,f = model.fitHisto ( histo )
#
# @endcode
def fitHisto ( self ,
histo ,
draw = False ,
silent = False ,
density = False ,
chi2 = False ,
args = () , **kwargs ) :
"""Fit the histogram (and draw it)
>>> histo = ...
>>> r,f = model.fitHisto ( histo , draw = True )
"""
xminmax = histo.xminmax()
yminmax = histo.yminmax()
zminmax = histo.zminmax()
with RangeVar ( self.xvar , *xminmax ) , \
RangeVar ( self.yvar , *yminmax ) , \
             RangeVar ( self.zvar , *zminmax ):
hdata = getattr ( self , 'histo_data' , None )
if hdata and isinstance ( hdata , H3D_dset ) and \
hdata.histo is histo and \
hdata.density == density and \
hdata.histo_hash == hash ( histo ) :
## reuse the existing dataset
self.debug ('Reuse the existing H3D_dset')
data = hdata.dset
else :
## convert it!
self.debug ('Create new H3D_dset' )
self.histo_data = H3D_dset ( histo , self.xvar , self.yvar , self.zvar ,
density , silent )
data = self.histo_data
if chi2 : return self.chi2fitTo ( data ,
draw = draw ,
silent = False ,
density = density ,
args = args , **kwargs )
else : return self.fitTo ( data ,
silent = silent ,
args = args , **kwargs )
# =========================================================================
## generate toy-sample according to PDF
# @code
# model = ....
# data = model.generate ( 10000 ) ## generate dataset with 10000 events
# varset = ....
# data = model.generate ( 100000 , varset )
    # data   = model.generate ( 100000 , varset , extended = True )
# @endcode
def generate ( self , nEvents , varset = None , extended = False , *args ) :
"""Generate toy-sample according to PDF
>>> model = ....
>>> data = model.generate ( 10000 ) ## generate dataset with 10000 events
>>> varset = ....
>>> data = model.generate ( 100000 , varset )
>>> data = model.generate ( 100000 , varset , extended = True )
"""
args = args + ( ROOT.RooFit.Name ( dsID() ) , ROOT.RooFit.NumEvents ( nEvents ) )
if extended :
args = args + ( ROOT.RooFit.Extended () , )
if not varset :
varset = ROOT.RooArgSet( self.xvar , self.yvar , self.zvar )
elif isinstance ( varset , ROOT.RooAbsReal ) :
            varset = ROOT.RooArgSet( varset )
if not self.xvar in varset :
vs = ROOT.RooArgSet()
vs . add ( self.xvar )
for v in varset : vs.add ( v )
varset = vs
if not self.yvar in varset :
vs = ROOT.RooArgSet()
vs . add ( self.yvar )
for v in varset : vs.add ( v )
varset = vs
if not self.zvar in varset :
vs = ROOT.RooArgSet()
vs . add ( self.zvar )
for v in varset : vs.add ( v )
varset = vs
return self.pdf.generate ( varset , *args )
# ====================================================================================
## simple 'function-like' interface
def __call__ ( self , x , y , z , error = False , normalized = True ) :
if isinstance ( self.xvar , ROOT.RooRealVar ) and \
isinstance ( self.yvar , ROOT.RooRealVar ) and \
isinstance ( self.zvar , ROOT.RooRealVar ) :
if x in self.xvar and y in self.yvar and z in self.zvar :
with SETVAR( self.xvar ) , SETVAR( self.yvar ) , SETVAR( self.zvar ) :
self.xvar.setVal ( x )
self.yvar.setVal ( y )
self.zvar.setVal ( z )
v = self.pdf.getVal ( self.vars ) if normalized else self.pdf.getValV ()
if error and self.fit_result :
e = self.pdf.getPropagatedError ( self.fit_result )
if 0<= e : return VE ( v , e * e )
return v
else : return 0.0
raise AttributeError ( 'something wrong goes here' )
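`__call__` relies on `SETVAR` to save each variable's value on entry and restore it on exit, so probing the PDF leaves the fit variables untouched. The pattern in plain Python (a simplified stand-in for `ostap.fitting.roofit.SETVAR`, with a minimal variable class):

```python
from contextlib import contextmanager

class Var:
    """Minimal stand-in for a RooRealVar-like object."""
    def __init__(self, value):
        self._value = value
    def setVal(self, value):
        self._value = value
    def getVal(self):
        return self._value

@contextmanager
def setvar(var):
    # remember the current value and restore it on exit, even on error
    old = var.getVal()
    try:
        yield var
    finally:
        var.setVal(old)

v = Var(1.0)
with setvar(v):
    v.setVal(42.0)       # temporary value while probing
# v is restored to 1.0 here
```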
# ========================================================================
## check minmax of the PDF using the random shoots
# @code
# pdf = ....
# mn , mx = pdf.minmax()
# @endcode
def minmax ( self , nshoots = 200000 ) :
"""Check min/max for the PDF using random shoots
>>> pdf = ....
>>> mn , mx = pdf.minmax()
"""
## try to get minmax directly from pdf/function
if self.tricks and hasattr ( self.pdf , 'function' ) :
if hasattr ( self.pdf , 'setPars' ) : self.pdf.setPars()
f = self.pdf.function()
if hasattr ( f , 'minmax' ) :
try :
mn , mx = f.minmax()
if 0<= mn and mn <= mx and 0 < mx :
return mn , mx
except :
pass
if hasattr ( f , 'max' ) :
try :
mx = f.max()
if 0 < mx : return 0 , mx
except :
pass
## check RooAbsReal functionality
code = self.pdf.getMaxVal( ROOT.RooArgSet ( self.xvar , self.yvar , self.zvar ) )
if 0 < code :
mx = self.pdf.maxVal ( code )
if 0 < mx : return 0 , mx
        ## now try to use random shoots
mn , mx = -1 , -10
if hasattr ( self.pdf , 'min' ) : mn = self.pdf.min()
if hasattr ( self.pdf , 'max' ) : mx = self.pdf.max()
if 0 <= mn and mn <= mx and 0 < mx : return mn , mx
if not self.xminmax() : return ()
if not self.yminmax() : return ()
if not self.zminmax() : return ()
mn , mx = -1 , -10
xmn , xmx = self.xminmax()
ymn , ymx = self.yminmax()
zmn , zmx = self.zminmax()
for i in range ( nshoots ) :
xx = random.uniform ( xmn , xmx )
yy = random.uniform ( ymn , ymx )
zz = random.uniform ( zmn , zmx )
with SETVAR ( self.xvar ) :
with SETVAR ( self.yvar ) :
with SETVAR ( self.zvar ) :
self.xvar.setVal ( xx )
self.yvar.setVal ( yy )
self.zvar.setVal ( zz )
vv = self.pdf.getVal()
if mn < 0 or vv < mn : mn = vv
if mx < 0 or vv > mx : mx = vv
return mn , mx
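Stripped of the RooFit plumbing, the fallback branch of `minmax` is plain random sampling over the box. A self-contained sketch of that idea, applied to an ordinary Python callable:

```python
import random

def minmax_by_shoots(fun, xr, yr, zr, nshoots=20000, seed=42):
    """Estimate (min, max) of fun over the box xr x yr x zr by random shoots."""
    rng = random.Random(seed)       # seeded for reproducibility
    mn = mx = None
    for _ in range(nshoots):
        v = fun(rng.uniform(*xr), rng.uniform(*yr), rng.uniform(*zr))
        if mn is None or v < mn: mn = v
        if mx is None or v > mx: mx = v
    return mn, mx

# paraboloid with maximum 1.0 at the centre of the unit cube,
# minimum 0.25 at the corners
f = lambda x, y, z: 1.0 - ((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2)
mn, mx = minmax_by_shoots(f, (0, 1), (0, 1), (0, 1))
```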
# =========================================================================
## get integral over (xmin,xmax,ymin,ymax,zmin,zmax) region
# @code
# pdf = ...
# print pdf.integral( 0,1,0,2,0,5)
# @endcode
def integral ( self, xmin , xmax , ymin , ymax , zmin , zmax , nevents = True ) :
"""Get integral over (xmin,xmax,ymin,ymax,zmin,zmax) region
>>> pdf = ...
>>> print pdf.integral( 0,1,0,2,0,5)
"""
if self.xminmax() :
xmn , xmx = self.xminmax()
xmin = max ( xmin , xmn )
xmax = min ( xmax , xmx )
if self.yminmax() :
ymn , ymx = self.yminmax()
ymin = max ( ymin , ymn )
ymax = min ( ymax , ymx )
if self.zminmax() :
zmn , zmx = self.zminmax()
zmin = max ( zmin , zmn )
zmax = min ( zmax , zmx )
value , todo = 0 , True
## 1) make a try to use analytical integral (could be fast)
if self.tricks :
try:
if hasattr ( self.pdf , 'setPars' ) : self.pdf.setPars()
fun = self.pdf.function()
value , todo = fun.integral ( xmin , xmax ,
ymin , ymax ,
zmin , zmax ) , False
except:
pass
## for numerical integration
from ostap.math.integral import integral3 as _integral3
extended = self.pdf.canBeExtended() or isinstance ( self.pdf , ROOT.RooAddPdf )
if todo and extended :
value = _integral3 ( self , xmin , xmax , ymin , ymax , zmin , zmax )
elif todo :
            ## use unnormalized PDF here to speed up the integration
ifun = lambda x , y , z : self ( x , y , z , error = False , normalized = False )
value = _integral3 ( ifun , xmin , xmax , ymin , ymax , zmin , zmax )
norm = self.pdf.getNorm ( self.vars )
value /= norm
if nevents and self.pdf.mustBeExtended () :
evts = self.pdf.expectedEvents( self.vars )
if evts <= 0 or iszero ( evts ) :
self.warning ( "integral: expectedEvents is %s" % evts )
value *= evts
return value
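When no analytical integral is available, `integral` falls back to numerical 3D integration via `integral3`. The fallback can be sketched with a simple midpoint rule on a regular grid (a crude stand-in for the real `ostap.math.integral.integral3`):

```python
def integral3_midpoint(fun, xmin, xmax, ymin, ymax, zmin, zmax, n=20):
    """Crude 3D midpoint-rule integral of fun over a box, on an n**3 grid."""
    hx = (xmax - xmin) / float(n)
    hy = (ymax - ymin) / float(n)
    hz = (zmax - zmin) / float(n)
    total = 0.0
    for i in range(n):
        x = xmin + (i + 0.5) * hx
        for j in range(n):
            y = ymin + (j + 0.5) * hy
            for k in range(n):
                z = zmin + (k + 0.5) * hz
                total += fun(x, y, z)
    return total * hx * hy * hz

# integral of x*y*z over the unit cube is 1/8; the midpoint rule is exact
# for functions that are linear in each variable separately
approx = integral3_midpoint(lambda x, y, z: x * y * z, 0., 1., 0., 1., 0., 1.)
```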
# ==========================================================================
## get a minimum of PDF for certain interval
# @code
# pdf2 = ...
# x , y , z = pdf3.minimum()
# @endcode
def minimum ( self ,
xmin = None , xmax = None ,
ymin = None , ymax = None ,
zmin = None , zmax = None , x0 = () ) :
"""Get a minimum of PDF for certain interval
>>> pdf3 = ...
>>> x, y , z = pdf3.minimum()
"""
if xmin is None : xmin = self.xminmax()[0]
if xmax is None : xmax = self.xminmax()[1]
if self.xminmax() :
xmin = max ( xmin , self.xminmax()[0] )
xmax = min ( xmax , self.xminmax()[1] )
if ymin is None : ymin = self.yminmax()[0]
if ymax is None : ymax = self.yminmax()[1]
if self.yminmax() :
ymin = max ( ymin , self.yminmax()[0] )
ymax = min ( ymax , self.yminmax()[1] )
if zmin is None : zmin = self.zminmax()[0]
if zmax is None : zmax = self.zminmax()[1]
if self.zminmax() :
zmin = max ( zmin , self.zminmax()[0] )
zmax = min ( zmax , self.zminmax()[1] )
if not x0 : x0 = 0.5 * ( xmin + xmax ) , 0.5 * ( ymin + ymax ) , 0.5 * ( zmin + zmax )
if not xmin <= x0[0] <= xmax :
logger.error("Wrong xmin/x0[0]/xmax: %s/%s/%s" % ( xmin , x0[0] , xmax ) )
if not ymin <= x0[1] <= ymax :
logger.error("Wrong ymin/x0[1]/ymax: %s/%s/%s" % ( ymin , x0[1] , ymax ) )
if not zmin <= x0[2] <= zmax :
logger.error("Wrong zmin/x0[2]/zmax: %s/%s/%s" % ( zmin , x0[2] , zmax ) )
from ostap.math.minimize import sp_minimum_3D
return sp_minimum_3D ( self ,
xmin , xmax ,
ymin , ymax ,
zmin , zmax , x0 )
# ==========================================================================
## get a maximum of PDF for certain interval
# @code
# pdf2 = ...
# x , y , z = pdf3.maximum()
# @endcode
    def maximum ( self ,
xmin = None , xmax = None ,
ymin = None , ymax = None ,
zmin = None , zmax = None , x0 = () ) :
"""Get a maximum of PDF for certain interval
>>> pdf3 = ...
>>> x, y , z = pdf3.maximum()
"""
if xmin is None : xmin = self.xminmax()[0]
if xmax is None : xmax = self.xminmax()[1]
if self.xminmax() :
xmin = max ( xmin , self.xminmax()[0] )
xmax = min ( xmax , self.xminmax()[1] )
if ymin is None : ymin = self.yminmax()[0]
if ymax is None : ymax = self.yminmax()[1]
if self.yminmax() :
ymin = max ( ymin , self.yminmax()[0] )
ymax = min ( ymax , self.yminmax()[1] )
if zmin is None : zmin = self.zminmax()[0]
if zmax is None : zmax = self.zminmax()[1]
if self.zminmax() :
zmin = max ( zmin , self.zminmax()[0] )
zmax = min ( zmax , self.zminmax()[1] )
if not x0 : x0 = 0.5 * ( xmin + xmax ) , 0.5 * ( ymin + ymax ) , 0.5 * ( zmin + zmax )
if not xmin <= x0[0] <= xmax :
logger.error("Wrong xmin/x0[0]/xmax: %s/%s/%s" % ( xmin , x0[0] , xmax ) )
if not ymin <= x0[1] <= ymax :
logger.error("Wrong ymin/x0[1]/ymax: %s/%s/%s" % ( ymin , x0[1] , ymax ) )
if not zmin <= x0[2] <= zmax :
logger.error("Wrong zmin/x0[2]/zmax: %s/%s/%s" % ( zmin , x0[2] , zmax ) )
from ostap.math.minimize import sp_maximum_3D
return sp_maximum_3D ( self ,
xmin , xmax ,
ymin , ymax ,
zmin , zmax , x0 )
# ==========================================================================
## convert PDF into TF2 object, e.g. to profit from TF3::Draw options
# @code
# pdf = ...
# tf3 = pdf.tf()
# tf3.Draw( options )
# @endcode
def tf ( self ,
xmin = None , xmax = None ,
ymin = None , ymax = None ,
zmin = None , zmax = None ) :
"""Convert PDF to TF3 object, e.g. to profit from TF3::Draw options
>>> pdf = ...
>>> tf3 = pdf.tf()
>>> tf3.Draw('colz')
"""
def _aux_fun_ ( x , pars = [] ) :
return self ( x[0] , x[1] , x[2] , error = False )
if xmin == None and self.xminmax() : xmin = self.xminmax()[0]
if xmax == None and self.xminmax() : xmax = self.xminmax()[1]
if ymin == None and self.yminmax() : ymin = self.yminmax()[0]
if ymax == None and self.yminmax() : ymax = self.yminmax()[1]
if zmin == None and self.zminmax() : zmin = self.zminmax()[0]
if zmax == None and self.zminmax() : zmax = self.zminmax()[1]
        if xmin == None : xmin = 0.0
        if xmax == None : xmax = 1.0
        if ymin == None : ymin = 0.0
        if ymax == None : ymax = 1.0
        if zmin == None : zmin = 0.0
        if zmax == None : zmax = 1.0
from ostap.core.core import fID
return ROOT.TF3 ( fID() , _aux_fun_ , xmin , xmax , ymin , ymax , zmin , zmax )
# ==========================================================================
    ## create the histogram according to specifications
def make_histo ( self ,
xbins = 10 , xmin = None , xmax = None ,
ybins = 10 , ymin = None , ymax = None ,
zbins = 10 , zmin = None , zmax = None ,
hpars = () ,
histo = None ) :
"""Create the histogram accoring to specifications"""
import ostap.histos.histos
# histogram is provided
if histo :
assert isinstance ( histo , ROOT.TH3 ), \
"Illegal type of ``histo''-argument %s" % type( histo )
histo = histo.clone()
histo.Reset()
# arguments for the histogram constructor
elif hpars :
from ostap.core.core import hID
histo = ROOT.TH3F ( hID () , 'PDF%s' % self.name , *hpars )
if not histo.GetSumw2() : histo.Sumw2()
# explicit construction from (#bins,min,max)-triplet
else :
assert isinstance ( xbins , integer_types ) and 0 < xbins, \
"Wrong ``xbins''-argument %s" % xbins
assert isinstance ( ybins , integer_types ) and 0 < ybins, \
"Wrong ``ybins''-argument %s" % ybins
assert isinstance ( zbins , integer_types ) and 0 < zbins, \
"Wrong ``zbins''-argument %s" % zbins
if xmin == None and self.xminmax() : xmin = self.xminmax()[0]
if xmax == None and self.xminmax() : xmax = self.xminmax()[1]
if ymin == None and self.yminmax() : ymin = self.yminmax()[0]
if ymax == None and self.yminmax() : ymax = self.yminmax()[1]
if zmin == None and self.zminmax() : zmin = self.zminmax()[0]
if zmax == None and self.zminmax() : zmax = self.zminmax()[1]
from ostap.core.core import hID
histo = ROOT.TH3F ( hID() , 'PDF%s' % self.name ,
xbins , xmin , xmax ,
ybins , ymin , ymax ,
zbins , zmin , zmax )
if not histo.GetSumw2() : histo.Sumw2()
return histo
# ==========================================================================
## Convert PDF to the 3D-histogram in a correct way
# @code
# pdf = ...
# h1 = pdf.histo ( 10 , 0. , 10. , 10 , 0. , 4. , 10 , 0. , 3 ) ## specify histogram parameters
# histo_template = ...
# h2 = pdf.histo ( histo = histo_template ) ## use histogram template
# h3 = pdf.histo ( ... , integral = True ) ## use PDF integral within the bin
# @endcode
def histo ( self ,
xbins = 10 , xmin = None , xmax = None ,
ybins = 10 , ymin = None , ymax = None ,
zbins = 10 , zmin = None , zmax = None ,
hpars = () ,
histo = None ,
integral = True ,
errors = False ) :
"""Convert PDF to the 3D-histogram in correct way
>>> pdf = ...
>>> h1 = pdf.histo ( 10 , 0. , 10. , 10 , 0. , 4. , 10 , 0. , 3 ) ## specify histogram parameters
>>> histo_template = ...
>>> h2 = pdf.histo ( histo = histo_template ) ## use histogram template
>>> h3 = pdf.histo ( ... , integral = True ) ## use PDF integral within the bin
"""
histo = self.make_histo ( xbins = xbins , xmin = xmin , xmax = xmax ,
ybins = ybins , ymin = ymin , ymax = ymax ,
zbins = zbins , zmin = zmin , zmax = zmax ,
hpars = hpars ,
histo = histo )
# loop over the histogram bins
for ix , iy , iz , x , y , z , w in histo.items() :
xv , xe = x.value() , x.error()
yv , ye = y.value() , y.error()
zv , ze = z.value() , z.error()
# value at the bin center
c = self ( xv , yv , zv , error = errors )
if not integral :
histo[ix,iy,iz] = c
continue
# integral over the bin
v = self.integral( xv - xe , xv + xe ,
yv - ye , yv + ye ,
zv - ze , zv + ze )
if errors :
if 0 == c.cov2 () : pass
elif 0 != c.value() and 0 != v :
v = c * ( v / c.value() )
histo[ix,iy,iz] = v
return histo
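# ==========================================================================
## note: in the ``integral'' branch above the bin-centre value ``c'' only
#  supplies the relative uncertainty, while the central value is taken from
#  the bin integral ``v''. A minimal sketch of this arithmetic with plain
#  (hypothetical) numbers, not tied to any concrete PDF:
# @code
# c , c_err = 2.0 , 0.2    ## value and error at the bin centre
# v = 1.8                  ## plain integral of the PDF over the bin
# scale = v / c            ## rescale, keeping the relative error of c
# v_err = c_err * scale    ## 0.2 * 0.9 = 0.18
# @endcode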
# ==========================================================================
## Convert PDF to the 3D-histogram, taking PDF-values at bin-centres
# @code
# pdf = ...
# h1 = pdf.roo_histo ( 10 , 0. , 10. , 10 , 0. , 4. , 10 , 0. , 3 )
# histo_template = ...
# h2 = pdf.roo_histo ( histo = histo_template ) ## use histogram template
# @endcode
def roo_histo ( self ,
xbins = 10 , xmin = None , xmax = None ,
ybins = 10 , ymin = None , ymax = None ,
zbins = 10 , zmin = None , zmax = None ,
hpars = () ,
histo = None ,
events = True ) :
"""Convert PDF to the 3D-histogram, taking PDF-values at bin-centres
>>> pdf = ...
>>> h1 = pdf.roo_histo ( 10 , 0. , 10. , 10 , 0. , 4. , 10 , 0. , 3 )
>>> histo_template = ...
>>> h2 = pdf.roo_histo ( histo = histo_template ) ## use histogram template
"""
histo = self.make_histo ( xbins = xbins , xmin = xmin , xmax = xmax ,
ybins = ybins , ymin = ymin , ymax = ymax ,
zbins = zbins , zmin = zmin , zmax = zmax ,
hpars = hpars ,
histo = histo )
hh = self.pdf.createHistogram (
hID() ,
self.xvar , self.binning ( histo.GetXaxis() , 'histo3x' ) ,
ROOT.RooFit.YVar ( self.yvar , self.binning ( histo.GetYaxis() , 'histo3y' ) ) ,
ROOT.RooFit.ZVar ( self.zvar , self.binning ( histo.GetZaxis() , 'histo3z' ) ) ,
ROOT.RooFit.Scaling ( False ) ,
ROOT.RooFit.Extended ( False ) )
for i in hh : hh.SetBinError ( i , 0 )
if events and self.pdf.mustBeExtended() :
for ix , iy , iz , x , y , z , v in hh.items() :
volume = 8 * x.error() * y.error() * z.error()
hh [ ix , iy , iz ] *= volume
hh *= self.pdf.expectedEvents ( self.vars ) / hh.sum()
histo += hh
return histo
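# ==========================================================================
## note: in ``roo_histo'' above the axis ``errors'' are half bin-widths,
#  so the bin volume used to convert densities into event contents is
#  (2*ex)*(2*ey)*(2*ez) = 8*ex*ey*ez. A tiny sketch with hypothetical
#  half-widths:
# @code
# ex , ey , ez = 0.5 , 0.25 , 1.0   ## half bin-widths along x,y,z
# volume = 8 * ex * ey * ez         ## 1.0 : the full bin volume
# @endcode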
# ==========================================================================
## get the residual histogram : (data-fit)
# @see PDF.as_histo
# @see PDF.residual_histo
# @see PDF.make_histo
# @code
# data = ...
# pdf = ...
# pdf.fitTo ( data )
# residual = pdf.residual ( data , nbins = 100 )
# @endcode
def residual ( self , dataset , **kwargs ) :
"""Get the residual histogram
- see PDF.as_histo
- see PDF.residual_histo
- see PDF.make_histo
>>> data = ...
>>> pdf = ...
>>> pdf.fitTo ( data )
>>> residual = pdf.residual ( data , nbins = 100 )
"""
hdata = self.make_histo ( **kwargs )
dataset.project ( hdata , ( self.zvar.name , self.yvar.name , self.xvar.name ) )
return self.residual_histo ( hdata )
# ==========================================================================
## get the pull histogram : (data-fit)/data_error
# @see PDF.as_histo
# @see PDF.residual_histo
# @see PDF.make_histo
# @code
# data = ...
# pdf = ...
# pdf.fitTo ( data )
# pull = pdf.pull ( data , nbins = 100 )
# @endcode
def pull ( self , dataset , **kwargs ) :
"""Get the pull histogram: (data-fit)/data_error
- see PDF.as_histo
- see PDF.residual_histo
- see PDF.make_histo
>>> data = ...
>>> pdf = ...
>>> pdf.fitTo ( data )
>>> pull = pdf.pull ( data , nbins = 100 )
"""
hdata = self.make_histo ( **kwargs )
dataset.project ( hdata , ( self.zvar.name , self.yvar.name , self.xvar.name ) )
return self.pull_histo ( hdata )
## conversion to string
def __str__ ( self ) :
return '%s(%s,xvar=%s,yvar=%s,zvar=%s)' % (
self.__class__.__name__ , self.name ,
self.xvar.name , self.yvar.name , self.zvar.name )
__repr__ = __str__
# =============================================================================
## @class Generic3D_pdf
# "Wrapper" over generic RooFit (3D)-pdf
# @code
#
# raw_pdf =
# pdf = Generic3D_pdf ( raw_pdf )
#
# @endcode
# If more functionality is required , more actions are possible:
# @code
# ## for sPlot
# pdf.alist2 = ROOT.RooArgList ( n1 , n2 , n3 ) ## for sPlotting
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-03-29
class Generic3D_pdf(PDF3) :
""" Wrapper for generic (3D) RooFit pdf:
>>> raw_pdf =
>>> x,y,z =
>>> pdf = Generic3D_pdf ( raw_pdf , xvar = x , yvar = y , zvar = z)
"""
## constructor
def __init__ ( self , pdf , xvar , yvar , zvar ,
name = None ,
special = False ,
add_to_signals = True ) :
assert isinstance ( xvar , ROOT.RooAbsReal ) , "``xvar'' must be ROOT.RooAbsReal"
assert isinstance ( yvar , ROOT.RooAbsReal ) , "``yvar'' must be ROOT.RooAbsReal"
assert isinstance ( zvar , ROOT.RooAbsReal ) , "``zvar'' must be ROOT.RooAbsReal"
assert isinstance ( pdf , ROOT.RooAbsReal ) , "``pdf'' must be ROOT.RooAbsReal"
name = name if name else pdf.GetName ()
PDF3 . __init__ ( self , name , xvar , yvar , zvar , special = special )
if not self.special :
assert isinstance ( pdf , ROOT.RooAbsPdf ) , "``pdf'' must be ROOT.RooAbsPdf"
## PDF!
self.pdf = pdf
## add it to the list of signal components ?
self.__add_to_signals = True if add_to_signals else False
if self.add_to_signals :
self.signals.add ( self.pdf )
## save the configuration
self.config = {
'pdf' : self.pdf ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'name' : self.name ,
'special' : self.special ,
'add_to_signals' : self.add_to_signals ,
}
@property
def add_to_signals ( self ) :
"""``add_to_signals'' : shodul PDF be added into list of signal components?"""
return self.__add_to_signals
## redefine the clone method, allowing only the name to be changed
# @attention redefinition of parameters and variables is disabled,
# since it can't be done in a safe way
def clone ( self , name = '' , xvar = None , yvar = None , zvar = None ) :
"""Redefine the clone method, allowing only the name to be changed
- redefinition of parameters and variables is disabled,
since it can't be done in a safe way
"""
if xvar and not xvar is self.xvar :
raise AttributeError("Generic3D_pdf can not be cloned with different ``xvar''")
if yvar and not yvar is self.yvar :
raise AttributeError("Generic3D_pdf can not be cloned with different ``yvar''")
if zvar and not zvar is self.zvar :
raise AttributeError("Generic3D_pdf can not be cloned with different ``zvar''")
return PDF.clone ( self , name = name ) if name else PDF.clone( self )
# =============================================================================
## @class Sum3D
# Non-extended sum of two PDFs
# @code
# pdf1 = ...
# pdf2 = ...
# sum = Sum3D ( pdf1 , pdf2 )
# @endcode
# It is just a small wrapper for <code>ROOT.RooAddPdf</code>
# @see RooAddPdf
class Sum3D(PDF3) :
"""Non-extended sum of two PDFs:
It is just a small wrapper for <code>ROOT.RooAddPdf</code>
- see RooAddPdf
pdf1 = ...
pdf2 = ...
sum = Sum3D ( pdf1 , pdf2 )
"""
def __init__ ( self ,
pdf1 ,
pdf2 ,
xvar = None ,
yvar = None ,
zvar = None ,
name = '' ,
fraction = None ) :
if isinstance ( pdf1 , PDF3 ) :
assert ( not xvar ) or xvar is pdf1.xvar, "Invalid xvar/pdf1.xvar: %s/%s" % ( xvar , pdf1.xvar )
assert ( not yvar ) or yvar is pdf1.yvar, "Invalid yvar/pdf1.yvar: %s/%s" % ( yvar , pdf1.yvar )
assert ( not zvar ) or zvar is pdf1.zvar, "Invalid zvar/pdf1.zvar: %s/%s" % ( zvar , pdf1.zvar )
xvar = pdf1.xvar
yvar = pdf1.yvar
zvar = pdf1.zvar
elif isinstance ( pdf1 , ROOT.RooAbsPdf ) and \
xvar and isinstance ( xvar , ROOT.RooAbsReal ) and \
yvar and isinstance ( yvar , ROOT.RooAbsReal ) and \
zvar and isinstance ( zvar , ROOT.RooAbsReal ):
pdf1 = Generic3D_pdf ( pdf1 , xvar , yvar , zvar )
else :
raise TypeError ( "Invalid type: pdf1/xvar/yvar/zvar: %s/%s/%s/%s" % ( pdf1 , xvar , yvar , zvar ) )
if isinstance ( pdf2 , PDF3 ) and xvar in pdf2.vars and yvar in pdf2.vars and zvar in pdf2.vars : pass
elif isinstance ( pdf2 , ROOT.RooAbsPdf ) : pdf2 = Generic3D_pdf ( pdf2 , xvar , yvar , zvar )
else :
raise TypeError ( "Invalid type: pdf2/xvar/yvar/zvar: %s/%s/%s/%s" % ( pdf1 , xvar , yvar , zvar ) )
name = name if name else 'Sum_%s_%s' % ( pdf1.name , pdf2.name )
PDF3.__init__ (self, name , xvar , yvar , zvar )
self.__pdf1 = pdf1
self.__pdf2 = pdf2
self.__fraction = self.make_var ( fraction ,
'f_%s_%s' % ( pdf1.name , pdf2.name ) ,
'Fraction:(%s)+(%s)' % ( pdf1.name , pdf2.name ) ,
fraction , 0 , 1 )
self.alist1 = ROOT.RooArgList (
self.__pdf1.pdf ,
self.__pdf2.pdf )
self.alist2 = ROOT.RooArgList (
self.__fraction )
self.pdf = ROOT.RooAddPdf ( name ,
'(%s)+(%s)' % ( pdf1.name , pdf2.name ) , self.alist1,self.alist2 )
if self.pdf1.pdf.canBeExtended() : self.error ("``pdf1'' can be extended!")
if self.pdf2.pdf.canBeExtended() : self.error ("``pdf2'' can be extended!")
self.config = {
'pdf1' : self.pdf1 ,
'pdf2' : self.pdf2 ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'name' : self.name ,
'fraction' : self.fraction
}
@property
def pdf1 ( self ) :
"""``pdf1'' : the first PDF"""
return self.__pdf1
@property
def pdf2 ( self ) :
"""``pdf2'' : the second PDF"""
return self.__pdf2
@property
def fraction ( self ) :
"""``fraction'' : the fraction of the first PDF in the sum"""
return self.__fraction
@fraction.setter
def fraction ( self , value ) :
val = float ( value )
self.__fraction.setVal ( val )
@property
def F ( self ) :
"""``F'' : the fratcion of the first PDF in the sum (the same as ``fraction'')"""
return self.__fraction
@F.setter
def F ( self , value ) :
self.fraction = value
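# =============================================================================
## note: with a single fraction ``f'' the underlying RooAddPdf evaluates the
#  non-extended mixture f*PDF1 + (1-f)*PDF2 ; a schematic sketch with plain
#  (hypothetical) values:
# @code
# f = 0.3                 ## fraction of the first component
# p1 , p2 = 0.8 , 0.4     ## pdf1(x,y,z) and pdf2(x,y,z) at some point
# total = f * p1 + ( 1 - f ) * p2    ## 0.3*0.8 + 0.7*0.4 = 0.52
# @endcode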
# =============================================================================
## @class Flat3D
# The most trivial 3D-model - constant
# @code
# pdf = Flat3D( 'flat' , xvar = ... , yvar = ... , zvar = ... )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
class Flat3D(PDF3) :
"""The most trival 3D-model - constant
>>> pdf = Flat3D( 'flat' , xvar = ... , yvar = ... , zvar = ... )
"""
def __init__ ( self , xvar , yvar , zvar , name = 'Flat3D' , title = '' ) :
PDF3.__init__ ( self , name , xvar , yvar , zvar )
if not title : title = 'flat3(%s)' % name
self.pdf = Ostap.Models.Uniform ( name , title , self.xvar , self.yvar , self.zvar )
assert 3 == self.pdf.dim() , 'Flat3D: wrong dimensionality!'
## save configuration
self.config = {
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'name' : self.name ,
'title' : title ,
}
# =============================================================================
## @class Model3D
# Trivial class to construct 3D model as a product of split 1D-models
# actually it is a tiny wrapper over <code>ROOT.RooProdPdf</code>
# @code
# pdfx = ...
# pdfy = ...
# pdfz = ...
# pdf3D = Model3D( 'D3' , xmodel = pdfx , ymodel = pdfy , zmodel = pdfz )
# @endcode
class Model3D(PDF3) :
"""Trivial class to construct 3D model as a product of split 1D-models
- actually it is a tiny wrapper over ROOT.RooProdPdf
>>> pdfx = ...
>>> pdfy = ...
>>> pdfz = ...
>>> pdf3D = Model3D( 'D3' , xmodel = pdfx , ymodel = pdfy , zmodel = pdfz )
"""
def __init__ ( self ,
name ,
xmodel ,
ymodel ,
zmodel ,
xvar = None ,
yvar = None ,
zvar = None ,
title = '' ) :
if isinstance ( xmodel , PDF ) : self.__xmodel = xmodel
elif isinstance ( xmodel , ROOT.RooAbsPdf ) and xvar :
self.__xmodel = Generic1D_pdf ( xmodel , xvar )
else : raise AttributeError ( "Invalid ``x-model'' attribute" )
if isinstance ( ymodel , PDF ) : self.__ymodel = ymodel
elif isinstance ( ymodel , ROOT.RooAbsPdf ) and yvar :
self.__ymodel = Generic1D_pdf ( ymodel , yvar )
else : raise AttributeError ( "Invalid ``y-model'' attribute" )
if isinstance ( zmodel , PDF ) : self.__zmodel = zmodel
elif isinstance ( zmodel , ROOT.RooAbsPdf ) and zvar :
self.__zmodel = Generic1D_pdf ( zmodel , zvar )
else : raise AttributeError ( "Invalid ``z-model'' attribute" )
## initialize the base
PDF3.__init__ ( self , name ,
self.__xmodel.xvar ,
self.__ymodel.xvar ,
self.__zmodel.xvar )
if not title : title = '%s x %s x %s' % ( self.__xmodel.name ,
self.__ymodel.name ,
self.__zmodel.name )
def _triv_ ( m ) :
_U = Ostap.Models.Uniform
if isinstance ( m , Flat1D ) : return True
return isinstance ( m.pdf , _U ) and 1 == m.pdf.dim()
## trivial case:
if _triv_ ( self.xmodel ) and _triv_ ( self.ymodel ) and _triv_ ( self.zmodel ) :
self.debug ('use Flat3D-model for the trivial product')
self.__flat = Flat3D ( self.xvar , self.yvar , self.zvar , name = name , title = title )
self.pdf = self.__flat.pdf
else :
## build pdf
self.__plst = ROOT.RooArgList (
self.__xmodel.pdf ,
self.__ymodel.pdf ,
self.__zmodel.pdf ,
)
self.pdf = ROOT.RooProdPdf ( name , title , self.__plst )
## save configuration
self.config = {
'name' : self.name ,
'xmodel' : self.xmodel ,
'ymodel' : self.ymodel ,
'zmodel' : self.zmodel ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'title' : self.pdf.GetTitle()
}
@property
def xmodel ( self ) :
"""``x-model'' x-component of M(x)*M(y)*M(z) PDF"""
return self.__xmodel
@property
def ymodel ( self ) :
"""``y-model'' y-component of M(x)*M(y)*M(z) PDF"""
return self.__ymodel
@property
def zmodel ( self ) :
"""``z-model'' z-component of M(x)*M(y)*M(z) PDF"""
return self.__zmodel
# =============================================================================
## simple converter of 3D-histogram into PDF
# @author Vanya Belyaev Ivan.Belyaev@itep.ru
# @date 2013-12-01
class H3D_pdf(H3D_dset,PDF3) :
"""Simple convertor of 3D-histogram into PDF
"""
def __init__ ( self ,
name ,
histo ,
xvar = None ,
yvar = None ,
zvar = None ,
density = False ,
silent = False ) :
H3D_dset.__init__ ( self , histo , xvar , yvar , zvar , density , silent )
PDF3 .__init__ ( self , name , self.xaxis , self.yaxis , self.zaxis )
self.__vset = ROOT.RooArgSet ( self.xvar , self.yvar , self.zvar )
#
## finally create PDF :
#
with roo_silent ( silent ) :
self.pdf = ROOT.RooHistPdf (
'hpdf_%s' % name ,
'Histo3PDF(%s/%s/%s)' % ( name , histo.GetName() , histo.GetTitle() ) ,
self.__vset ,
self.dset )
## and declare it to be a "signal"
self.signals.add ( self.pdf )
## save the configuration
self.config = {
'name' : self.name ,
'histo' : self.histo ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'density' : self.density ,
'silent' : self.silent ,
}
# =============================================================================
# Compound models for 3D-fit
# =============================================================================
# =============================================================================
## @class Fit3D
# The actual model for 3D-fits
#
# @code
#
# model = Models.Fit3D (
# signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
# signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
# signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) )
#
# r = model.fitTo ( dataset ) ## fit dataset
#
# print r ## get results
#
# fx = model.draw1 () ## visualize X-projection
# fy = model.draw2 () ## visualize Y-projection
# fz = model.draw3 () ## visualize Z-projection
#
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2017-07-25
class Fit3D (PDF3) :
"""The actual model for 3D-fits
>>> model = Models.Fit3D (
... signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
... signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
... signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) ,
... bkg_1x = 1 ,
... bkg_1y = 0 ,
... bkg_1z = 0 )
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> print r ## get results
>>> fx = model.draw1 () ## visualize X-projection
>>> fy = model.draw2 () ## visualize Y-projection
>>> fz = model.draw3 () ## visualize Z-projection
Parameters
----------
signal_x : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in X-direction
signal_y : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Y-direction
signal_z : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Z-direction
suffix : string
An optional suffix to be added to the names of created PDFs and variables
bkg_1x : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background for Bx1(x)* Sy(y)* Sz(z) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_1y : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background for Sx(x)*By1(y)*Sz(z) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_1z : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background for Sx(x)*Sy(y)*Bz1(z) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_2xy : RooFit/PDF, Ostap/PDF or None
2D-background for Bxy(x,y)*Sz(z) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create from bkg_2x and bkg_2y
bkg_2xz : RooFit/PDF, Ostap/PDF or None
2D-background for Bxz(x,z)*Sy(y) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create from bkg_2x and bkg_2z
bkg_2yz : RooFit/PDF, Ostap/PDF or None
2D-background for Byz(y,z)*Sx(x) term
Use directly RooFit/PDF or Ostap/PDF, otherwise create from bkg_2y and bkg_2z
bkg_2x : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxy(x,y) and Bxz(x,z) if they are not specified.
If None, bkg_1x is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_2y : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxy(x,y) and Byz(y,z) if they are not specified.
If None, bkg_1y is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_2z : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxz(x,z) and Byz(y,z) if they are not specified.
If None, bkg_1z is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_3x : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxyz(x,y,z) if it is not specified.
If None, bkg_2x is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_3y : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxyz(x,y,z) if it is not specified.
If None, bkg_2y is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_3z : RooFit/PDF, Ostap/PDF, non-negative integer, RooRealVar or None
1D-background used to create Bxyz(x,y,z) if it is not specified.
If None, bkg_2z is used.
Use directly RooFit/PDF or Ostap/PDF, otherwise create and use Bkg_pdf
bkg_3D : RooFit/PDF or Ostap/PDF to describe the 3D-background component Bxyz(x,y,z)
sss : None, RooRealVar, non-negative float or tuple
Variable for the yield of sig(1) * sig(2) * sig(3) component
Use directly RooRealVar, otherwise create it using self.make_var function
ssb : None, RooRealVar, non-negative float or tuple
Variable for the yield of sig(1) * sig(2) * bkg (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
sbs : None, RooRealVar, non-negative float or tuple
Variable for the yield of sig(1) * bkg(2) * sig (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
bss : None, RooRealVar, non-negative float or tuple
Variable for the yield of bkg(1) * sig(2) * sig (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
sbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of sig(1) * bkg(2) * bkg (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
bsb : None, RooRealVar, non-negative float or tuple
Variable for the yield of bkg(1) * sig(2) * bkg (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
bbs : None, RooRealVar, non-negative float or tuple
Variable for the yield of bkg(1) * bkg(2) * sig (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
bbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of bkg(1) * bkg(2) * bkg (3) component
Use directly RooRealVar, otherwise create it using self.make_var function
"""
def __init__ ( self ,
#
signal_x ,
signal_y ,
signal_z ,
suffix = '' ,
#
bkg_1x = None , ## 1D-background for Bx(x)*Sy(y)*Sz(z) term
bkg_1y = None , ## 1D-background for Sx(x)*By(y)*Sz(z) term
bkg_1z = None , ## 1D-background for Sx(x)*Sy(y)*Bz(z) term
#
bkg_2xy = None , ## 2D-background for Bxy(x,y)*Sz(z) term
bkg_2xz = None , ## 2D-background for Bxz(x,z)*Sy(y) term
bkg_2yz = None , ## 2D-background for Byz(y,z)*Sx(x) term
## *if* no XY,XZ,YZ backgrounds are specified, combine them from
bkg_2x = None , ## Bxy(x,y) = Bx2(x)*By2(y)
bkg_2y = None , ## Bxz(x,z) = Bx2(x)*Bz2(z)
bkg_2z = None , ## Byz(y,z) = By2(y)*Bz2(z)
##
bkg_3D = None , ## 3D-background component B(x,y,z)
## *if* no 3D-background component is specified, combine it from
bkg_3x = None , ## Bkg(x,y,z) = Bx3(x)*By3(y)*Bz3(z)
bkg_3y = None , ## Bkg(x,y,z) = Bx3(x)*By3(y)*Bz3(z)
bkg_3z = None , ## Bkg(x,y,z) = Bx3(x)*By3(y)*Bz3(z)
#
## Yields of the main components :
sss = None , ## sig(x) * sig(y) * sig(z)
ssb = None , ## sig(x) * sig(y) * bkg(z)
sbs = None , ## sig(x) * bkg(y) * sig(z)
bss = None , ## bkg(x) * sig(y) * sig(z)
sbb = None , ## sig(x) * bkg(y,z)
bsb = None , ## sig(y) * bkg(x,z)
bbs = None , ## sig(z) * bkg(x,y)
bbb = None , ## background-3D
## additional components
components = [] ,
xvar = None ,
yvar = None ,
zvar = None ,
name = '' ) :
## keep all arguments
self.__args = {
##
'signal_x' : signal_x ,
'signal_y' : signal_y ,
'signal_z' : signal_z ,
##
'bkg_1x' : bkg_1x ,
'bkg_1y' : bkg_1y ,
'bkg_1z' : bkg_1z ,
##
'bkg_2xy' : bkg_2xy ,
'bkg_2xz' : bkg_2xz ,
'bkg_2yz' : bkg_2yz ,
##
'bkg_2x' : bkg_2x ,
'bkg_2y' : bkg_2y ,
'bkg_2z' : bkg_2z ,
##
'bkg_3D' : bkg_3D ,
##
'bkg_3x' : bkg_3x ,
'bkg_3y' : bkg_3y ,
'bkg_3z' : bkg_3z ,
##
'sss' : sss ,
'ssb' : ssb ,
'sbs' : sbs ,
'bss' : bss ,
'sbb' : sbb ,
'bsb' : bsb ,
'bbs' : bbs ,
'bbb' : bbb ,
##
'components' : components ,
'xvar' : xvar ,
'yvar' : yvar ,
'zvar' : zvar ,
##
'name' : name
}
self.__suffix = suffix
if isinstance ( signal_x , PDF ) : self.__signal_x = signal_x
elif isinstance ( signal_x , ROOT.RooAbsPdf ) and xvar :
self.__signal_x = Generic1D_pdf ( signal_x , xvar , 'SX' )
else : raise AttributeError ( "Invalid ``signal_x'' argument: %s" % signal_x )
if isinstance ( signal_y , PDF ) : self.__signal_y = signal_y
elif isinstance ( signal_y , ROOT.RooAbsPdf ) and yvar :
self.__signal_y = Generic1D_pdf ( signal_y , yvar , 'SY' )
else : raise AttributeError ( "Invalid ``signal_y'' argument: %s" % signal_y )
if isinstance ( signal_z , PDF ) : self.__signal_z = signal_z
elif isinstance ( signal_z , ROOT.RooAbsPdf ) and zvar :
self.__signal_z = Generic1D_pdf ( signal_z , zvar , 'SZ' )
else : raise AttributeError ( "Invalid ``signal_z'' argument: %s" % signal_z )
#
## initialize base class
#
if not name :
name = "%s&%s&%s" % ( self.__signal_x.name ,
self.__signal_y.name ,
self.__signal_z.name )
if suffix : name += '_'+ suffix
PDF3.__init__ ( self , name ,
self.__signal_x.xvar ,
self.__signal_y.xvar ,
self.__signal_z.xvar )
# =====================================================================
## 1) First component: all signals
# =====================================================================
self.__sss_cmp = Model3D (
'SSS_pdf' + suffix , self.__signal_x , self.__signal_y , self.__signal_z )
# =====================================================================
## 2-4) Three terms: ( 2 signals ) x ( 1 background )
# =====================================================================
self.__bkg_1x = self.make_bkg ( bkg_1x , 'Bkg1X_BSS' + suffix , self.xvar )
self.__bkg_1y = self.make_bkg ( bkg_1y , 'Bkg1Y_SBS' + suffix , self.yvar )
self.__bkg_1z = self.make_bkg ( bkg_1z , 'Bkg1Z_SSB' + suffix , self.zvar )
self.__ssb_cmp = Model3D ( "SSB_pdf" + suffix ,
self.__signal_x , self.__signal_y , self.__bkg_1z ,
title = "Signal(x) x Signal(y) x Background1(x)" )
self.__sbs_cmp = Model3D ( "SBS_pdf" + suffix ,
self.__signal_x , self.__bkg_1y , self.__signal_z ,
title = "Signal(x) x Background1(y) x Signal(z)" )
self.__bss_cmp = Model3D ( "BSS_pdf" + suffix ,
self.__bkg_1x , self.__signal_y , self.__signal_z ,
title = "Background1(x) x Signal(y) x Signal(z)" )
# =====================================================================
## (intermezzo-1) Assumptions about SBB-background sub-components
# =====================================================================
if component_clone ( bkg_2x ) :
bkg_2x = self.__bkg_1x
self.debug ( 'bkg_2x set to [CLONE] %s' % bkg_2x )
elif component_similar ( bkg_2x ) :
bkg_2x = bkg_1x
self.debug ( 'bkg_2x set to [SIMILAR] %s' % bkg_2x )
if component_clone ( bkg_2y ) :
bkg_2y = self.__bkg_1y
self.debug ( 'bkg_2y set to [CLONE] %s' % bkg_2y )
elif component_similar ( bkg_2y ) :
bkg_2y = bkg_1y
self.debug ( 'bkg_2y set to [SIMILAR] %s' % bkg_2y )
if component_clone ( bkg_2z ) :
bkg_2z = self.__bkg_1z
self.debug ( 'bkg_2z set to [CLONE] %s' % bkg_2z )
elif component_similar ( bkg_2z ) :
bkg_2z = bkg_1z
self.debug ( 'bkg_2z set to [SIMILAR] %s' % bkg_2z )
# =====================================================================
# =====================================================================
## 5-7) Three terms: (1 signal) x (2 backgrounds)
# =====================================================================
self.__bkg_2x = None
self.__bkg_2y = None
self.__bkg_2z = None
if bkg_2xy and isinstance ( bkg_2xy , PDF2 ) :
self.__bkg_2xy = bkg_2xy
elif bkg_2xy and isinstance ( bkg_2xy , ROOT.RooAbsPdf ) :
self.__bkg_2xy = Generic2D_pdf ( bkg_2xy , self.xvar , self.yvar )
elif bkg_2xy and isinstance ( bkg_2xy , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2D
self.__bkg_2xy = make_B2D ( 'Bkg2XY' + suffix , self.xvar , self.yvar , *bkg_2xy )
else :
if not self.__bkg_2x : self.__bkg_2x = self.make_bkg ( bkg_2x , 'Bkg2X_S2B' + suffix , self.xvar )
if not self.__bkg_2y : self.__bkg_2y = self.make_bkg ( bkg_2y , 'Bkg2Y_S2B' + suffix , self.yvar )
self.__bkg_2xy = Model2D ( 'Bkg2XY' + suffix ,
self.__bkg_2x ,
self.__bkg_2y ,
title = 'Background2(x) x Background2(y)' )
if bkg_2xz and isinstance ( bkg_2xz , PDF2 ) :
self.__bkg_2xz = bkg_2xz
elif bkg_2xz and isinstance ( bkg_2xz , ROOT.RooAbsPdf ) :
self.__bkg_2xz = Generic2D_pdf ( bkg_2xz , self.xvar , self.zvar )
elif bkg_2xz and isinstance ( bkg_2xz , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2D
self.__bkg_2xz = make_B2D ( 'Bkg2XZ' + suffix , self.xvar , self.zvar , *bkg_2xz )
else :
if not self.__bkg_2x : self.__bkg_2x = self.make_bkg ( bkg_2x , 'Bkg2X_S2B' + suffix , self.xvar )
if not self.__bkg_2z : self.__bkg_2z = self.make_bkg ( bkg_2z , 'Bkg2Z_S2B' + suffix , self.zvar )
self.__bkg_2xz = Model2D ( 'Bkg2XZ' + suffix ,
self.__bkg_2x ,
self.__bkg_2z ,
title = 'Background2(x) x Background2(z)' )
if bkg_2yz and isinstance ( bkg_2yz , PDF2 ) :
self.__bkg_2yz = bkg_2yz
elif bkg_2yz and isinstance ( bkg_2yz , ROOT.RooAbsPdf ) :
self.__bkg_2yz = Generic2D_pdf ( bkg_2yz , self.yvar , self.zvar)
elif bkg_2yz and isinstance ( bkg_2yz , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2D
self.__bkg_2yz = make_B2D ( 'Bkg2YZ' + suffix , self.yvar , self.zvar , *bkg_2yz )
else :
if not self.__bkg_2y : self.__bkg_2y = self.make_bkg ( bkg_2y , 'Bkg2Y_S2B' + suffix , self.yvar )
if not self.__bkg_2z : self.__bkg_2z = self.make_bkg ( bkg_2z , 'Bkg2Z_S2B' + suffix , self.zvar )
self.__bkg_2yz = Model2D ( 'Bkg2YZ' + suffix ,
self.__bkg_2y ,
self.__bkg_2z ,
title = 'Background2(y) x Background2(z)' )
self.__sbb_cmp = Generic3D_pdf (
ROOT.RooProdPdf ( "SBB_pdf" + suffix , "Signal(x) x Background(y,z)" , self.__signal_x.pdf , self.__bkg_2yz.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bsb_cmp = Generic3D_pdf (
ROOT.RooProdPdf ( "BSB_pdf" + suffix , "Signal(y) x Background(x,z)" , self.__signal_y.pdf , self.__bkg_2xz.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bbs_cmp = Generic3D_pdf (
ROOT.RooProdPdf ( "BBS_pdf" + suffix , "Signal(z) x Background(x,y)" , self.__signal_z.pdf , self.__bkg_2xy.pdf ) ,
self.xvar , self.yvar , self.zvar )
# =====================================================================
## (intermezzo-2) Assumptions about BBB-background sub-components
# =====================================================================
if component_clone ( bkg_3x ) :
bkg_3x = self.__bkg_2x
self.debug ( 'bkg_3x set to [CLONE] %s' % bkg_3x )
elif component_similar ( bkg_3x ) :
bkg_3x = bkg_2x
self.debug ( 'bkg_3x set to [SIMILAR] %s' % bkg_3x )
if component_clone ( bkg_3y ) :
bkg_3y = self.__bkg_2y
self.debug ( 'bkg_3y set to [CLONE] %s' % bkg_3y )
elif component_similar ( bkg_3y ) :
bkg_3y = bkg_2y
self.debug ( 'bkg_3y set to [SIMILAR] %s' % bkg_3y )
if component_clone ( bkg_3z ) :
bkg_3z = self.__bkg_2z
self.debug ( 'bkg_3z set to [CLONE] %s' % bkg_3z )
elif component_similar ( bkg_3z ) :
bkg_3z = bkg_2z
self.debug ( 'bkg_3z set to [SIMILAR] %s' % bkg_3z )
# =====================================================================
## 8) pure background
# =====================================================================
self.__bkg_3x = None
self.__bkg_3y = None
self.__bkg_3z = None
if bkg_3D and isinstance ( bkg_3D , PDF3 ) :
self.__bbb_cmp = bkg_3D
elif bkg_3D and isinstance ( bkg_3D , ROOT.RooAbsPdf ) :
self.__bbb_cmp = Generic3D_pdf ( bkg_3D , self.xvar , self.yvar , self.zvar )
elif bkg_3D and isinstance ( bkg_3D , ( tuple , list ) ) :
from ostap.fitting.models_3d import make_B3D
self.__bbb_cmp = make_B3D ( 'BBB_pdf' + suffix ,
self.xvar , self.yvar , self.zvar , *bkg_3D )
else :
self.__bkg_3x = self.make_bkg ( bkg_3x , 'Bkg3X_BBB' + suffix , self.xvar )
self.__bkg_3y = self.make_bkg ( bkg_3y , 'Bkg3Y_BBB' + suffix , self.yvar )
self.__bkg_3z = self.make_bkg ( bkg_3z , 'Bkg3Z_BBB' + suffix , self.zvar )
self.__bbb_cmp = Model3D (
"BBB_pdf" + suffix ,
self.__bkg_3x ,
self.__bkg_3y ,
self.__bkg_3z ,
title = "Background3(x) x Background3(y) x Background3(z)" )
#
## coefficients
#
self.__sss = self.make_var ( sss , "SSS" + suffix ,
"Signal(x)&Signal(y)&Signal(z)" + suffix , sss , 1000 , 0 , 1.e+7 )
self.__ssb = self.make_var ( ssb , "SSB" + suffix ,
"Signal(x)&Signal(y)&Background(z)" + suffix , ssb , 1000 , 0 , 1.e+7 )
self.__sbs = self.make_var ( sbs , "SBS" + suffix ,
"Signal(x)&Background(y)&Signal(z)" + suffix , sbs , 1000 , 0 , 1.e+7 )
self.__bss = self.make_var ( bss , "BSS" + suffix ,
"Background(x)&Signal(y)&Signal(z)" + suffix , bss , 1000 , 0 , 1.e+7 )
self.__sbb = self.make_var ( sbb , "SBB" + suffix ,
"Signal(x)&Background(y,z)" + suffix , sbb , 1000 , 0 , 1.e+7 )
self.__bsb = self.make_var ( bsb , "BSB" + suffix ,
"Signal(y)&Background(x,z)" + suffix , bsb , 1000 , 0 , 1.e+7 )
self.__bbs = self.make_var ( bbs , "BBS" + suffix ,
"Signal(z)&Background(x,y)" + suffix , bbs , 1000 , 0 , 1.e+7 )
self.__bbb = self.make_var ( bbb , "BBB" + suffix ,
"Background(x,y,z)" + suffix , bbb , 1000 , 0 , 1.e+7 )
self.alist1 = ROOT.RooArgList (
self.__sss_cmp.pdf ,
self.__ssb_cmp.pdf ,
self.__sbs_cmp.pdf ,
self.__bss_cmp.pdf ,
self.__sbb_cmp.pdf ,
self.__bsb_cmp.pdf ,
self.__bbs_cmp.pdf ,
self.__bbb_cmp.pdf )
self.alist2 = ROOT.RooArgList (
self.__sss ,
self.__ssb ,
self.__sbs ,
self.__bss ,
self.__sbb ,
self.__bsb ,
self.__bbs ,
self.__bbb )
## treat additional components (if specified)
self.__nums_components = []
icmp = 0
self.__more_components = []
for cmp in components :
if isinstance ( cmp , PDF3 ) : cc = cmp
elif isinstance ( cmp , ROOT.RooAbsPdf ) : cc = Generic3D_pdf ( cmp , self.xvar , self.yvar, self.zvar )
else :
self.error ( "unknown ``other'' component %s/%s, skip it!" % ( cmp , type ( cmp ) ) )
continue
self.__more_components.append ( cc )
self.components.add ( cc.pdf )
nc = len( self.__more_components )
if 1 == nc :
cf = self.make_var ( None , "C"+suffix , "Component" + suffix , None , 1 , 0 , 1.e+7 )
self.alist1.add ( self.components[0] )
self.__nums_components.append ( cf )
elif 2 <= nc :
fic = self.make_fracs ( nc , 'C_%%d%s' % suffix , 'C(%%d)%s' % suffix , fractions = False )
for c in self.components : self.alist1.add ( c)
for f in fic : self.__nums_components.append ( f )
self.__nums_components = tuple ( self.__nums_components )
for c in self.__nums_components : self.alist2.add ( c )
##
#
## build the final PDF
#
self.pdf = ROOT.RooAddPdf ( "model3D" + suffix ,
"Model3D(%s)" % suffix ,
self.alist1 ,
self.alist2 )
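## `alist1` (component PDFs) and `alist2` (their yields) have equal length, so the
## `RooAddPdf` above operates in extended mode: the model density is the yield-weighted
## sum of its components and the total yield is the sum of the coefficients. A
## plain-Python sketch of that combination, with toy densities in place of RooFit objects:

```python
def extended_sum ( densities , yields , x ) :
    """Yield-weighted combination: sum_i N_i * f_i(x) / sum_i N_i."""
    total = float ( sum ( yields ) )
    return sum ( n * f ( x ) for f , n in zip ( densities , yields ) ) / total

## two toy unit-normalised densities on [0,1]
flat = lambda x : 1.0        ## stands in for a background component
rise = lambda x : 2.0 * x    ## stands in for a signal component
val  = extended_sum ( [ flat , rise ] , [ 300.0 , 100.0 ] , 0.5 )
## at x = 0.5 : ( 300 * 1.0 + 100 * 1.0 ) / 400 = 1.0
```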
self.signals .add ( self.__sss_cmp.pdf )
self.backgrounds .add ( self.__bbb_cmp.pdf )
self.crossterms1 .add ( self.__ssb_cmp.pdf ) ## cross-terms
self.crossterms1 .add ( self.__sbs_cmp.pdf ) ## cross-terms
self.crossterms1 .add ( self.__bss_cmp.pdf ) ## cross-terms
self.crossterms2 .add ( self.__sbb_cmp.pdf ) ## cross-terms
self.crossterms2 .add ( self.__bsb_cmp.pdf ) ## cross-terms
self.crossterms2 .add ( self.__bbs_cmp.pdf ) ## cross-terms
## save the configuration
self.config = {
'signal_x' : self.signal_x ,
'signal_y' : self.signal_y ,
'signal_z' : self.signal_z ,
'suffix' : self.suffix ,
##
'bkg_1x' : self.bkg_1x ,
'bkg_1y' : self.bkg_1y ,
'bkg_1z' : self.bkg_1z ,
##
'bkg_2x' : self.bkg_2x ,
'bkg_2y' : self.bkg_2y ,
'bkg_2z' : self.bkg_2z ,
##
'bkg_2xy' : self.bkg_2xy ,
'bkg_2xz' : self.bkg_2xz ,
'bkg_2yz' : self.bkg_2yz ,
##
'bkg_3x' : self.bkg_3x ,
'bkg_3y' : self.bkg_3y ,
'bkg_3z' : self.bkg_3z ,
##
'bkg_3D' : self.bkg_3D ,
##
'sss' : self.SSS ,
'ssb' : self.SSB ,
'sbs' : self.SBS ,
'bss' : self.BSS ,
'sbb' : self.SBB ,
'bsb' : self.BSB ,
'bbs' : self.BBS ,
'bbb' : self.BBB ,
#
'components' : self.more_components ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'name' : self.name ,
}
## redefine the clone method, allowing only the name to be changed
# @attention redefinition of parameters and variables is disabled,
# since it can't be done in a safe way
def clone ( self , name = '' , xvar = None , yvar = None , zvar = None ) :
"""Redefine the clone method, allowing only the name to be changed
- redefinition of parameters and variables is disabled,
since it can't be done in a safe way
"""
if xvar and not xvar is self.xvar :
raise AttributeError ( "Fit3D can not be cloned with different ``xvar''" )
if yvar and not yvar is self.yvar :
raise AttributeError ( "Fit3D can not be cloned with different ``yvar''" )
if zvar and not zvar is self.zvar :
raise AttributeError ( "Fit3D can not be cloned with different ``zvar''" )
return PDF.clone ( self , name = name ) if name else PDF.clone( self )
@property
def SSS ( self ) :
"""The yield of Signal(x)*Signal(y)*Signal(z) component"""
return self.__sss
@SSS.setter
def SSS ( self , value ) :
value = float ( value )
assert value in self.__sss, "Value %s is out of the allowed range %s " % ( value , self.__sss.minmax() )
self.__sss.setVal ( value )
@property
def SSB ( self ) :
"""The yield of Signal(x)*Signal(y)*Background(z) component"""
return self.__ssb
@SSB.setter
def SSB ( self , value ) :
value = float ( value )
assert value in self.__ssb, "Value %s is out of the allowed range %s " % ( value , self.__ssb.minmax() )
self.__ssb.setVal ( value )
@property
def SBS ( self ) :
"""The yield of Signal(x)*Background(y)*Signal(z) component"""
return self.__sbs
@SBS.setter
def SBS ( self , value ) :
value = float ( value )
assert value in self.__sbs, "Value %s is out of the allowed range %s " % ( value , self.__sbs.minmax() )
self.__sbs.setVal ( value )
@property
def BSS ( self ) :
"""The yield of Background(x)*Signal(y)*Signal(z) component"""
return self.__bss
@BSS.setter
def BSS ( self , value ) :
value = float ( value )
assert value in self.__bss, "Value %s is out of the allowed range %s " % ( value , self.__bss.minmax() )
self.__bss.setVal ( value )
@property
def SBB ( self ) :
"""The yield of Signal(x)*Background(y,z) component"""
return self.__sbb
@SBB.setter
def SBB ( self , value ) :
value = float ( value )
assert value in self.__sbb, "Value %s is out of the allowed range %s " % ( value , self.__sbb.minmax() )
self.__sbb.setVal ( value )
@property
def BSB ( self ) :
"""The yield of Background(x,z)*Signal(y) component"""
return self.__bsb
@BSB.setter
def BSB ( self , value ) :
value = float ( value )
assert value in self.__bsb, "Value %s is out of the allowed range %s " % ( value , self.__bsb.minmax() )
self.__bsb.setVal ( value )
@property
def BBS ( self ) :
"""The yield of Background(x,y)*Signal(z) component"""
return self.__bbs
@BBS.setter
def BBS ( self , value ) :
value = float ( value )
assert value in self.__bbs, "Value %s is out of the allowed range %s " % ( value , self.__bbs.minmax() )
self.__bbs.setVal ( value )
@property
def BBB ( self ) :
"""The yield of Background(x,y,z) component"""
return self.__bbb
@BBB.setter
def BBB ( self , value ) :
value = float ( value )
assert value in self.__bbb, "Value %s is out of the allowed range %s " % ( value , self.__bbb.minmax() )
self.__bbb.setVal ( value )
@property
def C ( self ) :
"""Get the yields of ``other'' component(s)
For single ``other'' component:
>>> print pdf.C ## read the single ``other'' component
>>> pdf.C = 100 ## assign to it
For multiple ``other'' components:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C = 4,100 ## assign to it
... or, alternatively:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C[4].value = 100 ## assign to it
"""
lst = [ i for i in self.__nums_components ]
if not lst : return () ## extended fit? no other components?
elif 1 == len(lst) : return lst[0] ## single component?
return tuple ( lst )
@C.setter
def C ( self , value ) :
_n = len ( self.__nums_components )
assert 1 <= _n , "No ``other'' components are defined, assignment is impossible"
if 1 == _n :
_c = self.C
value = float ( value )
else :
index = value [0]
assert isinstance ( index , int ) and 0 <= index < _n, "Invalid ``other'' index %s/%d" % ( index , _n )
value = float ( value[1] )
_c = self.C[index]
## assign
assert value in _c , "Value %s is outside the allowed region %s" % ( value , _c.minmax() )
_c.setVal ( value )
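## The `C` setter above accepts either a bare value (when there is a single extra
## component) or an `(index, value)` pair. A standalone sketch of the same dispatch,
## using a hypothetical `Yields` holder in place of the RooRealVar coefficients:

```python
class Yields ( object ) :
    """Sketch of the single-vs-indexed assignment dispatch of ``C''."""
    def __init__ ( self , n ) : self.vals = [ 0.0 ] * n
    def set_C ( self , value ) :
        if 1 == len ( self.vals ) :             ## single component: bare value
            self.vals [ 0 ] = float ( value )
        else :                                  ## several: (index, value) pair
            index , val = value
            assert 0 <= index < len ( self.vals ) , 'invalid index'
            self.vals [ index ] = float ( val )

y1 = Yields ( 1 ) ; y1.set_C ( 100 )          ## like: pdf.C = 100
y3 = Yields ( 3 ) ; y3.set_C ( ( 2 , 50 ) )   ## like: pdf.C = 2 , 50
```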
@property
def yields ( self ) :
"""The list/tuple of the yields of all numeric components"""
return tuple ( [ i for i in self.alist2 ] )
@property
def total_yield ( self ) :
"""``total_yield''' : get the total yield"""
if not self.fit_result : return None
if not valid_pointer ( self.fit_result ) : return None
return self.fit_result.sum ( *self.yields )
# =========================================================================
# components
# =========================================================================
@property
def signal_x ( self ) :
"""``signal_x'': Signal(x) component/PDF"""
return self.__signal_x
@property
def signal_y ( self ) :
"""``signal_y'': Signal(y) component/PDF"""
return self.__signal_y
@property
def signal_z ( self ) :
"""``signal_z'': Signal(z) component/PDF"""
return self.__signal_z
@property
def bkg_1x ( self ) :
"""``bkg_1x'': B(x) component for B(x)*S(y)*S(z) term"""
return self.__bkg_1x
@property
def bkg_1y ( self ) :
"""``bkg_1y'': B(y) component for S(x)*B(y)*S(z) term"""
return self.__bkg_1y
@property
def bkg_1z ( self ) :
"""``bkg_1z'': B(z) component for S(x)*S(y)*B(z) term"""
return self.__bkg_1z
@property
def bkg_2xy ( self ) :
"""``bkg_2xy'': B(x,y) component for B(x,y)*S(z) term"""
return self.__bkg_2xy
@property
def bkg_2xz ( self ) :
"""``bkg_2xz'': B(x,z) component for B(x,z)*S(y) term"""
return self.__bkg_2xz
@property
def bkg_2yz ( self ) :
"""``bkg_2yz'': B(y,z) component for B(y,z)*S(x) term"""
return self.__bkg_2yz
@property
def bkg_2x ( self ) :
"""``bkg_2x'': B(x) component for B(x,y)*S(z) & B(x,z)*S(y) terms"""
return self.__bkg_2x
@property
def bkg_2y ( self ) :
"""``bkg_2y'': B(y) component for B(y,z)*S(x) & B(x,y)*S(z) terms"""
return self.__bkg_2y
@property
def bkg_2z ( self ) :
"""``bkg_2z'': B(z) component for B(x,z)*S(y) & B(y,z)*S(x) terms"""
return self.__bkg_2z
@property
def bkg_3x ( self ) :
"""``bkg_3x'': B(x) component for B(x,y,z) term"""
return self.__bkg_3x
@property
def bkg_3y ( self ) :
"""``bkg_3y'': B(y) component for B(x,y,z) term"""
return self.__bkg_3y
@property
def bkg_3z ( self ) :
"""``bkg_3z'': B(z) component for B(z,y,z) term"""
return self.__bkg_3z
@property
def bkg_3D ( self ) :
"""```bkg_3D'': B(x,y,z) component/PDF for the final PDF"""
return self.__bbb_cmp
@property
def cmp_SSS ( self ) :
"""``triple-signal'' component/PDF"""
return self.__sss_cmp
@property
def cmp_SSB ( self ) :
"""``signal-signal-background'' component/PDF"""
return self.__ssb_cmp
@property
def cmp_SBS ( self ) :
"""``signal-background-signal'' component/PDF"""
return self.__sbs_cmp
@property
def cmp_BSS ( self ) :
"""``background-signal-signal'' component/PDF"""
return self.__bss_cmp
@property
def cmp_SBB ( self ) :
"""``signal-background-background'' component/PDF"""
return self.__sbb_cmp
@property
def cmp_BSB ( self ) :
"""``background-signal-background'' component/PDF"""
return self.__bsb_cmp
@property
def cmp_BBS ( self ) :
"""``background-background-signal'' component/PDF"""
return self.__bbs_cmp
@property
def cmp_BBB ( self ) :
"""``triple-background'' component/PDF"""
return self.__bbb_cmp
@property
def more_components ( self ) :
"""additional/``other'' components"""
return tuple( self.__more_components )
@property
def suffix ( self ) :
"""``suffix'', used to build the name"""
return self.__suffix
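## Fit3D's eight yields enumerate every signal/background assignment over the three
## axes, one letter per axis. The naming convention can be generated mechanically:

```python
from itertools import product

## one letter per axis: S(ignal) or B(ackground), in x, y, z order
labels = [ ''.join ( p ) for p in product ( 'SB' , repeat = 3 ) ]
## -> ['SSS', 'SSB', 'SBS', 'SBB', 'BSS', 'BSB', 'BBS', 'BBB']
```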
# =============================================================================
## @class Fit3DSym
# The actual model for fully symmetric 3D-fits
#
# @param signal_x (RooFit/PDF or Ostap/PDF) PDF to describe (1D)-signal in X-direction
# @param signal_y (RooFit/PDF, Ostap/PDF or None) PDF to describe (1D)-signal in Y-direction
# @param signal_z (RooFit/PDF, Ostap/PDF or None) PDF to describe (1D)-signal in Z-direction
# @param suffix (string) An optional suffix to be added to the names of created PDFs and variables
# @param bkg_1x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for SSB-terms
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_2x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for SBB-terms, if <code>bkg_2xy</code> is not specified.
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_2xy (RooFit/PDF or Ostap/PDF)
# 2D x,y-background for SBB-terms
# Use directly RooFit/PDF or Ostap/PDF
# @param bkg_3x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for BBB-term, if <code>bkg_3D</code> is not specified.
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_3D (RooFit/PDF or Ostap/PDF)
# 3D x,y,z-background for BBB-term
# Use directly RooFit/PDF or Ostap/PDF
# @param sss (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of SSS component.
# Use directly RooRealVar, otherwise create it using self.make_var function
# @param ssb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of SSB component.
# Use directly RooRealVar, otherwise create it using self.make_var function
# @param sbb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of SBB component.
# Use directly RooRealVar, otherwise create it using self.make_var function
# @param bbb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of BBB component.
# Use directly RooRealVar, otherwise create it using self.make_var function
# @param components ([]) the list of additional 3D-PDFs to be used in the fit
# @param xvar (None) the x-variable
# @param yvar (None) the y-variable
# @param zvar (None) the z-variable
# @param name ("") the PDF name
# @code
# model = Models.Fit3DSym (
# signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
# signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
# signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) ,
# bkg_1x = -1 ,
# bkg_2x = -1 ,
# bkg_3x = -1 )
#
# r = model.fitTo ( dataset ) ## fit dataset
#
# print r ## get results
#
# fx = model.draw1 () ## visualize X-projection
# fy = model.draw2 () ## visualize Y-projection
# fz = model.draw3 () ## visualize Z-projection
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2017-07-25
class Fit3DSym (PDF3) :
"""The actual model for fully symmetric 3D-fits
>>> model = Models.Fit3DSym (
... signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
... signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
... signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) ,
... bkg_1x = -1 ,
... bkg_2x = -1 ,
... bkg_3x = -1 )
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> print r ## get results
>>> fx = model.draw1 () ## visualize X-projection
>>> fy = model.draw2 () ## visualize Y-projection
>>> fz = model.draw3 () ## visualize Z-projection
Parameters
----------
signal_x : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in X-direction
signal_y : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Y-direction
signal_z : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Z-direction
suffix : string
An optional suffix to be added to the names of created PDFs and variables
bkg_1x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for SSB-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_2x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for SBB-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_2xy : RooFit/PDF, Ostap/PDF, list/tuple or None
2D (x,y)-background for SBB-terms
Use directly RooFit/PDF or Ostap/PDF
bkg_3x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for BBB-term
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_3D : RooFit/PDF, Ostap/PDF, list/tuple or None
3D x,y,z-background for BBB-term
Use directly RooFit/PDF or Ostap/PDF
sss : None, RooRealVar, non-negative float or tuple
Variable for the yield of SSS component
Use directly RooRealVar, otherwise create it using self.make_var function
ssb : None, RooRealVar, non-negative float or tuple
Variable for the yield of SSB component
Use directly RooRealVar, otherwise create it using self.make_var function
sbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of SBB component
Use directly RooRealVar, otherwise create it using self.make_var function
bbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of BBB component
Use directly RooRealVar, otherwise create it using self.make_var function
components : the list of additional 3D-PDFs to be used in the fit
xvar : x-variable
yvar : y-variable
zvar : z-variable
name : the PDF name
"""
def __init__ ( self ,
#
signal_x ,
signal_y = None ,
signal_z = None ,
suffix = '' ,
# background for SSB-terms:
bkg_1x = None , ## 1D x-background for SSB-terms
# background for SBB-terms:
bkg_2x = None , ## 1D x-background for SBB-terms, if no bkg_2xy is specified
bkg_2xy = None , ## 2D x,y-background for SBB-terms (symmetric)
# background for BBB-term
bkg_3x = None , ## 1D x-background for BBB-term
bkg_3D = None , ## 3D x,y,z-background for B(x,y,z) term (symmetric)
#
## Yields of the main components :
sss = None , ## sig(1) * sig(2) * sig(3)
ssb = None , ## SSB components
sbb = None , ## SBB components
bbb = None , ## background-3D
## additional components
components = [] ,
xvar = None ,
yvar = None ,
zvar = None ,
name = '' ) :
## keep all arguments
self.__args = {
#
'signal_x' : signal_x ,
'signal_y' : signal_y ,
'signal_z' : signal_z ,
#
'bkg_1x' : bkg_1x ,
'bkg_2x' : bkg_2x ,
'bkg_2xy' : bkg_2xy ,
'bkg_3x' : bkg_3x ,
'bkg_3D' : bkg_3D ,
##
'sss' : sss ,
'ssb' : ssb ,
'sbb' : sbb ,
'bbb' : bbb ,
##
'components' : components ,
##
'xvar' : xvar ,
'yvar' : yvar ,
'zvar' : zvar ,
##
'name' : name
}
self.__suffix = suffix
if isinstance ( signal_x , PDF ) : self.__signal_x= signal_x
elif isinstance ( signal_x , ROOT.RooAbsPdf ) and xvar :
self.__signal_x = Generic1D_pdf ( signal_x , xvar , 'SX' )
else : raise AttributeError ( "Invalid ``signal_x'' argument: %s" % signal_x )
if isinstance ( signal_y , PDF ) : self.__signal_y = signal_y
elif isinstance ( signal_y , ROOT.RooAbsPdf ) and yvar :
self.__signal_y = Generic1D_pdf ( signal_y , yvar , 'SY' )
elif yvar and not signal_y :
self.__signal_y = self.__signal_x.clone ( xvar = yvar , name = 'SY' )
self.debug('signal y-component is cloned from the signal_x component')
else : raise AttributeError ( "Invalid ``signal_y'' argument: %s" % signal_y )
if isinstance ( signal_z , PDF ) : self.__signal_z = signal_z
elif isinstance ( signal_z , ROOT.RooAbsPdf ) and zvar :
self.__signal_z = Generic1D_pdf ( signal_z , zvar , 'SZ' )
elif zvar and not signal_z :
self.__signal_z = self.__signal_x.clone ( xvar = zvar , name = 'SZ' )
self.debug('signal z-component is cloned from the signal_x component')
else : raise AttributeError ( "Invalid ``signal_z'' argument: %s" % signal_z )
#
## initialize base class
#
if not name :
name = "%s&%s&%s" % ( self.__signal_x.name ,
self.__signal_y.name ,
self.__signal_z.name )
if suffix : name += '_'+ suffix
PDF3.__init__ ( self , name ,
self.__signal_x.xvar ,
self.__signal_y.xvar ,
self.__signal_z.xvar )
# =====================================================================
## 1) First component: all signals
# =====================================================================
self.__sss_cmp = Model3D (
'SSS_pdf' + suffix , self.__signal_x , self.__signal_y , self.__signal_z )
# =====================================================================
## 2) ( 2 signals ) x ( 1 background )
# =====================================================================
self.__bkg_1x = self.make_bkg ( bkg_1x , 'Bkg1X_BSS' + suffix , self.xvar )
self.__bkg_1y = self.make_bkg ( self.__bkg_1x , 'Bkg1Y_SBS' + suffix , self.yvar )
self.__bkg_1z = self.make_bkg ( self.__bkg_1x , 'Bkg1Z_SSB' + suffix , self.zvar )
self.__ssb_cmp_raw = Model3D ( "SSB_raw" + suffix ,
self.__signal_x , self.__signal_y , self.__bkg_1z ,
title = "Signal(x) x Signal(y) x Background1(z)" )
self.__sbs_cmp_raw = Model3D ( "SBS_raw" + suffix ,
self.__signal_x , self.__bkg_1y , self.__signal_z ,
title = "Signal(x) x Background1(y) x Signal(z)" )
self.__bss_cmp_raw = Model3D ( "BSS_raw" + suffix ,
self.__bkg_1x , self.__signal_y , self.__signal_z ,
title = "Background1(x) x Signal(y) x Signal(z)" )
self.__ssb_cmp = Generic3D_pdf (
self.make_sum ( "SSB_pdf" + suffix ,
"S(x)*S(y)*B(z)+S(x)*B(y)*S(z)+B(x)*S(y)*B(z)" ,
self.__ssb_cmp_raw.pdf ,
self.__sbs_cmp_raw.pdf ,
self.__bss_cmp_raw.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__sbs_cmp = self.__ssb_cmp
self.__bss_cmp = self.__ssb_cmp
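## `make_sum` above merges the three S*S*B permutations into a single component, so
## one SSB yield covers all three raw terms. A toy sketch of such a symmetrised sum
## (equal weights are assumed here purely for illustration; the actual `make_sum`
## combines the terms via RooFit fraction parameters):

```python
def sym3 ( t1 , t2 , t3 ) :
    """Equal-weight sum of three permutation terms."""
    return lambda x , y , z : ( t1 ( x , y , z ) +
                                t2 ( x , y , z ) +
                                t3 ( x , y , z ) ) / 3.0

ssb = lambda x , y , z : x * y   ## toy S(x)*S(y)*B(z)
sbs = lambda x , y , z : x * z   ## toy S(x)*B(y)*S(z)
bss = lambda x , y , z : y * z   ## toy B(x)*S(y)*S(z)
f   = sym3 ( ssb , sbs , bss )
## the combination is fully symmetric: f(a,b,c) == f(c,b,a) == f(b,a,c)
```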
# =====================================================================
## (intermezzo-1) Assumptions about the SBB-background sub-components
# =====================================================================
if component_clone ( bkg_2x ) :
bkg_2x = self.__bkg_1x
self.debug ( 'bkg_2x set to [CLONE] %s' % bkg_2x )
elif component_similar ( bkg_2x ) :
bkg_2x = bkg_1x
self.debug ( 'bkg_2x set to [SIMILAR] %s' % bkg_2x )
# =====================================================================
# =====================================================================
## 3 Three terms: (1 signal) x (2 backgrounds)
# =====================================================================
self.__bkg_2x = None
self.__bkg_2y = None
self.__bkg_2z = None
if bkg_2xy and isinstance ( bkg_2xy , PDF2 ) :
self.__bkg_2xy = bkg_2xy
self.__bkg_2xz = bkg_2xy.clone ( xvar = self.xvar , yvar = self.zvar , name = 'Bkg2XZ_pdf' + suffix )
self.__bkg_2yz = bkg_2xy.clone ( xvar = self.yvar , yvar = self.zvar , name = 'Bkg2YZ_pdf' + suffix )
elif bkg_2xy and isinstance ( bkg_2xy , ROOT.RooAbsPdf ) :
self.__bkg_2xy = Generic2D_pdf ( bkg_2xy , self.xvar , self.yvar )
self.__bkg_2xz = self.__bkg_2xy.clone ( name = 'Bkg2XZ_pdf' + suffix ,
xvar = self.xvar ,
yvar = self.zvar )
self.__bkg_2yz = self.__bkg_2xy.clone ( name = 'Bkg2YZ_pdf' + suffix ,
xvar = self.yvar ,
yvar = self.zvar )
elif bkg_2xy and isinstance ( bkg_2xy , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2Dsym
self.__bkg_2xy = make_B2Dsym ( 'Bkg2XY_pdf' + suffix , self.xvar , self.yvar , *bkg_2xy )
self.__bkg_2xz = self.__bkg_2xy.clone ( name = 'Bkg2XZ_pdf' + suffix ,
xvar = self.xvar ,
yvar = self.zvar )
self.__bkg_2yz = self.__bkg_2xy.clone ( name = 'Bkg2YZ_pdf' + suffix ,
xvar = self.yvar ,
yvar = self.zvar )
else :
self.__bkg_2x = self.make_bkg ( bkg_2x , 'Bkg2X_S2B' + suffix , self.xvar )
self.__bkg_2y = self.make_bkg ( self.__bkg_2x , 'Bkg2Y_S2B' + suffix , self.yvar )
self.__bkg_2z = self.make_bkg ( self.__bkg_2x , 'Bkg2Z_S2B' + suffix , self.zvar )
self.__bkg_2xy = Model2D ( 'Bkg2XY' + suffix ,
self.__bkg_2x ,
self.__bkg_2y , title = 'Background2(x,y)' )
self.__bkg_2xz = Model2D ( 'Bkg2XZ' + suffix ,
self.__bkg_2x ,
self.__bkg_2z , title = 'Background2(x,z)' )
self.__bkg_2yz = Model2D ( 'Bkg2YZ' + suffix ,
self.__bkg_2y ,
self.__bkg_2z , title = 'Background2(y,z)' )
self.__sbb_cmp_raw = Generic3D_pdf (
ROOT.RooProdPdf ( "SBB_raw" + suffix , "Signal(x) x Background2(y,z)" , self.__signal_x.pdf , self.__bkg_2yz.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bsb_cmp_raw = Generic3D_pdf (
ROOT.RooProdPdf ( "BSB_raw" + suffix , "Signal(y) x Background2(x,z)" , self.__signal_y.pdf , self.__bkg_2xz.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bbs_cmp_raw = Generic3D_pdf (
ROOT.RooProdPdf ( "BBS_raw" + suffix , "Signal(z) x Background2(x,y)" , self.__signal_z.pdf , self.__bkg_2xy.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__sbb_cmp = Generic3D_pdf (
self.make_sum( "SBB_pdf" + suffix ,
"S(x)*B(y,z)+S(y)*B(x,z)+S(Z)*B(x,y)" ,
self.__sbb_cmp_raw.pdf ,
self.__bsb_cmp_raw.pdf ,
self.__bbs_cmp_raw.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bsb_cmp = self.__sbb_cmp
self.__bbs_cmp = self.__sbb_cmp
# =====================================================================
## (intermezzo-2) Assumptions about the BBB-background sub-components
# =====================================================================
if component_clone ( bkg_3x ) :
bkg_3x = self.__bkg_2x
self.debug ( 'bkg_3x set to [CLONE] %s' % bkg_3x )
elif component_similar ( bkg_3x ) :
bkg_3x = bkg_2x
self.debug ( 'bkg_3x set to [SIMILAR] %s' % bkg_3x )
# =====================================================================
# =====================================================================
## 8) pure background
# =====================================================================
self.__bkg_3x = None
self.__bkg_3y = None
self.__bkg_3z = None
if bkg_3D and isinstance ( bkg_3D , PDF3 ) :
self.__bbb_cmp = bkg_3D
elif bkg_3D and isinstance ( bkg_3D , ROOT.RooAbsPdf ) :
self.__bbb_cmp = Generic3D_pdf ( bkg_3D , self.xvar , self.yvar , self.zvar )
elif bkg_3D and isinstance ( bkg_3D , ( tuple , list ) ) :
from ostap.fitting.models_3d import make_B3Dsym
self.__bbb_cmp = make_B3Dsym ( 'BBB_pdf' + suffix , self.xvar , self.yvar , self.zvar , *bkg_3D )
else :
self.__bkg_3x = self.make_bkg ( bkg_3x , 'Bkg3X_BBB' + suffix , self.xvar )
self.__bkg_3y = self.make_bkg ( self.__bkg_3x , 'Bkg3Y_BBB' + suffix , self.yvar )
self.__bkg_3z = self.make_bkg ( self.__bkg_3x , 'Bkg3Z_BBB' + suffix , self.zvar )
self.__bbb_cmp = Model3D ( "BBB_pdf" + suffix ,
self.__bkg_3x ,
self.__bkg_3y ,
self.__bkg_3z , title = "Background(x,y,z)" )
#
## coefficients
#
self.__sss = self.make_var ( sss , "SSS" + suffix ,
"Signal(x)&Signal(y)&Signal(z)" + suffix , sss , 1000 , 0 , 1.e+7 )
self.__ssb = self.make_var ( ssb , "SSB" + suffix ,
"Signal*2&Background" + suffix , ssb , 1000 , 0 , 1.e+7 )
self.__sbb = self.make_var ( sbb , "SBB" + suffix ,
"Signal&Backrgound*2" + suffix , sbb , 1000 , 0 , 1.e+7 )
self.__bbb = self.make_var ( bbb , "BBB" + suffix ,
"Background*3" + suffix , bbb , 1000 , 0 , 1.e+7 )
self.__sbs = self.__ssb ## the same
self.__bss = self.__ssb ## the same
self.__bsb = self.__sbb ## the same
self.__bbs = self.__sbb ## the same
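## In the symmetric fit the mixed yields are not independent parameters: `SBS` and
## `BSS` are literally the same RooRealVar object as `SSB` (likewise `BSB`, `BBS`
## and `SBB`), so the fit has only four free yields. The aliasing is plain Python
## object sharing:

```python
class SymYields ( object ) :
    """Sketch: symmetric yields share a single underlying object."""
    def __init__ ( self ) :
        self.ssb = [ 1000.0 ]   ## mutable holder standing in for a RooRealVar
        self.sbs = self.ssb     ## the same object
        self.bss = self.ssb     ## the same object

y = SymYields ()
y.ssb [ 0 ] = 250.0             ## update once ...
## ... and every alias sees it: y.sbs[0] == y.bss[0] == 250.0
```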
self.alist1 = ROOT.RooArgList (
self.__sss_cmp.pdf ,
self.__ssb_cmp.pdf ,
self.__sbb_cmp.pdf ,
self.__bbb_cmp.pdf )
self.alist2 = ROOT.RooArgList (
self.__sss ,
self.__ssb ,
self.__sbb ,
self.__bbb )
## treat additional components (if specified)
self.__nums_components = []
icmp = 0
self.__more_components = []
for cmp in components :
if isinstance ( cmp , PDF3 ) : cc = cmp
elif isinstance ( cmp , ROOT.RooAbsPdf ) : cc = Generic3D_pdf ( cmp , self.xvar , self.yvar,self.zvar )
else :
self.error ( "unknown ``other'' component %s/%s, skip it!" % ( cmp , type ( cmp ) ) )
continue
self.__more_components.append ( cc )
self.components.add ( cc.pdf )
nc = len( self.__more_components )
if 1 == nc :
cf = self.make_var ( None , "C"+suffix , "Component" + suffix , None , 1 , 0 , 1.e+7 )
self.alist1.add ( self.components[0] )
self.__nums_components.append ( cf )
elif 2 <= nc :
fic = self.make_fracs ( nc , 'C_%%d%s' % suffix , 'C(%%d)%s' % suffix , fractions = False )
for c in self.components : self.alist1.add ( c)
for f in fic : self.__nums_components.append ( f )
self.__nums_components = tuple ( self.__nums_components )
for c in self.__nums_components : self.alist2.add ( c )
##
#
## build the final PDF
#
self.pdf = ROOT.RooAddPdf ( "model3D" + suffix ,
"Model3D(%s)" % suffix ,
self.alist1 ,
self.alist2 )
self.signals .add ( self.__sss_cmp.pdf )
self.backgrounds .add ( self.__bbb_cmp.pdf )
self.crossterms1 .add ( self.__ssb_cmp.pdf ) ## cross-terms
self.crossterms2 .add ( self.__sbb_cmp.pdf ) ## cross-terms
## save the configuration
self.config = {
##
'signal_x' : self.signal_x ,
'signal_y' : self.signal_y ,
'signal_z' : self.signal_z ,
'suffix' : self.suffix ,
##
'bkg_1x' : self.bkg_1x ,
'bkg_2x' : self.bkg_2x ,
'bkg_2xy' : self.bkg_2xy ,
'bkg_3x' : self.bkg_3x ,
'bkg_3D' : self.bkg_3D ,
##
'sss' : self.SSS ,
'ssb' : self.SSB ,
'sbb' : self.SBB ,
'bbb' : self.BBB ,
##
'components' : self.more_components ,
##
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
##
'name' : self.name ,
}
## redefine the clone method, allowing only the name to be changed
# @attention redefinition of parameters and variables is disabled,
# since it can't be done in a safe way
def clone ( self , name = '' , xvar = None , yvar = None , zvar = None ) :
"""Redefine the clone method, allowing only the name to be changed
- redefinition of parameters and variables is disabled,
since it can't be done in a safe way
"""
if xvar and not xvar is self.xvar :
raise AttributeError ( "Fit3DSym can not be cloned with different ``xvar''" )
if yvar and not yvar is self.yvar :
raise AttributeError ( "Fit3DSym can not be cloned with different ``yvar''" )
if zvar and not zvar is self.zvar :
raise AttributeError ( "Fit3DSym can not be cloned with different ``zvar''" )
return PDF.clone ( self , name = name ) if name else PDF.clone( self )
@property
def SSS ( self ) :
"""The yield of Signal(x)*Signal(y)*Signal(z) component"""
return self.__sss
@SSS.setter
def SSS ( self , value ) :
value = float ( value )
assert value in self.__sss, "Value %s is out of the allowed range %s " % ( value , self.__sss.minmax() )
self.__sss.setVal ( value )
@property
def SSB ( self ) :
"""The yield of S(x)*S(y)*B(z)+S(x)*B(y)*S(z)+B(x)*S(y)*S(z) component (same as SBS, BSS)"""
return self.__ssb
@SSB.setter
def SSB ( self , value ) :
value = float ( value )
assert value in self.__ssb, "Value %s is out of the allowed range %s " % ( value , self.__ssb.minmax() )
self.__ssb.setVal ( value )
@property
def SBS ( self ) :
"""The yield of S(x)*S(y)*B(z)+S(x)*B(y)*S(z)+B(x)*S(y)*S(z) component (same as SSB, BSS)"""
return self.__sbs
@SBS.setter
def SBS ( self , value ) :
value = float ( value )
assert value in self.__sbs, "Value %s is out of the allowed range %s " % ( value , self.__sbs.minmax() )
self.__sbs.setVal ( value )
@property
def BSS ( self ) :
"""The yield of S(x)*S(y)*B(z)+S(x)*B(y)*S(z)+B(x)*S(y)*S(z) component (same as SSB, SBS)"""
return self.__bss
@BSS.setter
def BSS ( self , value ) :
value = float ( value )
assert value in self.__bss, "Value %s is out of the allowed range %s " % ( value , self.__bss.minmax() )
self.__bss.setVal ( value )
@property
def SBB ( self ) :
"""The yield of S(x)*B(y,z)+S(y)*B(x,z)+S(z)*B(y,z) component (same as BSB, BBS)"""
return self.__sbb
@SBB.setter
def SBB ( self , value ) :
value = float ( value )
assert value in self.__sbb, "Value %s is out of the allowed range %s " % ( value , self.__sbb.minmax() )
self.__sbb.setVal ( value )
@property
def BSB ( self ) :
"""The yield of S(x)*B(y,z)+S(y)*B(x,z)+S(z)*B(y,z) component (same as SBB, BBS)"""
return self.__bsb
@BSB.setter
def BSB ( self , value ) :
value = float ( value )
assert value in self.__bsb, "Value %s is out of the allowed range %s " % ( value , self.__bsb.minmax() )
self.__bsb.setVal ( value )
@property
def BBS ( self ) :
"""The yield of S(x)*B(y,z)+S(y)*B(x,z)+S(z)*B(y,z) component (same as SBB, BSB )"""
return self.__bbs
@BBS.setter
def BBS ( self , value ) :
value = float ( value )
assert value in self.__bbs, "Value %s is out of the allowed range %s " % ( value , self.__bbs.minmax() )
self.__bbs.setVal ( value )
@property
def BBB ( self ) :
"""The yield of Background(x,y,z) component"""
return self.__bbb
@BBB.setter
def BBB ( self , value ) :
value = float ( value )
assert value in self.__bbb, "Value %s is out of the allowed range %s " % ( value , self.__bbb.minmax() )
self.__bbb.setVal ( value )
@property
def C ( self ) :
"""Get the yields of ``other'' component(s)
For single ``other'' component:
>>> print pdf.C ## read the single ``other'' component
>>> pdf.C = 100 ## assign to it
For multiple ``other'' components:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C = 4,100 ## assign to it
... or, alternatively:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C[4].value = 100 ## assign to it
"""
lst = [ i for i in self.__nums_components ]
if not lst : return () ## extended fit? no other components?
elif 1 == len(lst) : return lst[0] ## single component?
return tuple ( lst )
@C.setter
def C ( self , value ) :
_n = len ( self.__nums_components )
assert 1 <= _n , "No ``other'' components are defined, assignment is impossible"
if 1 == _n :
_c = self.C
value = float ( value )
else :
index = value [0]
assert isinstance ( index , int ) and 0 <= index < _n, "Invalid ``other'' index %s/%d" % ( index , _n )
value = float ( value[1] )
_c = self.C[index]
## assign
assert value in _c , "Value %s is outside the allowed region %s" % ( value , _c.minmax() )
_c.setVal ( value )
@property
def yields ( self ) :
"""The list/tuple of the yields of all numeric components"""
return tuple ( [ i for i in self.alist2 ] )
@property
def total_yield ( self ) :
"""``total_yield''' : get the total yield"""
if not self.fit_result : return None
if not valid_pointer ( self.fit_result ) : return None
return self.fit_result.sum ( *self.yields )
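# =========================================================================
## Example (sketch; ``model'' and ``dataset'' are hypothetical names):
#  reading the yields after a successful fit
#  @code
#  r , f = model.fitTo ( dataset )
#  print model.yields      ## all yield variables
#  print model.total_yield ## their sum, propagating the fit covariance
#  @endcode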
# =========================================================================
# components
# =========================================================================
@property
def signal_x ( self ) :
"""``signal_x'': Signal(x) component/PDF"""
return self.__signal_x
@property
def signal_y ( self ) :
"""``signal_y'': Signal(y) component/PDF"""
return self.__signal_y
@property
def signal_z ( self ) :
"""``signal_z'': Signal(z) component/PDF"""
return self.__signal_z
@property
def bkg_1x ( self ) :
"""``bkg_1x'': B(x) component for B(x)*S(y)*S(z) term"""
return self.__bkg_1x
@property
def bkg_1y ( self ) :
"""``bkg_1y'': B(y) component for S(x)*B(y)*S(z) term"""
return self.__bkg_1y
@property
def bkg_1z ( self ) :
"""``bkg_1z'': B(z) component for S(x)*S(y)*B(z) term"""
return self.__bkg_1z
@property
def bkg_2xy( self ) :
"""``bkg_2xy'': B(x,y) component for B(x,y)*S(z) term"""
return self.__bkg_2xy
@property
def bkg_2xz( self ) :
"""``bkg_2xz'': B(x,z) component for B(x,z)*S(y) term"""
return self.__bkg_2xz
@property
def bkg_2yz( self ) :
"""``bkg_2yz'': B(y,z) component for B(y,z)*S(x) term"""
return self.__bkg_2yz
@property
def bkg_2x ( self ) :
"""``bkg_2x'': B(x) component for B(x,y)*S(z) & B(x,z)*S(y) terms"""
return self.__bkg_2x
@property
def bkg_2y ( self ) :
"""``bkg_2y'': B(y) component for B(y,z)*S(x) & B(x,y)*S(z) terms"""
return self.__bkg_2y
@property
def bkg_2z ( self ) :
"""``bkg_2z'': B(z) component for B(x,z)*S(y) & B(y,z)*S(x) terms"""
return self.__bkg_2z
@property
def bkg_3x ( self ) :
"""``bkg_3x'': B(x) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3x
@property
def bkg_3y ( self ) :
"""``bkg_3y'': B(y) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3y
@property
def bkg_3z ( self ) :
"""``bkg_3z'': B(z) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3z
@property
def bkg_3D ( self ) :
"""``bkg_3D'': B(x,y,z) component/PDF for the final PDF"""
return self.__bbb_cmp
@property
def cmp_SSS ( self ) :
"""``triple-signal'' component/PDF"""
return self.__sss_cmp
@property
def cmp_SSB ( self ) :
"""``signal-signal-background'' symmetrized component/PDF"""
return self.__ssb_cmp
@property
def cmp_SBS ( self ) :
"""``signal-background-signal'' symmetrized component/PDF (same as cmp_SSB)"""
return self.__sbs_cmp
@property
def cmp_BSS ( self ) :
"""``background-signal-signal'' symmetrized component/PDF (same as cmp_SSB)"""
return self.__bss_cmp
@property
def cmp_SBB ( self ) :
"""``signal-background-background'' symmetrized component/PDF"""
return self.__sbb_cmp
@property
def cmp_BSB ( self ) :
"""``background-signal-background'' symmetrized component/PDF (same as cmp_SBB)"""
return self.__bsb_cmp
@property
def cmp_BBS ( self ) :
"""``background-background-signal'' symmetrized component/PDF (same as cmp_SBB)"""
return self.__bbs_cmp
@property
def cmp_BBB ( self ) :
"""``triple-background'' component/PDF"""
return self.__bbb_cmp
@property
def more_components ( self ) :
"""additional/``other'' components"""
return tuple( self.__more_components )
@property
def suffix ( self ) :
"""``suffix'', used to build the name"""
return self.__suffix
# =========================================================================
## Raw, non-symmetrized fit components/PDF (for debugging)
# =========================================================================
@property
def cmp_raw_SSB ( self ) :
"""``signal-signal-background'' raw, non-symmetrized component/PDF"""
return self.__ssb_cmp_raw
@property
def cmp_raw_SBS ( self ) :
"""``signal-background-signal'' raw, non-symmetrized component/PDF"""
return self.__sbs_cmp_raw
@property
def cmp_raw_BSS ( self ) :
"""``background-signal-signal'' raw, non-symmetrized component/PDF"""
return self.__bss_cmp_raw
@property
def cmp_raw_SBB ( self ) :
"""``signal-background-background'' raw, non-symmetrized component/PDF"""
return self.__sbb_cmp_raw
@property
def cmp_raw_BSB ( self ) :
"""``background-signal-background'' raw, non-symmetrized component/PDF"""
return self.__bsb_cmp_raw
@property
def cmp_raw_BBS ( self ) :
"""``background-background-signal'' raw, non-symmetrized component/PDF"""
return self.__bbs_cmp_raw
# =============================================================================
## @class Fit3DMix
# The actual model for fully "mixed-symmetry" 3D-fits (symmetric for y<-->z)
#
# @param signal_x (RooFit/PDF or Ostap/PDF) PDF to describe (1D)-signal in X-direction
# @param signal_y (RooFit/PDF, Ostap/PDF or None) PDF to describe (1D)-signal in Y-direction
# @param signal_z (RooFit/PDF, Ostap/PDF or None) PDF to describe (1D)-signal in Z-direction
# @param suffix (string) An optional suffix to be added to the names of created PDFs and variables
# @param bkg_1x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for the BSS-term
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_1y (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D y-background for the SSB/SBS-terms; the z-background is cloned from it
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_2x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for the BBS/BSB-terms
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_2y (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D y-background for the SBB-term; the z-background is cloned from it
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_2xy (RooFit/PDF, Ostap/PDF, list/tuple or None)
# 2D (x,y)-background for the BBS/BSB-terms; the (x,z)-background is cloned from it
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
# @param bkg_2yz (RooFit/PDF, Ostap/PDF, list/tuple or None)
# 2D (y,z)-background for the SBB-term, must be symmetric in y<-->z
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
# @param bkg_3x (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D x-background for the BBB-term
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_3y (RooFit/PDF, Ostap/PDF, integer, RooRealVar or None)
# 1D y-background for the BBB-term; the z-background is cloned from it
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
# @param bkg_3yz (RooFit/PDF, Ostap/PDF, list/tuple or None)
# 2D (y,z)-background for the BBB-term, must be symmetric in y<-->z
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
# @param bkg_3D (RooFit/PDF, Ostap/PDF, list/tuple or None)
# 3D (x,y,z)-background for the BBB-term, must be symmetric for y<-->z
# Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
# @param sss (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the SSS component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param bss (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the BSS component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param ssb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the (symmetrized) SSB component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param sbb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the SBB component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param bbs (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the (symmetrized) BBS component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param bbb (None, RooRealVar, non-negative float or tuple)
# Variable for the yield of the BBB component.
# Use directly RooRealVar, otherwise create it using the self.make_var function
# @param components ([]) the list of additional 3D-PDFs to be used in the fit
# @param xvar (None) the x-variable
# @param yvar (None) the y-variable
# @param zvar (None) the z-variable
# @param name ("") the PDF name
# @code
# model = Models.Fit3DMix (
# signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
# signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
# signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) ,
# bkg_1x = -1 ,
# bkg_2x = -1 ,
# bkg_1y = -1 )
#
# r = model.fitTo ( dataset ) ## fit dataset
#
# print r ## get results
#
# fx = model.draw1 () ## visualize X-projection
# fy = model.draw2 () ## visualize Y-projection
# fz = model.draw3 () ## visualize Z-projection
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2017-07-25
class Fit3DMix (PDF3) :
"""The actual model for y<->z-symmetric 3D-fits
>>> model = Models.Fit3DMix (
... signal_x = Models.Gauss_pdf ( 'Gx' , mass = m_x ) ,
... signal_y = Models.Gauss_pdf ( 'Gy' , mass = m_y ) ,
... signal_z = Models.Gauss_pdf ( 'Gz' , mass = m_z ) ,
... bkg_1x = 1 ,
... bkg_2x = -1 ,
... bkg_1y = -1 )
>>> r,f = model.fitTo ( dataset ) ## fit dataset
>>> print r ## get results
>>> fx = model.draw1 () ## visualize X-projection
>>> fy = model.draw2 () ## visualize Y-projection
>>> fz = model.draw3 () ## visualize Z-projection
Parameters
----------
signal_x : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in X-direction
signal_y : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Y-direction
signal_z : RooFit/PDF or Ostap/PDF
PDF to describe (1D)-signal in Z-direction
suffix : string
An optional suffix to be added to the names of created PDFs and variables
bkg_1x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for BSS-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_1y : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D y-background for SSB/SBS-terms; the z-background is cloned from it
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_2x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for BBS/BSB-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_2y : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D y-background for the SBB-term; the z-background is cloned from it
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_2xy : RooFit/PDF, Ostap/PDF, list/tuple or None
2D (x,y)-background for BBS/BSB-terms; the (x,z)-background is cloned from it
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
bkg_2yz : RooFit/PDF, Ostap/PDF, list/tuple or None
2D (y,z)-background for the SBB-term, must be symmetric in y<-->z!
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
bkg_3x : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D x-background for BBB-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_3y : RooFit/PDF, Ostap/PDF, integer, RooRealVar or None
1D y-background for BBB-terms
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created via self.make_bkg
bkg_3yz : RooFit/PDF, Ostap/PDF, list/tuple or None
2D (y,z)-background for BBB-terms (must be symmetric!)
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
bkg_3D : RooFit/PDF, Ostap/PDF, list/tuple or None
3D (x,y,z)-background for BBB-terms (must be symmetric for y<-->z !)
Use directly RooFit/PDF or Ostap/PDF, otherwise PDF is created
sss : None, RooRealVar, non-negative float or tuple
Variable for the yield of the SSS component
Use directly RooRealVar, otherwise create it using the self.make_var function
bss : None, RooRealVar, non-negative float or tuple
Variable for the yield of the BSS component
Use directly RooRealVar, otherwise create it using the self.make_var function
ssb : None, RooRealVar, non-negative float or tuple
Variable for the yield of the (symmetrized) SSB component
Use directly RooRealVar, otherwise create it using the self.make_var function
sbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of the SBB component
Use directly RooRealVar, otherwise create it using the self.make_var function
bbs : None, RooRealVar, non-negative float or tuple
Variable for the yield of the (symmetrized) BBS component
Use directly RooRealVar, otherwise create it using the self.make_var function
bbb : None, RooRealVar, non-negative float or tuple
Variable for the yield of the BBB component
Use directly RooRealVar, otherwise create it using the self.make_var function
components : the list of additional 3D-PDFs to be used in the fit
xvar : x-variable
yvar : y-variable
zvar : z-variable
name : the PDF name
"""
def __init__ ( self ,
#
signal_x ,
signal_y ,
signal_z = None , ## cloned from signal_y if None
suffix = '' ,
# background for SSB-terms:
bkg_1x = None , ## 1D x-background for BSS-terms
bkg_1y = None , ## 1D y-background for BSS-terms, z-background is cloned
# background for SBB/BBS-terms:
bkg_2x = None , ## 1D x-background for BBS-terms
bkg_2y = None , ## 1D y-background for SBB-terms, z-background is cloned
bkg_2xy = None , ## 2D x,y-background for BBS-terms, the (x,z)-background is cloned
bkg_2yz = None , ## 2D y,z-background for SBB-terms, must be SYMMETRIC!
# background for BBB-terms:
bkg_3x = None , ## 1D x-background for BBB-term
bkg_3y = None , ## 1D y-background for BBB-term, z-background is cloned
# background for BBB-terms:
bkg_3yz = None , ## 2D y,z-background for BBB-term, must be SYMMETRIC!
bkg_3D = None , ## 3D x,y,z-background for B(x,y,z) term (symmetric)
## Yields of the main components :
sss = None , ## S(x)*S(y)*S(z) component
bss = None , ## B(x)*S(y)*S(z) component
sbb = None , ## S(x)*B(y,z) component
ssb = None , ## S(x)*[S(y)*B(z)+B(y)*S(z)] component
bbs = None , ## B(x)*[S(y)*B(z)+B(y)*S(z)] component
bbb = None , ## B(x,y,z) component
## additional components
components = [] , ## the list of additional components
xvar = None , ## x-variable
yvar = None , ## y-variable
zvar = None , ## z-variable
name = '' ) :
## keep all arguments
self.__args = {
#
'signal_x' : signal_x ,
'signal_y' : signal_y ,
'signal_z' : signal_z ,
##
'bkg_1x' : bkg_1x ,
'bkg_1y' : bkg_1y ,
##
'bkg_2x' : bkg_2x ,
'bkg_2y' : bkg_2y ,
'bkg_2xy' : bkg_2xy ,
'bkg_2yz' : bkg_2yz ,
##
'bkg_3x' : bkg_3x ,
'bkg_3y' : bkg_3y ,
'bkg_3yz' : bkg_3yz ,
##
'bkg_3D' : bkg_3D ,
#
'sss' : sss ,
'bss' : bss ,
'ssb' : ssb ,
'sbb' : sbb ,
'bbs' : bbs ,
'bbb' : bbb ,
#
'components' : components ,
'xvar' : xvar ,
'yvar' : yvar ,
'zvar' : zvar ,
'name' : name
}
self.__suffix = suffix
if isinstance ( signal_x , PDF ) : self.__signal_x = signal_x
elif isinstance ( signal_x , ROOT.RooAbsPdf ) and xvar :
self.__signal_x = Generic1D_pdf ( signal_x , xvar , 'SX' )
else : raise AttributeError ( "Invalid ``signal_x'' argumnent: %s" % signal_x )
if isinstance ( signal_y , PDF ) : self.__signal_y = signal_y
elif isinstance ( signal_y , ROOT.RooAbsPdf ) and yvar :
self.__signal_y = Generic1D_pdf ( signal_y , yvar , 'SY' )
else : raise AttributeError ( "Invalid ``signal_y'' argument: %s" % signal_y )
if isinstance ( signal_z , PDF ) : self.__signal_z = signal_z
elif isinstance ( signal_z , ROOT.RooAbsPdf ) and zvar :
self.__signal_z = Generic1D_pdf ( signal_z , zvar , 'SZ' )
elif zvar and not signal_z :
self.__signal_z = self.__signal_y.clone ( xvar = zvar , name = 'SZ' )
self.debug('signal z-component is cloned from the signal_y component')
else : raise AttributeError ( "Invalid ``signal_z'' argument: %s" % signal_z )
#
## initialize base class
#
if not name :
name = "%s&%s&%s" % ( self.__signal_x.name ,
self.__signal_y.name ,
self.__signal_z.name )
if suffix : name += '_'+ suffix
PDF3.__init__ ( self , name ,
self.__signal_x.xvar ,
self.__signal_y.xvar ,
self.__signal_z.xvar )
# =====================================================================
## 1) All signals component
# =====================================================================
self.__sss_cmp = Model3D (
'SSS_pdf' + suffix , self.__signal_x , self.__signal_y , self.__signal_z )
# =====================================================================
## 2) background x signal x signal
# =====================================================================
self.__bkg_1x = self.make_bkg ( bkg_1x , 'Bkg1X_BSS' + suffix , self.xvar )
self.__bss_cmp = Model3D ( "BSS_pdf" + suffix ,
self.__bkg_1x , self.__signal_y , self.__signal_z ,
title = "Background1(x) x Signal(y) x Signal(z)" )
# =====================================================================
## 3) signal x ( signal x background + background x signal )
# =====================================================================
self.__bkg_1y = self.make_bkg ( bkg_1y , 'Bkg1Y_SBS' + suffix , self.yvar )
self.__bkg_1z = self.make_bkg ( self.__bkg_1y , 'Bkg1Z_SSB' + suffix , self.zvar )
self.__ssb_cmp_raw = Model3D ( "SSB_raw" + suffix ,
self.__signal_x , self.__signal_y , self.__bkg_1z ,
title = "Signal(x) x Signal(y) x Background1(z)" )
self.__sbs_cmp_raw = Model3D ( "SBS_raw" + suffix ,
self.__signal_x , self.__bkg_1y , self.__signal_z ,
title = "Signal(x) x Background1(y) x Signal(z)" )
self.__ssb_sym_cmp = Generic3D_pdf (
self.make_sum ( "SSB_pdf" + suffix ,
"S(x)*S(y)*B(z)+S(x)*B(y)*S(z)" ,
self.__ssb_cmp_raw.pdf ,
self.__sbs_cmp_raw.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__ssb_cmp = self.__ssb_sym_cmp ##
self.__sbs_cmp = self.__ssb_sym_cmp ## ditto
# =====================================================================
## (intermezzo-1) Assumptions about the SBB-background sub-components
# =====================================================================
if component_clone ( bkg_2x ) :
bkg_2x = self.__bkg_1x
self.debug ( 'bkg_2x set to [CLONE] %s' % bkg_2x )
elif component_similar ( bkg_2x ) :
bkg_2x = bkg_1x
self.debug ( 'bkg_2x set to [SIMILAR] %s' % bkg_2x )
if component_clone ( bkg_2y ) :
bkg_2y = self.__bkg_1y
self.debug ( 'bkg_2y set to [CLONE] %s' % bkg_2y )
elif component_similar ( bkg_2y ) :
bkg_2y = bkg_1y
self.debug ( 'bkg_2y set to [SIMILAR] %s' % bkg_2y )
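# =====================================================================
## NB: here ``clone'' re-uses the already-built bkg_1x/bkg_1y PDF
#  objects themselves, while ``similar'' re-uses only the raw
#  bkg_1x/bkg_1y arguments to build new, independent PDFs of the
#  same type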
# =====================================================================
# =====================================================================
## 4) signal x background x background
# =====================================================================
self.__bkg_2x = None
self.__bkg_2y = None
self.__bkg_2z = None
if bkg_2yz and isinstance ( bkg_2yz , PDF2 ) :
self.__bkg_2yz = bkg_2yz
elif bkg_2yz and isinstance ( bkg_2yz , ROOT.RooAbsPdf ) :
self.__bkg_2yz = Generic2D_pdf ( bkg_2yz , self.yvar , self.zvar , name = bkg_2yz.name )
elif bkg_2yz and isinstance ( bkg_2yz , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2Dsym
self.__bkg_2yz = make_B2Dsym ( 'Bkg2YZ' + suffix , self.yvar , self.zvar , *bkg_2yz )
else :
self.__bkg_2y = self.make_bkg ( bkg_2y , 'Bkg2Y_S2B' + suffix , self.yvar )
self.__bkg_2z = self.make_bkg ( self.__bkg_2y , 'Bkg2Z_S2B' + suffix , self.zvar )
self.__bkg_2yz = Model2D ( 'Bkg2YZ_pdf' + suffix ,
self.__bkg_2y ,
self.__bkg_2z , title = 'Background2(y,z)' )
self.__sbb_cmp = Generic3D_pdf (
ROOT.RooProdPdf ( "SBB_raw_pdf" + suffix , "Signal(x) x Background2(y,z)" , self.__signal_x.pdf , self.__bkg_2yz.pdf ) ,
self.xvar , self.yvar , self.zvar )
# =====================================================================
## 5) background x ( signal x background + background x signal )
# =====================================================================
if bkg_2xy and isinstance ( bkg_2xy , PDF2 ) :
self.__bkg_2xy = bkg_2xy
self.__bkg_2xz = bkg_2xy.clone ( name = 'Bkg2XZ' + suffix ,
xvar = self.xvar ,
yvar = self.zvar )
elif bkg_2xy and isinstance ( bkg_2xy , ROOT.RooAbsPdf ) :
self.__bkg_2xy = Generic2D_pdf ( bkg_2xy , self.xvar , self.yvar , name = bkg_2xy.name )
self.__bkg_2xz = self.__bkg_2xy.clone ( name = 'Bkg2XZ' + suffix ,
xvar = self.xvar ,
yvar = self.zvar )
elif bkg_2xy and isinstance ( bkg_2xy , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2D
self.__bkg_2xy = make_B2D ( 'Bkg2XY' + suffix , self.xvar , self.yvar , *bkg_2xy )
self.__bkg_2xz = self.__bkg_2xy.clone ( name = 'Bkg2XZ' + suffix ,
xvar = self.xvar ,
yvar = self.zvar )
else :
if not self.__bkg_2x : self.__bkg_2x = self.make_bkg ( bkg_2x , 'Bkg2X_S2B' + suffix , self.xvar )
if not self.__bkg_2y : self.__bkg_2y = self.make_bkg ( bkg_2y , 'Bkg2Y_S2B' + suffix , self.yvar )
if not self.__bkg_2z : self.__bkg_2z = self.make_bkg ( self.__bkg_2y , 'Bkg2Z_S2B' + suffix , self.zvar )
self.__bkg_2xy = Model2D ( 'Bkg2XY' + suffix ,
self.__bkg_2x ,
self.__bkg_2y , title = 'Background2(x,y)' )
self.__bkg_2xz = Model2D ( 'Bkg2XZ' + suffix ,
self.__bkg_2x ,
self.__bkg_2z , title = 'Background2(x,z)' )
self.__bsb_cmp_raw = Generic3D_pdf (
ROOT.RooProdPdf ( "BSB_raw" + suffix , "Signal(y) x Background2(x,z)" , self.__signal_y.pdf , self.__bkg_2xz.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bbs_cmp_raw = Generic3D_pdf (
ROOT.RooProdPdf ( "BBS_raw" + suffix , "Signal(z) x Background2(x,y)" , self.__signal_z.pdf , self.__bkg_2xy.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bbs_sym_cmp = Generic3D_pdf (
self.make_sum ( "SBB_pdf" + suffix ,
"S(x)*B(y,z)+S(y)*B(x,z)+S(Z)*B(x,y)" ,
self.__bsb_cmp_raw.pdf ,
self.__bbs_cmp_raw.pdf ) ,
self.xvar , self.yvar , self.zvar )
self.__bbs_cmp = self.__bbs_sym_cmp ## ditto
self.__bsb_cmp = self.__bbs_sym_cmp
# =====================================================================
## (intermezzo-2) Assumptions about the BBB-background sub-components
# =====================================================================
if component_clone ( bkg_3x ) :
bkg_3x = self.__bkg_2x
self.debug ( 'bkg_3x set to [CLONE] %s' % bkg_3x )
elif component_similar ( bkg_3x ) :
bkg_3x = bkg_2x
self.debug ( 'bkg_3x set to [SIMILAR] %s' % bkg_3x )
if component_clone ( bkg_3y ) :
bkg_3y = self.__bkg_2y
self.debug ( 'bkg_3y set to [CLONE] %s' % bkg_3y )
elif component_similar ( bkg_3y ) :
bkg_3y = bkg_2y
self.debug ( 'bkg_3y set to [SIMILAR] %s' % bkg_3y )
if component_clone ( bkg_3yz ) :
bkg_3yz = self.__bkg_2yz
self.debug ( 'bkg_3yz set to [CLONE] %s' % bkg_3yz )
elif component_similar ( bkg_3yz ) :
bkg_3yz = bkg_2yz
self.debug ( 'bkg_3yz set to [SIMILAR] %s' % bkg_3yz )
# =====================================================================
# =====================================================================
## 6) pure background
# =====================================================================
self.__bkg_3x = None
self.__bkg_3y = None
self.__bkg_3z = None
self.__bkg_3yz = None
if bkg_3D and isinstance ( bkg_3D , PDF3 ) :
self.__bbb_cmp = bkg_3D
elif bkg_3D and isinstance ( bkg_3D , ROOT.RooAbsPdf ) :
self.__bbb_cmp = Generic3D_pdf ( bkg_3D , self.xvar , self.yvar , self.zvar )
elif bkg_3D and isinstance ( bkg_3D , ( tuple , list ) ) :
from ostap.fitting.models_2d import make_B2DmixYZ
self.__bbb_cmp = make_B2DmixYZ ( 'BBB_pdf' + suffix , self.xvar , self.yvar , self.zvar , *bkg_3D )
else :
if bkg_3yz and isinstance ( bkg_3yz , PDF2 ) :
self.__bkg_3yz = bkg_3yz
elif bkg_3yz and isinstance ( bkg_3yz , ROOT.RooAbsPdf ) :
self.__bkg_3yz = Generic2D_pdf ( bkg_3yz , self.yvar , self.zvar )
else :
self.__bkg_3y = self.make_bkg ( bkg_3y , 'Bkg3Y_BBB' + suffix , self.yvar )
self.__bkg_3z = self.make_bkg ( self.__bkg_3y , 'Bkg3Z_BBB' + suffix , self.zvar )
self.__bkg_3yz = Model2D ( 'Bkg3YZ' + suffix , self.__bkg_3y , self.__bkg_3z )
self.__bkg_3x = self.make_bkg ( bkg_3x , 'Bkg3X_BBB' + suffix , self.xvar )
self.__bbb_cmp = Generic3D_pdf (
ROOT.RooProdPdf ( "BBB_pdf" + suffix ,
"Background3(x) x Background3(y,z)" ,
self.__bkg_3x.pdf , self.__bkg_3yz.pdf ) ,
self.xvar , self.yvar , self.zvar )
# =====================================================================
## coefficients
# =====================================================================
self.__sss = self.make_var ( sss , "SSS" + suffix ,
"Signal(x)&Signal(y)&Signal(z)" + suffix , sss , 1000 , 0 , 1.e+7 )
self.__bss = self.make_var ( bss , "BSS" + suffix ,
"Background(x)&Signal(y)&Signal(z)" + suffix , bss , 1000 , 0 , 1.e+7 )
self.__ssb = self.make_var ( ssb , "SSB" + suffix ,
"Signal&(SB+BS)" + suffix , ssb , 1000 , 0 , 1.e+7 )
self.__sbb = self.make_var ( sbb , "SBB" + suffix ,
"Signal&Background(y,z)" + suffix , sbb , 1000 , 0 , 1.e+7 )
self.__bbs = self.make_var ( bbs , "BBS" + suffix ,
"Background&(SB+BS)" + suffix , bbs , 1000 , 0 , 1.e+7 )
self.__bbb = self.make_var ( bbb , "BBB" + suffix ,
"Background(x,y,z)" + suffix , bbb , 1000 , 0 , 1.e+7 )
self.__sbs = self.__ssb ## the same
self.__bsb = self.__bbs ## the same
self.alist1 = ROOT.RooArgList (
self.__sss_cmp.pdf ,
self.__bss_cmp.pdf ,
self.__ssb_cmp.pdf ,
self.__sbb_cmp.pdf ,
self.__bbs_cmp.pdf ,
self.__bbb_cmp.pdf )
self.alist2 = ROOT.RooArgList (
self.__sss ,
self.__bss ,
self.__ssb ,
self.__sbb ,
self.__bbs ,
self.__bbb )
## treat additional components (if specified)
self.__nums_components = []
icmp = 0
self.__more_components = []
for cmp in components :
if isinstance ( cmp , PDF3 ) : cc = cmp
elif isinstance ( cmp , ROOT.RooAbsPdf ) : cc = Generic3D_pdf ( cmp , self.xvar , self.yvar , self.zvar )
else :
self.error ("unknown ``other''component %s/%s, skip it!" % ( cc , type(cc) ) )
continue
self.__more_components.append ( cc )
self.components.add ( cc.pdf )
nc = len( self.__more_components )
if 1 == nc :
cf = self.make_var ( None , "C"+suffix , "Component" + suffix , None , 1 , 0 , 1.e+7 )
self.alist1.add ( self.components[0] )
self.__nums_components.append ( cf )
elif 2 <= nc :
fic = self.make_fracs ( nc , 'C_%%d%s' % suffix , 'C(%%d)%s' % suffix , fractions = False )
for c in self.components : self.alist1.add ( c)
for f in fic : self.__nums_components.append ( f )
self.__nums_components = tuple ( self.__nums_components )
for c in self.__nums_components : self.alist2.add ( c )
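## Example (sketch; names are hypothetical): additional components are
#  supplied as ready-made 3D PDFs and get their own yields:
#  @code
#  model = Models.Fit3DMix ( ... , components = [ flat3D ] )
#  model.C = 50 ## set the yield of the single extra component
#  @endcode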
##
#
## build the final PDF
#
self.pdf = ROOT.RooAddPdf ( "model3D" + suffix ,
"Model3D(%s)" % suffix ,
self.alist1 ,
self.alist2 )
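## NB: since ``alist1'' (PDFs) and ``alist2'' (yields) have equal
#  lengths, RooAddPdf is created in the extended mode and the
#  coefficients are interpreted as absolute yields, not fractions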
self.signals .add ( self.__sss_cmp.pdf )
self.backgrounds .add ( self.__bbb_cmp.pdf )
self.crossterms1 .add ( self.__ssb_cmp.pdf ) ## cross-terms
self.crossterms2 .add ( self.__sbb_cmp.pdf ) ## cross-terms
## save the configuration
self.config = {
##
'signal_x' : self.signal_x ,
'signal_y' : self.signal_y ,
'signal_z' : self.signal_z ,
##
'suffix' : self.suffix ,
## SSB-terms
'bkg_1x' : self.bkg_1x ,
'bkg_1y' : self.bkg_1y ,
## SBB-terms
'bkg_2x' : self.bkg_2x ,
'bkg_2y' : self.bkg_2y ,
'bkg_2xy' : self.bkg_2xy ,
'bkg_2yz' : self.bkg_2yz ,
## BBB-term
'bkg_3x' : self.bkg_3x ,
'bkg_3y' : self.bkg_3y ,
## 'bkg_3xz' : self.bkg_3xz ,
'bkg_3yz' : self.bkg_3yz ,
'bkg_3D' : self.bkg_3D ,
## Yields
'sss' : self.SSS ,
'bss' : self.BSS ,
'ssb' : self.SSB ,
'sbb' : self.SBB ,
'bbs' : self.BBS ,
'bbb' : self.BBB ,
#
'components' : self.more_components ,
'xvar' : self.xvar ,
'yvar' : self.yvar ,
'zvar' : self.zvar ,
'name' : self.name ,
}
## redefine the clone method, allowing only the name to be changed
# @attention redefinition of parameters and variables is disabled,
# since it can't be done in a safe way
def clone ( self , name = '' , xvar = None , yvar = None , zvar = None ) :
"""Redefine the clone method, allowing only the name to be changed
- redefinition of parameters and variables is disabled,
since it can't be done in a safe way
"""
if xvar and not xvar is self.xvar :
raise AttributeError("Fit3DMix cannot be cloned with different ``xvar''")
if yvar and not yvar is self.yvar :
raise AttributeError("Fit3DMix cannot be cloned with different ``yvar''")
if zvar and not zvar is self.zvar :
raise AttributeError("Fit3DMix cannot be cloned with different ``zvar''")
return PDF.clone ( self , name = name ) if name else PDF.clone( self )
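# =========================================================================
## Example (sketch; ``model'' is hypothetical): clone with a new name only
#  @code
#  model2 = model.clone ( name = 'model2' ) ## same observables and shapes
#  @endcode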
@property
def SSS ( self ) :
"""The yield of Signal(x)*Signal(y)*Signal(z) component"""
return self.__sss
@SSS.setter
def SSS ( self , value ) :
value = float ( value )
assert value in self.__sss, "Value %s is out of the allowed range %s " % ( value , self.__sss.minmax() )
self.__sss.setVal ( value )
@property
def SSB ( self ) :
"""The yield of S(x)*[S(y)*B(z)+B(y)*S(z)] symmetrized component (same as SBS)"""
return self.__ssb
@SSB.setter
def SSB ( self , value ) :
value = float ( value )
assert value in self.__ssb, "Value %s is out of the allowed range %s " % ( value , self.__ssb.minmax() )
self.__ssb.setVal ( value )
@property
def SBS ( self ) :
"""The yield of S(x)*[S(y)*B(z)+B(y)*S(z)] symmetrized component (same as SSB)"""
return self.__sbs
@SBS.setter
def SBS ( self , value ) :
value = float ( value )
assert value in self.__sbs, "Value %s is out of the allowed range %s " % ( value , self.__sbs.minmax() )
self.__sbs.setVal ( value )
@property
def BSS ( self ) :
"""The yield of B(x)*S(y)*S(z) component"""
return self.__bss
@BSS.setter
def BSS ( self , value ) :
value = float ( value )
assert value in self.__bss, "Value %s is out of the allowed range %s " % ( value , self.__bss.minmax() )
self.__bss.setVal ( value )
@property
def SBB ( self ) :
"""The yield of S(x)*B(y,z) component"""
return self.__sbb
@SBB.setter
def SBB ( self , value ) :
value = float ( value )
assert value in self.__sbb, "Value %s is out of the allowed range %s " % ( value , self.__sbb.minmax() )
self.__sbb.setVal ( value )
@property
def BSB ( self ) :
"""The yield of B(x)*[S(y)*B(z)+B(y)*S(z)] symmetrized component (same as BBS)"""
return self.__bsb
@BSB.setter
def BSB ( self , value ) :
value = float ( value )
assert value in self.__bsb, "Value %s is out of the allowed range %s " % ( value , self.__bsb.minmax() )
self.__bsb.setVal ( value )
@property
def BBS ( self ) :
"""The yield of B(x)*[S(y)*B(z)+B(y)*S(z)] symmetrized component (same as BSB)"""
return self.__bbs
@BBS.setter
def BBS ( self , value ) :
value = float ( value )
assert value in self.__bbs, "Value %s is out of the allowed range %s " % ( value , self.__bbs.minmax() )
self.__bbs.setVal ( value )
@property
def BBB ( self ) :
"""The yield of B(x,y,z) component"""
return self.__bbb
@BBB.setter
def BBB ( self , value ) :
value = float ( value )
assert value in self.__bbb, "Value %s is out of the allowed range %s " % ( value , self.__bbb.minmax() )
self.__bbb.setVal ( value )
@property
def C ( self ) :
"""Get the yields of ``other'' component(s)
For single ``other'' component:
>>> print pdf.C ## read the single ``other'' component
>>> pdf.C = 100 ## assign to it
For multiple ``other'' components:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C = 4,100 ## assign to it
... or, alternatively:
>>> print pdf.C[4] ## read the 4th ``other'' component
>>> pdf.C[4].value = 100 ## assign to it
"""
lst = [ i for i in self.__nums_components ]
if not lst : return () ## extended fit? no other components?
elif 1 == len(lst) : return lst[0] ## single component?
return tuple ( lst )
@C.setter
def C ( self , value ) :
_n = len ( self.__nums_components )
assert 1 <= _n , "No ``other'' components are defined, assignment is impossible"
if 1 == _n :
_c = self.C
value = float ( value )
else :
index = value [0]
assert isinstance ( index , int ) and 0 <= index < _n, "Invalid ``other'' index %s/%d" % ( index , _n )
value = float ( value[1] )
_c = self.C[index]
## assign
assert value in _c , "Value %s is outside the allowed region %s" % ( value , _c.minmax() )
_c.setVal ( value )
@property
def yields ( self ) :
"""The list/tuple of the yields of all numeric components"""
return tuple ( [ i for i in self.alist2 ] )
@property
def total_yield ( self ) :
"""``total_yield''' : get the total yield"""
if not self.fit_result : return None
if not valid_pointer ( self.fit_result ) : return None
return self.fit_result.sum ( *self.yields )
# =========================================================================
# components
# =========================================================================
@property
def signal_x ( self ) :
"""``signal_x'': Signal(x) component/PDF"""
return self.__signal_x
@property
def signal_y ( self ) :
"""``signal_y'': Signal(y) component/PDF"""
return self.__signal_y
@property
def signal_z ( self ) :
"""``signal_z'': Signal(z) component/PDF"""
return self.__signal_z
@property
def bkg_1x ( self ) :
"""``bkg_1x'': B(x) component for B(x)*S(y)*S(z) term"""
return self.__bkg_1x
@property
def bkg_1y ( self ) :
"""``bkg_1y'': B(y) component for S(x)*B(y)*S(z) term"""
return self.__bkg_1y
@property
def bkg_1z ( self ) :
"""``bkg_1z'': B(z) component for S(x)*S(y)*B(z) term"""
return self.__bkg_1z
@property
def bkg_2xy( self ) :
"""``bkg_2xy'': B(x,y) component for B(x,y)*S(z) term"""
return self.__bkg_2xy
@property
def bkg_2xz( self ) :
"""``bkg_2xz'': B(x,z) component for B(x,z)*S(y) term"""
return self.__bkg_2xz
@property
def bkg_2yz( self ) :
"""``bkg_2yz'': B(y,z) component for B(y,z)*S(x) term"""
return self.__bkg_2yz
@property
def bkg_2x ( self ) :
"""``bkg_2x'': B(x) component for B(x,y)*S(z) & B(x,z)*S(y) terms"""
return self.__bkg_2x
@property
def bkg_2y ( self ) :
"""``bkg_2y'': B(y) component for B(y,z)*S(x) & B(x,y)*S(z) terms"""
return self.__bkg_2y
@property
def bkg_2z ( self ) :
"""``bkg_2z'': B(z) component for B(x,z)*S(y) & B(y,z)*S(x) terms"""
return self.__bkg_2z
@property
def bkg_3x ( self ) :
"""``bkg_3x'': B(x) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3x
@property
def bkg_3y ( self ) :
"""``bkg_3y'': B(y) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3y
@property
def bkg_3z ( self ) :
"""``bkg_3z'': B(z) component for B(x)*B(y)*B(z) term"""
return self.__bkg_3z
@property
def bkg_3yz ( self ) :
"""``bkg_3yz'': B(y,z) component for B(x)*B(y,z) term"""
return self.__bkg_3yz
@property
def bkg_3D ( self ) :
"""B(x,y,z) component/PDF for the final PDF"""
return self.__bbb_cmp
@property
def cmp_SSS ( self ) :
"""```triple-signal'' component/PDF"""
return self.__sss_cmp
@property
def cmp_SSB ( self ) :
"""``signal-signal-background'' symmetrized component/PDF"""
return self.__ssb_cmp
@property
def cmp_SBS ( self ) :
"""```signal-background-signal'' symmetrized component/PDF (same as cmp_SSB)"""
return self.__sbs_cmp
@property
def cmp_BSS ( self ) :
"""```background-signal-signal'' symmetrized component/PDF (same as cmp_SSB)"""
return self.__bss_cmp
@property
def cmp_SBB ( self ) :
"""``signal-background-background'' symmetrized component/PDF"""
return self.__sbb_cmp
@property
def cmp_BSB ( self ) :
"""```background-signal-background'' symmetrized component/PDF (same as cmp_SBB)"""
return self.__bsb_cmp
@property
def cmp_BBS ( self ) :
"""```background-background-signal'' symmetrized component/PDF (same as cmp_SBB)"""
return self.__bbs_cmp
@property
def cmp_BBB ( self ) :
"""```triple-background'' component/PDF"""
return self.__bbb_cmp
@property
def more_components ( self ) :
"""additional/``other'' components"""
return tuple( self.__more_components )
@property
def suffix ( self ) :
"""``suffix'', used to build the name"""
return self.__suffix
# =========================================================================
## Raw, non-symmetrized fit components/PDF (for debugging)
# =========================================================================
@property
def cmp_raw_SSB ( self ) :
"""``signal-signal-background'' raw, non-symmetrized component/PDF"""
return self.__ssb_cmp_raw
@property
def cmp_raw_SBS ( self ) :
"""```signal-background-signal'' raw,non-symmetrized component/PDF"""
return self.__sbs_cmp_raw
@property
def cmp_raw_BSS ( self ) :
"""``background-signal-signal'' raw, non-symmetrized component/PDF"""
return self.__bss_cmp_raw
@property
def cmp_raw_BSB ( self ) :
"""```background-signal-background'' raw,non-symmetrized component/PDF"""
return self.__bsb_cmp_raw
# =============================================================================
if '__main__' == __name__ :
from ostap.utils.docme import docme
docme ( __name__ , logger = logger )
# =============================================================================
# The END
# =============================================================================
| 43.673704 | 142 | 0.469237 | 18,140 | 163,427 | 4.046141 | 0.039085 | 0.026704 | 0.007943 | 0.011554 | 0.82675 | 0.804692 | 0.779746 | 0.764813 | 0.741515 | 0.726011 | 0 | 0.022507 | 0.38367 | 163,427 | 3,741 | 143 | 43.685378 | 0.70618 | 0.304766 | 0 | 0.716266 | 0 | 0.001848 | 0.074587 | 0.004457 | 0 | 0 | 0 | 0 | 0.021257 | 1 | 0.090573 | false | 0.002311 | 0.012477 | 0.000924 | 0.181146 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c7c63811d51be10887151b23bb02922b5c230dfe | 24,929 | py | Python | Vectorization/Vectorization_Benchmark.py | davedjung/Project-Paul | a3df0d6262071c2f930e0357b807d2ada27d20b3 | [
"MIT"
] | null | null | null | Vectorization/Vectorization_Benchmark.py | davedjung/Project-Paul | a3df0d6262071c2f930e0357b807d2ada27d20b3 | [
"MIT"
] | null | null | null | Vectorization/Vectorization_Benchmark.py | davedjung/Project-Paul | a3df0d6262071c2f930e0357b807d2ada27d20b3 | [
"MIT"
] | null | null | null | #Vectorization_Benchmark.py
from timeit import default_timer as timer
from numba import vectorize
import numpy as np
print("Magnitude Computation")
@vectorize(['float32(float32, float32)'], target='cpu')
def mag_cpu(a, b):
    return a**2 + b**2


@vectorize(['float32(float32, float32)'], target='cuda')
def mag_gpu(a, b):
    return a**2 + b**2


# Bounding box for the random point positions.
X_MAX, X_MIN = 10, -10
Y_MAX, Y_MIN = 10, -10


def random_positions(size, dtype=np.float64):
    """Scatter `size` random points over the central half of the box."""
    r = np.zeros((size, 2), dtype=dtype)
    for i in range(size):
        r[i, 0] = np.random.rand() * (X_MAX - X_MIN) / 2 - (X_MAX - X_MIN) / 4
        r[i, 1] = np.random.rand() * (Y_MAX - Y_MIN) / 2 - (Y_MAX - Y_MIN) / 4
    return r


def run_benchmark(size):
    """Time four ways of computing the pairwise separation magnitudes."""
    rij = np.zeros((size, size, 2))
    rij_mag = np.zeros((size, size))
    rij0 = np.zeros((size, size), dtype=np.float32)
    rij1 = np.zeros((size, size), dtype=np.float32)

    # Test 1: straightforward nested loops.
    r = random_positions(size)
    start = timer()
    for i in range(size):
        for j in range(size):
            rij[i][j] = r[i] - r[j]
            rij_mag[i][j] = np.sqrt(np.square(rij[i][j][0]) + np.square(rij[i][j][1]))
    print("Test 1 complete [linear] : ", timer() - start)

    # Test 2: nested loops, reusing rij[i][j] = -rij[j][i] for the lower triangle.
    r = random_positions(size)
    start = timer()
    for i in range(size):
        for j in range(size):
            if j < i:
                rij[i][j] = -rij[j][i]
                rij_mag[i][j] = rij_mag[j][i]
            else:
                rij[i][j] = r[i] - r[j]
                rij_mag[i][j] = np.sqrt(np.square(rij[i][j][0]) + np.square(rij[i][j][1]))
    print("Test 2 complete [linear optimized] : ", timer() - start)

    # Test 3: componentwise differences in loops, magnitudes via the CPU ufunc.
    r = random_positions(size, dtype=np.float32)
    start = timer()
    for i in range(size):
        for j in range(size):
            rij0[i][j] = r[i][0] - r[j][0]
            rij1[i][j] = r[i][1] - r[j][1]
    for i in range(size):
        rij_mag[i] = mag_cpu(rij0[i], rij1[i])
    rij_mag = np.sqrt(rij_mag)
    print("Test 3 complete [vectorized] : ", timer() - start)

    # Test 4: as test 3, but with the CUDA-compiled ufunc.
    r = random_positions(size, dtype=np.float32)
    start = timer()
    for i in range(size):
        for j in range(size):
            rij0[i][j] = r[i][0] - r[j][0]
            rij1[i][j] = r[i][1] - r[j][1]
    for i in range(size):
        rij_mag[i] = mag_gpu(rij0[i], rij1[i])
    rij_mag = np.sqrt(rij_mag)
    print("Test 4 complete [gpu] : ", timer() - start)


# Run the full benchmark over a range of system sizes.
for set_number, size in enumerate(
        (10, 20, 50, 100, 200, 500, 1000, 2000, 3000, 4000, 5000), start=1):
    print("Set %d: size = %d" % (set_number, size))
    run_benchmark(size)
| 27.485116 | 75 | 0.611617 | 5,124 | 24,929 | 2.869633 | 0.015613 | 0.073313 | 0.115207 | 0.082291 | 0.968784 | 0.963887 | 0.963887 | 0.962255 | 0.962255 | 0.962255 | 0 | 0.042324 | 0.155522 | 24,929 | 906 | 76 | 27.515453 | 0.656137 | 0.035902 | 0 | 0.958621 | 0 | 0 | 0.065382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002759 | false | 0 | 0.004138 | 0.002759 | 0.009655 | 0.077241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
40075c8278920e9b0d200362e1b3152152071200 | 151 | py | Python | ramda/remove_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/remove_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/remove_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from ramda import *
from ramda.private.asserts import *
def remove_test():
assert_equal(remove(2, 3, [1, 2, 3, 4, 5, 6, 7, 8]), [1, 2, 6, 7, 8])
| 21.571429 | 73 | 0.602649 | 29 | 151 | 3.068966 | 0.62069 | 0.202247 | 0.067416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0.205298 | 151 | 6 | 74 | 25.166667 | 0.616667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
4077c7e3908ba62b7a109bb7be91e29da7a8262b | 2,772 | py | Python | Tests/test_causal_integral_filter.py | tonyroberts/project-tetra-display | ada1169d3884e61c06e90fe50a9886b50564c7dd | [
"MIT"
] | 4 | 2020-07-29T09:18:16.000Z | 2021-05-19T22:31:23.000Z | Tests/test_causal_integral_filter.py | tonyroberts/project-tetra-display | ada1169d3884e61c06e90fe50a9886b50564c7dd | [
"MIT"
] | 44 | 2020-07-29T09:12:07.000Z | 2021-07-04T01:50:57.000Z | Tests/test_causal_integral_filter.py | tonyroberts/project-tetra-display | ada1169d3884e61c06e90fe50a9886b50564c7dd | [
"MIT"
] | 4 | 2020-07-31T20:02:47.000Z | 2021-05-14T08:48:38.000Z | import unittest
import numpy as np
from causal_integral_filter import CausalIntegralFilter
class TestCausalIntegralFilter(unittest.TestCase):
    def test_constant_zero_data(self):
        dt = 0.01
        t = np.arange(0, 2, dt)
        to_process_data = 0*t
        desired_processed_data = 0*t
        processed_data = np.array([desired_processed_data[0]])
        filter = CausalIntegralFilter(processed_data[0], t[0])
        for i in range(1, len(t)):
            filter.append(to_process_data[i], t[i])
            processed_data = np.append(processed_data, filter.get_datum())
        rms_error = (
            np.sqrt(np.mean((desired_processed_data-processed_data)**2)))
        self.assertLess(rms_error, 0.05,
                        "Fails to integrate a constant 0 signal to 0.")

    def test_constant_one_data(self):
        dt = 0.01
        t = np.arange(0, 2, dt)
        to_process_data = 0*t + 1
        desired_processed_data = t + 1
        processed_data = np.array([desired_processed_data[0]])
        filter = CausalIntegralFilter(processed_data[0], t[0])
        for i in range(1, len(t)):
            filter.append(to_process_data[i], t[i])
            processed_data = np.append(processed_data, filter.get_datum())
        rms_error = (
            np.sqrt(np.mean((desired_processed_data-processed_data)**2)))
        self.assertLess(rms_error, 0.05,
                        "Fails to integrate a constant 1 signal to t+1.")

    def test_linear_data(self):
        dt = 0.01
        t = np.arange(0, 2, dt)
        to_process_data = t
        desired_processed_data = 0.5*t**2
        processed_data = np.array([desired_processed_data[0]])
        filter = CausalIntegralFilter(processed_data[0], t[0])
        for i in range(1, len(t)):
            filter.append(to_process_data[i], t[i])
            processed_data = np.append(processed_data, filter.get_datum())
        rms_error = (
            np.sqrt(np.mean((desired_processed_data-processed_data)**2)))
        self.assertLess(rms_error, 0.05,
                        "Fails to integrate a t signal to 1/2 t^2.")

    def test_sine_data(self):
        dt = 0.01
        t = np.arange(0, 2, dt)
        to_process_data = np.sin(t)
        desired_processed_data = -np.cos(t)
        processed_data = np.array([desired_processed_data[0]])
        filter = CausalIntegralFilter(processed_data[0], t[0])
        for i in range(1, len(t)):
            filter.append(to_process_data[i], t[i])
            processed_data = np.append(processed_data, filter.get_datum())
        rms_error = (
            np.sqrt(np.mean((desired_processed_data-processed_data)**2)))
        self.assertLess(rms_error, 0.05,
                        "Fails to integrate a Sin[t] signal to -Cos[t].")
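The tests above pin down the interface they expect: a constructor taking an initial value and initial time, `append(datum, t)`, and `get_datum()` returning the running integral. A minimal sketch of such a class using the trapezoidal rule — a hypothetical stand-in, not the actual `causal_integral_filter` implementation from project-tetra-display:

```python
class CausalIntegralFilter:
    """Hypothetical sketch: accumulates the running integral of a
    sampled signal with the trapezoidal rule (an assumption; the real
    project class may integrate differently)."""

    def __init__(self, initial_value, initial_time):
        self._integral = initial_value
        self._last_datum = None      # previous sample, unknown at start
        self._last_time = initial_time

    def append(self, datum, time):
        # The trapezoid needs two samples; for the very first appended
        # point, fall back to a rectangle of height `datum`.
        prev = self._last_datum if self._last_datum is not None else datum
        self._integral += 0.5 * (prev + datum) * (time - self._last_time)
        self._last_datum = datum
        self._last_time = time

    def get_datum(self):
        return self._integral
```

With `dt = 0.01` the local truncation error of the trapezoid rule is O(dt^2) per step, comfortably inside the 0.05 RMS tolerance the tests allow.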
| 37.972603 | 74 | 0.606421 | 386 | 2,772 | 4.137306 | 0.142487 | 0.260488 | 0.150282 | 0.078898 | 0.817157 | 0.790232 | 0.790232 | 0.790232 | 0.790232 | 0.790232 | 0 | 0.033416 | 0.276696 | 2,772 | 72 | 75 | 38.5 | 0.763092 | 0 | 0 | 0.666667 | 0 | 0 | 0.063853 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.05 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4080166d442303a70ff59b14a474d03732c31a17 | 21,800 | py | Python | library/scrollphathd/fonts/fonthachicro.py | Kisty/scroll-phat-hd | 74a16305574ecd708f3a1c8ce667bf114675c5c7 | [
"MIT"
] | 155 | 2017-02-28T15:33:17.000Z | 2021-12-15T15:53:52.000Z | library/scrollphathd/fonts/fonthachicro.py | Kisty/scroll-phat-hd | 74a16305574ecd708f3a1c8ce667bf114675c5c7 | [
"MIT"
] | 59 | 2017-03-02T23:46:13.000Z | 2022-02-09T17:44:27.000Z | library/scrollphathd/fonts/fonthachicro.py | Kisty/scroll-phat-hd | 74a16305574ecd708f3a1c8ce667bf114675c5c7 | [
"MIT"
] | 75 | 2017-02-28T10:22:00.000Z | 2022-02-04T12:59:50.000Z | data = {
0x00000030: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000031: [[0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0xff],
[0xff,0xff,0x00,0xff],
[0x00,0xff,0x00,0xff],
[0x00,0xff,0x00,0xff],
[0x00,0xff,0x00,0xff],
[0x00,0xff,0xff,0xff]],
0x00000032: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000033: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000034: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0xff,0xff]],
0x00000035: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000036: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000037: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0xff,0xff]],
0x00000038: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000039: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000041: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x00000042: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000043: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000044: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000045: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000046: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0xff,0xff,0x00,0x00,0x00,0x00]],
0x00000047: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000048: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x00000049: [[0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0xff],
[0xff,0xff,0x00,0xff,0xff],
[0x00,0xff,0x00,0xff,0x00],
[0xff,0xff,0x00,0xff,0xff],
[0xff,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff]],
0x0000004a: [[0x00,0x00,0xff,0xff,0xff,0xff,0xff],
[0x00,0x00,0xff,0x00,0x00,0x00,0xff],
[0x00,0x00,0xff,0xff,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0x00,0xff,0x00],
[0xff,0x00,0xff,0xff,0x00,0xff,0x00],
[0xff,0xff,0x00,0x00,0xff,0xff,0x00],
[0x00,0xff,0xff,0xff,0xff,0x00,0x00]],
0x0000004b: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0x00,0xff,0xff],
[0xff,0x00,0x00,0x00,0xff,0xff,0x00],
[0xff,0x00,0xff,0xff,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x0000004c: [[0xff,0xff,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x0000004d: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0xff,0x00,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x0000004e: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x0000004f: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000050: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0xff,0xff,0x00,0x00,0x00,0x00]],
0x00000051: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0x00,0xff,0xff],
[0xff,0xff,0x00,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000052: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000053: [[0x00,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000054: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0x00,0x00,0xff,0x00,0xff,0x00,0x00],
[0x00,0x00,0xff,0x00,0xff,0x00,0x00],
[0x00,0x00,0xff,0x00,0xff,0x00,0x00],
[0x00,0x00,0xff,0xff,0xff,0x00,0x00]],
0x00000055: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000056: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0x00,0x00,0xff,0xff,0xff,0x00,0x00]],
0x00000057: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0x00,0xff,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x00000058: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x00000059: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0x00,0x00,0xff,0x00,0xff,0x00,0x00],
[0x00,0x00,0xff,0x00,0xff,0x00,0x00],
[0x00,0x00,0xff,0xff,0xff,0x00,0x00]],
0x0000005a: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0x00,0xff,0xff],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0xff,0xff,0x00,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000061: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0x00],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000062: [[0xff,0xff,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000063: [[0x00,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000064: [[0x00,0x00,0x00,0x00,0xff,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000065: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0x00,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000066: [[0x00,0x00,0xff,0xff,0xff,0xff,0x00],
[0x00,0xff,0xff,0x00,0x00,0xff,0xff],
[0xff,0xff,0x00,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0xff,0xff,0xff],
[0xff,0xff,0x00,0xff,0xff,0xff,0x00],
[0x00,0xff,0x00,0xff,0x00,0x00,0x00],
[0x00,0xff,0xff,0xff,0x00,0x00,0x00]],
0x00000067: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0x00]],
0x00000068: [[0xff,0xff,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x00000069: [[0xff,0xff,0xff],
[0xff,0x00,0xff],
[0xff,0xff,0xff],
[0xff,0x00,0xff],
[0xff,0x00,0xff],
[0xff,0x00,0xff],
[0xff,0xff,0xff]],
0x0000006a: [[0x00,0x00,0xff,0xff,0xff],
[0x00,0x00,0xff,0x00,0xff],
[0x00,0x00,0xff,0xff,0xff],
[0x00,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0xff,0xff],
[0xff,0xff,0xff,0xff,0x00]],
0x0000006b: [[0xff,0xff,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x0000006c: [[0xff,0xff,0xff,0x00],
[0xff,0x00,0xff,0x00],
[0xff,0x00,0xff,0x00],
[0xff,0x00,0xff,0x00],
[0xff,0x00,0xff,0xff],
[0xff,0xff,0x00,0xff],
[0x00,0xff,0xff,0xff]],
0x0000006d: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x0000006e: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff]],
0x0000006f: [[0x00,0x00,0xff,0xff,0xff,0x00,0x00],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0xff,0x00,0xff,0xff],
[0x00,0xff,0xff,0x00,0xff,0xff,0x00],
[0x00,0x00,0xff,0xff,0xff,0x00,0x00]],
0x00000070: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0xff,0x00,0x00,0x00,0x00]],
0x00000071: [[0x00,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0x00,0xff],
[0x00,0x00,0x00,0x00,0xff,0xff,0xff]],
0x00000072: [[0xff,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0x00,0xff,0x00,0x00,0xff,0xff],
[0xff,0x00,0x00,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0x00,0xff,0x00,0x00,0x00,0x00],
[0xff,0xff,0xff,0x00,0x00,0x00,0x00]],
0x00000073: [[0x00,0xff,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0x00],
[0xff,0x00,0xff,0xff,0xff,0xff,0x00],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0x00,0xff],
[0x00,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000074: [[0x00,0xff,0xff,0xff,0x00,0x00,0x00],
[0xff,0xff,0x00,0xff,0xff,0xff,0x00],
[0xff,0x00,0x00,0x00,0x00,0xff,0x00],
[0xff,0xff,0x00,0xff,0xff,0xff,0xff],
[0x00,0xff,0x00,0xff,0xff,0x00,0xff],
[0x00,0xff,0xff,0x00,0x00,0xff,0xff],
[0x00,0x00,0xff,0xff,0xff,0xff,0x00]],
0x00000075: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000076: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0x00,0xff,0xff],
[0xff,0x00,0x00,0xff,0xff,0xff,0x00],
[0xff,0xff,0xff,0xff,0x00,0x00,0x00]],
0x00000077: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000078: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0xff,0x00,0x00,0xff],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0x00,0xff,0x00,0xff,0x00,0xff,0x00],
[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0x00,0xff,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
0x00000079: [[0xff,0xff,0xff,0x00,0xff,0xff,0xff],
[0xff,0x00,0xff,0x00,0xff,0x00,0xff],
[0xff,0x00,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0x00,0xff],
[0x00,0xff,0xff,0xff,0xff,0x00,0xff],
[0x00,0xff,0x00,0x00,0x00,0xff,0xff],
[0x00,0xff,0xff,0xff,0xff,0xff,0x00]],
0x0000007a: [[0xff,0xff,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0x00,0xff],
[0xff,0xff,0x00,0x00,0x00,0xff,0xff],
[0xff,0x00,0xff,0xff,0xff,0xff,0xff],
[0xff,0x00,0x00,0x00,0x00,0x00,0xff],
[0xff,0xff,0xff,0xff,0xff,0xff,0xff]],
}
width = 7
height = 7
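Each entry in `data` maps a Unicode code point to a row-major grid of per-pixel bytes, `width`/`height` wide. Rendering a glyph is therefore a nested loop over rows and pixels. A small illustration with a hand-made sample glyph in the same layout (hypothetical data, not taken from the table above); this font appears to draw its strokes with `0x00` on an `0xff` background — an assumption the renderer below encodes by treating low bytes as lit:

```python
# Hypothetical sample glyph in the same layout as the `data` dict above:
# a code point mapped to a row-major grid of per-pixel bytes.
SAMPLE = {
    0x2b: [[0xff, 0x00, 0xff],   # '+' drawn with 0x00 strokes,
           [0x00, 0x00, 0x00],   # as this font appears to do
           [0xff, 0x00, 0xff]],  # (assumption)
}

def render(glyph, on="#", off="."):
    """Return an ASCII preview, treating low bytes (0x00) as lit."""
    return "\n".join(
        "".join(on if px < 0x80 else off for px in row)
        for row in glyph
    )

print(render(SAMPLE[0x2b]))
# .#.
# ###
# .#.
```

Note that glyph widths vary (e.g. `0x69` is 3 columns wide while most are 7), so the renderer takes each row's actual length rather than assuming `width`.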
| 43.512974 | 51 | 0.56867 | 3,007 | 21,800 | 4.122714 | 0.022614 | 0.778253 | 0.711462 | 0.54852 | 0.948617 | 0.948617 | 0.948617 | 0.948294 | 0.944745 | 0.942809 | 0 | 0.343742 | 0.24789 | 21,800 | 500 | 52 | 43.6 | 0.412357 | 0 | 0 | 0.773973 | 0 | 0 | 0 | 0 | 0 | 0 | 0.56789 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
409b8722ee160518eb163aced03be3ad2c111cdb | 21,491 | py | Python | lib/modeling/nets/mobilenet.py | iSmida/DetectionYolo | b7e1eb26ca874da797cee02cb3e1639cf3546e0c | [
"MIT"
] | 3 | 2019-08-28T10:08:24.000Z | 2020-08-10T08:58:42.000Z | lib/modeling/nets/mobilenet.py | iSmida/DetectionYolo | b7e1eb26ca874da797cee02cb3e1639cf3546e0c | [
"MIT"
] | null | null | null | lib/modeling/nets/mobilenet.py | iSmida/DetectionYolo | b7e1eb26ca874da797cee02cb3e1639cf3546e0c | [
"MIT"
] | 1 | 2020-04-29T11:01:43.000Z | 2020-04-29T11:01:43.000Z | import torch
import torch.nn as nn
from collections import namedtuple
import functools
Conv = namedtuple('Conv', ['stride', 'depth'])
DepthSepConv = namedtuple('DepthSepConv', ['stride', 'depth'])
InvertedResidual = namedtuple('InvertedResidual', ['stride', 'depth', 'num', 't']) # t is the expansion factor
DilatedResidual = namedtuple('DilatedResidual', ['stride', 'depth', 'num', 't'])
Pool2d = namedtuple('Pool2d',['kernel_size','stride'])
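Each `*_CONV_DEFS` list below is a compact network spec: walking it yields the per-layer channel plan and the network's overall output stride, where only the first repeat of a multi-block entry applies its stride (the MobileNetV2 convention). A torch-free sketch of that expansion — the helper name is made up for illustration:

```python
from collections import namedtuple

Conv = namedtuple('Conv', ['stride', 'depth'])
InvertedResidual = namedtuple('InvertedResidual', ['stride', 'depth', 'num', 't'])

def expand_defs(conv_defs, in_channels=3):
    """Hypothetical helper: flatten a conv-def list into per-layer
    (kind, stride, in_ch, out_ch) tuples. Only the first of an
    entry's `num` repeats applies its stride."""
    layers = []
    ch = in_channels
    for d in conv_defs:
        for i in range(getattr(d, 'num', 1)):  # plain Conv has no `num`
            stride = d.stride if i == 0 else 1
            layers.append((type(d).__name__, stride, ch, d.depth))
            ch = d.depth
    return layers

defs = [
    Conv(stride=2, depth=32),
    InvertedResidual(stride=1, depth=16, num=1, t=1),
    InvertedResidual(stride=2, depth=24, num=2, t=6),
]
layers = expand_defs(defs)
total_stride = 1
for _, s, _, _ in layers:
    total_stride *= s
print(layers)        # 4 layers; the repeated block's second copy has stride 1
print(total_stride)  # 4
```

The same walk explains the experiment log that follows: each redefinition of `V2_CONV_DEFS` trades depth, repeat count, and stride placement against the final feature-map resolution.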
V1_CONV_DEFS = [
Conv(stride=2, depth=32),
DepthSepConv(stride=1, depth=64),
DepthSepConv(stride=2, depth=128),
DepthSepConv(stride=1, depth=128),
DepthSepConv(stride=2, depth=256),
DepthSepConv(stride=1, depth=256),
DepthSepConv(stride=2, depth=512),
DepthSepConv(stride=1, depth=512),
DepthSepConv(stride=1, depth=512),
DepthSepConv(stride=1, depth=512),
DepthSepConv(stride=1, depth=512),
DepthSepConv(stride=1, depth=512),
DepthSepConv(stride=2, depth=1024),
DepthSepConv(stride=1, depth=1024)
]
V2_CONV_DEFS = [
Conv(stride=2, depth=32),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=24, num=2, t=6),
InvertedResidual(stride=2, depth=32, num=3, t=6),
InvertedResidual(stride=1, depth=64, num=4, t=6),
InvertedResidual(stride=1, depth=96, num=3, t=6),
InvertedResidual(stride=2, depth=160, num=3, t=6),
InvertedResidual(stride=1, depth=320, num=1, t=6),
]
#3-29
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=2, depth=24, num=5, t=4),
InvertedResidual(stride=1, depth=32, num=3, t=4),
InvertedResidual(stride=2, depth=40, num=3, t=4),
InvertedResidual(stride=1, depth=52, num=1, t=4),
]
#3-30
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=1, depth=30, num=5, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
]
#4-2
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=2, depth=24, num=5, t=4),
InvertedResidual(stride=1, depth=30, num=3, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
InvertedResidual(stride=1, depth=42, num=1, t=4),
]
#4-3
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=2, depth=24, num=5, t=4),
InvertedResidual(stride=1, depth=28, num=3, t=4),
InvertedResidual(stride=2, depth=32, num=3, t=4),
InvertedResidual(stride=1, depth=36, num=1, t=4),
]
#4-7
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=2, depth=24, num=5, t=4),
InvertedResidual(stride=1, depth=28, num=3, t=4),
InvertedResidual(stride=2, depth=32, num=3, t=4),
InvertedResidual(stride=1, depth=36, num=1, t=4),
]
#4-7-2
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=5, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=4),
]
'''
#4-8 quick stride=2-fail
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=12),
InvertedResidual(stride=2, depth=12, num=1, t=1),
InvertedResidual(stride=1, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=5, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=4),
]
#4-8-2
V2_CONV_DEFS = [
#1
Conv(stride=2, depth=16),
InvertedResidual(stride=2, depth=16, num=1, t=1),
InvertedResidual(stride=1, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=5, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=4),
]
'''
#4-11-3
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=5, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=4),
]
#4-12-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=2),
]
#4-12-2-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=2),
]
#4-13-1-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=20, num=1, t=1),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=2),
]
#4-14-1-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=20, num=1, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=1, depth=30, num=3, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
InvertedResidual(stride=1, depth=40, num=1, t=2),
]
#4-16-1-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=2),
]
#4-16-2-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=4, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=3, t=2),
]
#4-17-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=4, t=4),
InvertedResidual(stride=2, depth=28, num=3, t=4),
InvertedResidual(stride=1, depth=32, num=1, t=4),
]
#4-18-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=10, num=1, t=4),
InvertedResidual(stride=2, depth=14, num=2, t=4),
InvertedResidual(stride=1, depth=18, num=4, t=4),
InvertedResidual(stride=2, depth=22, num=4, t=4),
InvertedResidual(stride=1, depth=26, num=4, t=4),
]
#4-19-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=10, num=1, t=4),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=1, depth=16, num=4, t=4),
InvertedResidual(stride=2, depth=20, num=4, t=4),
InvertedResidual(stride=1, depth=24, num=4, t=4),
]
#5-4-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
DilatedResidual(stride=1, depth=10, num=1, t=4),
DilatedResidual(stride=2, depth=12, num=2, t=4),
DilatedResidual(stride=1, depth=16, num=4, t=4),
DilatedResidual(stride=2, depth=20, num=3, t=4),
DilatedResidual(stride=1, depth=24, num=1, t=4),
]
#5-7-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
DilatedResidual(stride=1, depth=10, num=1, t=4),
DilatedResidual(stride=2, depth=12, num=2, t=4),
DilatedResidual(stride=1, depth=16, num=4, t=4),
DilatedResidual(stride=2, depth=20, num=4, t=4),
DilatedResidual(stride=1, depth=24, num=4, t=4),
]
#5-8-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=10, num=1, t=4),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=1, depth=16, num=3, t=4),
DilatedResidual(stride=1, depth=16, num=1, t=4),
InvertedResidual(stride=2, depth=20, num=4, t=4),
InvertedResidual(stride=1, depth=24, num=3, t=4),
DilatedResidual(stride=1, depth=24, num=1, t=4),
]
#5-10-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=10, num=1, t=4),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=1, depth=16, num=2, t=4),
DilatedResidual(stride=1, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
DilatedResidual(stride=1, depth=20, num=1, t=4),
DilatedResidual(stride=1, depth=24, num=1, t=4),
]
#5-14-1
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=1, depth=16),
InvertedResidual(stride=1, depth=10, num=1, t=4),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=1, depth=16, num=2, t=4),
DilatedResidual(stride=1, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
DilatedResidual(stride=1, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=2, t=4),
DilatedResidual(stride=1, depth=24, num=2, t=4),
]
#6-15-2
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=2, depth=16),
InvertedResidual(stride=2, depth=10, num=2, t=4),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
DilatedResidual(stride=1, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
DilatedResidual(stride=1, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=24, num=2, t=4),
DilatedResidual(stride=1, depth=24, num=2, t=4),
]
#6-16-3
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=2, depth=16),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
DilatedResidual(stride=1, depth=20, num=2, t=4),
InvertedResidual(stride=2, depth=24, num=2, t=4),
DilatedResidual(stride=1, depth=24, num=2, t=4),
InvertedResidual(stride=1, depth=32, num=2, t=4),
DilatedResidual(stride=1, depth=32, num=2, t=4),
]
#6-18-3
V2_CONV_DEFS = [
#1
#nn.MaxPool2d(2)
Conv(stride=2, depth=16),
InvertedResidual(stride=2, depth=12, num=2, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=24, num=2, t=4),
DilatedResidual(stride=1, depth=24, num=2, t=4),
InvertedResidual(stride=2, depth=36, num=2, t=4),
DilatedResidual(stride=1, depth=36, num=2, t=4),
InvertedResidual(stride=1, depth=48, num=2, t=4),
DilatedResidual(stride=1, depth=48, num=2, t=4),
]
V2_CONV_DEFS = [
Conv(stride=2, depth=32),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=24, num=2, t=6),
InvertedResidual(stride=2, depth=32, num=3, t=6),
InvertedResidual(stride=1, depth=64, num=4, t=6),
InvertedResidual(stride=1, depth=96, num=3, t=6),
InvertedResidual(stride=2, depth=160, num=3, t=6),
InvertedResidual(stride=1, depth=320, num=1, t=6),
]
#6-19-3
V2_CONV_DEFS = [
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=12, num=1, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
DilatedResidual(stride=1, depth=20, num=1, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
DilatedResidual(stride=1, depth=24, num=1, t=4),
InvertedResidual(stride=1, depth=30, num=2, t=4),
DilatedResidual(stride=1, depth=30, num=1, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
DilatedResidual(stride=1, depth=48, num=1, t=4),
]
#6-21
V2_CONV_DEFS = [
Conv(stride=2, depth=32),
InvertedResidual(stride=1, depth=16, num=1, t=1),
InvertedResidual(stride=2, depth=24, num=2, t=6),
InvertedResidual(stride=2, depth=32, num=3, t=6),
InvertedResidual(stride=2, depth=64, num=4, t=6),
InvertedResidual(stride=1, depth=96, num=3, t=6),
InvertedResidual(stride=2, depth=160, num=3, t=6),
InvertedResidual(stride=1, depth=320, num=1, t=6),
]
#6-20
V2_CONV_DEFS = [
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=12, num=1, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=20, num=1, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=1, depth=24, num=1, t=4),
InvertedResidual(stride=1, depth=30, num=2, t=4),
InvertedResidual(stride=1, depth=30, num=1, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
InvertedResidual(stride=1, depth=48, num=1, t=4),
]
#6-24-0
V2_CONV_DEFS = [
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=12, num=1, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=2, t=4),
InvertedResidual(stride=1, depth=20, num=1, t=4),
InvertedResidual(stride=2, depth=24, num=3, t=4),
InvertedResidual(stride=1, depth=24, num=1, t=4),
InvertedResidual(stride=1, depth=30, num=2, t=4),
InvertedResidual(stride=1, depth=30, num=1, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
InvertedResidual(stride=1, depth=48, num=1, t=4),
]
#6-24-0-2
V2_CONV_DEFS = [
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=12, num=1, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=1, depth=24, num=4, t=4),
InvertedResidual(stride=1, depth=30, num=3, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
InvertedResidual(stride=1, depth=48, num=1, t=4),
]
#6-25-0 (last assignment wins: this final rebinding is the V2_CONV_DEFS actually in effect)
V2_CONV_DEFS = [
Conv(stride=2, depth=16),
InvertedResidual(stride=1, depth=12, num=1, t=4),
InvertedResidual(stride=2, depth=16, num=2, t=4),
InvertedResidual(stride=2, depth=20, num=3, t=4),
InvertedResidual(stride=2, depth=24, num=4, t=4),
InvertedResidual(stride=1, depth=30, num=3, t=4),
InvertedResidual(stride=2, depth=36, num=3, t=4),
DilatedResidual(stride=1, depth=48, num=1, t=4),
]
class _conv_bn(nn.Module):
def __init__(self, inp, oup, stride):
super(_conv_bn, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
nn.BatchNorm2d(oup),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.conv(x)
class _pool2d(nn.Module):
def __init__(self):
super(_pool2d, self).__init__()
self.pool2d = nn.Sequential(
nn.MaxPool2d(2,2),
)
def forward(self, x):
return self.pool2d(x)
class _conv_dw(nn.Module):
def __init__(self, inp, oup, stride):
super(_conv_dw, self).__init__()
self.conv = nn.Sequential(
# dw
nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
nn.BatchNorm2d(inp),
nn.ReLU(inplace=True),
# pw
nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
nn.BatchNorm2d(oup),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.conv(x)
class _inverted_residual_bottleneck(nn.Module):
def __init__(self, inp, oup, stride, expand_ratio):
super(_inverted_residual_bottleneck, self).__init__()
self.use_res_connect = stride == 1 and inp == oup
self.conv = nn.Sequential(
# pw
nn.Conv2d(inp, inp * expand_ratio, 1, 1, 0, bias=False),
nn.BatchNorm2d(inp * expand_ratio),
nn.ReLU6(inplace=True),
# dw
nn.Conv2d(inp * expand_ratio, inp * expand_ratio, 3, stride, 1, groups=inp * expand_ratio, bias=False),
nn.BatchNorm2d(inp * expand_ratio),
nn.ReLU6(inplace=True),
# pw-linear
nn.Conv2d(inp * expand_ratio, oup, 1, 1, 0, bias=False),
nn.BatchNorm2d(oup),
)
self.depth = oup
def forward(self, x):
if self.use_res_connect:
return x + self.conv(x)
else:
return self.conv(x)
class _dilated_residual_bottleneck(nn.Module):
def __init__(self, inp, oup, stride, expand_ratio):
super(_dilated_residual_bottleneck, self).__init__()
self.use_res_connect = stride == 1 and inp == oup
self.conv = nn.Sequential(
# pw
nn.Conv2d(inp, inp * expand_ratio, 1, 1, 0, bias=False),
nn.BatchNorm2d(inp * expand_ratio),
nn.ReLU6(inplace=True),
# dw
nn.Conv2d(inp * expand_ratio, inp * expand_ratio, 3, stride, groups=inp * expand_ratio, padding=2, dilation=2, bias=False),
nn.BatchNorm2d(inp * expand_ratio),
nn.ReLU6(inplace=True),
# pw-linear
nn.Conv2d(inp * expand_ratio, oup, 1, 1, 0, bias=False),
nn.BatchNorm2d(oup),
)
self.depth = oup
def forward(self, x):
if self.use_res_connect:
return x + self.conv(x)
else:
return self.conv(x)
def mobilenet(conv_defs, depth_multiplier=1.0, min_depth=8):
depth = lambda d: max(int(d * depth_multiplier), min_depth)
layers = []
in_channels = 3
for conv_def in conv_defs:
if isinstance(conv_def, Conv):
layers += [_conv_bn(in_channels, depth(conv_def.depth), conv_def.stride)]
in_channels = depth(conv_def.depth)
elif isinstance(conv_def, DepthSepConv):
layers += [_conv_dw(in_channels, depth(conv_def.depth), conv_def.stride)]
in_channels = depth(conv_def.depth)
elif isinstance(conv_def, InvertedResidual):
for n in range(conv_def.num):
stride = conv_def.stride if n == 0 else 1
layers += [_inverted_residual_bottleneck(in_channels, depth(conv_def.depth), stride, conv_def.t)]
in_channels = depth(conv_def.depth)
elif isinstance(conv_def, DilatedResidual):
for n in range(conv_def.num):
stride = conv_def.stride if n == 0 else 1
layers += [_dilated_residual_bottleneck(in_channels, depth(conv_def.depth), stride, conv_def.t)]
in_channels = depth(conv_def.depth)
return layers
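For reference, the `depth` lambda inside `mobilenet()` implements the standard MobileNet width-multiplier rule. A standalone sketch (pure Python, no torch dependency; `scaled_depth` is a hypothetical name mirroring that lambda):

```python
# Sketch of the depth() helper used in mobilenet() above: scale each layer's
# channel count by depth_multiplier, but never drop below min_depth.
def scaled_depth(d, depth_multiplier=1.0, min_depth=8):
    return max(int(d * depth_multiplier), min_depth)

# A 32-channel layer under a 0.25 width multiplier shrinks to 8 channels,
# which is exactly the min_depth floor:
print(scaled_depth(32, depth_multiplier=0.25))   # -> 8
print(scaled_depth(160, depth_multiplier=0.75))  # -> 120
```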
def wrapped_partial(func, *args, **kwargs):
partial_func = functools.partial(func, *args, **kwargs)
functools.update_wrapper(partial_func, func)
return partial_func
mobilenet_v1 = wrapped_partial(mobilenet, conv_defs=V1_CONV_DEFS, depth_multiplier=1.0)
mobilenet_v1_075 = wrapped_partial(mobilenet, conv_defs=V1_CONV_DEFS, depth_multiplier=0.75)
mobilenet_v1_050 = wrapped_partial(mobilenet, conv_defs=V1_CONV_DEFS, depth_multiplier=0.50)
mobilenet_v1_025 = wrapped_partial(mobilenet, conv_defs=V1_CONV_DEFS, depth_multiplier=0.25)
mobilenet_v2 = wrapped_partial(mobilenet, conv_defs=V2_CONV_DEFS, depth_multiplier=1.0)
mobilenet_v2_075 = wrapped_partial(mobilenet, conv_defs=V2_CONV_DEFS, depth_multiplier=0.75)
mobilenet_v2_050 = wrapped_partial(mobilenet, conv_defs=V2_CONV_DEFS, depth_multiplier=0.50)
mobilenet_v2_025 = wrapped_partial(mobilenet, conv_defs=V2_CONV_DEFS, depth_multiplier=0.25)
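`wrapped_partial` exists so the `mobilenet_v1`/`mobilenet_v2` constructors above keep `mobilenet`'s metadata instead of looking like anonymous partials. A minimal torch-free demonstration, with a hypothetical `build_layers` standing in for `mobilenet`:

```python
import functools

def wrapped_partial(func, *args, **kwargs):
    # Same helper as above: a functools.partial that also copies the wrapped
    # function's __name__/__doc__ onto the partial via update_wrapper.
    partial_func = functools.partial(func, *args, **kwargs)
    functools.update_wrapper(partial_func, func)
    return partial_func

def build_layers(depth_multiplier=1.0):
    """Stand-in for mobilenet(); just echoes its argument."""
    return depth_multiplier

plain = functools.partial(build_layers, depth_multiplier=0.75)
wrapped = wrapped_partial(build_layers, depth_multiplier=0.75)

print(wrapped.__name__)            # -> build_layers
print(hasattr(plain, '__name__'))  # -> False (bare partials carry no name)
print(wrapped(), plain())          # both call build_layers with 0.75
```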
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
from ._enums import *
from ._inputs import *
__all__ = ['InterconnectAttachmentArgs', 'InterconnectAttachment']
@pulumi.input_type
class InterconnectAttachmentArgs:
def __init__(__self__, *,
region: pulumi.Input[str],
admin_enabled: Optional[pulumi.Input[bool]] = None,
bandwidth: Optional[pulumi.Input['InterconnectAttachmentBandwidth']] = None,
candidate_ipv6_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
candidate_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cloud_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
customer_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
dataplane_version: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
edge_availability_domain: Optional[pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain']] = None,
encryption: Optional[pulumi.Input['InterconnectAttachmentEncryption']] = None,
interconnect: Optional[pulumi.Input[str]] = None,
ipsec_internal_addresses: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
mtu: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pairing_key: Optional[pulumi.Input[str]] = None,
partner_asn: Optional[pulumi.Input[str]] = None,
partner_metadata: Optional[pulumi.Input['InterconnectAttachmentPartnerMetadataArgs']] = None,
project: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
router: Optional[pulumi.Input[str]] = None,
stack_type: Optional[pulumi.Input['InterconnectAttachmentStackType']] = None,
type: Optional[pulumi.Input['InterconnectAttachmentType']] = None,
validate_only: Optional[pulumi.Input[str]] = None,
vlan_tag8021q: Optional[pulumi.Input[int]] = None):
"""
The set of arguments for constructing an InterconnectAttachment resource.
:param pulumi.Input[bool] admin_enabled: Determines whether this Attachment will carry packets. Not present for PARTNER_PROVIDER.
:param pulumi.Input['InterconnectAttachmentBandwidth'] bandwidth: Provisioned bandwidth capacity for the interconnect attachment. For attachments of type DEDICATED, the user can set the bandwidth. For attachments of type PARTNER, the Google Partner that is operating the interconnect must set the bandwidth. Output only for PARTNER type, mutable for PARTNER_PROVIDER and DEDICATED, and can take one of the following values: - BPS_50M: 50 Mbit/s - BPS_100M: 100 Mbit/s - BPS_200M: 200 Mbit/s - BPS_300M: 300 Mbit/s - BPS_400M: 400 Mbit/s - BPS_500M: 500 Mbit/s - BPS_1G: 1 Gbit/s - BPS_2G: 2 Gbit/s - BPS_5G: 5 Gbit/s - BPS_10G: 10 Gbit/s - BPS_20G: 20 Gbit/s - BPS_50G: 50 Gbit/s
:param pulumi.Input[Sequence[pulumi.Input[str]]] candidate_ipv6_subnets: Up to 16 candidate prefixes that control the allocation of cloudRouterIpv6Address and customerRouterIpv6Address for this attachment. Each prefix must be in the Global Unique Address (GUA) space. It is highly recommended that it be in a range owned by the requestor. A GUA in a range owned by Google will cause the request to fail. Google will select an available prefix from the supplied candidates or fail the request. If not supplied, a /125 from a Google-owned GUA block will be selected.
:param pulumi.Input[Sequence[pulumi.Input[str]]] candidate_subnets: Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress and customerRouterIpAddress for this attachment. All prefixes must be within link-local address space (169.254.0.0/16) and must be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29 from the supplied candidate prefix(es). The request will fail if all possible /29s are in use on Google's edge. If not supplied, Google will randomly select an unused /29 from all of link-local space.
:param pulumi.Input[str] cloud_router_ipv6_interface_id: If supplied, the interface id (index within the subnet) to be used for the cloud router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
:param pulumi.Input[str] customer_router_ipv6_interface_id: If supplied, the interface id (index within the subnet) to be used for the customer router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
:param pulumi.Input[int] dataplane_version: [Output only for types PARTNER and DEDICATED. Not present for PARTNER_PROVIDER.] Dataplane version for this InterconnectAttachment. This field is only present for Dataplane version 2 and higher. Absence of this field in the API output indicates that the Dataplane is version 1.
:param pulumi.Input[str] description: An optional description of this resource.
:param pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain'] edge_availability_domain: Desired availability domain for the attachment. Only available for type PARTNER, at creation time, and can take one of the following values: - AVAILABILITY_DOMAIN_ANY - AVAILABILITY_DOMAIN_1 - AVAILABILITY_DOMAIN_2 For improved reliability, customers should configure a pair of attachments, one per availability domain. The selected availability domain will be provided to the Partner via the pairing key, so that the provisioned circuit will lie in the specified domain. If not specified, the value will default to AVAILABILITY_DOMAIN_ANY.
:param pulumi.Input['InterconnectAttachmentEncryption'] encryption: Indicates the user-supplied encryption option of this VLAN attachment (interconnectAttachment). Can only be specified at attachment creation for PARTNER or DEDICATED attachments. Possible values are: - NONE - This is the default value, which means that the VLAN attachment carries unencrypted traffic. VMs are able to send traffic to, or receive traffic from, such a VLAN attachment. - IPSEC - The VLAN attachment carries only encrypted traffic that is encrypted by an IPsec device, such as an HA VPN gateway or third-party IPsec VPN. VMs cannot directly send traffic to, or receive traffic from, such a VLAN attachment. To use *IPsec-encrypted Cloud Interconnect*, the VLAN attachment must be created with this option. Not currently available publicly.
:param pulumi.Input[str] interconnect: URL of the underlying Interconnect object that this attachment's traffic will traverse through.
:param pulumi.Input[Sequence[pulumi.Input[str]]] ipsec_internal_addresses: A list of URLs of addresses that have been reserved for the VLAN attachment. Used only for the VLAN attachment that has the encryption option as IPSEC. The addresses must be regional internal IP address ranges. When creating an HA VPN gateway over the VLAN attachment, if the attachment is configured to use a regional internal IP address, then the VPN gateway's IP address is allocated from the IP address range specified here. For example, if the HA VPN gateway's interface 0 is paired to this VLAN attachment, then a regional internal IP address for the VPN gateway interface 0 will be allocated from the IP address specified for this VLAN attachment. If this field is not specified when creating the VLAN attachment, then later on when creating an HA VPN gateway on this VLAN attachment, the HA VPN gateway's IP address is allocated from the regional external IP address pool. Not currently available publicly.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Labels for this resource. These can only be added or modified by the setLabels method. Each label key/value pair must comply with RFC1035. Label values may be empty.
:param pulumi.Input[int] mtu: Maximum Transmission Unit (MTU), in bytes, of packets passing through this interconnect attachment. Only 1440 and 1500 are allowed. If not specified, the value will default to 1440.
:param pulumi.Input[str] name: Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
:param pulumi.Input[str] pairing_key: [Output only for type PARTNER. Input only for PARTNER_PROVIDER. Not present for DEDICATED]. The opaque identifier of a PARTNER attachment used to initiate provisioning with a selected partner. Of the form "XXXXX/region/domain"
:param pulumi.Input[str] partner_asn: Optional BGP ASN for the router supplied by a Layer 3 Partner if they configured BGP on behalf of the customer. Output only for PARTNER type, input only for PARTNER_PROVIDER, not available for DEDICATED.
:param pulumi.Input['InterconnectAttachmentPartnerMetadataArgs'] partner_metadata: Informational metadata about Partner attachments from Partners to display to customers. Output only for PARTNER type, mutable for PARTNER_PROVIDER, not available for DEDICATED.
:param pulumi.Input[str] router: URL of the Cloud Router to be used for dynamic routing. This router must be in the same region as this InterconnectAttachment. The InterconnectAttachment will automatically connect the Interconnect to the network & region within which the Cloud Router is configured.
:param pulumi.Input['InterconnectAttachmentStackType'] stack_type: The stack type for this interconnect attachment to identify whether the IPv6 feature is enabled or not. If not specified, IPV4_ONLY will be used. This field can be both set at interconnect attachments creation and update interconnect attachment operations.
:param pulumi.Input['InterconnectAttachmentType'] type: The type of interconnect attachment this is, which can take one of the following values: - DEDICATED: an attachment to a Dedicated Interconnect. - PARTNER: an attachment to a Partner Interconnect, created by the customer. - PARTNER_PROVIDER: an attachment to a Partner Interconnect, created by the partner.
:param pulumi.Input[int] vlan_tag8021q: The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094. Only specified at creation time.
"""
pulumi.set(__self__, "region", region)
if admin_enabled is not None:
pulumi.set(__self__, "admin_enabled", admin_enabled)
if bandwidth is not None:
pulumi.set(__self__, "bandwidth", bandwidth)
if candidate_ipv6_subnets is not None:
pulumi.set(__self__, "candidate_ipv6_subnets", candidate_ipv6_subnets)
if candidate_subnets is not None:
pulumi.set(__self__, "candidate_subnets", candidate_subnets)
if cloud_router_ipv6_interface_id is not None:
pulumi.set(__self__, "cloud_router_ipv6_interface_id", cloud_router_ipv6_interface_id)
if customer_router_ipv6_interface_id is not None:
pulumi.set(__self__, "customer_router_ipv6_interface_id", customer_router_ipv6_interface_id)
if dataplane_version is not None:
pulumi.set(__self__, "dataplane_version", dataplane_version)
if description is not None:
pulumi.set(__self__, "description", description)
if edge_availability_domain is not None:
pulumi.set(__self__, "edge_availability_domain", edge_availability_domain)
if encryption is not None:
pulumi.set(__self__, "encryption", encryption)
if interconnect is not None:
pulumi.set(__self__, "interconnect", interconnect)
if ipsec_internal_addresses is not None:
pulumi.set(__self__, "ipsec_internal_addresses", ipsec_internal_addresses)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if mtu is not None:
pulumi.set(__self__, "mtu", mtu)
if name is not None:
pulumi.set(__self__, "name", name)
if pairing_key is not None:
pulumi.set(__self__, "pairing_key", pairing_key)
if partner_asn is not None:
pulumi.set(__self__, "partner_asn", partner_asn)
if partner_metadata is not None:
pulumi.set(__self__, "partner_metadata", partner_metadata)
if project is not None:
pulumi.set(__self__, "project", project)
if request_id is not None:
pulumi.set(__self__, "request_id", request_id)
if router is not None:
pulumi.set(__self__, "router", router)
if stack_type is not None:
pulumi.set(__self__, "stack_type", stack_type)
if type is not None:
pulumi.set(__self__, "type", type)
if validate_only is not None:
pulumi.set(__self__, "validate_only", validate_only)
if vlan_tag8021q is not None:
pulumi.set(__self__, "vlan_tag8021q", vlan_tag8021q)
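The `__init__` body above follows the generated-SDK convention of recording required fields unconditionally and optional fields only when they were explicitly supplied, so absent fields are omitted from the request rather than sent as null. The pattern in isolation, with a plain dict standing in for pulumi's property store (`make_args` is a hypothetical name):

```python
# Sketch of the optional-argument pattern used in __init__ above: required
# fields are always stored; optional fields only when not None.
def make_args(region, mtu=None, description=None):
    args = {"region": region}          # required, set unconditionally
    if mtu is not None:
        args["mtu"] = mtu              # optional, only if supplied
    if description is not None:
        args["description"] = description
    return args

print(make_args("us-central1", mtu=1500))
# -> {'region': 'us-central1', 'mtu': 1500}
```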
@property
@pulumi.getter
def region(self) -> pulumi.Input[str]:
return pulumi.get(self, "region")
@region.setter
def region(self, value: pulumi.Input[str]):
pulumi.set(self, "region", value)
@property
@pulumi.getter(name="adminEnabled")
def admin_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Determines whether this Attachment will carry packets. Not present for PARTNER_PROVIDER.
"""
return pulumi.get(self, "admin_enabled")
@admin_enabled.setter
def admin_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "admin_enabled", value)
@property
@pulumi.getter
def bandwidth(self) -> Optional[pulumi.Input['InterconnectAttachmentBandwidth']]:
"""
Provisioned bandwidth capacity for the interconnect attachment. For attachments of type DEDICATED, the user can set the bandwidth. For attachments of type PARTNER, the Google Partner that is operating the interconnect must set the bandwidth. Output only for PARTNER type, mutable for PARTNER_PROVIDER and DEDICATED, and can take one of the following values: - BPS_50M: 50 Mbit/s - BPS_100M: 100 Mbit/s - BPS_200M: 200 Mbit/s - BPS_300M: 300 Mbit/s - BPS_400M: 400 Mbit/s - BPS_500M: 500 Mbit/s - BPS_1G: 1 Gbit/s - BPS_2G: 2 Gbit/s - BPS_5G: 5 Gbit/s - BPS_10G: 10 Gbit/s - BPS_20G: 20 Gbit/s - BPS_50G: 50 Gbit/s
"""
return pulumi.get(self, "bandwidth")
@bandwidth.setter
def bandwidth(self, value: Optional[pulumi.Input['InterconnectAttachmentBandwidth']]):
pulumi.set(self, "bandwidth", value)
@property
@pulumi.getter(name="candidateIpv6Subnets")
def candidate_ipv6_subnets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Up to 16 candidate prefixes that control the allocation of cloudRouterIpv6Address and customerRouterIpv6Address for this attachment. Each prefix must be in the Global Unique Address (GUA) space. It is highly recommended that it be in a range owned by the requestor. A GUA in a range owned by Google will cause the request to fail. Google will select an available prefix from the supplied candidates or fail the request. If not supplied, a /125 from a Google-owned GUA block will be selected.
"""
return pulumi.get(self, "candidate_ipv6_subnets")
@candidate_ipv6_subnets.setter
def candidate_ipv6_subnets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "candidate_ipv6_subnets", value)
@property
@pulumi.getter(name="candidateSubnets")
def candidate_subnets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress and customerRouterIpAddress for this attachment. All prefixes must be within link-local address space (169.254.0.0/16) and must be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29 from the supplied candidate prefix(es). The request will fail if all possible /29s are in use on Google's edge. If not supplied, Google will randomly select an unused /29 from all of link-local space.
"""
return pulumi.get(self, "candidate_subnets")
@candidate_subnets.setter
def candidate_subnets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "candidate_subnets", value)
@property
@pulumi.getter(name="cloudRouterIpv6InterfaceId")
def cloud_router_ipv6_interface_id(self) -> Optional[pulumi.Input[str]]:
"""
If supplied, the interface id (index within the subnet) to be used for the cloud router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
"""
return pulumi.get(self, "cloud_router_ipv6_interface_id")
@cloud_router_ipv6_interface_id.setter
def cloud_router_ipv6_interface_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cloud_router_ipv6_interface_id", value)
@property
@pulumi.getter(name="customerRouterIpv6InterfaceId")
def customer_router_ipv6_interface_id(self) -> Optional[pulumi.Input[str]]:
"""
If supplied, the interface id (index within the subnet) to be used for the customer router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
"""
return pulumi.get(self, "customer_router_ipv6_interface_id")
@customer_router_ipv6_interface_id.setter
def customer_router_ipv6_interface_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "customer_router_ipv6_interface_id", value)
@property
@pulumi.getter(name="dataplaneVersion")
def dataplane_version(self) -> Optional[pulumi.Input[int]]:
"""
[Output only for types PARTNER and DEDICATED. Not present for PARTNER_PROVIDER.] Dataplane version for this InterconnectAttachment. This field is only present for Dataplane version 2 and higher. Absence of this field in the API output indicates that the Dataplane is version 1.
"""
return pulumi.get(self, "dataplane_version")
@dataplane_version.setter
def dataplane_version(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "dataplane_version", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
An optional description of this resource.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="edgeAvailabilityDomain")
def edge_availability_domain(self) -> Optional[pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain']]:
"""
Desired availability domain for the attachment. Only available for type PARTNER, at creation time, and can take one of the following values: - AVAILABILITY_DOMAIN_ANY - AVAILABILITY_DOMAIN_1 - AVAILABILITY_DOMAIN_2 For improved reliability, customers should configure a pair of attachments, one per availability domain. The selected availability domain will be provided to the Partner via the pairing key, so that the provisioned circuit will lie in the specified domain. If not specified, the value will default to AVAILABILITY_DOMAIN_ANY.
"""
return pulumi.get(self, "edge_availability_domain")
@edge_availability_domain.setter
def edge_availability_domain(self, value: Optional[pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain']]):
pulumi.set(self, "edge_availability_domain", value)
@property
@pulumi.getter
def encryption(self) -> Optional[pulumi.Input['InterconnectAttachmentEncryption']]:
"""
Indicates the user-supplied encryption option of this VLAN attachment (interconnectAttachment). Can only be specified at attachment creation for PARTNER or DEDICATED attachments. Possible values are: - NONE - This is the default value, which means that the VLAN attachment carries unencrypted traffic. VMs are able to send traffic to, or receive traffic from, such a VLAN attachment. - IPSEC - The VLAN attachment carries only encrypted traffic that is encrypted by an IPsec device, such as an HA VPN gateway or third-party IPsec VPN. VMs cannot directly send traffic to, or receive traffic from, such a VLAN attachment. To use *IPsec-encrypted Cloud Interconnect*, the VLAN attachment must be created with this option. Not currently available publicly.
"""
return pulumi.get(self, "encryption")
@encryption.setter
def encryption(self, value: Optional[pulumi.Input['InterconnectAttachmentEncryption']]):
pulumi.set(self, "encryption", value)
@property
@pulumi.getter
def interconnect(self) -> Optional[pulumi.Input[str]]:
"""
URL of the underlying Interconnect object that this attachment's traffic will traverse through.
"""
return pulumi.get(self, "interconnect")
@interconnect.setter
def interconnect(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "interconnect", value)
@property
@pulumi.getter(name="ipsecInternalAddresses")
def ipsec_internal_addresses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of URLs of addresses that have been reserved for the VLAN attachment. Used only for the VLAN attachment that has the encryption option as IPSEC. The addresses must be regional internal IP address ranges. When creating an HA VPN gateway over the VLAN attachment, if the attachment is configured to use a regional internal IP address, then the VPN gateway's IP address is allocated from the IP address range specified here. For example, if the HA VPN gateway's interface 0 is paired to this VLAN attachment, then a regional internal IP address for the VPN gateway interface 0 will be allocated from the IP address specified for this VLAN attachment. If this field is not specified when creating the VLAN attachment, then later on when creating an HA VPN gateway on this VLAN attachment, the HA VPN gateway's IP address is allocated from the regional external IP address pool. Not currently available publicly.
"""
return pulumi.get(self, "ipsec_internal_addresses")
@ipsec_internal_addresses.setter
def ipsec_internal_addresses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "ipsec_internal_addresses", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Labels for this resource. These can only be added or modified by the setLabels method. Each label key/value pair must comply with RFC1035. Label values may be empty.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter
def mtu(self) -> Optional[pulumi.Input[int]]:
"""
Maximum Transmission Unit (MTU), in bytes, of packets passing through this interconnect attachment. Only 1440 and 1500 are allowed. If not specified, the value will default to 1440.
"""
return pulumi.get(self, "mtu")
@mtu.setter
def mtu(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "mtu", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="pairingKey")
def pairing_key(self) -> Optional[pulumi.Input[str]]:
"""
[Output only for type PARTNER. Input only for PARTNER_PROVIDER. Not present for DEDICATED]. The opaque identifier of a PARTNER attachment used to initiate provisioning with a selected partner. Of the form "XXXXX/region/domain"
"""
return pulumi.get(self, "pairing_key")
@pairing_key.setter
def pairing_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pairing_key", value)
@property
@pulumi.getter(name="partnerAsn")
def partner_asn(self) -> Optional[pulumi.Input[str]]:
"""
Optional BGP ASN for the router supplied by a Layer 3 Partner if they configured BGP on behalf of the customer. Output only for PARTNER type, input only for PARTNER_PROVIDER, not available for DEDICATED.
"""
return pulumi.get(self, "partner_asn")
@partner_asn.setter
def partner_asn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "partner_asn", value)
@property
@pulumi.getter(name="partnerMetadata")
def partner_metadata(self) -> Optional[pulumi.Input['InterconnectAttachmentPartnerMetadataArgs']]:
"""
Informational metadata about Partner attachments from Partners to display to customers. Output only for PARTNER type, mutable for PARTNER_PROVIDER, not available for DEDICATED.
"""
return pulumi.get(self, "partner_metadata")
@partner_metadata.setter
def partner_metadata(self, value: Optional[pulumi.Input['InterconnectAttachmentPartnerMetadataArgs']]):
pulumi.set(self, "partner_metadata", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@property
@pulumi.getter(name="requestId")
def request_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "request_id")
@request_id.setter
def request_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "request_id", value)
@property
@pulumi.getter
def router(self) -> Optional[pulumi.Input[str]]:
"""
URL of the Cloud Router to be used for dynamic routing. This router must be in the same region as this InterconnectAttachment. The InterconnectAttachment will automatically connect the Interconnect to the network & region within which the Cloud Router is configured.
"""
return pulumi.get(self, "router")
@router.setter
def router(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "router", value)
@property
@pulumi.getter(name="stackType")
def stack_type(self) -> Optional[pulumi.Input['InterconnectAttachmentStackType']]:
"""
The stack type for this interconnect attachment, identifying whether the IPv6 feature is enabled. If not specified, IPV4_ONLY will be used. This field can be set both at creation time and in update operations.
"""
return pulumi.get(self, "stack_type")
@stack_type.setter
def stack_type(self, value: Optional[pulumi.Input['InterconnectAttachmentStackType']]):
pulumi.set(self, "stack_type", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input['InterconnectAttachmentType']]:
"""
The type of interconnect attachment this is, which can take one of the following values: - DEDICATED: an attachment to a Dedicated Interconnect. - PARTNER: an attachment to a Partner Interconnect, created by the customer. - PARTNER_PROVIDER: an attachment to a Partner Interconnect, created by the partner.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input['InterconnectAttachmentType']]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="validateOnly")
def validate_only(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "validate_only")
@validate_only.setter
def validate_only(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "validate_only", value)
@property
@pulumi.getter(name="vlanTag8021q")
def vlan_tag8021q(self) -> Optional[pulumi.Input[int]]:
"""
The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094. Only specified at creation time.
"""
return pulumi.get(self, "vlan_tag8021q")
@vlan_tag8021q.setter
def vlan_tag8021q(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "vlan_tag8021q", value)
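Several of the properties above carry simple client-side constraints (names are RFC1035-style, MTU is limited to 1440 or 1500, the 802.1Q VLAN tag must be in 2-4094). A minimal sketch of pre-flight validation, assuming a hypothetical helper that is not part of this SDK:

```python
import re

# RFC1035-style name pattern quoted in the `name` docstring above.
_NAME_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def validate_attachment_args(name=None, mtu=None, vlan_tag8021q=None):
    """Mirror the documented API constraints before issuing a request."""
    if name is not None and (len(name) > 63 or not _NAME_RE.match(name)):
        raise ValueError("name must be 1-63 chars matching [a-z]([-a-z0-9]*[a-z0-9])?")
    if mtu is not None and mtu not in (1440, 1500):
        raise ValueError("mtu must be 1440 or 1500 (defaults to 1440 when unset)")
    if vlan_tag8021q is not None and not 2 <= vlan_tag8021q <= 4094:
        raise ValueError("vlan_tag8021q must be in the range 2-4094")
```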
class InterconnectAttachment(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
admin_enabled: Optional[pulumi.Input[bool]] = None,
bandwidth: Optional[pulumi.Input['InterconnectAttachmentBandwidth']] = None,
candidate_ipv6_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
candidate_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cloud_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
customer_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
dataplane_version: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
edge_availability_domain: Optional[pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain']] = None,
encryption: Optional[pulumi.Input['InterconnectAttachmentEncryption']] = None,
interconnect: Optional[pulumi.Input[str]] = None,
ipsec_internal_addresses: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
mtu: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pairing_key: Optional[pulumi.Input[str]] = None,
partner_asn: Optional[pulumi.Input[str]] = None,
partner_metadata: Optional[pulumi.Input[pulumi.InputType['InterconnectAttachmentPartnerMetadataArgs']]] = None,
project: Optional[pulumi.Input[str]] = None,
region: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
router: Optional[pulumi.Input[str]] = None,
stack_type: Optional[pulumi.Input['InterconnectAttachmentStackType']] = None,
type: Optional[pulumi.Input['InterconnectAttachmentType']] = None,
validate_only: Optional[pulumi.Input[str]] = None,
vlan_tag8021q: Optional[pulumi.Input[int]] = None,
__props__=None):
"""
Creates an InterconnectAttachment in the specified project using the data included in the request.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] admin_enabled: Determines whether this Attachment will carry packets. Not present for PARTNER_PROVIDER.
:param pulumi.Input['InterconnectAttachmentBandwidth'] bandwidth: Provisioned bandwidth capacity for the interconnect attachment. For attachments of type DEDICATED, the user can set the bandwidth. For attachments of type PARTNER, the Google Partner that is operating the interconnect must set the bandwidth. Output only for PARTNER type, mutable for PARTNER_PROVIDER and DEDICATED, and can take one of the following values: - BPS_50M: 50 Mbit/s - BPS_100M: 100 Mbit/s - BPS_200M: 200 Mbit/s - BPS_300M: 300 Mbit/s - BPS_400M: 400 Mbit/s - BPS_500M: 500 Mbit/s - BPS_1G: 1 Gbit/s - BPS_2G: 2 Gbit/s - BPS_5G: 5 Gbit/s - BPS_10G: 10 Gbit/s - BPS_20G: 20 Gbit/s - BPS_50G: 50 Gbit/s
:param pulumi.Input[Sequence[pulumi.Input[str]]] candidate_ipv6_subnets: Up to 16 candidate prefixes that control the allocation of cloudRouterIpv6Address and customerRouterIpv6Address for this attachment. Each prefix must be in the Global Unique Address (GUA) space. It is highly recommended that it be in a range owned by the requestor. A GUA in a range owned by Google will cause the request to fail. Google will select an available prefix from the supplied candidates or fail the request. If not supplied, a /125 from a Google-owned GUA block will be selected.
:param pulumi.Input[Sequence[pulumi.Input[str]]] candidate_subnets: Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress and customerRouterIpAddress for this attachment. All prefixes must be within link-local address space (169.254.0.0/16) and must be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29 from the supplied candidate prefix(es). The request will fail if all possible /29s are in use on Google's edge. If not supplied, Google will randomly select an unused /29 from all of link-local space.
:param pulumi.Input[str] cloud_router_ipv6_interface_id: If supplied, the interface id (index within the subnet) to be used for the cloud router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
:param pulumi.Input[str] customer_router_ipv6_interface_id: If supplied, the interface id (index within the subnet) to be used for the customer router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
:param pulumi.Input[int] dataplane_version: [Output only for types PARTNER and DEDICATED. Not present for PARTNER_PROVIDER.] Dataplane version for this InterconnectAttachment. This field is only present for Dataplane version 2 and higher. Absence of this field in the API output indicates that the Dataplane is version 1.
:param pulumi.Input[str] description: An optional description of this resource.
:param pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain'] edge_availability_domain: Desired availability domain for the attachment. Only available for type PARTNER, at creation time, and can take one of the following values: - AVAILABILITY_DOMAIN_ANY - AVAILABILITY_DOMAIN_1 - AVAILABILITY_DOMAIN_2 For improved reliability, customers should configure a pair of attachments, one per availability domain. The selected availability domain will be provided to the Partner via the pairing key, so that the provisioned circuit will lie in the specified domain. If not specified, the value will default to AVAILABILITY_DOMAIN_ANY.
:param pulumi.Input['InterconnectAttachmentEncryption'] encryption: Indicates the user-supplied encryption option of this VLAN attachment (interconnectAttachment). Can only be specified at attachment creation for PARTNER or DEDICATED attachments. Possible values are: - NONE - This is the default value, which means that the VLAN attachment carries unencrypted traffic. VMs are able to send traffic to, or receive traffic from, such a VLAN attachment. - IPSEC - The VLAN attachment carries only encrypted traffic that is encrypted by an IPsec device, such as an HA VPN gateway or third-party IPsec VPN. VMs cannot directly send traffic to, or receive traffic from, such a VLAN attachment. To use *IPsec-encrypted Cloud Interconnect*, the VLAN attachment must be created with this option. Not currently available publicly.
:param pulumi.Input[str] interconnect: URL of the underlying Interconnect object that this attachment's traffic will traverse through.
:param pulumi.Input[Sequence[pulumi.Input[str]]] ipsec_internal_addresses: A list of URLs of addresses that have been reserved for the VLAN attachment. Used only for the VLAN attachment that has the encryption option as IPSEC. The addresses must be regional internal IP address ranges. When creating an HA VPN gateway over the VLAN attachment, if the attachment is configured to use a regional internal IP address, then the VPN gateway's IP address is allocated from the IP address range specified here. For example, if the HA VPN gateway's interface 0 is paired to this VLAN attachment, then a regional internal IP address for the VPN gateway interface 0 will be allocated from the IP address specified for this VLAN attachment. If this field is not specified when creating the VLAN attachment, then later on when creating an HA VPN gateway on this VLAN attachment, the HA VPN gateway's IP address is allocated from the regional external IP address pool. Not currently available publicly.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Labels for this resource. These can only be added or modified by the setLabels method. Each label key/value pair must comply with RFC1035. Label values may be empty.
:param pulumi.Input[int] mtu: Maximum Transmission Unit (MTU), in bytes, of packets passing through this interconnect attachment. Only 1440 and 1500 are allowed. If not specified, the value will default to 1440.
:param pulumi.Input[str] name: Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
:param pulumi.Input[str] pairing_key: [Output only for type PARTNER. Input only for PARTNER_PROVIDER. Not present for DEDICATED]. The opaque identifier of a PARTNER attachment used to initiate provisioning with a selected partner. Of the form "XXXXX/region/domain".
:param pulumi.Input[str] partner_asn: Optional BGP ASN for the router supplied by a Layer 3 Partner if they configured BGP on behalf of the customer. Output only for PARTNER type, input only for PARTNER_PROVIDER, not available for DEDICATED.
:param pulumi.Input[pulumi.InputType['InterconnectAttachmentPartnerMetadataArgs']] partner_metadata: Informational metadata about Partner attachments from Partners to display to customers. Output only for PARTNER type, mutable for PARTNER_PROVIDER, not available for DEDICATED.
:param pulumi.Input[str] router: URL of the Cloud Router to be used for dynamic routing. This router must be in the same region as this InterconnectAttachment. The InterconnectAttachment will automatically connect the Interconnect to the network & region within which the Cloud Router is configured.
:param pulumi.Input['InterconnectAttachmentStackType'] stack_type: The stack type for this interconnect attachment, identifying whether the IPv6 feature is enabled. If not specified, IPV4_ONLY will be used. This field can be set both at creation time and in update operations.
:param pulumi.Input['InterconnectAttachmentType'] type: The type of interconnect attachment this is, which can take one of the following values: - DEDICATED: an attachment to a Dedicated Interconnect. - PARTNER: an attachment to a Partner Interconnect, created by the customer. - PARTNER_PROVIDER: an attachment to a Partner Interconnect, created by the partner.
:param pulumi.Input[int] vlan_tag8021q: The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094. Only specified at creation time.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: InterconnectAttachmentArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Creates an InterconnectAttachment in the specified project using the data included in the request.
:param str resource_name: The name of the resource.
:param InterconnectAttachmentArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(InterconnectAttachmentArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
admin_enabled: Optional[pulumi.Input[bool]] = None,
bandwidth: Optional[pulumi.Input['InterconnectAttachmentBandwidth']] = None,
candidate_ipv6_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
candidate_subnets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cloud_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
customer_router_ipv6_interface_id: Optional[pulumi.Input[str]] = None,
dataplane_version: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
edge_availability_domain: Optional[pulumi.Input['InterconnectAttachmentEdgeAvailabilityDomain']] = None,
encryption: Optional[pulumi.Input['InterconnectAttachmentEncryption']] = None,
interconnect: Optional[pulumi.Input[str]] = None,
ipsec_internal_addresses: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
mtu: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pairing_key: Optional[pulumi.Input[str]] = None,
partner_asn: Optional[pulumi.Input[str]] = None,
partner_metadata: Optional[pulumi.Input[pulumi.InputType['InterconnectAttachmentPartnerMetadataArgs']]] = None,
project: Optional[pulumi.Input[str]] = None,
region: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
router: Optional[pulumi.Input[str]] = None,
stack_type: Optional[pulumi.Input['InterconnectAttachmentStackType']] = None,
type: Optional[pulumi.Input['InterconnectAttachmentType']] = None,
validate_only: Optional[pulumi.Input[str]] = None,
vlan_tag8021q: Optional[pulumi.Input[int]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = InterconnectAttachmentArgs.__new__(InterconnectAttachmentArgs)
__props__.__dict__["admin_enabled"] = admin_enabled
__props__.__dict__["bandwidth"] = bandwidth
__props__.__dict__["candidate_ipv6_subnets"] = candidate_ipv6_subnets
__props__.__dict__["candidate_subnets"] = candidate_subnets
__props__.__dict__["cloud_router_ipv6_interface_id"] = cloud_router_ipv6_interface_id
__props__.__dict__["customer_router_ipv6_interface_id"] = customer_router_ipv6_interface_id
__props__.__dict__["dataplane_version"] = dataplane_version
__props__.__dict__["description"] = description
__props__.__dict__["edge_availability_domain"] = edge_availability_domain
__props__.__dict__["encryption"] = encryption
__props__.__dict__["interconnect"] = interconnect
__props__.__dict__["ipsec_internal_addresses"] = ipsec_internal_addresses
__props__.__dict__["labels"] = labels
__props__.__dict__["mtu"] = mtu
__props__.__dict__["name"] = name
__props__.__dict__["pairing_key"] = pairing_key
__props__.__dict__["partner_asn"] = partner_asn
__props__.__dict__["partner_metadata"] = partner_metadata
__props__.__dict__["project"] = project
if region is None and not opts.urn:
raise TypeError("Missing required property 'region'")
__props__.__dict__["region"] = region
__props__.__dict__["request_id"] = request_id
__props__.__dict__["router"] = router
__props__.__dict__["stack_type"] = stack_type
__props__.__dict__["type"] = type
__props__.__dict__["validate_only"] = validate_only
__props__.__dict__["vlan_tag8021q"] = vlan_tag8021q
__props__.__dict__["cloud_router_ip_address"] = None
__props__.__dict__["cloud_router_ipv6_address"] = None
__props__.__dict__["creation_timestamp"] = None
__props__.__dict__["customer_router_ip_address"] = None
__props__.__dict__["customer_router_ipv6_address"] = None
__props__.__dict__["kind"] = None
__props__.__dict__["label_fingerprint"] = None
__props__.__dict__["operational_status"] = None
__props__.__dict__["private_interconnect_info"] = None
__props__.__dict__["satisfies_pzs"] = None
__props__.__dict__["self_link"] = None
__props__.__dict__["self_link_with_id"] = None
__props__.__dict__["state"] = None
super(InterconnectAttachment, __self__).__init__(
'google-native:compute/alpha:InterconnectAttachment',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'InterconnectAttachment':
"""
Get an existing InterconnectAttachment resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = InterconnectAttachmentArgs.__new__(InterconnectAttachmentArgs)
__props__.__dict__["admin_enabled"] = None
__props__.__dict__["bandwidth"] = None
__props__.__dict__["candidate_ipv6_subnets"] = None
__props__.__dict__["candidate_subnets"] = None
__props__.__dict__["cloud_router_ip_address"] = None
__props__.__dict__["cloud_router_ipv6_address"] = None
__props__.__dict__["cloud_router_ipv6_interface_id"] = None
__props__.__dict__["creation_timestamp"] = None
__props__.__dict__["customer_router_ip_address"] = None
__props__.__dict__["customer_router_ipv6_address"] = None
__props__.__dict__["customer_router_ipv6_interface_id"] = None
__props__.__dict__["dataplane_version"] = None
__props__.__dict__["description"] = None
__props__.__dict__["edge_availability_domain"] = None
__props__.__dict__["encryption"] = None
__props__.__dict__["interconnect"] = None
__props__.__dict__["ipsec_internal_addresses"] = None
__props__.__dict__["kind"] = None
__props__.__dict__["label_fingerprint"] = None
__props__.__dict__["labels"] = None
__props__.__dict__["mtu"] = None
__props__.__dict__["name"] = None
__props__.__dict__["operational_status"] = None
__props__.__dict__["pairing_key"] = None
__props__.__dict__["partner_asn"] = None
__props__.__dict__["partner_metadata"] = None
__props__.__dict__["private_interconnect_info"] = None
__props__.__dict__["region"] = None
__props__.__dict__["router"] = None
__props__.__dict__["satisfies_pzs"] = None
__props__.__dict__["self_link"] = None
__props__.__dict__["self_link_with_id"] = None
__props__.__dict__["stack_type"] = None
__props__.__dict__["state"] = None
__props__.__dict__["type"] = None
__props__.__dict__["vlan_tag8021q"] = None
return InterconnectAttachment(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="adminEnabled")
def admin_enabled(self) -> pulumi.Output[bool]:
"""
Determines whether this Attachment will carry packets. Not present for PARTNER_PROVIDER.
"""
return pulumi.get(self, "admin_enabled")
@property
@pulumi.getter
def bandwidth(self) -> pulumi.Output[str]:
"""
Provisioned bandwidth capacity for the interconnect attachment. For attachments of type DEDICATED, the user can set the bandwidth. For attachments of type PARTNER, the Google Partner that is operating the interconnect must set the bandwidth. Output only for PARTNER type, mutable for PARTNER_PROVIDER and DEDICATED, and can take one of the following values: - BPS_50M: 50 Mbit/s - BPS_100M: 100 Mbit/s - BPS_200M: 200 Mbit/s - BPS_300M: 300 Mbit/s - BPS_400M: 400 Mbit/s - BPS_500M: 500 Mbit/s - BPS_1G: 1 Gbit/s - BPS_2G: 2 Gbit/s - BPS_5G: 5 Gbit/s - BPS_10G: 10 Gbit/s - BPS_20G: 20 Gbit/s - BPS_50G: 50 Gbit/s
"""
return pulumi.get(self, "bandwidth")
@property
@pulumi.getter(name="candidateIpv6Subnets")
def candidate_ipv6_subnets(self) -> pulumi.Output[Sequence[str]]:
"""
Up to 16 candidate prefixes that control the allocation of cloudRouterIpv6Address and customerRouterIpv6Address for this attachment. Each prefix must be in the Global Unique Address (GUA) space. It is highly recommended that it be in a range owned by the requestor. A GUA in a range owned by Google will cause the request to fail. Google will select an available prefix from the supplied candidates or fail the request. If not supplied, a /125 from a Google-owned GUA block will be selected.
"""
return pulumi.get(self, "candidate_ipv6_subnets")
@property
@pulumi.getter(name="candidateSubnets")
def candidate_subnets(self) -> pulumi.Output[Sequence[str]]:
"""
Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress and customerRouterIpAddress for this attachment. All prefixes must be within link-local address space (169.254.0.0/16) and must be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29 from the supplied candidate prefix(es). The request will fail if all possible /29s are in use on Google's edge. If not supplied, Google will randomly select an unused /29 from all of link-local space.
"""
return pulumi.get(self, "candidate_subnets")
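The candidate-subnet rule described above (all prefixes within link-local 169.254.0.0/16, /29 or shorter, e.g. /28 or /27) can be checked client-side with the standard `ipaddress` module. This hypothetical helper is a sketch, not part of the SDK:

```python
import ipaddress

_LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def is_valid_candidate_subnet(prefix: str) -> bool:
    """True if the prefix is IPv4 link-local and /29 or shorter."""
    try:
        net = ipaddress.ip_network(prefix)
    except ValueError:
        return False
    return net.version == 4 and net.subnet_of(_LINK_LOCAL) and net.prefixlen <= 29
```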
@property
@pulumi.getter(name="cloudRouterIpAddress")
def cloud_router_ip_address(self) -> pulumi.Output[str]:
"""
IPv4 address + prefix length to be configured on Cloud Router Interface for this interconnect attachment.
"""
return pulumi.get(self, "cloud_router_ip_address")
@property
@pulumi.getter(name="cloudRouterIpv6Address")
def cloud_router_ipv6_address(self) -> pulumi.Output[str]:
"""
IPv6 address + prefix length to be configured on Cloud Router Interface for this interconnect attachment.
"""
return pulumi.get(self, "cloud_router_ipv6_address")
@property
@pulumi.getter(name="cloudRouterIpv6InterfaceId")
def cloud_router_ipv6_interface_id(self) -> pulumi.Output[str]:
"""
If supplied, the interface id (index within the subnet) to be used for the cloud router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
"""
return pulumi.get(self, "cloud_router_ipv6_interface_id")
@property
@pulumi.getter(name="creationTimestamp")
def creation_timestamp(self) -> pulumi.Output[str]:
"""
Creation timestamp in RFC3339 text format.
"""
return pulumi.get(self, "creation_timestamp")
@property
@pulumi.getter(name="customerRouterIpAddress")
def customer_router_ip_address(self) -> pulumi.Output[str]:
"""
IPv4 address + prefix length to be configured on the customer router subinterface for this interconnect attachment.
"""
return pulumi.get(self, "customer_router_ip_address")
@property
@pulumi.getter(name="customerRouterIpv6Address")
def customer_router_ipv6_address(self) -> pulumi.Output[str]:
"""
IPv6 address + prefix length to be configured on the customer router subinterface for this interconnect attachment.
"""
return pulumi.get(self, "customer_router_ipv6_address")
@property
@pulumi.getter(name="customerRouterIpv6InterfaceId")
def customer_router_ipv6_interface_id(self) -> pulumi.Output[str]:
"""
If supplied, the interface id (index within the subnet) to be used for the customer router address. The id must be in the range of 1 to 6. If a subnet mask is supplied, it must be /125, and the subnet should either be 0 or match the selected subnet.
"""
return pulumi.get(self, "customer_router_ipv6_interface_id")
@property
@pulumi.getter(name="dataplaneVersion")
def dataplane_version(self) -> pulumi.Output[int]:
"""
[Output only for types PARTNER and DEDICATED. Not present for PARTNER_PROVIDER.] Dataplane version for this InterconnectAttachment. This field is only present for Dataplane version 2 and higher. Absence of this field in the API output indicates that the Dataplane is version 1.
"""
return pulumi.get(self, "dataplane_version")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
An optional description of this resource.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="edgeAvailabilityDomain")
def edge_availability_domain(self) -> pulumi.Output[str]:
"""
Desired availability domain for the attachment. Only available for type PARTNER, at creation time, and can take one of the following values: - AVAILABILITY_DOMAIN_ANY - AVAILABILITY_DOMAIN_1 - AVAILABILITY_DOMAIN_2 For improved reliability, customers should configure a pair of attachments, one per availability domain. The selected availability domain will be provided to the Partner via the pairing key, so that the provisioned circuit will lie in the specified domain. If not specified, the value will default to AVAILABILITY_DOMAIN_ANY.
"""
return pulumi.get(self, "edge_availability_domain")
@property
@pulumi.getter
def encryption(self) -> pulumi.Output[str]:
"""
Indicates the user-supplied encryption option of this VLAN attachment (interconnectAttachment). Can only be specified at attachment creation for PARTNER or DEDICATED attachments. Possible values are: - NONE - This is the default value, which means that the VLAN attachment carries unencrypted traffic. VMs are able to send traffic to, or receive traffic from, such a VLAN attachment. - IPSEC - The VLAN attachment carries only encrypted traffic that is encrypted by an IPsec device, such as an HA VPN gateway or third-party IPsec VPN. VMs cannot directly send traffic to, or receive traffic from, such a VLAN attachment. To use *IPsec-encrypted Cloud Interconnect*, the VLAN attachment must be created with this option. Not currently available publicly.
"""
return pulumi.get(self, "encryption")
@property
@pulumi.getter
def interconnect(self) -> pulumi.Output[str]:
"""
URL of the underlying Interconnect object that this attachment's traffic will traverse through.
"""
return pulumi.get(self, "interconnect")
@property
@pulumi.getter(name="ipsecInternalAddresses")
def ipsec_internal_addresses(self) -> pulumi.Output[Sequence[str]]:
"""
A list of URLs of addresses that have been reserved for the VLAN attachment. Used only for the VLAN attachment that has the encryption option as IPSEC. The addresses must be regional internal IP address ranges. When creating an HA VPN gateway over the VLAN attachment, if the attachment is configured to use a regional internal IP address, then the VPN gateway's IP address is allocated from the IP address range specified here. For example, if the HA VPN gateway's interface 0 is paired to this VLAN attachment, then a regional internal IP address for the VPN gateway interface 0 will be allocated from the IP address specified for this VLAN attachment. If this field is not specified when creating the VLAN attachment, then later on when creating an HA VPN gateway on this VLAN attachment, the HA VPN gateway's IP address is allocated from the regional external IP address pool. Not currently available publicly.
"""
return pulumi.get(self, "ipsec_internal_addresses")
@property
@pulumi.getter
def kind(self) -> pulumi.Output[str]:
"""
Type of the resource. Always compute#interconnectAttachment for interconnect attachments.
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter(name="labelFingerprint")
def label_fingerprint(self) -> pulumi.Output[str]:
"""
A fingerprint for the labels being applied to this InterconnectAttachment, which is essentially a hash of the labels set used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update labels. You must always provide an up-to-date fingerprint hash in order to update or change labels, otherwise the request will fail with error 412 conditionNotMet. To see the latest fingerprint, make a get() request to retrieve an InterconnectAttachment.
"""
return pulumi.get(self, "label_fingerprint")
@property
@pulumi.getter
def labels(self) -> pulumi.Output[Mapping[str, str]]:
"""
Labels for this resource. These can only be added or modified by the setLabels method. Each label key/value pair must comply with RFC1035. Label values may be empty.
"""
return pulumi.get(self, "labels")
@property
@pulumi.getter
def mtu(self) -> pulumi.Output[int]:
"""
Maximum Transmission Unit (MTU), in bytes, of packets passing through this interconnect attachment. Only 1440 and 1500 are allowed. If not specified, the value will default to 1440.
"""
return pulumi.get(self, "mtu")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="operationalStatus")
def operational_status(self) -> pulumi.Output[str]:
"""
The current status of whether or not this interconnect attachment is functional, which can take one of the following values: - OS_ACTIVE: The attachment has been turned up and is ready to use. - OS_UNPROVISIONED: The attachment is not ready to use yet, because turnup is not complete.
"""
return pulumi.get(self, "operational_status")
@property
@pulumi.getter(name="pairingKey")
def pairing_key(self) -> pulumi.Output[str]:
"""
[Output only for type PARTNER. Input only for PARTNER_PROVIDER. Not present for DEDICATED]. The opaque identifier of a PARTNER attachment used to initiate provisioning with a selected partner. Of the form "XXXXX/region/domain".
"""
return pulumi.get(self, "pairing_key")
@property
@pulumi.getter(name="partnerAsn")
def partner_asn(self) -> pulumi.Output[str]:
"""
Optional BGP ASN for the router supplied by a Layer 3 Partner if they configured BGP on behalf of the customer. Output only for PARTNER type, input only for PARTNER_PROVIDER, not available for DEDICATED.
"""
return pulumi.get(self, "partner_asn")
@property
@pulumi.getter(name="partnerMetadata")
def partner_metadata(self) -> pulumi.Output['outputs.InterconnectAttachmentPartnerMetadataResponse']:
"""
        Informational metadata about Partner attachments from Partners to display to customers. Output only for PARTNER type, mutable for PARTNER_PROVIDER, not available for DEDICATED.
"""
return pulumi.get(self, "partner_metadata")
@property
@pulumi.getter(name="privateInterconnectInfo")
def private_interconnect_info(self) -> pulumi.Output['outputs.InterconnectAttachmentPrivateInfoResponse']:
"""
Information specific to an InterconnectAttachment. This property is populated if the interconnect that this is attached to is of type DEDICATED.
"""
return pulumi.get(self, "private_interconnect_info")
@property
@pulumi.getter
def region(self) -> pulumi.Output[str]:
"""
URL of the region where the regional interconnect attachment resides. You must specify this field as part of the HTTP request URL. It is not settable as a field in the request body.
"""
return pulumi.get(self, "region")
@property
@pulumi.getter
def router(self) -> pulumi.Output[str]:
"""
URL of the Cloud Router to be used for dynamic routing. This router must be in the same region as this InterconnectAttachment. The InterconnectAttachment will automatically connect the Interconnect to the network & region within which the Cloud Router is configured.
"""
return pulumi.get(self, "router")
@property
@pulumi.getter(name="satisfiesPzs")
def satisfies_pzs(self) -> pulumi.Output[bool]:
"""
Set to true if the resource satisfies the zone separation organization policy constraints and false otherwise. Defaults to false if the field is not present.
"""
return pulumi.get(self, "satisfies_pzs")
@property
@pulumi.getter(name="selfLink")
def self_link(self) -> pulumi.Output[str]:
"""
Server-defined URL for the resource.
"""
return pulumi.get(self, "self_link")
@property
@pulumi.getter(name="selfLinkWithId")
def self_link_with_id(self) -> pulumi.Output[str]:
"""
Server-defined URL for this resource with the resource id.
"""
return pulumi.get(self, "self_link_with_id")
@property
@pulumi.getter(name="stackType")
def stack_type(self) -> pulumi.Output[str]:
"""
        The stack type for this interconnect attachment, identifying whether the IPv6 feature is enabled. If not specified, IPV4_ONLY will be used. This field can be set at both interconnect attachment creation and interconnect attachment update operations.
"""
return pulumi.get(self, "stack_type")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
The current state of this attachment's functionality. Enum values ACTIVE and UNPROVISIONED are shared by DEDICATED/PRIVATE, PARTNER, and PARTNER_PROVIDER interconnect attachments, while enum values PENDING_PARTNER, PARTNER_REQUEST_RECEIVED, and PENDING_CUSTOMER are used for only PARTNER and PARTNER_PROVIDER interconnect attachments. This state can take one of the following values: - ACTIVE: The attachment has been turned up and is ready to use. - UNPROVISIONED: The attachment is not ready to use yet, because turnup is not complete. - PENDING_PARTNER: A newly-created PARTNER attachment that has not yet been configured on the Partner side. - PARTNER_REQUEST_RECEIVED: A PARTNER attachment is in the process of provisioning after a PARTNER_PROVIDER attachment was created that references it. - PENDING_CUSTOMER: A PARTNER or PARTNER_PROVIDER attachment that is waiting for a customer to activate it. - DEFUNCT: The attachment was deleted externally and is no longer functional. This could be because the associated Interconnect was removed, or because the other side of a Partner attachment was deleted.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter
def type(self) -> pulumi.Output[str]:
"""
The type of interconnect attachment this is, which can take one of the following values: - DEDICATED: an attachment to a Dedicated Interconnect. - PARTNER: an attachment to a Partner Interconnect, created by the customer. - PARTNER_PROVIDER: an attachment to a Partner Interconnect, created by the partner.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="vlanTag8021q")
def vlan_tag8021q(self) -> pulumi.Output[int]:
"""
The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094. Only specified at creation time.
"""
return pulumi.get(self, "vlan_tag8021q")
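# The RFC1035 name constraint quoted in the `name` docstring above can be checked
# client-side before issuing a request. A minimal sketch; the helper below is ours
# for illustration and is not part of the generated SDK:

```python
import re

# Pattern quoted in the `name` docstring: lowercase first character,
# dashes/letters/digits in the middle, no trailing dash.
_NAME_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def is_valid_resource_name(name: str) -> bool:
    # Hypothetical helper: 1-63 characters and a full regex match.
    return 1 <= len(name) <= 63 and bool(_NAME_RE.fullmatch(name))
```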
40e7c60a6ac966224d2dab6d46b077aadd6adb22 | 576 | py | Python | FuzzingTool_Dialog_CompilationError_Child.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | 16 | 2020-06-25T11:56:59.000Z | 2022-02-05T14:00:12.000Z | FuzzingTool_Dialog_CompilationError_Child.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | null | null | null | FuzzingTool_Dialog_CompilationError_Child.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | null | null | null | """Subclass of Dialog_CompilationError, which is generated by wxFormBuilder."""
import wx

from FuzzingTool_Dialog_CompilationError import FuzzingTool_Dialog_CompilationError

# Implementing Dialog_CompilationError
class FuzzingTool_Dialog_CompilationError_Child( FuzzingTool_Dialog_CompilationError ):
    def __init__( self, parent ):
        # Initialize the parent class that is actually imported and subclassed.
        FuzzingTool_Dialog_CompilationError.__init__( self, parent )

    # Handlers for Dialog_CompilationError events.
    def Button_OKOnButtonClick( self, event ):
        # Close the modal dialog, reporting OK to the caller.
        self.EndModal(wx.ID_OK)
# Source: dglt/models/zoo/mpnn.py (repo uta-smile/CD-MVGNN, MIT license)
from argparse import Namespace
from torch import nn
from dglt.models.modules import MPN, DualMPN, DualMPNPlus
from dglt.models.nn_utils import get_activation_function
class MPNN(nn.Module):
"""A MPNN is a model which contains a message passing network following by feed-forward layers."""
def __init__(self, classification: bool):
"""
Initializes the MPNN.
:param classification: Whether the model is a classification model.
"""
super(MPNN, self).__init__()
self.classification = classification
if self.classification:
self.sigmoid = nn.Sigmoid()
def create_encoder(self, args: Namespace):
"""
Creates the message passing encoder for the model.
:param args: Arguments.
"""
self.encoder = MPN(args)
def create_ffn(self, args: Namespace):
"""
Creates the feed-forward network for the model.
:param args: Arguments.
"""
if args.features_only:
first_linear_dim = args.features_size
else:
first_linear_dim = int(args.hidden_size * 100)
if args.self_attention:
first_linear_dim = first_linear_dim * args.attn_out
if args.use_input_features:
first_linear_dim += args.features_dim
dropout = nn.Dropout(args.dropout)
activation = get_activation_function(args.activation)
# Create FFN layers
if args.ffn_num_layers == 1:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.output_size)
]
else:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.ffn_hidden_size * 100)
]
for _ in range(args.ffn_num_layers - 2):
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.ffn_hidden_size * 100),
])
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.output_size),
])
# Create FFN model
self.ffn = nn.Sequential(*ffn)
def forward(self, *input):
"""
Runs the MPNN on input.
:param input: Input.
:return: The output of the MPNN.
"""
output = self.ffn(self.encoder(*input))
# Don't apply sigmoid during training b/c using BCEWithLogitsLoss
if self.classification and not self.training:
output = self.sigmoid(output)
return output
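# The layer-shape pattern that create_ffn produces can be sketched without torch.
# The helper name below is ours, for illustration only; it abstracts away the
# hidden-size scaling (the real code multiplies ffn_hidden_size by 100):

```python
def ffn_layer_dims(first_dim, hidden, out_dim, num_layers):
    # Mirrors create_ffn above: a single Linear when num_layers == 1, otherwise
    # input->hidden, (num_layers - 2) hidden->hidden blocks, then hidden->out.
    if num_layers == 1:
        return [(first_dim, out_dim)]
    dims = [(first_dim, hidden)]
    dims += [(hidden, hidden)] * (num_layers - 2)
    dims.append((hidden, out_dim))
    return dims
```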
class MultiMPNN(nn.Module):
def __init__(self, args):
super(MultiMPNN, self).__init__()
self.encoders = nn.ModuleList()
if args.nencoders == 1:
encoder = MPN(args=args)
self.encoders.append(encoder)
else:
input_encoder = MPN(args=args, atom_emb_output=True)
self.encoders.append(input_encoder)
for i in range(args.nencoders - 2):
mid_encoder = MPN(args=args,
bond_fdim=input_encoder.bond_fdim,
atom_fdim=int(args.hidden_size * 100),
atom_emb_output=True)
self.encoders.append(mid_encoder)
            # hard-coded: the final encoder returns molecule-level output (atom_emb_output=False)
out_encoders = MPN(args=args,
bond_fdim=input_encoder.bond_fdim,
atom_fdim=int(args.hidden_size * 100),
atom_emb_output=False)
self.encoders.append(out_encoders)
self.ffn = self.create_ffn(args)
self.classification = args.dataset_type == 'classification'
if self.classification:
self.sigmoid = nn.Sigmoid()
def create_ffn(self, args: Namespace):
"""
Creates the feed-forward network for the model.
:param args: Arguments.
"""
if args.features_only:
first_linear_dim = args.features_size
else:
first_linear_dim = int(args.hidden_size * 100)
if args.use_input_features:
first_linear_dim += args.features_dim
dropout = nn.Dropout(args.dropout)
activation = get_activation_function(args.activation)
# Create FFN layers
if args.ffn_num_layers == 1:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.output_size)
]
else:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.ffn_hidden_size)
]
for _ in range(args.ffn_num_layers - 2):
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size, args.ffn_hidden_size),
])
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size, args.output_size),
])
# Create FFN model
return nn.Sequential(*ffn)
def forward(self, batch, features_batch, adjs_batch):
# TODO: Dense/Inception can apply here.
for encoder in self.encoders:
n_f_atom = encoder(batch, features_batch)
# batch.set_new_atom_feature(n_f_atom)
batch = [n_f_atom] + list(batch[1:])
mol_output = n_f_atom
output = self.ffn(mol_output)
# Don't apply sigmoid during training b/c using BCEWithLogitsLoss
if self.classification and not self.training:
output = self.sigmoid(output)
return output
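# The encoder-threading loop in MultiMPNN.forward can be illustrated with plain
# callables. This is our simplified sketch (the real forward also passes
# features_batch to each encoder):

```python
def thread_encoders(encoders, batch):
    # Each encoder consumes the batch, and its output replaces the batch's
    # first element (the atom features) before the next encoder runs.
    out = batch[0]
    for enc in encoders:
        out = enc(batch)
        batch = [out] + list(batch[1:])
    return out
```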
class DualMPNN(nn.Module):
def __init__(self, args):
super(DualMPNN, self).__init__()
self.encoders = DualMPN(args=args)
self.mol_atom_ffn = self.create_ffn(args)
self.mol_bond_ffn = self.create_ffn(args)
self.ffn = nn.ModuleList()
self.ffn.append(self.mol_atom_ffn)
self.ffn.append(self.mol_bond_ffn)
self.classification = args.dataset_type == 'classification'
if self.classification:
self.sigmoid = nn.Sigmoid()
def create_ffn(self, args: Namespace):
"""
Creates the feed-forward network for the model.
:param args: Arguments.
"""
if args.features_only:
first_linear_dim = args.features_size
else:
first_linear_dim = int(args.hidden_size * 100)
if args.self_attention:
first_linear_dim = first_linear_dim * args.attn_out
if args.use_input_features:
first_linear_dim += args.features_dim
dropout = nn.Dropout(args.dropout)
activation = get_activation_function(args.activation)
# TODO: ffn_hidden_size
# Create FFN layers
if args.ffn_num_layers == 1:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.output_size)
]
else:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.ffn_hidden_size * 100)
]
for _ in range(args.ffn_num_layers - 2):
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.ffn_hidden_size * 100),
])
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.output_size),
])
# Create FFN model
return nn.Sequential(*ffn)
def get_loss_func(self, args):
def loss_func(preds, targets,
dt=args.dataset_type,
dist_coff=args.dist_coff):
if dt == 'classification':
pred_loss = nn.BCEWithLogitsLoss(reduction='none')
elif dt == 'regression':
pred_loss = nn.MSELoss(reduction='none')
else:
raise ValueError(f'Dataset type "{args.dataset_type}" not supported.')
# print(type(preds))
# TODO: Here, should we need to involve the model status? Using len(preds) is just a hack.
if type(preds) is not tuple:
# if DualMPNN is in eval mode.
return pred_loss(preds, targets)
# if DualMPNN is in train mode.
dist_loss = nn.MSELoss(reduction='none')
# dist_loss = nn.CosineSimilarity(dim=0)
# print(pred_loss)
# print(kl_loss(preds[0], preds[1]))
dist = dist_loss(preds[0], preds[1])
pred_loss1 = pred_loss(preds[0], targets)
pred_loss2 = pred_loss(preds[1], targets)
# TODO: dist constraint should be discussed, as well as the coff.
return pred_loss1 + pred_loss2 + dist_coff * dist
return loss_func
def forward(self, batch, features_batch):
output = self.encoders(batch, features_batch)
if self.training:
atom_ffn_output = self.mol_atom_ffn(output[0])
bond_ffn_output = self.mol_bond_ffn(output[1])
return atom_ffn_output, bond_ffn_output
else:
            # TODO: Not a good implementation. Should be controlled by encoders.
atom_ffn_output = self.mol_atom_ffn(output[0])
bond_ffn_output = self.mol_bond_ffn(output[1])
if self.classification:
atom_ffn_output = self.sigmoid(atom_ffn_output)
bond_ffn_output = self.sigmoid(bond_ffn_output)
output = (atom_ffn_output + bond_ffn_output) / 2
return output
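# The combined objective built by get_loss_func can be illustrated on scalars.
# dual_loss below is our sketch of the regression branch, not part of the model:

```python
def dual_loss(pred_atom, pred_bond, target, dist_coff=0.1):
    # Scalar analogue of the regression branch of get_loss_func above:
    # two prediction losses plus a weighted disagreement (distance) penalty
    # that pushes the atom-view and bond-view predictions together.
    mse = lambda a, b: (a - b) ** 2
    return (mse(pred_atom, target)
            + mse(pred_bond, target)
            + dist_coff * mse(pred_atom, pred_bond))
```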
class DualMPNNPlus(nn.Module):
"""
A dualMPNN model using the cross-dependent message passing.
"""
def __init__(self, args):
super(DualMPNNPlus, self).__init__()
self.encoders = DualMPNPlus(args=args)
self.mol_atom_ffn = self.create_ffn(args)
self.mol_bond_ffn = self.create_ffn(args)
self.ffn = nn.ModuleList()
self.ffn.append(self.mol_atom_ffn)
self.ffn.append(self.mol_bond_ffn)
self.classification = args.dataset_type == 'classification'
if self.classification:
self.sigmoid = nn.Sigmoid()
def create_ffn(self, args: Namespace):
"""
Creates the feed-forward network for the model.
:param args: Arguments.
"""
if args.features_only:
first_linear_dim = args.features_size
else:
first_linear_dim = int(args.hidden_size * 100)
if args.self_attention:
first_linear_dim = first_linear_dim * args.attn_out
if args.use_input_features:
first_linear_dim += args.features_dim
dropout = nn.Dropout(args.dropout)
activation = get_activation_function(args.activation)
# TODO: ffn_hidden_size
# Create FFN layers
if args.ffn_num_layers == 1:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.output_size)
]
else:
ffn = [
dropout,
nn.Linear(first_linear_dim, args.ffn_hidden_size * 100)
]
for _ in range(args.ffn_num_layers - 2):
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.ffn_hidden_size * 100),
])
ffn.extend([
activation,
dropout,
nn.Linear(args.ffn_hidden_size * 100, args.output_size),
])
# Create FFN model
return nn.Sequential(*ffn)
def get_loss_func(self, args):
def loss_func(preds, targets,
dt=args.dataset_type,
dist_coff=args.dist_coff):
if dt == 'classification':
pred_loss = nn.BCEWithLogitsLoss(reduction='none')
elif dt == 'regression':
pred_loss = nn.MSELoss(reduction='none')
else:
raise ValueError(f'Dataset type "{args.dataset_type}" not supported.')
# print(type(preds))
# TODO: Here, should we need to involve the model status? Using len(preds) is just a hack.
if type(preds) is not tuple:
# if DualMPNN is in eval mode.
return pred_loss(preds, targets)
# if DualMPNN is in train mode.
dist_loss = nn.MSELoss(reduction='none')
# dist_loss = nn.CosineSimilarity(dim=0)
dist = dist_loss(preds[0], preds[1])
pred_loss1 = pred_loss(preds[0], targets)
pred_loss2 = pred_loss(preds[1], targets)
# TODO: dist constraint should be discussed, as well as the coff.
return pred_loss1 + pred_loss2 + dist_coff * dist
return loss_func
def forward(self, batch, features_batch):
output = self.encoders(batch, features_batch)
if self.training:
atom_ffn_output = self.mol_atom_ffn(output[0])
bond_ffn_output = self.mol_bond_ffn(output[1])
return atom_ffn_output, bond_ffn_output
else:
            # TODO: Not a good implementation. Should be controlled by encoders.
atom_ffn_output = self.mol_atom_ffn(output[0])
bond_ffn_output = self.mol_bond_ffn(output[1])
if self.classification:
atom_ffn_output = self.sigmoid(atom_ffn_output)
bond_ffn_output = self.sigmoid(bond_ffn_output)
output = (atom_ffn_output + bond_ffn_output) / 2
return output
# Source: usumu/__init__.py (repo jmmills/usumu, MIT license)
import functools
class Usumu:
    pass
# Source: slowfast/models/rnns.py (repo serre-lab/pred_gn, Apache-2.0 license)
import torch
import torch.nn.functional as F
from torch import nn
from torch.autograd import Function
from torch.nn.modules.utils import _pair
import math
from torch.autograd import Variable
from torch.nn import Parameter
from torch.nn import init
import numpy
from .batch_norm import get_norm
# class RBPFun(Function):
# @staticmethod
# def forward(ctx, state_2nd_last, last_state, *args):
# ctx.save_for_backward(state_2nd_last, last_state)
# ctx.args = args
# return last_state
# @staticmethod
# def backward(ctx, grad):
# neumann_g = neumann_v = None
# neumann_g_prev = grad.clone()
# neumann_v_prev = grad.clone()
# state_2nd_last, last_state = ctx.saved_tensors
# args = ctx.args
# truncate_iter = args[-1]
# # exp_name = args[-2]
# # i = args[-3]
# # epoch = args[-4]
# normsv = []
# normsg = []
# normg = torch.norm(neumann_g_prev)
# normsg.append(normg.data.item())
# normsv.append(normg.data.item())
# for ii in range(truncate_iter):
# neumann_v = torch.autograd.grad(
# last_state,
# state_2nd_last,
# grad_outputs=neumann_v_prev,
# retain_graph=True,
# allow_unused=True)
# normv = torch.norm(neumann_v[0])
# neumann_g = neumann_g_prev + neumann_v[0]
# normg = torch.norm(neumann_g)
# if normg > 1 or normv > normsv[-1] or normv < 1e-9:
# normsg.append(normg.data.item())
# normsv.append(normv.data.item())
# neumann_g = neumann_g_prev
# break
# neumann_v_prev = neumann_v
# neumann_g_prev = neumann_g
# normsv.append(normv.data.item())
# normsg.append(normg.data.item())
# return (None, neumann_g, None, None, None, None)
# def CBP_penalty(last_state, prev_state, mu=0.9, compute_hessian=True, pen_type='l1'):
# """Handles RBP grads in the forward pass."""
# """Compute the constrained RBP penalty."""
# norm_1_vect = torch.ones_like(last_state)
# norm_1_vect.requires_grad = False
# vj_prod = torch.autograd.grad(
# last_state,
# prev_state,
# grad_outputs=[norm_1_vect],
# retain_graph=True,
# create_graph=compute_hessian,
# allow_unused=True)[0]
# vj_penalty = (vj_prod - mu).clamp(0) ** 2
# return vj_penalty.mean() # Save memory with the mean
#########################################################################################################
#### New GN CBP ###################################################################################
#########################################################################################################
class RBPFun(Function):
@staticmethod
def forward(ctx, state_2nd_last, last_state, *args):
ctx.save_for_backward(state_2nd_last, last_state)
ctx.args = args
return last_state
@staticmethod
def backward(ctx, grad, max_steps=15, norm_cap=False):
neumann_g = neumann_v = None
neumann_g_prev = grad.clone()
neumann_v_prev = grad.clone()
state_2nd_last, last_state = ctx.saved_tensors
args = ctx.args
truncate_iter = args[-1]
# exp_name = args[-2]
# i = args[-3]
# epoch = args[-4]
if norm_cap:
normsv = []
normsg = []
normg = torch.norm(neumann_g_prev)
normsg.append(normg.data.item())
normsv.append(normg.data.item())
if truncate_iter <= 0:
normv = 1.
steps = 0. # TODO: Add to TB
while normv > 1e-9 and steps < max_steps:
neumann_v = torch.autograd.grad(
last_state,
state_2nd_last,
grad_outputs=neumann_v_prev,
retain_graph=True,
allow_unused=True)
neumann_g = neumann_g_prev + neumann_v[0]
normv = torch.norm(neumann_v[0])
steps += 1
neumann_v_prev = neumann_v
neumann_g_prev = neumann_g
else:
for ii in range(truncate_iter):
neumann_v = torch.autograd.grad(
last_state,
state_2nd_last,
grad_outputs=neumann_v_prev,
retain_graph=True,
allow_unused=True)
neumann_g = neumann_g_prev + neumann_v[0]
if norm_cap:
normv = torch.norm(neumann_v[0])
normg = torch.norm(neumann_g)
if normg > 1 or normv > normsv[-1] or normv < 1e-9:
normsg.append(normg.data.item())
normsv.append(normv.data.item())
neumann_g = neumann_g_prev
break
normsv.append(normv.data.item())
normsg.append(normg.data.item())
neumann_v_prev = neumann_v
neumann_g_prev = neumann_g
return (None, neumann_g, None, None, None, None)
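# The truncated Neumann accumulation in backward() reduces, for a scalar fixed
# point with Jacobian j, to a geometric series grad * (1 + j + j^2 + ...).
# A sketch; the function name is ours and each scalar multiply stands in for a
# vector-Jacobian product:

```python
def neumann_grad(grad, jacobian, truncate_iter):
    # Scalar analogue of RBPFun.backward's accumulation loop.
    g = grad
    v = grad
    for _ in range(truncate_iter):
        v = v * jacobian   # vector-Jacobian product in the scalar case
        g = g + v          # accumulate the next Neumann-series term
    return g
```

For |jacobian| < 1 this converges toward grad / (1 - jacobian), the exact implicit gradient.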
def CBP_penalty(
last_state,
prev_state,
tau=0.999, # Changed 2/25/20 from 0.9
compute_hessian=True,
pen_type='l1'):
"""Handles RBP grads in the forward pass."""
"""Compute the constrained RBP penalty."""
norm_1_vect = torch.ones_like(last_state)
norm_1_vect.requires_grad = False
vj_prod = torch.autograd.grad(
last_state,
prev_state,
grad_outputs=[norm_1_vect],
retain_graph=True,
create_graph=compute_hessian,
allow_unused=True)[0]
vj_penalty = (vj_prod - tau).clamp(0) ** 2 # Squared to emphasize outliers
    return vj_penalty.sum()  # Sum over elements
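# The penalty itself is a squared hinge on the vector-Jacobian product.
# In scalar form (the helper name is ours, for illustration):

```python
def cbp_penalty_scalar(vj_prod, tau=0.999):
    # Zero whenever the product is at most tau; grows quadratically above it,
    # matching (vj_prod - tau).clamp(0) ** 2 in CBP_penalty above.
    return max(vj_prod - tau, 0.0) ** 2
```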
#########################################################################################################
#### Currently Used #################################################################################
#########################################################################################################
# class hConvGRUCell_(nn.Module):
# """
# Generate a convolutional GRU cell
# """
# def __init__(
# self,
# input_size,
# hidden_size,
# kernel_size,
# batchnorm=True,
# timesteps=8,
# gala=False,
# spatial_kernel=5,
# hidden_init='zeros',
# r=4,
# grad_method='bptt',
# norm='GN'):
# super(hConvGRUCell, self).__init__()
# self.padding = kernel_size // 2
# self.input_size = input_size
# self.hidden_size = hidden_size
# self.timesteps = timesteps
# self.batchnorm = batchnorm
# self.grad_method = grad_method
# self.gala = gala
# if self.gala:
# self.u1_channel_gate_0 = nn.Conv3d(
# hidden_size, hidden_size // r, 1)
# self.u1_channel_gate_1 = nn.Conv3d(
# hidden_size // r, hidden_size, 1, bias=False)
# self.u1_spatial_gate_0 = nn.Conv3d(
# hidden_size, hidden_size // r, (1,spatial_kernel,spatial_kernel), padding=(0,1,1))
# self.u1_spatial_gate_1 = nn.Conv3d(
# hidden_size // r, 1, (1,spatial_kernel,spatial_kernel), padding=(0,1,1), bias=False)
# self.u1_combine_bias = nn.Parameter(
# torch.empty((hidden_size, 1, 1, 1)))
# else:
# self.u1_gate = nn.Conv3d(hidden_size, hidden_size, 1)
# nn.init.xavier_uniform_(self.u1_gate.weight)
# self.u2_gate = nn.Conv3d(hidden_size, hidden_size, 1)
# self.w_gate_inh = nn.Parameter(
# torch.empty(hidden_size, hidden_size, 1, kernel_size, kernel_size))
# self.w_gate_exc = nn.Parameter(
# torch.empty(hidden_size, hidden_size, 1, kernel_size, kernel_size))
# self.alpha = nn.Parameter(torch.empty((hidden_size, 1, 1, 1)))
# self.gamma = nn.Parameter(torch.empty((hidden_size, 1, 1, 1)))
# self.mu = nn.Parameter(torch.empty((hidden_size, 1, 1, 1)))
# # if norm == "":
# # norm = 'SyncBN'
# # Norm is harcoded to group norm
# self.bn = nn.ModuleList(
# [get_norm(norm, hidden_size) for i in range(2)])
# # TODO: Alekh, why is orthogonal slow af
# nn.init.xavier_uniform_(self.w_gate_inh)
# nn.init.xavier_uniform_(self.w_gate_exc)
# nn.init.xavier_uniform_(self.u2_gate.weight)
# for bn in self.bn:
# nn.init.constant_(bn.weight, 0.1)
# nn.init.constant_(self.alpha, 0.1)
# nn.init.constant_(self.gamma, 1.0)
# nn.init.constant_(self.mu, 1)
# if self.timesteps == 1:
# init_timesteps = 2
# else:
# init_timesteps = self.timesteps
# if self.gala:
# nn.init.uniform_(self.u1_combine_bias, 1, init_timesteps - 1)
# self.u1_combine_bias.data.log()
# self.u2_gate.bias.data = -self.u1_combine_bias.data
# else:
# nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
# self.u1_gate.bias.data.log()
# self.u2_gate.bias.data = -self.u1_gate.bias.data
# def forward(self, input_, h_, timestep=0, return_extra=[]):
# if timestep==0 and h_ is None:
# if self.hidden_init=='identity':
# h_ = input_
# else:
# h_ = torch.zeros_like(input_)
# extra = {}
# if self.gala:
# global_0 = F.softplus(self.u1_channel_gate_0(h_))
# global_1 = self.u1_channel_gate_1(global_0)
# local_0 = F.softplus(self.u1_spatial_gate_0(h_))
# local_1 = self.u1_spatial_gate_1(local_0)
# import pdb; pdb.set_trace()
# g1_t = F.softplus(global_1 * local_1 + self.u1_combine_bias)
# else:
# g1_t = torch.sigmoid(self.u1_gate(h_))
# c1_t = self.bn[0](
# F.conv3d(
# h_ * g1_t,
# self.w_gate_inh,
# padding=(0, self.padding, self.padding)))
# error = F.softplus(c1_t * (self.alpha * h_ + self.mu))
# # if 'error' in return_extra:
# # extra['error'] = error
# next_state1 = F.softplus(input_ - error)
# if 'error' in return_extra:
# extra['error'] = next_state1
# g2_t = torch.sigmoid(self.u2_gate(next_state1))
# h2_t = self.bn[1](
# F.conv3d(
# next_state1,
# self.w_gate_exc,
# padding=(0, self.padding, self.padding)))
# h_ = (1 - g2_t) * h_ + g2_t * h2_t
# if not extra:
# return h_
# else:
# return h_, extra
# #########################################################################################################
# class tdConvGRUCell_(nn.Module):
# """
# Generate a TD cell
# """
# def __init__(
# self,
# input_size,
# hidden_size,
# diff_size,
# kernel_size,
# batchnorm=True,
# hidden_init='zeros',
# timesteps=8,
# grad_method='bptt',
# norm='GN'):
# super(tdConvGRUCell, self).__init__()
# self.padding = kernel_size // 2
# self.input_size = input_size
# self.hidden_size = hidden_size
# self.timesteps = timesteps
# self.batchnorm = batchnorm
# self.grad_method = grad_method
# self.remap_0 = nn.Conv3d(hidden_size, diff_size, 1)
# self.remap_1 = nn.Conv3d(diff_size, input_size, 1)
# self.u1_gate = nn.Conv3d(input_size, input_size, 1)
# self.u2_gate = nn.Conv3d(input_size, input_size, 1)
# self.w_gate_inh = nn.Parameter(
# torch.empty(input_size, input_size, 1, kernel_size, kernel_size))
# self.w_gate_exc = nn.Parameter(
# torch.empty(input_size, input_size, 1, kernel_size, kernel_size))
# self.alpha = nn.Parameter(torch.empty((input_size, 1, 1, 1)))
# self.gamma = nn.Parameter(torch.empty((input_size, 1, 1, 1)))
# self.mu = nn.Parameter(torch.empty((input_size, 1, 1, 1)))
# self.hidden_init = hidden_init
# # if norm == "":
# # norm = 'SyncBN'
# # Norm is harcoded to group norm
# # norm = 'GN'
# self.bn = nn.ModuleList(
# [get_norm(norm, input_size) for i in range(2)])
# # TODO: Alekh, why is orthogonal slow af
# nn.init.xavier_uniform_(self.w_gate_inh)
# nn.init.xavier_uniform_(self.w_gate_exc)
# nn.init.xavier_uniform_(self.u1_gate.weight)
# nn.init.xavier_uniform_(self.u2_gate.weight)
# for bn in self.bn:
# nn.init.constant_(bn.weight, 0.1)
# nn.init.constant_(self.alpha, 0.1)
# nn.init.constant_(self.gamma, 1.0)
# nn.init.constant_(self.mu, 1)
# if self.timesteps == 1:
# init_timesteps = 2
# else:
# init_timesteps = self.timesteps
# nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
# self.u1_gate.bias.data.log()
# self.u2_gate.bias.data = -self.u1_gate.bias.data
# def forward(self, lower_, higher_, timestep=0, return_extra=[]):
# extra = {}
# prev_state2 = F.interpolate(
# higher_,
# size = lower_.shape[2:],
# #scale_factor=2,
# mode="nearest")
# prev_state2 = F.softplus(self.remap_0(prev_state2))
# prev_state2 = F.softplus(self.remap_1(prev_state2))
# if 'remap' in return_extra:
# extra['remap'] = prev_state2
# g1_t = torch.sigmoid(self.u1_gate(prev_state2))
# c1_t = self.bn[0](
# F.conv3d(
# prev_state2 * g1_t,
# self.w_gate_inh,
# padding=(0, self.padding, self.padding)))
# error = F.softplus(c1_t * (self.alpha * prev_state2 + self.mu))
# next_state1 = F.softplus(lower_ - error)
# if 'error' in return_extra:
# extra['error'] = next_state1
# # if 'h1' in return_extra:
# # extra['h1'] = next_state1
# g2_t = torch.sigmoid(self.u2_gate(next_state1))
# h2_t = self.bn[1](
# F.conv3d(
# next_state1,
# self.w_gate_exc,
# padding=(0, self.padding, self.padding)))
# prev_state2 = (1 - g2_t) * lower_ + g2_t * h2_t
# if not extra:
# return prev_state2
# else:
# return prev_state2, extra
#########################################################################################################
#### New GN Cells ###################################################################################
#########################################################################################################
class hConvGRUCell(nn.Module):
"""
Generate a convolutional GRU cell
"""
def __init__(
self,
input_size,
hidden_size,
kernel_size,
batchnorm=True,
timesteps=8,
gala=False,
spatial_kernel=3,
less_softplus=False,
r=4,
init=nn.init.orthogonal_,
grad_method='bptt',
norm='GN',
bottom_layer=False):
super(hConvGRUCell, self).__init__()
self.padding = kernel_size // 2
self.input_size = input_size
self.hidden_size = hidden_size
self.timesteps = timesteps
self.batchnorm = batchnorm
self.grad_method = grad_method
self.gala = gala
self.less_softplus = less_softplus
self.bottom_layer = bottom_layer
if "GN" in norm and input_size<=4:
norm = "IN"
if self.gala:
self.u0_channel_gate_0 = nn.Conv2d(
hidden_size, hidden_size // r, 1)
self.u0_channel_gate_1 = nn.Conv2d(
hidden_size // r, hidden_size, 1, bias=False)
self.u0_spatial_gate_0 = nn.Parameter(
torch.empty(
hidden_size // 2,
hidden_size,
kernel_size,
kernel_size))
self.u0_spatial_bias = nn.Parameter(
torch.empty((hidden_size // 2, 1, 1)))
self.u0_spatial_gate_1 = nn.Parameter(
torch.empty(
1,
hidden_size // 2,
kernel_size,
kernel_size))
self.u0_combine_bias = nn.Parameter(
torch.empty((hidden_size, 1, 1)))
else:
self.u0_gate = nn.Conv2d(hidden_size, hidden_size, 1)
self.u1_gate = nn.Conv2d(hidden_size, hidden_size, 1)
self.w_gate_inh = nn.Parameter(
torch.empty(hidden_size, hidden_size, kernel_size, kernel_size))
self.w_gate_exc = nn.Parameter(
torch.empty(hidden_size, hidden_size, kernel_size, kernel_size))
self.alpha = nn.Parameter(torch.empty((hidden_size, 1, 1)))
self.mu = nn.Parameter(torch.empty((hidden_size, 1, 1)))
if not self.less_softplus:
self.w = nn.Parameter(torch.empty((hidden_size, 1, 1)))
self.kappa = nn.Parameter(torch.empty((hidden_size, 1, 1)))
        # Norm is hardcoded to group norm
if self.batchnorm:
self.bn = nn.ModuleList(
[get_norm(norm, hidden_size) for i in range(2)])
# TODO: Alekh, why is orthogonal slow af
init(self.w_gate_inh)
init(self.w_gate_exc)
if self.batchnorm:
for bn in self.bn:
nn.init.constant_(bn.weight, 0.1)
nn.init.constant_(self.alpha, 0.1)
nn.init.constant_(self.mu, 1)
if not self.less_softplus:
nn.init.constant_(self.w, 0.5)
nn.init.constant_(self.kappa, 0.5)
if self.timesteps == 1:
init_timesteps = 2
else:
init_timesteps = self.timesteps
if self.gala:
# First init the kernels
init(self.u0_channel_gate_0.weight)
init(self.u0_channel_gate_1.weight)
init(self.u0_spatial_gate_0)
init(self.u0_spatial_gate_1)
init(self.u1_gate.weight)
# Now init biases
nn.init.zeros_(self.u0_spatial_bias)
nn.init.uniform_(self.u0_combine_bias, 1, init_timesteps - 1)
            self.u0_combine_bias.data.log_()  # in-place log of the U[1, T-1] draw
self.u1_gate.bias.data = -self.u0_combine_bias.data.squeeze()
else:
nn.init.xavier_uniform_(self.u0_gate.weight)
nn.init.uniform_(self.u0_gate.bias.data, 1, init_timesteps - 1)
            self.u0_gate.bias.data.log_()  # in-place log of the U[1, T-1] draw
self.u1_gate.bias.data = -self.u0_gate.bias.data
    def forward(self, input_, h_, timestep=0, return_extra=()):
        extra = {}
if self.gala:
global_0 = F.softplus(self.u0_channel_gate_0(h_))
global_1 = self.u0_channel_gate_1(global_0)
local_0 = F.softplus(
F.conv2d(
h_,
self.u0_spatial_gate_0,
padding=self.padding) + self.u0_spatial_bias)
local_1 = F.conv2d(
local_0,
self.u0_spatial_gate_1,
padding=self.padding)
gate_act = global_1 * local_1 + self.u0_combine_bias
g1_t = torch.sigmoid(gate_act)
else:
g1_t = torch.sigmoid(self.u0_gate(h_))
if self.batchnorm:
c0_t = self.bn[0](
F.conv2d(
h_ * g1_t,
self.w_gate_inh,
padding=self.padding))
else:
c0_t = F.conv2d(
h_ * g1_t,
self.w_gate_inh,
padding=self.padding)
if self.less_softplus:
supp_ = F.softplus( # F.softplus(input_) moved outside
input_ - c0_t * (self.alpha * h_ + self.mu))
elif self.bottom_layer:
# extra['inh'] = F.hardtanh(c0_t * (self.alpha * h_ + self.mu), 0, 1)
# supp_ = input_ - extra['inh']
extra['inh'] = c0_t * (self.alpha * h_ + self.mu)
supp_ = F.softplus(input_) - F.softplus(extra['inh'])
else:
# inhibition = F.softplus( # F.softplus(input_) moved outside
# F.softplus(input_) - F.softplus(c0_t * (self.alpha * h_ + self.mu)))
supp_ = F.softplus(input_) - F.softplus(c0_t * (self.alpha * h_ + self.mu))
# supp_ = input_ - c0_t * (self.alpha * h_ + self.mu)
if 'error_' in return_extra:
extra['error_'] = supp_
supp = F.softplus(supp_)
if 'error' in return_extra:
extra['error'] = supp
g2_t = torch.sigmoid(self.u1_gate(supp))
if 'mix_layer' in return_extra:
extra['mix_layer'] = g2_t
if self.batchnorm:
excitation = F.softplus(self.bn[1](
F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding)))
else:
excitation = F.softplus(
F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding))
if self.less_softplus:
h_t = excitation
else:
h_t = F.softplus(
self.kappa * (supp + excitation) + self.w * supp * excitation)
op = (1 - g2_t) * h_ + g2_t * h_t
if extra:
return op, extra
else:
return op
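# A minimal NumPy sketch (illustrative only, not part of the model) of the
# per-element arithmetic in hConvGRUCell.forward, with both convolutions
# replaced by identity maps and the gala/less_softplus branches ignored:

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # log(1 + exp(x)), numerically stable

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x, h = rng.standard_normal(8), rng.standard_normal(8)
alpha, mu, kappa, w = 0.1, 1.0, 0.5, 0.5   # the cell's scalar init values
g1 = sigmoid(h)                            # stands in for u0_gate
c0 = h * g1                                # stands in for the inhibitory conv
supp = softplus(softplus(x) - softplus(c0 * (alpha * h + mu)))
g2 = sigmoid(supp)                         # stands in for u1_gate
exc = supp                                 # stands in for the excitatory conv
h_t = softplus(kappa * (supp + exc) + w * supp * exc)
h_new = (1 - g2) * h + g2 * h_t            # gated interpolation of old and new state
```

# Because g2 lies in (0, 1), each output element is a convex mix of the old
# state and the candidate state.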
#########################################################################################################
class tdConvGRUCell_err(nn.Module):
"""
Generate a TD cell
"""
def __init__(
self,
fan_in,
td_fan_in,
diff_fan_in,
kernel_size,
gala=False,
batchnorm=True,
timesteps=1,
init=nn.init.orthogonal_,
grad_method='bptt',
norm='SyncBN',
spatial_transform=False,
gn_remap=False):
super(tdConvGRUCell_err, self).__init__()
        if "GN" in norm and fan_in <= 4:
            norm = "IN"
self.padding = kernel_size // 2
self.input_size = fan_in
self.hidden_size = td_fan_in
self.timesteps = timesteps
self.batchnorm = batchnorm
self.grad_method = grad_method
self.gala = gala
self.gn_remap = gn_remap
self.remap_0 = nn.Conv2d(td_fan_in, diff_fan_in, 1)
self.remap_1 = nn.Conv2d(diff_fan_in, fan_in, 1)
if self.gn_remap:
self.gn_remap0 = get_norm(norm, diff_fan_in)
self.gn_remap1 = get_norm(norm, fan_in)
self.spatial_transform = spatial_transform
if self.spatial_transform:
self.warp = SpatialTransformer(fan_in)
self.u1_gate = nn.Conv2d(fan_in, fan_in, 1)
self.u2_gate = nn.Conv2d(fan_in, fan_in, 1)
self.w_gate_inh = nn.Parameter(
torch.empty(fan_in, fan_in, kernel_size, kernel_size))
self.w_gate_exc = nn.Parameter(
torch.empty(fan_in, fan_in, kernel_size, kernel_size))
self.alpha = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.mu = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.w = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.kappa = nn.Parameter(torch.empty((fan_in, 1, 1)))
if self.batchnorm:
self.bn = nn.ModuleList(
[get_norm(norm, fan_in) for i in range(2)])
# TODO: Alekh, why is orthogonal slow af
init(self.w_gate_inh)
init(self.w_gate_exc)
init(self.u1_gate.weight)
init(self.u2_gate.weight)
if self.batchnorm:
for bn in self.bn:
nn.init.constant_(bn.weight, 0.1)
nn.init.constant_(self.alpha, 0.1)
nn.init.constant_(self.mu, 1)
nn.init.constant_(self.w, 0.5)
nn.init.constant_(self.kappa, 0.5)
        if self.timesteps == 1:
            init_timesteps = 2
        else:
            init_timesteps = self.timesteps
        nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
        self.u1_gate.bias.data.log_()  # in-place log of the U[1, T-1] draw
        self.u2_gate.bias.data = -self.u1_gate.bias.data
    def forward(self, lower_, higher_, error_=None, timestep=0, return_extra=()):
        extra = {}
prev_state2 = F.interpolate(
higher_,
size = lower_.shape[2:],
#scale_factor=2,
mode="nearest")
if self.gn_remap:
prev_state2 = F.softplus(self.gn_remap0(self.remap_0(prev_state2)))
prev_state2 = F.softplus(self.gn_remap1(self.remap_1(prev_state2)))
else:
prev_state2 = F.softplus(self.remap_0(prev_state2))
prev_state2 = F.softplus(self.remap_1(prev_state2))
if error_ is not None:
prev_state2 = prev_state2 + error_
if 'remap' in return_extra:
extra['remap'] = prev_state2
g1_t = torch.sigmoid(self.u1_gate(prev_state2))
if self.batchnorm:
c1_t = self.bn[0](
F.conv2d(
prev_state2 * g1_t,
self.w_gate_inh,
padding=self.padding))
else:
c1_t = F.conv2d(
prev_state2 * g1_t,
self.w_gate_inh,
padding=self.padding)
inh = F.softplus(c1_t * (self.alpha * prev_state2 + self.mu))
if 'inh' in return_extra:
extra['inh'] = inh
supp = F.softplus(
lower_ - inh)
if 'error' in return_extra:
extra['error'] = supp
g2_t = torch.sigmoid(self.u2_gate(supp))
if self.batchnorm:
exc = self.bn[1](
F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding))
else:
exc = F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding)
h2_t = F.softplus(
self.kappa * (
supp + exc) + self.w * supp * exc)
op = (1 - g2_t) * lower_ + g2_t * h2_t # noqa Note: a previous version had higher_ in place of lower_
if self.spatial_transform:
op = self.warp(inh, op)
if extra:
return op, extra
else:
return op
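# The TD cells first resize the higher (coarser) state to the lower layer's
# spatial size with F.interpolate(..., mode="nearest"). A hypothetical 1-D
# sketch of the indexing that nearest-neighbour resizing performs:

```python
import numpy as np

def nearest_resize_1d(x, out_len):
    # each output position i reads input index floor(i * in_len / out_len)
    idx = np.arange(out_len) * len(x) // out_len
    return x[idx]

up = nearest_resize_1d(np.array([1.0, 2.0, 3.0]), 6)
```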
class tdConvGRUCell(nn.Module):
"""
Generate a TD cell
"""
def __init__(
self,
fan_in,
td_fan_in,
diff_fan_in,
kernel_size,
gala=False,
batchnorm=True,
timesteps=1,
init=nn.init.orthogonal_,
grad_method='bptt',
norm='SyncBN',
spatial_transform=False,
gn_remap=False):
super(tdConvGRUCell, self).__init__()
        if "GN" in norm and fan_in <= 4:
            norm = "IN"
self.padding = kernel_size // 2
self.input_size = fan_in
self.hidden_size = td_fan_in
self.timesteps = timesteps
self.batchnorm = batchnorm
self.grad_method = grad_method
self.gala = gala
self.gn_remap = gn_remap
self.remap_0 = nn.Conv2d(td_fan_in, diff_fan_in, 1)
self.remap_1 = nn.Conv2d(diff_fan_in, fan_in, 1)
if self.gn_remap:
self.gn_remap0 = get_norm(norm, diff_fan_in)
self.gn_remap1 = get_norm(norm, fan_in)
self.spatial_transform = spatial_transform
if self.spatial_transform:
self.warp = SpatialTransformer(fan_in)
self.u1_gate = nn.Conv2d(fan_in, fan_in, 1)
self.u2_gate = nn.Conv2d(fan_in, fan_in, 1)
self.w_gate_inh = nn.Parameter(
torch.empty(fan_in, fan_in, kernel_size, kernel_size))
self.w_gate_exc = nn.Parameter(
torch.empty(fan_in, fan_in, kernel_size, kernel_size))
self.alpha = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.mu = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.w = nn.Parameter(torch.empty((fan_in, 1, 1)))
self.kappa = nn.Parameter(torch.empty((fan_in, 1, 1)))
if self.batchnorm:
self.bn = nn.ModuleList(
[get_norm(norm, fan_in) for i in range(2)])
# TODO: Alekh, why is orthogonal slow af
init(self.w_gate_inh)
init(self.w_gate_exc)
init(self.u1_gate.weight)
init(self.u2_gate.weight)
if self.batchnorm:
for bn in self.bn:
nn.init.constant_(bn.weight, 0.1)
nn.init.constant_(self.alpha, 0.1)
nn.init.constant_(self.mu, 1)
nn.init.constant_(self.w, 0.5)
nn.init.constant_(self.kappa, 0.5)
        if self.timesteps == 1:
            init_timesteps = 2
        else:
            init_timesteps = self.timesteps
        nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
        self.u1_gate.bias.data.log_()  # in-place log of the U[1, T-1] draw
        self.u2_gate.bias.data = -self.u1_gate.bias.data
    def forward(self, lower_, higher_, timestep=0, return_extra=()):
        extra = {}
prev_state2 = F.interpolate(
higher_,
size = lower_.shape[2:],
#scale_factor=2,
mode="nearest")
if self.gn_remap:
prev_state2 = F.softplus(self.gn_remap0(self.remap_0(prev_state2)))
prev_state2 = F.softplus(self.gn_remap1(self.remap_1(prev_state2)))
else:
prev_state2 = F.softplus(self.remap_0(prev_state2))
prev_state2 = F.softplus(self.remap_1(prev_state2))
if 'remap' in return_extra:
extra['remap'] = prev_state2
g1_t = torch.sigmoid(self.u1_gate(prev_state2))
if self.batchnorm:
c1_t = self.bn[0](
F.conv2d(
prev_state2 * g1_t,
self.w_gate_inh,
padding=self.padding))
else:
c1_t = F.conv2d(
prev_state2 * g1_t,
self.w_gate_inh,
padding=self.padding)
# inhibition = F.softplus(
# lower_ - F.softplus(c1_t * (self.alpha * prev_state2 + self.mu)))
inh = F.softplus(c1_t * (self.alpha * prev_state2 + self.mu))
if 'inh' in return_extra:
extra['inh'] = inh
supp = F.softplus(
lower_ - inh)
if 'error' in return_extra:
extra['error'] = supp
g2_t = torch.sigmoid(self.u2_gate(supp))
if self.batchnorm:
excitation = self.bn[1](
F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding))
else:
excitation = F.conv2d(
supp,
self.w_gate_exc,
padding=self.padding)
h2_t = F.softplus(
self.kappa * (supp + excitation) +
self.w * supp * excitation)
op = (1 - g2_t) * lower_ + g2_t * h2_t # noqa Note: a previous version had higher_ in place of lower_
if 'mix_layer' in return_extra:
extra['mix_layer'] = g2_t
if self.spatial_transform:
op = self.warp(inh, op)
if extra:
return op, extra
else:
return op
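# A standalone sketch of the gate-bias scheme shared by these cells: biases
# are drawn from U[1, T-1], logged, and the second gate's bias is tied to the
# negative of the first, so the paired sigmoids start as exact complements
# and the cell initially favours copying its previous state. (NumPy stand-in,
# not the model code.)

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
T = 8                                       # timesteps
b1 = np.log(rng.uniform(1, T - 1, size=16))  # first gate's bias
b2 = -b1                                     # tied second gate's bias
gates_sum = sigmoid(b1) + sigmoid(b2)        # complements: sums to 1
```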
# class SpatialTransformer(nn.Module):
# def __init__(self, fan_in):
# super(SpatialTransformer, self).__init__()
# # Spatial transformer localization-network
# self.localization = nn.Sequential(
# nn.Conv2d(fan_in, 64, kernel_size=5),
# #nn.MaxPool2d(2, stride=2),
# nn.ReLU(True),
# nn.Conv2d(64, 32, kernel_size=3),
# )
# # Regressor for the 3 * 2 affine matrix
# self.fc_loc = nn.Sequential(
# nn.Linear(32, 32),
# nn.ReLU(True),
# nn.Linear(32, 3 * 2)
# )
# # Initialize the weights/bias with identity transformation
# self.fc_loc[2].weight.data.zero_()
# self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
# # Spatial transformer network forward function
# def forward(self, x, input_trans=None):
# if input_trans == None:
# input_trans = x
# xs = self.localization(x)
# xs = F.relu(F.max_pool2d(xs, kernel_size=xs.size()[2:]))
# xs = xs.view(-1, xs.shape[1])
# theta = self.fc_loc(xs)
# theta = theta.view(-1, 2, 3)
# grid = F.affine_grid(theta, input_trans.size(), align_corners=True)
# x = F.grid_sample(input_trans, grid, align_corners=True)
# return x
class SpatialTransformer(nn.Module):
def __init__(self, fan_in):
super(SpatialTransformer, self).__init__()
# Spatial transformer localization-network
self.loc = nn.Sequential(
nn.Conv2d(fan_in, 64, kernel_size=5, bias=False),
#nn.MaxPool2d(2, stride=2),
nn.ReLU(True),
nn.Conv2d(64, 32, kernel_size=3),
)
# Regressor for the 3 * 2 affine matrix
self.fc_loc = nn.Sequential(
nn.Linear(32*4*4, 256),
nn.ReLU(True),
nn.Linear(256, 3 * 2)
)
# Initialize the weights/bias with identity transformation
nn.init.xavier_normal_(self.loc[0].weight)
if self.loc[0].bias is not None:
self.loc[0].bias.data.zero_()
nn.init.xavier_normal_(self.loc[2].weight)
if self.loc[2].bias is not None:
self.loc[2].bias.data.zero_()
nn.init.xavier_normal_(self.fc_loc[0].weight)
self.fc_loc[2].bias.data.zero_()
self.fc_loc[2].weight.data.zero_()
self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
# Spatial transformer network forward function
def forward(self, x, input_trans=None):
        if input_trans is None:
input_trans = x
xs = self.loc(x)
# xs = F.relu(F.max_pool2d(xs, kernel_size=xs.size()[2:]))
xs = F.relu(F.adaptive_max_pool2d(xs, output_size=(4,4)))
xs = xs.view(-1, xs.shape[1]*4*4)
theta = self.fc_loc(xs)
theta = theta.view(-1, 2, 3)
grid = F.affine_grid(theta, input_trans.size(), align_corners=True)
x = F.grid_sample(input_trans, grid, align_corners=True)
return x
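# The regressor's final bias is set to [1, 0, 0, 0, 1, 0], the row-major 2x3
# identity affine, so at initialisation the transformer is a no-op. A small
# NumPy check of that identity (illustrative, independent of the module):

```python
import numpy as np

theta = np.array([1., 0., 0., 0., 1., 0.]).reshape(2, 3)  # identity affine
pts = np.array([[0.3, -0.7], [-1.0, 1.0]])                # sample (x, y) coords
homog = np.concatenate([pts, np.ones((2, 1))], axis=1)    # [x, y, 1]
warped = homog @ theta.T                                  # apply the affine map
```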
#########################################################################################################
#### Unused Cells #######################################################################################
#########################################################################################################
# class hConvGRUCell_old(nn.Module):
# """
# Generate a convolutional GRU cell
# """
# def __init__(
# self,
# input_size,
# hidden_size,
# kernel_size,
# batchnorm=True,
# timesteps=8,
# gala=False,
# spatial_kernel=5,
# r=4,
# grad_method='bptt',
# norm='SyncBN'):
# super(hConvGRUCell, self).__init__()
# self.padding = kernel_size // 2
# self.input_size = input_size
# self.hidden_size = hidden_size
# self.timesteps = timesteps
# self.batchnorm = batchnorm
# self.grad_method = grad_method
# self.gala = gala
# if self.gala:
# self.u1_channel_gate_0 = nn.Conv2d(
# hidden_size, hidden_size // r, 1)
# self.u1_channel_gate_1 = nn.Conv2d(
# hidden_size // r, hidden_size, 1, bias=False)
# self.u1_spatial_gate_0 = nn.Conv2d(
# hidden_size, hidden_size // r, spatial_kernel)
# self.u1_spatial_gate_1 = nn.Conv2d(
# hidden_size // r, 1, spatial_kernel, bias=False)
# self.u1_combine_bias = nn.Parameter(
# torch.empty((hidden_size, 1, 1)))
# else:
# self.u1_gate = nn.Conv2d(hidden_size, hidden_size, 1)
# nn.init.xavier_uniform_(self.u1_gate.weight)
# self.u2_gate = nn.Conv2d(hidden_size, hidden_size, 1)
# self.w_gate_inh = nn.Parameter(
# torch.empty(hidden_size, hidden_size, kernel_size, kernel_size))
# self.w_gate_exc = nn.Parameter(
# torch.empty(hidden_size, hidden_size, kernel_size, kernel_size))
# self.alpha = nn.Parameter(torch.empty((hidden_size, 1, 1)))
# self.gamma = nn.Parameter(torch.empty((hidden_size, 1, 1)))
# self.mu = nn.Parameter(torch.empty((hidden_size, 1, 1)))
# if norm == "":
# norm = 'SyncBN'
# # Norm is harcoded to group norm
# norm = 'GN'
# self.bn = nn.ModuleList(
# [get_norm(norm, hidden_size) for i in range(2)])
# # TODO: Alekh, why is orthogonal slow af
# nn.init.xavier_uniform_(self.w_gate_inh)
# nn.init.xavier_uniform_(self.w_gate_exc)
# nn.init.xavier_uniform_(self.u2_gate.weight)
# for bn in self.bn:
# nn.init.constant_(bn.weight, 0.1)
# nn.init.constant_(self.alpha, 0.1)
# nn.init.constant_(self.gamma, 1.0)
# nn.init.constant_(self.mu, 1)
# if self.timesteps == 1:
# init_timesteps = 2
# else:
# init_timesteps = self.timesteps
# if self.gala:
# nn.init.uniform_(self.u1_combine_bias, 1, init_timesteps - 1)
# self.u1_combine_bias.data.log()
# self.u2_gate.bias.data = -self.u1_combine_bias.data
# else:
# nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
# self.u1_gate.bias.data.log()
# self.u2_gate.bias.data = -self.u1_gate.bias.data
# def forward(self, input_, h_, timestep=0):
# if self.gala:
# global_0 = F.softplus(self.u1_channel_gate_0(h_))
# global_1 = self.u1_channel_gate_1(global_0)
# local_0 = F.softplus(self.u1_spatial_gate_0(h_))
# local_1 = self.u1_spatial_gate_1(local_0)
# import pdb; pdb.set_trace()
# g1_t = F.softplus(global_1 * local_1 + self.u1_combine_bias)
# else:
# g1_t = torch.sigmoid(self.u1_gate(h_))
# c1_t = self.bn[0](
# F.conv2d(
# h_ * g1_t,
# self.w_gate_inh,
# padding=self.padding))
# next_state1 = F.softplus(
# input_ - F.softplus(c1_t * (self.alpha * h_ + self.mu)))
# g2_t = torch.sigmoid(self.u2_gate(next_state1))
# h2_t = self.bn[1](
# F.conv2d(
# next_state1,
# self.w_gate_exc,
# padding=self.padding))
# h_ = (1 - g2_t) * h_ + g2_t * h2_t
# return h_
# #########################################################################################################
# class tdConvGRUCell_old(nn.Module):
# """
# Generate a TD cell
# """
# def __init__(
# self,
# fan_in,
# td_fan_in,
# diff_fan_in,
# kernel_size,
# batchnorm=True,
# timesteps=8,
# grad_method='bptt',
# norm='SyncBN'):
# super(tdConvGRUCell, self).__init__()
# self.padding = kernel_size // 2
# self.input_size = fan_in
# self.hidden_size = td_fan_in
# self.timesteps = timesteps
# self.batchnorm = batchnorm
# self.grad_method = grad_method
# self.remap_0 = nn.Conv2d(td_fan_in, diff_fan_in, 1)
# self.remap_1 = nn.Conv2d(diff_fan_in, fan_in, 1)
# self.u1_gate = nn.Conv2d(fan_in, fan_in, 1)
# self.u2_gate = nn.Conv2d(fan_in, fan_in, 1)
# self.w_gate_inh = nn.Parameter(
# torch.empty(fan_in, fan_in, kernel_size, kernel_size))
# self.w_gate_exc = nn.Parameter(
# torch.empty(fan_in, fan_in, kernel_size, kernel_size))
# self.alpha = nn.Parameter(torch.empty((fan_in, 1, 1)))
# self.gamma = nn.Parameter(torch.empty((fan_in, 1, 1)))
# self.mu = nn.Parameter(torch.empty((fan_in, 1, 1)))
# if norm == "":
# norm = 'SyncBN'
# # Norm is harcoded to group norm
# norm = 'GN'
# self.bn = nn.ModuleList(
# [get_norm(norm, fan_in) for i in range(2)])
# # TODO: Alekh, why is orthogonal slow af
# nn.init.xavier_uniform_(self.w_gate_inh)
# nn.init.xavier_uniform_(self.w_gate_exc)
# nn.init.xavier_uniform_(self.u1_gate.weight)
# nn.init.xavier_uniform_(self.u2_gate.weight)
# for bn in self.bn:
# nn.init.constant_(bn.weight, 0.1)
# nn.init.constant_(self.alpha, 0.1)
# nn.init.constant_(self.gamma, 1.0)
# nn.init.constant_(self.mu, 1)
# if self.timesteps == 1:
# init_timesteps = 2
# else:
# init_timesteps = self.timesteps
# nn.init.uniform_(self.u1_gate.bias.data, 1, init_timesteps - 1)
# self.u1_gate.bias.data.log()
# self.u2_gate.bias.data = -self.u1_gate.bias.data
# def forward(self, lower_, higher_, timestep=0):
# prev_state2 = F.interpolate(
# higher_,
# scale_factor=2,
# mode="nearest")
# prev_state2 = F.softplus(self.remap_0(prev_state2))
# prev_state2 = F.softplus(self.remap_1(prev_state2))
# g1_t = torch.sigmoid(self.u1_gate(prev_state2))
# c1_t = self.bn[0](
# F.conv2d(
# prev_state2 * g1_t,
# self.w_gate_inh,
# padding=self.padding))
# next_state1 = F.softplus(
# lower_ - F.softplus(c1_t * (self.alpha * prev_state2 + self.mu)))
# g2_t = torch.sigmoid(self.u2_gate(next_state1))
# h2_t = self.bn[1](
# F.conv2d(
# next_state1,
# self.w_gate_exc,
# padding=self.padding))
# prev_state2 = (1 - g2_t) * prev_state2 + g2_t * h2_t
# return prev_state2
#########################################################################################################
#### Other ##############################################################################################
#########################################################################################################
class ConvLSTMCell(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=1, dilation=1, groups=1, bias=True):
super(ConvLSTMCell, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.padding_h = tuple(
k // 2 for k, s, p, d in zip(kernel_size, stride, padding, dilation))
self.dilation = dilation
self.groups = groups
self.weight_ih = nn.Parameter(torch.Tensor(
4 * out_channels, in_channels // groups, *kernel_size))
self.weight_hh = nn.Parameter(torch.Tensor(
4 * out_channels, out_channels // groups, *kernel_size))
self.weight_ch = nn.Parameter(torch.Tensor(
3 * out_channels, out_channels // groups, *kernel_size))
if bias:
self.bias_ih = nn.Parameter(torch.Tensor(4 * out_channels))
self.bias_hh = nn.Parameter(torch.Tensor(4 * out_channels))
self.bias_ch = nn.Parameter(torch.Tensor(3 * out_channels))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
self.register_parameter('bias_ch', None)
self.register_buffer('wc_blank', torch.zeros(1, 1, 1, 1))
self.reset_parameters()
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
self.weight_ch.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
self.bias_ch.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
# Cell uses a Hadamard product instead of a convolution?
wc = F.conv2d(c_0, self.weight_ch, self.bias_ch, self.stride,
self.padding_h, self.dilation, self.groups)
wxhc = wx + wh + torch.cat([wc[:, :2 * self.out_channels],
self.wc_blank.expand(wc.size(0), wc.size(1) // 3, wc.size(2), wc.size(3)),
wc[:, 2 * self.out_channels:]],
1)
i = torch.sigmoid(wxhc[:, :self.out_channels])
f = torch.sigmoid(wxhc[:, self.out_channels:2 * self.out_channels])
g = torch.tanh(wxhc[:, 2 * self.out_channels:3 * self.out_channels])
o = torch.sigmoid(wxhc[:, 3 * self.out_channels:])
c_1 = f * c_0 + i * g
h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
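# ConvLSTMCell packs all four gate pre-activations into one 4*C-channel tensor
# and slices it. A minimal per-element NumPy sketch of that slicing and the
# standard LSTM state update (illustrative, convolutions omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

C = 4
rng = np.random.default_rng(0)
wxhc = rng.standard_normal(4 * C)   # packed i, f, g, o pre-activations
c_0 = rng.standard_normal(C)        # previous cell state
i = sigmoid(wxhc[:C])
f = sigmoid(wxhc[C:2 * C])
g = np.tanh(wxhc[2 * C:3 * C])
o = sigmoid(wxhc[3 * C:])
c_1 = f * c_0 + i * g               # cell update
h_1 = o * np.tanh(c_1)              # gated output
```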
class ConvLSTMCell_C(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=1, dilation=1, groups=1, bias=True):
super(ConvLSTMCell_C, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.padding_h = tuple(
k // 2 for k, s, p, d in zip(kernel_size, stride, padding, dilation))
self.dilation = dilation
self.groups = groups
self.weight_ih = nn.Parameter(torch.Tensor(
4 * out_channels, in_channels // groups, *kernel_size))
self.weight_hh = nn.Parameter(torch.Tensor(
4 * out_channels, out_channels // groups, *kernel_size))
# self.weight_ch = nn.Parameter(torch.Tensor(
# 3 * out_channels, out_channels // groups, *kernel_size))
if bias:
self.bias_ih = nn.Parameter(torch.Tensor(4 * out_channels))
self.bias_hh = nn.Parameter(torch.Tensor(4 * out_channels))
# self.bias_ch = nn.Parameter(torch.Tensor(3 * out_channels))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
# self.register_parameter('bias_ch', None)
# self.register_buffer('wc_blank', torch.zeros(1, 1, 1, 1))
self.reset_parameters()
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
# self.weight_ch.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
# self.bias_ch.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
# Cell uses a Hadamard product instead of a convolution?
# wc = F.conv2d(c_0, self.weight_ch, self.bias_ch, self.stride,
# self.padding_h, self.dilation, self.groups)
# wxhc = wx + wh + torch.cat([wc[:, :2 * self.out_channels],
# self.wc_blank.expand(wc.size(0), wc.size(1) // 3, wc.size(2), wc.size(3)),
# wc[:, 2 * self.out_channels:]],
# 1)
wxhc = wx + wh
# i = torch.sigmoid(wxhc[:, :self.out_channels])
# f = torch.sigmoid(wxhc[:, self.out_channels:2 * self.out_channels])
# g = torch.tanh(wxhc[:, 2 * self.out_channels:3 * self.out_channels])
# o = torch.sigmoid(wxhc[:, 3 * self.out_channels:])
i = hard_sigmoid(wxhc[:, :self.out_channels])
f = hard_sigmoid(wxhc[:, self.out_channels:2 * self.out_channels])
g = torch.tanh(wxhc[:, 2 * self.out_channels:3 * self.out_channels])
o = hard_sigmoid(wxhc[:, 3 * self.out_channels:])
c_1 = f * c_0 + i * g
h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
def hard_sigmoid(x, t=2.5):
return torch.clamp((x+t)/(2*t), 0, 1)
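# hard_sigmoid above is a piecewise-linear surrogate for the sigmoid:
# clamp((x + t) / (2t), 0, 1), which passes through 0.5 at x = 0 and
# saturates exactly at |x| >= t. A NumPy check with the same default t:

```python
import numpy as np

def hard_sigmoid(x, t=2.5):
    return np.clip((x + t) / (2 * t), 0, 1)

vals = hard_sigmoid(np.array([-5.0, -2.5, 0.0, 2.5, 5.0]))
```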
class ConvLSTMCell_CG1x1_noI(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=1, dilation=1, groups=1, bias=True):
super(ConvLSTMCell_CG1x1_noI, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.padding_h = tuple(
k // 2 for k, s, p, d in zip(kernel_size, stride, padding, dilation))
self.dilation = dilation
self.groups = groups
self.weight_ih = nn.Parameter(torch.Tensor(
out_channels, in_channels // groups, *kernel_size))
self.weight_hh = nn.Parameter(torch.Tensor(
out_channels, out_channels // groups, *kernel_size))
self.weight_ih_g = nn.Parameter(torch.Tensor(
2 * out_channels, in_channels // groups, 1, 1))
self.weight_hh_g = nn.Parameter(torch.Tensor(
2 * out_channels, out_channels // groups, 1, 1))
# self.weight_ch = nn.Parameter(torch.Tensor(
# 3 * out_channels, out_channels // groups, *kernel_size))
if bias:
self.bias_ih = nn.Parameter(torch.Tensor(out_channels))
self.bias_hh = nn.Parameter(torch.Tensor(out_channels))
# self.bias_ch = nn.Parameter(torch.Tensor(3 * out_channels))
self.bias_ih_g = nn.Parameter(torch.Tensor(2 * out_channels))
self.bias_hh_g = nn.Parameter(torch.Tensor(2 * out_channels))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
# self.register_parameter('bias_ch', None)
self.register_parameter('bias_ih_g', None)
self.register_parameter('bias_hh_g', None)
# self.register_buffer('wc_blank', torch.zeros(1, 1, 1, 1))
self.reset_parameters()
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
self.weight_ih_g.data.uniform_(-stdv, stdv)
self.weight_hh_g.data.uniform_(-stdv, stdv)
if self.bias_ih_g is not None:
self.bias_ih_g.data.uniform_(-stdv, stdv)
self.bias_hh_g.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
wx_g = F.conv2d(input, self.weight_ih_g, self.bias_ih_g, stride=1, padding=0)
wh_g = F.conv2d(h_0, self.weight_hh_g, self.bias_hh_g, stride=1, padding=0)
wxhc_g = wx_g + wh_g
wxhc = wx + wh
# i = hard_sigmoid(wxhc_g[:, :self.out_channels])
        f = hard_sigmoid(wxhc_g[:, :self.out_channels])  # forget gate: first half of the 2*C packed gates
g = torch.tanh(wxhc)
o = hard_sigmoid(wxhc_g[:, self.out_channels:])
# c_1 = f * c_0 + i * g
c_1 = f * c_0 + g
h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
class ConvLSTMCell_CG1x1_noO(ConvLSTMCell_CG1x1_noI):
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
self.weight_ih_g.data.uniform_(-stdv, stdv)
self.weight_hh_g.data.uniform_(-stdv, stdv)
if self.bias_ih_g is not None:
self.bias_ih_g.data.uniform_(-stdv, stdv)
self.bias_hh_g.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
wx_g = F.conv2d(input, self.weight_ih_g, self.bias_ih_g, stride=1, padding=0)
wh_g = F.conv2d(h_0, self.weight_hh_g, self.bias_hh_g, stride=1, padding=0)
wxhc_g = wx_g + wh_g
wxhc = wx + wh
i = hard_sigmoid(wxhc_g[:, :self.out_channels])
f = hard_sigmoid(wxhc_g[:, self.out_channels:])
g = torch.tanh(wxhc)
# o = hard_sigmoid(wxhc_g[:, self.out_channels:])
        c_1 = f * c_0 + i * g
h_1 = torch.tanh(c_1)
# h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
class ConvLSTMCell_CG1x1_noF(ConvLSTMCell_CG1x1_noI):
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
self.weight_ih_g.data.uniform_(-stdv, stdv)
self.weight_hh_g.data.uniform_(-stdv, stdv)
if self.bias_ih_g is not None:
self.bias_ih_g.data.uniform_(-stdv, stdv)
self.bias_hh_g.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
wx_g = F.conv2d(input, self.weight_ih_g, self.bias_ih_g, stride=1, padding=0)
wh_g = F.conv2d(h_0, self.weight_hh_g, self.bias_hh_g, stride=1, padding=0)
wxhc_g = wx_g + wh_g
wxhc = wx + wh
i = hard_sigmoid(wxhc_g[:, :self.out_channels])
# f = hard_sigmoid(wxhc_g[:, self.out_channels:])
g = torch.tanh(wxhc)
o = hard_sigmoid(wxhc_g[:, self.out_channels:])
c_1 = c_0 + i * g
# c_1 = f * c_0 + g
        h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
class ConvLSTMCell_CG1x1(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=1, dilation=1, groups=1, bias=True):
super(ConvLSTMCell_CG1x1, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.padding_h = tuple(
k // 2 for k, s, p, d in zip(kernel_size, stride, padding, dilation))
self.dilation = dilation
self.groups = groups
self.weight_ih = nn.Parameter(torch.Tensor(
out_channels, in_channels // groups, *kernel_size))
self.weight_hh = nn.Parameter(torch.Tensor(
out_channels, out_channels // groups, *kernel_size))
self.weight_ih_g = nn.Parameter(torch.Tensor(
3 * out_channels, in_channels // groups, 1, 1))
self.weight_hh_g = nn.Parameter(torch.Tensor(
3 * out_channels, out_channels // groups, 1, 1))
# self.weight_ch = nn.Parameter(torch.Tensor(
# 3 * out_channels, out_channels // groups, *kernel_size))
if bias:
self.bias_ih = nn.Parameter(torch.Tensor(out_channels))
self.bias_hh = nn.Parameter(torch.Tensor(out_channels))
# self.bias_ch = nn.Parameter(torch.Tensor(3 * out_channels))
self.bias_ih_g = nn.Parameter(torch.Tensor(3 * out_channels))
self.bias_hh_g = nn.Parameter(torch.Tensor(3 * out_channels))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
# self.register_parameter('bias_ch', None)
self.register_parameter('bias_ih_g', None)
self.register_parameter('bias_hh_g', None)
# self.register_buffer('wc_blank', torch.zeros(1, 1, 1, 1))
self.reset_parameters()
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
self.weight_ih_g.data.uniform_(-stdv, stdv)
self.weight_hh_g.data.uniform_(-stdv, stdv)
if self.bias_ih_g is not None:
self.bias_ih_g.data.uniform_(-stdv, stdv)
self.bias_hh_g.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
wx_g = F.conv2d(input, self.weight_ih_g, self.bias_ih_g, stride=1, padding=0)
wh_g = F.conv2d(h_0, self.weight_hh_g, self.bias_hh_g, stride=1, padding=0)
wxhc_g = wx_g + wh_g
wxhc = wx + wh
i = hard_sigmoid(wxhc_g[:, :self.out_channels])
f = hard_sigmoid(wxhc_g[:, self.out_channels:2 * self.out_channels])
g = torch.tanh(wxhc)
o = hard_sigmoid(wxhc_g[:, 2 * self.out_channels:])
c_1 = f * c_0 + i * g
h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
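Per element, the update in `forward` above reduces to `c_1 = f * c_0 + i * g` followed by `h_1 = o * tanh(c_1)`. A scalar sketch (hypothetical gate values, stdlib only) checks the two limiting cases of the forget/input gates:

```python
import math

def lstm_cell_update(i, f, o, g, c_0):
    # Per-element form of the cell update in ConvLSTMCell_CG1x1.forward
    c_1 = f * c_0 + i * g
    h_1 = o * math.tanh(c_1)
    return h_1, c_1

# closed input gate (i = 0): the old cell state passes through untouched
h, c = lstm_cell_update(i=0.0, f=1.0, o=1.0, g=0.7, c_0=0.3)
print(c)  # 0.3

# closed forget gate (f = 0): only the new candidate i*g survives
h, c = lstm_cell_update(i=1.0, f=0.0, o=1.0, g=0.5, c_0=10.0)
print(c)  # 0.5
```

The `_noI` and `_noF` variants earlier in the file simply pin `i` or `f` to 1 in this formula, trading gating capacity for fewer 1x1 gate channels.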
class ConvLSTMCell_CGpool(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=1, dilation=1, groups=1, bias=True):
super(ConvLSTMCell_CGpool, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.padding_h = tuple(
k // 2 for k, s, p, d in zip(kernel_size, stride, padding, dilation))
self.dilation = dilation
self.groups = groups
self.weight_ih = nn.Parameter(torch.Tensor(
out_channels, in_channels // groups, *kernel_size))
self.weight_hh = nn.Parameter(torch.Tensor(
out_channels, out_channels // groups, *kernel_size))
self.weight_ih_g = nn.Parameter(torch.Tensor(
3 * out_channels, in_channels // groups, 1, 1))
self.weight_hh_g = nn.Parameter(torch.Tensor(
3 * out_channels, out_channels // groups, 1, 1))
# self.weight_ch = nn.Parameter(torch.Tensor(
# 3 * out_channels, out_channels // groups, *kernel_size))
if bias:
self.bias_ih = nn.Parameter(torch.Tensor(out_channels))
self.bias_hh = nn.Parameter(torch.Tensor(out_channels))
# self.bias_ch = nn.Parameter(torch.Tensor(3 * out_channels))
self.bias_ih_g = nn.Parameter(torch.Tensor(3 * out_channels))
self.bias_hh_g = nn.Parameter(torch.Tensor(3 * out_channels))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
# self.register_parameter('bias_ch', None)
self.register_parameter('bias_ih_g', None)
self.register_parameter('bias_hh_g', None)
# self.register_buffer('wc_blank', torch.zeros(1, 1, 1, 1))
self.reset_parameters()
def reset_parameters(self):
n = 4 * self.in_channels
for k in self.kernel_size:
n *= k
stdv = 1. / math.sqrt(n)
self.weight_ih.data.uniform_(-stdv, stdv)
self.weight_hh.data.uniform_(-stdv, stdv)
# self.weight_ch.data.uniform_(-stdv, stdv)
if self.bias_ih is not None:
self.bias_ih.data.uniform_(-stdv, stdv)
self.bias_hh.data.uniform_(-stdv, stdv)
# self.bias_ch.data.uniform_(-stdv, stdv)
self.weight_ih_g.data.uniform_(-stdv, stdv)
self.weight_hh_g.data.uniform_(-stdv, stdv)
# self.weight_ch.data.uniform_(-stdv, stdv)
if self.bias_ih_g is not None:
self.bias_ih_g.data.uniform_(-stdv, stdv)
self.bias_hh_g.data.uniform_(-stdv, stdv)
# self.bias_ch.data.uniform_(-stdv, stdv)
def forward(self, input, hx):
h_0, c_0 = hx
wx = F.conv2d(input, self.weight_ih, self.bias_ih,
self.stride, self.padding, self.dilation, self.groups)
wh = F.conv2d(h_0, self.weight_hh, self.bias_hh, self.stride,
self.padding_h, self.dilation, self.groups)
wx_g = F.conv2d(input, self.weight_ih_g, self.bias_ih_g, stride=1, padding=0)
wh_g = F.conv2d(h_0, self.weight_hh_g, self.bias_hh_g, stride=1, padding=0)
wxhc_g = wx_g + wh_g
wxhc = wx + wh
i = hard_sigmoid(wxhc_g[:, :self.out_channels].mean([1,2,3]))
f = hard_sigmoid(wxhc_g[:, self.out_channels:2 * self.out_channels])
g = torch.tanh(wxhc)
o = hard_sigmoid(wxhc_g[:, 2 * self.out_channels:])
c_1 = f * c_0 + i[:,None,None,None] * g
h_1 = o * torch.tanh(c_1)
return h_1, (h_1, c_1)
class ConvGRUCell(nn.Module):
"""
Generate a convolutional GRU cell
"""
def __init__(self, input_size, hidden_size, kernel_size):
super().__init__()
padding = kernel_size // 2
self.input_size = input_size
self.hidden_size = hidden_size
self.reset_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.update_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.out_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
init.orthogonal_(self.reset_gate.weight)
init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
init.constant_(self.reset_gate.bias, 0.)
init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
def forward(self, input_, prev_state):
# get batch and spatial sizes
batch_size = input_.data.size()[0]
spatial_size = input_.data.size()[2:]
# generate empty prev_state, if None is provided
if prev_state is None:
            shape = list(input_.shape)  # torch.Size is immutable, so copy to a mutable list
            shape[1] = self.hidden_size
prev_state = input_.new_zeros(shape)
# data size is [batch, channel, height, width]
stacked_inputs = torch.cat([input_, prev_state], dim=1)
update = torch.sigmoid(self.update_gate(stacked_inputs))
reset = torch.sigmoid(self.reset_gate(stacked_inputs))
out_inputs = torch.tanh(self.out_gate(torch.cat([input_, prev_state * reset], dim=1)))
new_state = prev_state * (1 - update) + out_inputs * update
return new_state
def reset_parameters(self):
init.orthogonal_(self.reset_gate.weight)
init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
init.constant_(self.reset_gate.bias, 0.)
init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
# n = 4 * self.in_channels
# for k in self.kernel_size:
# n *= k
# stdv = 1. / math.sqrt(n)
# self.weight_ih.data.uniform_(-stdv, stdv)
# self.weight_hh.data.uniform_(-stdv, stdv)
# # self.weight_ch.data.uniform_(-stdv, stdv)
# if self.bias_ih is not None:
# self.bias_ih.data.uniform_(-stdv, stdv)
# self.bias_hh.data.uniform_(-stdv, stdv)
# # self.bias_ch.data.uniform_(-stdv, stdv)
class ConvGRUCell_out(nn.Module):
"""
Generate a convolutional GRU cell
"""
def __init__(self, input_size, hidden_size, kernel_size):
super().__init__()
padding = kernel_size // 2
self.input_size = input_size
self.hidden_size = hidden_size
# self.reset_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.update_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.out_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
# init.orthogonal_(self.reset_gate.weight)
init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
# init.constant_(self.reset_gate.bias, 0.)
init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
def forward(self, input_, prev_state):
# get batch and spatial sizes
batch_size = input_.data.size()[0]
spatial_size = input_.data.size()[2:]
# generate empty prev_state, if None is provided
if prev_state is None:
            shape = list(input_.shape)  # torch.Size is immutable, so copy to a mutable list
            shape[1] = self.hidden_size
prev_state = input_.new_zeros(shape)
# data size is [batch, channel, height, width]
stacked_inputs = torch.cat([input_, prev_state], dim=1)
update = torch.sigmoid(self.update_gate(stacked_inputs))
# reset = torch.sigmoid(self.reset_gate(stacked_inputs))
# out_inputs = torch.tanh(self.out_gate(torch.cat([input_, prev_state * reset], dim=1)))
out_inputs = torch.tanh(self.out_gate(stacked_inputs))
new_state = prev_state * (1 - update) + out_inputs * update
return new_state
def reset_parameters(self):
# init.orthogonal_(self.reset_gate.weight)
init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
# init.constant_(self.reset_gate.bias, 0.)
init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
class ConvGRUCell_in(nn.Module):
"""
Generate a convolutional GRU cell
"""
def __init__(self, input_size, hidden_size, kernel_size):
super().__init__()
padding = kernel_size // 2
self.input_size = input_size
self.hidden_size = hidden_size
self.reset_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
# self.update_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.out_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
init.orthogonal_(self.reset_gate.weight)
# init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
init.constant_(self.reset_gate.bias, 0.)
# init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
def forward(self, input_, prev_state):
# get batch and spatial sizes
batch_size = input_.data.size()[0]
spatial_size = input_.data.size()[2:]
# generate empty prev_state, if None is provided
if prev_state is None:
            shape = list(input_.shape)  # torch.Size is immutable, so copy to a mutable list
            shape[1] = self.hidden_size
prev_state = input_.new_zeros(shape)
# data size is [batch, channel, height, width]
stacked_inputs = torch.cat([input_, prev_state], dim=1)
# update = torch.sigmoid(self.update_gate(stacked_inputs))
reset = torch.sigmoid(self.reset_gate(stacked_inputs))
out_inputs = torch.tanh(self.out_gate(torch.cat([input_, prev_state * reset], dim=1)))
new_state = out_inputs
# new_state = prev_state * (1 - update) + out_inputs * update
return new_state
def reset_parameters(self):
init.orthogonal_(self.reset_gate.weight)
# init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
init.constant_(self.reset_gate.bias, 0.)
# init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
class ConvGRUCell_RNN(nn.Module):
"""
    Convolutional GRU cell without the reset and update gates (basically a normal RNN)
"""
def __init__(self, input_size, hidden_size, kernel_size):
super().__init__()
padding = kernel_size // 2
self.input_size = input_size
self.hidden_size = hidden_size
# self.reset_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
# self.update_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
self.out_gate = nn.Conv2d(input_size + hidden_size, hidden_size, kernel_size, padding=padding)
# init.orthogonal_(self.reset_gate.weight)
# init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
# init.constant_(self.reset_gate.bias, 0.)
# init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
def forward(self, input_, prev_state):
# get batch and spatial sizes
batch_size = input_.data.size()[0]
spatial_size = input_.data.size()[2:]
# generate empty prev_state, if None is provided
if prev_state is None:
            shape = list(input_.shape)  # torch.Size is immutable, so copy to a mutable list
            shape[1] = self.hidden_size
prev_state = input_.new_zeros(shape)
# data size is [batch, channel, height, width]
stacked_inputs = torch.cat([input_, prev_state], dim=1)
# update = torch.sigmoid(self.update_gate(stacked_inputs))
# reset = torch.sigmoid(self.reset_gate(stacked_inputs))
out_inputs = torch.tanh(self.out_gate(stacked_inputs))
# new_state = prev_state * (1 - update) + out_inputs * update
new_state = out_inputs
return new_state
def reset_parameters(self):
# init.orthogonal_(self.reset_gate.weight)
# init.orthogonal_(self.update_gate.weight)
init.orthogonal_(self.out_gate.weight)
# init.constant_(self.reset_gate.bias, 0.)
# init.constant_(self.update_gate.bias, 0.)
init.constant_(self.out_gate.bias, 0.)
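All the GRU variants above share the convex state update `new_state = prev_state * (1 - update) + out_inputs * update`, so the update gate interpolates between keeping the old state and adopting the candidate. A scalar sketch (hypothetical values, stdlib only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_state_update(prev, candidate, update_logit):
    # Scalar form of: new_state = prev_state * (1 - update) + out_inputs * update
    u = sigmoid(update_logit)
    return prev * (1.0 - u) + candidate * u

print(gru_state_update(prev=0.0, candidate=1.0, update_logit=0.0))   # u = 0.5 -> 0.5
print(gru_state_update(prev=2.0, candidate=-1.0, update_logit=-40))  # u ~ 0 -> stays at 2.0
```

`ConvGRUCell_out` drops the reset gate but keeps this interpolation; `ConvGRUCell_in` and `ConvGRUCell_RNN` drop the update gate, which is the `u = 1` limit (the state is fully replaced each step).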
# File: pytorch_wrapper/samplers.py (repo: skatsaounis/pytorch-wrapper, MIT license)
import random
from torch.utils.data.sampler import Sampler
def _batchify(l, batch_size):
for i in range(0, len(l), batch_size):
yield l[i:i + batch_size]
def _flatten(l):
return [item for sublist in l for item in sublist]
class SubsetSequentialSampler(Sampler):
"""
Samples elements sequentially based on a list of indexes.
"""
def __init__(self, indexes):
"""
:param indexes: a list of indexes.
"""
super(SubsetSequentialSampler, self).__init__(None)
self._indexes = indexes
def __iter__(self):
return (self._indexes[i] for i in range(len(self._indexes)))
def __len__(self):
return len(self._indexes)
class OrderedBatchWiseRandomSampler(Sampler):
"""
Semi-randomly samples indexes from a dataset ensuring that the corresponding examples will have similar values.
Values are returned by a callable.
"""
def __init__(self, data_source, get_order_value_callable, batch_size, seed=1234):
"""
:param data_source: a data source (usually a dataset object).
:param get_order_value_callable: a callable that takes as input the example's index and returns the ordering
value.
:param batch_size: the batch size.
:param seed: the initial seed.
"""
super(OrderedBatchWiseRandomSampler, self).__init__(None)
self._sorted_indexes = sorted(list(range(len(data_source))), key=lambda x: get_order_value_callable(x))
self._batch_size = batch_size
self._current_seed = seed
def __iter__(self):
self._current_seed += 1
rand_state = random.Random(self._current_seed)
indexes = list(_batchify(self._sorted_indexes.copy(), self._batch_size))
rand_state.shuffle(indexes)
return iter(_flatten(indexes))
def __len__(self):
return len(self._sorted_indexes)
class SubsetOrderedBatchWiseRandomSampler(Sampler):
"""
Semi-randomly samples indexes from a list ensuring that the corresponding examples will have similar values. Values
are returned by a callable.
"""
def __init__(self, indexes, get_order_value_callable, batch_size, seed=1234):
"""
:param indexes: a list of indexes.
:param get_order_value_callable: a callable that takes as input the example's index and returns the ordering
value.
:param batch_size: the batch size.
:param seed: the initial seed.
"""
super(SubsetOrderedBatchWiseRandomSampler, self).__init__(None)
self._sorted_indexes = sorted(indexes, key=lambda i: get_order_value_callable(i))
self._batch_size = batch_size
self._current_seed = seed
def __iter__(self):
self._current_seed += 1
rand_state = random.Random(self._current_seed)
indexes = list(_batchify(self._sorted_indexes.copy(), self._batch_size))
rand_state.shuffle(indexes)
return iter(_flatten(indexes))
def __len__(self):
return len(self._sorted_indexes)
class OrderedSequentialSampler(Sampler):
"""
Samples elements from a dataset ordered by a value returned by a callable for each example.
"""
def __init__(self, data_source, get_order_value_callable):
"""
:param data_source: a data source (usually a dataset object).
:param get_order_value_callable: a callable that takes as input the example's index and returns the ordering
value.
"""
super(OrderedSequentialSampler, self).__init__(None)
self._sorted_indexes = sorted(list(range(len(data_source))), key=lambda i: get_order_value_callable(i))
def __iter__(self):
return iter(self._sorted_indexes)
def __len__(self):
return len(self._sorted_indexes)
class SubsetOrderedSequentialSampler(Sampler):
"""
Samples elements from a list of indexes ordered by a value returned by a callable for each example.
"""
def __init__(self, indexes, get_order_value_callable):
"""
:param indexes: a list of indexes.
:param get_order_value_callable: a callable that takes as input the example's index and returns the ordering
value.
"""
super(SubsetOrderedSequentialSampler, self).__init__(None)
self._sorted_indexes = sorted(indexes, key=lambda i: get_order_value_callable(i))
def __iter__(self):
return iter(self._sorted_indexes)
def __len__(self):
return len(self._sorted_indexes)
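The batch-wise shuffle used by `OrderedBatchWiseRandomSampler` and its subset variant — sort by the ordering value, group into batches, shuffle the batches (not the elements), then flatten — can be exercised in isolation with the same helpers (plain Python, no torch):

```python
import random

def _batchify(l, batch_size):
    # same helper as above: yield consecutive slices of size batch_size
    for i in range(0, len(l), batch_size):
        yield l[i:i + batch_size]

def _flatten(l):
    return [item for sublist in l for item in sublist]

# indexes already sorted by the ordering value, grouped into batches of 2;
# only the order of whole batches is randomized
sorted_indexes = [0, 1, 2, 3, 4, 5]
batches = list(_batchify(sorted_indexes, 2))
random.Random(1234).shuffle(batches)
order = _flatten(batches)
print(order)  # a permutation of 0..5 in which (0,1), (2,3), (4,5) stay adjacent
```

This gives randomized batch order across epochs while keeping each batch's examples similar in the ordering value (e.g. sequence length), which is the point of length-based bucketing.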
# File: Code/my_optimizers.py (repo: jiahuanluo/label-inference-attacks, MIT license)
import torch
import matplotlib.pyplot as plt
from time import time
last_whole_model_params_list = []
new_whole_model_params_list = []
batch_cos_list = []
near_minimum = False
class MaliciousSGD(Optimizer):
def __init__(self, params, lr=1e-2, momentum=0, dampening=0,
weight_decay=0, nesterov=False, gamma_lr_scale_up=1.0, min_grad_to_process=1e-4):
self.last_parameters_grads = []
self.gamma_lr_scale_up = gamma_lr_scale_up
self.min_grad_to_process = min_grad_to_process
self.min_ratio = 1.0
self.max_ratio = 5.0
self.certain_grad_ratios = torch.tensor([])
if lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(MaliciousSGD, self).__init__(params, defaults)
def __setstate__(self, state):
super(MaliciousSGD, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
id_group = 0
        # lazily allocate one gradient-history slot per param group (only grows once,
        # instead of appending a fresh empty list on every call to step())
        while len(self.last_parameters_grads) < len(self.param_groups):
            self.last_parameters_grads.append([])
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
start = time()
id_parameter = 0
for p in group['params']:
if p.grad is None:
continue
if weight_decay != 0:
p.grad.data.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.clone(p.grad.data).detach()
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, p.grad.data)
if nesterov:
p.grad.data = p.grad.data.add(momentum, buf)
else:
p.grad.data = buf
if not near_minimum:
if len(self.last_parameters_grads[id_group]) <= id_parameter:
self.last_parameters_grads[id_group].append(p.grad.clone().detach())
else:
last_parameter_grad = self.last_parameters_grads[id_group][id_parameter]
current_parameter_grad = p.grad.clone().detach()
ratio_grad_scale_up = 1.0 + self.gamma_lr_scale_up * (current_parameter_grad / (last_parameter_grad + 1e-7))
ratio_grad_scale_up = torch.clamp(ratio_grad_scale_up, self.min_ratio, self.max_ratio)
p.grad.mul_(ratio_grad_scale_up)
end = time()
current_parameter_grad = p.grad.clone().detach()
self.last_parameters_grads[id_group][id_parameter] = current_parameter_grad
p.data.add_(-group['lr'], p.grad.data)
id_parameter += 1
id_group += 1
return loss
class MaliciousSignSGD(Optimizer):
def __init__(self, params, lr=1e-2, momentum=0, dampening=0,
weight_decay=0, nesterov=False, gamma_lr_scale_up=1.0, min_grad_to_process=1e-4):
self.last_parameters_grads = []
self.gamma_lr_scale_up = gamma_lr_scale_up
self.min_grad_to_process = min_grad_to_process
self.certain_grad_ratios = []
if lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(MaliciousSignSGD, self).__init__(params, defaults)
def __setstate__(self, state):
super(MaliciousSignSGD, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
# Indicate which module's paras we are processing
id_parameter = 0
for p in group['params']:
if p.grad is None:
continue
if len(self.last_parameters_grads) <= id_parameter:
self.last_parameters_grads.append(p.grad.clone().detach().numpy())
else:
last_parameter_grad = self.last_parameters_grads[id_parameter]
current_parameter_grad = p.grad.clone().detach().numpy()
ratio_grad_scale_up = 1.0 + self.gamma_lr_scale_up * (current_parameter_grad / (last_parameter_grad + 1e-7))
grad_shape = current_parameter_grad.shape
last_parameter_grad = last_parameter_grad.flatten()
grad_length = len(last_parameter_grad)
ratio_grad_scale_up = ratio_grad_scale_up.flatten()
for i in range(grad_length):
if abs(last_parameter_grad[i]) < self.min_grad_to_process:
ratio_grad_scale_up[i] = 1.0
ratio_grad_scale_up[i] = max(ratio_grad_scale_up[i], 1.0)
ratio_grad_scale_up[i] = min(ratio_grad_scale_up[i], 5.0)
ratio_grad_scale_up = ratio_grad_scale_up.reshape(grad_shape)
self.last_parameters_grads[id_parameter] = current_parameter_grad
p.grad = p.grad.mul(torch.tensor(ratio_grad_scale_up))
id_parameter += 1
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, d_p)
if nesterov:
d_p = d_p.add(momentum, buf)
else:
d_p = buf
# sign SGD only uses sign of gradient to update model
torch.sign(d_p, out=d_p)
p.data.add_(-group['lr'], d_p)
return loss
class SignSGD(Optimizer):
def __init__(self, params, lr=1e-2, momentum=0, dampening=0,
weight_decay=0, nesterov=False):
if lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(SignSGD, self).__init__(params, defaults)
def __setstate__(self, state):
super(SignSGD, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
for p in group['params']:
if p.grad is None:
continue
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, d_p)
if nesterov:
d_p = d_p.add(momentum, buf)
else:
d_p = buf
torch.sign(d_p, out=d_p)
p.data.add_(-group['lr'], d_p)
        return loss
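The gradient boosting rule shared by `MaliciousSGD` and `MaliciousSignSGD` scales each gradient by `clamp(1 + gamma * g_t / (g_{t-1} + 1e-7), min_ratio, max_ratio)`. A scalar sketch (default `gamma_lr_scale_up=1.0`, ratio bounds `[1.0, 5.0]` as in `MaliciousSGD`):

```python
def malicious_scale(curr_grad, last_grad, gamma=1.0, min_ratio=1.0, max_ratio=5.0):
    # Scalar form of the scale-up ratio computed inside MaliciousSGD.step()
    ratio = 1.0 + gamma * (curr_grad / (last_grad + 1e-7))
    return min(max(ratio, min_ratio), max_ratio)

print(malicious_scale(1.0, 1.0))   # ~2.0: gradient direction persisted, so boost
print(malicious_scale(-1.0, 1.0))  # clamped up to min_ratio = 1.0 (never shrink)
print(malicious_scale(10.0, 1.0))  # clamped down to max_ratio = 5.0
```

Consistent gradient directions across steps therefore get amplified (up to 5x), while sign flips fall back to plain SGD, which is how the malicious party enlarges its local updates without touching the learning rate.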
90dcd59decc2eaa192bbee5adfba03c783376170 | 14,054 | py | Python | tests/dhcpv4/relay/test_v4_request_relay_part1.py | isc-projects/forge | dfec8b41003d6b5a229f69ee93616e0e5cc6d71b | [
"0BSD"
] | 22 | 2015-02-27T11:51:05.000Z | 2022-02-28T12:39:29.000Z | tests/dhcpv4/relay/test_v4_request_relay_part1.py | isc-projects/forge | dfec8b41003d6b5a229f69ee93616e0e5cc6d71b | [
"0BSD"
] | 16 | 2018-10-30T15:00:12.000Z | 2019-01-11T17:55:13.000Z | tests/dhcpv4/relay/test_v4_request_relay_part1.py | isc-projects/forge | dfec8b41003d6b5a229f69ee93616e0e5cc6d71b | [
"0BSD"
] | 11 | 2015-02-27T11:51:36.000Z | 2021-03-30T08:33:54.000Z | """DHCPv4 address request process"""
# pylint: disable=invalid-name,line-too-long
import pytest
import misc
import srv_msg
import srv_control
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_chaddr():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_include_option(1)
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
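The test above drives a relayed DORA exchange: the simulated relay sends from port 67 with `giaddr` set and `hops=1`, and the server answers from the 192.168.50.0/24 pool. A minimal data-only sketch of that exchange (hypothetical structure, independent of the forge framework):

```python
# Hypothetical summary of the relayed DHCPv4 exchange exercised above.
EXCHANGE = [
    ("client->relay->server", "DISCOVER", {"giaddr": "$(GIADDR4)", "hops": 1}),
    ("server->relay->client", "OFFER",    {"yiaddr": "192.168.50.1"}),
    ("client->relay->server", "REQUEST",  {"requested_addr": "192.168.50.1"}),
    ("server->relay->client", "ACK",      {"yiaddr": "192.168.50.1"}),
]

def message_types(exchange):
    return [msg_type for _direction, msg_type, _fields in exchange]

print(message_types(EXCHANGE))  # ['DISCOVER', 'OFFER', 'REQUEST', 'ACK']
```

Because `giaddr` identifies the relay's subnet, the server selects the 192.168.50.0/24 pool even though the unicast reply goes back to the relay rather than the client.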
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_chaddr_empty_pool():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_requests_option(1)
srv_msg.client_sets_value('Client', 'chaddr', 'ff:01:02:03:ff:04')
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_client_id():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_client_id_empty_pool():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00020304050607')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_client_id_chaddr_empty_pool():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_include_option(61)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:11')
srv_msg.client_does_include_with_value('client_id', '11020304050607')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '11020304050607')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
@pytest.mark.v4
@pytest.mark.relay
@pytest.mark.request
def test_v4_request_relay_selecting_success_second_request_fail():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.network_variable('source_port', 67)
srv_msg.network_variable('source_address', '$(GIADDR4)')
srv_msg.network_variable('destination_address', '$(SRV4_ADDR)')
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_include_option(1)
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_include_option(1)
srv_msg.response_check_include_option(54)
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:22:11:00')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_requests_option(1)
srv_msg.client_sets_value('Client', 'giaddr', '$(GIADDR4)')
srv_msg.client_sets_value('Client', 'hops', 1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'NAK')
srv_msg.response_check_include_option(54)
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
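The relayed tests above all exercise the same server-side decision: with a one-address pool (`192.168.50.1-192.168.50.1`), the first client's SELECTING-state REQUEST draws an ACK, and a second client's REQUEST for the same leased address draws a NAK. A hedged, framework-free sketch of that decision (not part of the forge API; `respond_to_request` and its lease-table shape are illustrative assumptions):

```python
def respond_to_request(leases, pool, chaddr, requested_addr):
    """Return ('ACK', addr) or ('NAK', None) for a SELECTING-state REQUEST.

    leases: dict mapping address -> chaddr of the current holder.
    pool:   set of addresses the server may allocate.
    """
    holder = leases.get(requested_addr)
    if requested_addr in pool and (holder is None or holder == chaddr):
        leases[requested_addr] = chaddr
        return ('ACK', requested_addr)
    return ('NAK', None)

leases = {}
pool = {'192.168.50.1'}
# First client gets the single lease; a different chaddr is then refused.
first = respond_to_request(leases, pool, '00:00:00:00:00:00', '192.168.50.1')
second = respond_to_request(leases, pool, '00:00:00:22:11:00', '192.168.50.1')
```

This mirrors `test_v4_request_relay_selecting_success_second_request_fail` above: `first` is the ACK path, `second` the NAK path.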
# all_moves.py — rezwan2525/Chess-AI (MIT licensed)
import pygame
from constants import *
from colors import *
from board_utility import *
def draw_single_possible_square(selectedX, selectedY):
pygame.draw.circle(SCREEN,SELECETED_COLOR,coordinate_to_center_pixel(selectedX, selectedY),15)
def draw_pawn_moves(selectedX, selectedY):
    if all_piece_pos[selectedY][selectedX] == WHITE_PAWN:
        # One- and two-square advances; a full rule set would only allow the
        # two-square move from the pawn's starting rank.
        for i in range(1, 3):
            if (selectedY + i) < len(all_piece_pos) and all_piece_pos[selectedY + i][selectedX] == EMPTY_PIECE:
                draw_single_possible_square(selectedX, selectedY + i)
            else:
                break  # blocked: a pawn cannot jump over a piece
    elif all_piece_pos[selectedY][selectedX] == BLACK_PAWN:
        for i in range(1, 3):
            # Guard against negative indices, which would wrap to the far row.
            if (selectedY - i) >= 0 and all_piece_pos[selectedY - i][selectedX] == EMPTY_PIECE:
                draw_single_possible_square(selectedX, selectedY - i)
            else:
                break  # blocked or off the board
def draw_rook_moves(selectedX, selectedY):
    if all_piece_pos[selectedY][selectedX] == WHITE_ROOK or all_piece_pos[selectedY][selectedX] == BLACK_ROOK:
        # bottom
        for i in range(selectedY + 1, 8):
            if all_piece_pos[i][selectedX] != EMPTY_PIECE:
                break
            if is_the_position_on_board(selectedX, i):
                draw_single_possible_square(selectedX, i)
        # top
        for i in range(selectedY - 1, -1, -1):
            if all_piece_pos[i][selectedX] != EMPTY_PIECE:
                break
            if is_the_position_on_board(selectedX, i):
                draw_single_possible_square(selectedX, i)
        # right (scan to the board edge, column 8, not column 4)
        for i in range(selectedX + 1, 8):
            if all_piece_pos[selectedY][i] != EMPTY_PIECE:
                break
            if is_the_position_on_board(i, selectedY):
                draw_single_possible_square(i, selectedY)
        # left
        for i in range(selectedX - 1, -1, -1):
            if all_piece_pos[selectedY][i] != EMPTY_PIECE:
                break
            if is_the_position_on_board(i, selectedY):
                draw_single_possible_square(i, selectedY)


def draw_bishop_moves(selectedX, selectedY):
    # Diagonal rays: (1, 1) bottom-right, (-1, -1) top-left,
    # (1, -1) bottom-left, (-1, 1) top-right.
    # This block previously sat inside draw_rook_moves, giving rooks diagonal
    # moves. WHITE_BISHOP / BLACK_BISHOP are assumed to be defined in
    # constants alongside the other piece constants used above.
    if all_piece_pos[selectedY][selectedX] in (WHITE_BISHOP, BLACK_BISHOP):
        for d in [(1, 1), (-1, -1), (1, -1), (-1, 1)]:
            newX = selectedX
            newY = selectedY
            for i in range(0, 8):
                newX = newX + d[1]
                newY = newY + d[0]
                if not is_the_position_on_board(newX, newY) or all_piece_pos[newY][newX] != EMPTY_PIECE:
                    break
                draw_single_possible_square(newX, newY)


def draw_king_moves(selectedX, selectedY):
    if all_piece_pos[selectedY][selectedX] == WHITE_KING or all_piece_pos[selectedY][selectedX] == BLACK_KING:
        for d in [(1, 1), (-1, -1), (1, -1), (-1, 1), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            newX = selectedX + d[1]
            newY = selectedY + d[0]
            # Skip (do not break) on a blocked or off-board square so the
            # remaining directions are still examined.
            if is_the_position_on_board(newX, newY) and all_piece_pos[newY][newX] == EMPTY_PIECE:
                draw_single_possible_square(newX, newY)


def draw_queen_moves(selectedX, selectedY):
    if all_piece_pos[selectedY][selectedX] == WHITE_QUEEN or all_piece_pos[selectedY][selectedX] == BLACK_QUEEN:
        # bottom
        for i in range(selectedY + 1, 8):
            if all_piece_pos[i][selectedX] != EMPTY_PIECE:
                break
            if is_the_position_on_board(selectedX, i):
                draw_single_possible_square(selectedX, i)
        # top
        for i in range(selectedY - 1, -1, -1):
            if all_piece_pos[i][selectedX] != EMPTY_PIECE:
                break
            if is_the_position_on_board(selectedX, i):
                draw_single_possible_square(selectedX, i)
        # right (scan to the board edge, column 8, not column 4)
        for i in range(selectedX + 1, 8):
            if all_piece_pos[selectedY][i] != EMPTY_PIECE:
                break
            if is_the_position_on_board(i, selectedY):
                draw_single_possible_square(i, selectedY)
        # left
        for i in range(selectedX - 1, -1, -1):
            if all_piece_pos[selectedY][i] != EMPTY_PIECE:
                break
            if is_the_position_on_board(i, selectedY):
                draw_single_possible_square(i, selectedY)
        # diagonal moves
        for d in [(1, 1), (-1, -1), (1, -1), (-1, 1)]:
            newX = selectedX
            newY = selectedY
            for i in range(0, 8):
                newX = newX + d[1]
                newY = newY + d[0]
                if not is_the_position_on_board(newX, newY) or all_piece_pos[newY][newX] != EMPTY_PIECE:
                    break
                draw_single_possible_square(newX, newY)


def draw_knight_moves(selectedX, selectedY):
    # A knight makes a single L-shaped jump, not a repeated ray; these jump
    # offsets previously sat inside draw_queen_moves by mistake.
    # WHITE_KNIGHT / BLACK_KNIGHT are assumed to be defined in constants.
    if all_piece_pos[selectedY][selectedX] in (WHITE_KNIGHT, BLACK_KNIGHT):
        for d in [(-2, -1), (-2, 1), (2, 1), (2, -1), (1, 2), (1, -2), (-1, 2), (-1, -2)]:
            newX = selectedX + d[1]
            newY = selectedY + d[0]
            if is_the_position_on_board(newX, newY) and all_piece_pos[newY][newX] == EMPTY_PIECE:
                draw_single_possible_square(newX, newY)


def select_draw_possible_moves_square(selectedX, selectedY):
    draw_pawn_moves(selectedX, selectedY)
    draw_rook_moves(selectedX, selectedY)
    draw_bishop_moves(selectedX, selectedY)
    draw_knight_moves(selectedX, selectedY)
    draw_king_moves(selectedX, selectedY)
    draw_queen_moves(selectedX, selectedY)
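The rook, bishop, and queen highlighters above duplicate the same ray-walking loop. A hedged, self-contained sketch of a shared helper — a pure function, independent of the pygame drawing code; the 8×8 board shape and the `EMPTY` sentinel are assumptions standing in for the module's `all_piece_pos` and `EMPTY_PIECE`:

```python
EMPTY = '.'  # stand-in for EMPTY_PIECE

def ray_squares(board, x, y, directions, max_steps=8):
    """Walk each (dy, dx) ray from (x, y), collecting empty squares until blocked."""
    squares = []
    for dy, dx in directions:
        nx, ny = x, y
        for _ in range(max_steps):
            nx, ny = nx + dx, ny + dy
            if not (0 <= nx < 8 and 0 <= ny < 8) or board[ny][nx] != EMPTY:
                break
            squares.append((nx, ny))
    return squares

ROOK_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
BISHOP_DIRS = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
QUEEN_DIRS = ROOK_DIRS + BISHOP_DIRS

board = [[EMPTY] * 8 for _ in range(8)]
board[3][6] = 'P'  # blocker to the right of (3, 3)
moves = ray_squares(board, 3, 3, ROOK_DIRS)
```

With a helper like this, each `draw_*_moves` function would reduce to one call plus a loop over `draw_single_possible_square`, and `max_steps=1` would cover the king.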
# tests/test_models.py — CMPUT401FSJ/FSJAwards (MIT licensed)
from django.test import TestCase
from django.db.utils import IntegrityError
from django.contrib.auth.models import User
from ..models import Adjudicator, Coordinator, Student, Program, YearOfStudy, Award, Application, Committee, Comment, Ranking
from django.db.models.deletion import ProtectedError
import datetime
import pytz
class AdjudicatorModelTests(TestCase):
def setUp(self):
self.ccid = "Adjudicator"
self.first_name = "An"
self.last_name = "Adjudicator"
self.email = "anAdjudicator@test.com"
Adjudicator.objects.create(ccid = self.ccid, first_name = self.first_name,
last_name = self.last_name, email = self.email)
def test_get_adjudicator(self):
obj = Adjudicator.objects.get(ccid = self.ccid)
self.assertEqual(obj.ccid, self.ccid)
self.assertEqual(obj.first_name, self.first_name)
self.assertEqual(obj.last_name, self.last_name)
self.assertEqual(obj.email, self.email)
def test_create_duplicate_adjudicator(self):
with self.assertRaises(IntegrityError):
Adjudicator.objects.create(ccid = self.ccid)
def test_user_model_is_created_with_adjudicator(self):
obj = Adjudicator.objects.get(ccid = self.ccid)
user = obj.user
self.assertEqual(user.username, obj.ccid)
def test_user_model_is_correct_for_adjudicator(self):
obj = Adjudicator.objects.get(ccid = self.ccid)
user = obj.user
user2 = User.objects.get(username = self.ccid)
self.assertEqual(user, user2)
def test_delete_adjudicator(self):
obj = Adjudicator.objects.get(ccid = self.ccid)
ccid = obj.ccid
self.assertIsNotNone(obj)
user = User.objects.get(username = self.ccid)
self.assertIsNotNone(user)
obj.delete()
with self.assertRaises(Adjudicator.DoesNotExist):
obj = Adjudicator.objects.get(ccid = self.ccid)
with self.assertRaises(User.DoesNotExist):
user = User.objects.get(username = self.ccid)
class CoordinatorModelTests(TestCase):
def setUp(self):
self.ccid = "Coordinator"
self.first_name = "A"
self.last_name = "Coordinator"
self.email = "aCoordinator@test.com"
Coordinator.objects.create(ccid = self.ccid, first_name = self.first_name,
last_name = self.last_name, email = self.email)
def test_get_coordinator(self):
obj = Coordinator.objects.get(ccid = self.ccid)
self.assertEqual(obj.ccid, self.ccid)
self.assertEqual(obj.first_name, self.first_name)
self.assertEqual(obj.last_name, self.last_name)
self.assertEqual(obj.email, self.email)
def test_create_duplicate_coordinator(self):
with self.assertRaises(IntegrityError):
Coordinator.objects.create(ccid = self.ccid)
def test_user_model_is_created_with_coordinator(self):
obj = Coordinator.objects.get(ccid = self.ccid)
user = obj.user
self.assertEqual(user.username, obj.ccid)
def test_user_model_is_correct_for_coordinator(self):
obj = Coordinator.objects.get(ccid = self.ccid)
user = obj.user
user2 = User.objects.get(username = self.ccid)
self.assertEqual(user, user2)
def test_delete_coordinator(self):
obj = Coordinator.objects.get(ccid = self.ccid)
ccid = obj.ccid
self.assertIsNotNone(obj)
user = User.objects.get(username = self.ccid)
self.assertIsNotNone(user)
obj.delete()
with self.assertRaises(Coordinator.DoesNotExist):
obj = Coordinator.objects.get(ccid = self.ccid)
with self.assertRaises(User.DoesNotExist):
user = User.objects.get(username = self.ccid)
class StudentModelTests(TestCase):
def setUp(self):
self.ccid = "Student"
self.first_name = "A"
self.last_name = "Student"
self.email = "aStudent@test.com"
self.student_id = '1'
self.programcode = "SP"
self.programname = "StudentProgram"
self.yearname = "StudentYear"
program = Program.objects.create(code = self.programcode, name = self.programname)
year = YearOfStudy.objects.create(year = self.yearname)
Student.objects.create(ccid = self.ccid, first_name = self.first_name, last_name = self.last_name,
email = self.email, student_id = self.student_id, year = year, program = program)
def test_get_student(self):
obj = Student.objects.get(ccid = self.ccid)
program = Program.objects.get(code = self.programcode)
year = YearOfStudy.objects.get(year = self.yearname)
self.assertEqual(obj.ccid, self.ccid)
self.assertEqual(obj.first_name, self.first_name)
self.assertEqual(obj.last_name, self.last_name)
self.assertEqual(obj.email, self.email)
self.assertEqual(obj.student_id, self.student_id)
self.assertEqual(obj.program, program)
self.assertEqual(obj.year, year)
def test_create_duplicate_student(self):
with self.assertRaises(IntegrityError):
Student.objects.create(ccid = self.ccid)
with self.assertRaises(IntegrityError):
Student.objects.create(student_id = self.student_id)
def test_user_model_is_created_with_student(self):
obj = Student.objects.get(ccid = self.ccid)
user = obj.user
self.assertEqual(user.username, obj.ccid)
def test_user_model_is_correct_for_student(self):
obj = Student.objects.get(ccid = self.ccid)
user = obj.user
user2 = User.objects.get(username = self.ccid)
self.assertEqual(user, user2)
def test_program_delete(self):
Program.objects.get(code = self.programcode).delete()
student = Student.objects.get(ccid = self.ccid)
self.assertIsNone(student.program)
    def test_year_delete(self):
        with self.assertRaises(ProtectedError):
            YearOfStudy.objects.get(year = self.yearname).delete()
def test_delete_student(self):
obj = Student.objects.get(ccid = self.ccid)
ccid = obj.ccid
self.assertIsNotNone(obj)
user = User.objects.get(username = self.ccid)
self.assertIsNotNone(user)
obj.delete()
with self.assertRaises(Student.DoesNotExist):
obj = Student.objects.get(ccid = self.ccid)
with self.assertRaises(User.DoesNotExist):
user = User.objects.get(username = self.ccid)
class ApplicationModelTests(TestCase):
def setUp(self):
self.ccid = "Student"
self.first_name = "A"
self.last_name = "Student"
self.email = "aStudent@test.com"
self.student_id = '1'
self.year = "First"
self.year_of_study = YearOfStudy.objects.create(year = self.year)
self.program_code = "PRFG"
self.program_name = "Science"
self.program = Program.objects.create(code = self.program_code, name = self.program_name)
self.name = "This award"
self.award_description = "For students"
self.award_value = "One gold pen"
self.award_start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.award_end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.award_documents_needed = False
self.award_is_active = True
self.award = Award.objects.create(name = self.name, description = self.award_description, value = self.award_value,
start_date = self.award_start_date, end_date = self.award_end_date,
documents_needed = self.award_documents_needed, is_active = self.award_is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year_of_study)
self.student = Student.objects.create(ccid = self.ccid, first_name = self.first_name, last_name = self.last_name,
email = self.email, student_id = self.student_id, year = self.year_of_study, program = self.program)
self.application_is_submitted = False
self.application = Application.objects.create(award = self.award, student = self.student, is_submitted = self.application_is_submitted)
def test_application_creation(self):
application = Application.objects.get(application_id = self.application.application_id)
self.assertEqual(application, self.application)
self.assertEqual(application.application_id, self.application.application_id)
self.assertEqual(application.award, self.application.award)
self.assertEqual(application.student.ccid, self.application.student.ccid)
self.assertEqual(application.is_submitted, self.application.is_submitted)
def test_application_duplicate(self):
with self.assertRaises(IntegrityError):
Application.objects.create(application_id = self.application.application_id)
def test_application_delete_award(self):
Award.objects.get(awardid = self.award.awardid).delete()
with self.assertRaises(Application.DoesNotExist):
application = Application.objects.get(application_id = self.application.application_id)
def test_application_delete_student(self):
Student.objects.get(ccid = self.ccid).delete()
with self.assertRaises(Application.DoesNotExist):
application = Application.objects.get(application_id = self.application.application_id)
def test_submit_application(self):
self.application.is_submitted = True
self.application.save()
application = Application.objects.get(application_id = self.application.application_id)
self.assertTrue(application.is_submitted)
self.application.is_submitted = False
self.application.save()
application = Application.objects.get(application_id = self.application.application_id)
self.assertFalse(application.is_submitted)
class ProgramModelTests(TestCase):
def setUp(self):
self.ccid = "Student"
self.first_name = "A"
self.last_name = "Student"
self.email = "aStudent@test.com"
self.student_id = '1'
self.year = "First"
self.year_of_study = YearOfStudy.objects.create(year = self.year)
self.program_code = "PRFG"
self.program_name = "Science"
self.program = Program.objects.create(code = self.program_code, name = self.program_name)
self.name = "This award"
self.award_description = "For students"
self.award_value = "One gold pen"
self.award_start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.award_end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.award_documents_needed = False
self.award_is_active = True
self.award = Award.objects.create(name = self.name, description = self.award_description, value = self.award_value,
start_date = self.award_start_date, end_date = self.award_end_date,
documents_needed = self.award_documents_needed, is_active = self.award_is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year_of_study)
self.student = Student.objects.create(ccid = self.ccid, first_name = self.first_name, last_name = self.last_name,
email = self.email, student_id = self.student_id, year = self.year_of_study, program = self.program)
def test_program_creation(self):
program = Program.objects.get(code = self.program_code)
self.assertIsNotNone(program)
def test_program_duplicate(self):
with self.assertRaises(IntegrityError):
new_program = Program.objects.create(code = self.program_code)
def test_program_deletion_cascade(self):
award = Award.objects.get(awardid = self.award.awardid)
student = Student.objects.get(ccid = self.ccid)
program = Program.objects.get(code = self.program_code)
self.assertEqual(student.program, program)
self.assertTrue(program in award.programs.all())
program.delete()
award = Award.objects.get(awardid = self.award.awardid)
student = Student.objects.get(ccid = self.ccid)
self.assertIsNone(student.program)
self.assertFalse(program in award.programs.all())
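The delete behaviours asserted above correspond to three distinct foreign-key actions (judging by the observed outcomes: Program→Student behaves as SET_NULL, YearOfStudy→Student as PROTECT, Award/Student→Application as CASCADE). A framework-free `sqlite3` sketch of the same three semantics — the table and column names here are illustrative, not the project's actual schema:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('PRAGMA foreign_keys = ON')
con.executescript('''
CREATE TABLE program (code TEXT PRIMARY KEY);
CREATE TABLE year (name TEXT PRIMARY KEY);
CREATE TABLE student (
    ccid TEXT PRIMARY KEY,
    program TEXT REFERENCES program(code) ON DELETE SET NULL,
    year TEXT REFERENCES year(name) ON DELETE RESTRICT);
CREATE TABLE application (
    id INTEGER PRIMARY KEY,
    student TEXT REFERENCES student(ccid) ON DELETE CASCADE);
INSERT INTO program VALUES ('SP');
INSERT INTO year VALUES ('First');
INSERT INTO student VALUES ('Student', 'SP', 'First');
INSERT INTO application(student) VALUES ('Student');
''')

# SET NULL: deleting the program orphans the student's program column.
con.execute("DELETE FROM program WHERE code = 'SP'")
assert con.execute("SELECT program FROM student").fetchone() == (None,)

# RESTRICT (Django's PROTECT): the year cannot be deleted while referenced.
try:
    con.execute("DELETE FROM year WHERE name = 'First'")
except sqlite3.IntegrityError:
    pass

# CASCADE: deleting the student removes the dependent application.
con.execute("DELETE FROM student WHERE ccid = 'Student'")
```

The tests in this module verify the same contract at the ORM level, which is why `test_year_delete` expects `ProtectedError` while `test_application_delete_award` expects the application to vanish.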
class AwardModelTests(TestCase):
def setUp(self):
self.name = "Award Name 1"
self.description = "Award Description 1"
self.value = "Award Value 1"
self.start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.programcode = "AP"
self.programname = "AwardProgram"
self.yearname = "AwardYear"
self.documents_needed = False
self.is_active = True
self.year = YearOfStudy.objects.create(year = self.yearname)
self.program = Program.objects.create(code = self.programcode, name = self.programname)
self.award = Award.objects.create(name = self.name, description = self.description, value = self.value,
start_date = self.start_date, end_date = self.end_date,
documents_needed = self.documents_needed, is_active = self.is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year)
def test_award_creation(self):
award = Award.objects.get(awardid = self.award.awardid)
self.assertIsNotNone(award)
def test_award_duplicate(self):
with self.assertRaises(IntegrityError):
new_award = Award.objects.create(awardid = self.award.awardid)
class CommitteeModelTests(TestCase):
def setUp(self):
self.committee_name = "Committee Name 1"
self.adjudicator_ccid = "Committee Adjudicator 1"
self.adjudicator_first_name = "Adjudicator 1 First"
self.adjudicator_last_name = "Adjudicator 1 Last"
self.adjudicator_email = "Adjudicator1@test.com"
self.adjudicator = Adjudicator.objects.create(ccid = self.adjudicator_ccid, first_name = self.adjudicator_first_name,
last_name = self.adjudicator_last_name, email = self.adjudicator_email)
self.name = "Award Name 1"
self.description = "Award Description 1"
self.value = "Award Value 1"
self.start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.programcode = "AP"
self.programname = "AwardProgram"
self.yearname = "AwardYear"
self.documents_needed = False
self.is_active = True
self.year = YearOfStudy.objects.create(year = self.yearname)
self.program = Program.objects.create(code = self.programcode, name = self.programname)
self.award = Award.objects.create(name = self.name, description = self.description, value = self.value,
start_date = self.start_date, end_date = self.end_date,
documents_needed = self.documents_needed, is_active = self.is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year)
self.committee = Committee.objects.create(committee_name = self.committee_name)
self.committee.adjudicators.add(self.adjudicator)
self.committee.awards.add(self.award)
def test_committee_creation(self):
committee = Committee.objects.get(committeeid = self.committee.committeeid)
self.assertIsNotNone(committee)
def test_committee_duplicate(self):
with self.assertRaises(IntegrityError):
new_committee = Committee.objects.create(committeeid = self.committee.committeeid)
class CommentModelTests(TestCase):
def setUp(self):
self.ccid = "Student"
self.first_name = "A"
self.last_name = "Student"
self.email = "aStudent@test.com"
self.student_id = '1'
self.year = "First"
self.year_of_study = YearOfStudy.objects.create(year = self.year)
self.program_code = "PRFG"
self.program_name = "Science"
self.program = Program.objects.create(code = self.program_code, name = self.program_name)
self.name = "This award"
self.award_description = "For students"
self.award_value = "One gold pen"
self.award_start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.award_end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.award_documents_needed = False
self.award_is_active = True
self.award = Award.objects.create(name = self.name, description = self.award_description, value = self.award_value,
start_date = self.award_start_date, end_date = self.award_end_date,
documents_needed = self.award_documents_needed, is_active = self.award_is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year_of_study)
self.student = Student.objects.create(ccid = self.ccid, first_name = self.first_name, last_name = self.last_name,
email = self.email, student_id = self.student_id, year = self.year_of_study, program = self.program)
self.application_is_submitted = False
self.application = Application.objects.create(award = self.award, student = self.student, is_submitted = self.application_is_submitted)
self.adjudicator_ccid = "Committee Adjudicator 1"
self.adjudicator_first_name = "Adjudicator 1 First"
self.adjudicator_last_name = "Adjudicator 1 Last"
self.adjudicator_email = "Adjudicator1@test.com"
self.adjudicator = Adjudicator.objects.create(ccid = self.adjudicator_ccid, first_name = self.adjudicator_first_name,
last_name = self.adjudicator_last_name, email = self.adjudicator_email)
self.comment_text = "Hello World"
self.comment = Comment.objects.create(application = self.application, adjudicator = self.adjudicator, comment_text = self.comment_text)
def test_get_comment(self):
obj = Comment.objects.get(comment_text = self.comment_text)
application = Application.objects.get(application_id = self.application.application_id)
adjudicator = Adjudicator.objects.get(ccid = self.adjudicator_ccid)
self.assertEqual(obj.comment_text, self.comment_text)
self.assertEqual(obj.application, self.application)
self.assertEqual(obj.adjudicator, self.adjudicator)
class RankingModelTests(TestCase):
def setUp(self):
self.ccid = "Student"
self.first_name = "A"
self.last_name = "Student"
self.email = "aStudent@test.com"
self.student_id = '1'
self.year = "First"
self.year_of_study = YearOfStudy.objects.create(year = self.year)
self.program_code = "PRFG"
self.program_name = "Science"
self.program = Program.objects.create(code = self.program_code, name = self.program_name)
self.name = "This award"
self.award_description = "For students"
self.award_value = "One gold pen"
self.award_start_date = str(datetime.datetime.now(pytz.timezone('America/Vancouver')))
self.award_end_date = str(datetime.datetime.now(pytz.timezone('America/Edmonton')))
self.award_documents_needed = False
self.award_is_active = True
self.award = Award.objects.create(name = self.name, description = self.award_description, value = self.award_value,
start_date = self.award_start_date, end_date = self.award_end_date,
documents_needed = self.award_documents_needed, is_active = self.award_is_active)
self.award.programs.add(self.program)
self.award.years_of_study.add(self.year_of_study)
self.student = Student.objects.create(ccid = self.ccid, first_name = self.first_name, last_name = self.last_name,
email = self.email, student_id = self.student_id, year = self.year_of_study, program = self.program)
self.application_is_submitted = False
self.application = Application.objects.create(award = self.award, student = self.student, is_submitted = self.application_is_submitted)
self.adjudicator_ccid = "Committee Adjudicator 1"
self.adjudicator_first_name = "Adjudicator 1 First"
self.adjudicator_last_name = "Adjudicator 1 Last"
self.adjudicator_email = "Adjudicator1@test.com"
self.adjudicator = Adjudicator.objects.create(ccid = self.adjudicator_ccid, first_name = self.adjudicator_first_name,
last_name = self.adjudicator_last_name, email = self.adjudicator_email)
self.rank = 2
self.ranking = Ranking.objects.create(award = self.award, application = self.application, adjudicator = self.adjudicator, rank = self.rank)
def test_get_ranking(self):
obj = Ranking.objects.get(award = self.award)
application = Application.objects.get(application_id = self.application.application_id)
adjudicator = Adjudicator.objects.get(ccid = self.adjudicator_ccid)
award = Award.objects.get(awardid = self.award.awardid)
self.assertEqual(obj.rank, self.rank)
self.assertEqual(obj.application, self.application)
self.assertEqual(obj.adjudicator, self.adjudicator)
        self.assertEqual(obj.award, self.award)
# ex086.py — gabrielwai/exercicios_de_Python (MIT licence)
lista = list()
coluna = list()
for l in range(0, 3):
    for c in range(0, 3):
        coluna.append(int(input(f'Enter an integer for [{l}, {c}]: ')))
    lista.append(coluna[:])
    coluna.clear()
print(lista)
print('-=' * 20)
for l in range(0, 3):
    for c in range(0, 3):
        print(f'[ {lista[l][c]} ]', end='')
        if c == 2:
            print()
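The nested append/clear loop in ex086 can also be written as a list comprehension. A minimal sketch — the names `read_matrix` and `render`, and the injectable `reader` parameter, are illustrative additions, not part of the original exercise:

```python
def read_matrix(reader=input, rows=3, cols=3):
    # Build a rows x cols integer matrix. `reader` defaults to input()
    # but is injectable so the logic can run without interactive input.
    return [[int(reader(f'Enter an integer for [{l}, {c}]: '))
             for c in range(cols)]
            for l in range(rows)]


def render(matrix):
    # Mirror the bracketed row-by-row printout of the exercise.
    return '\n'.join(''.join(f'[ {v} ]' for v in row) for row in matrix)
```

Copying `coluna[:]` and calling `coluna.clear()` becomes unnecessary here because each inner comprehension builds a fresh list per row.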
# extensions/mktplace/tests/integration/test_cp_scenarios.py — jrineck/sawtooth-core (Apache-2.0)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
import os
import unittest
from mktmain import client_cli
from mktplace import mktplace_state
from txnintegration.validator_network_manager import get_default_vnm
from integration import ENABLE_INTEGRATION_TESTS
@unittest.skipUnless(ENABLE_INTEGRATION_TESTS, "integration test")
class TestCommercialPaperScenarios(unittest.TestCase):
def setUp(self):
self.save_environ = os.environ.copy()
def tearDown(self):
os.environ.clear()
os.environ.update(self.save_environ)
@classmethod
def setUpClass(cls):
cls.vnm = None
try:
if 'TEST_VALIDATOR_URLS' in os.environ:
urls = (os.environ['TEST_VALIDATOR_URLS']).split(",")
cls.url = urls[0]
else:
families = ['mktplace.transactions.market_place']
overrides = {
"TransactionFamilies": families,
}
cls.vnm = get_default_vnm(5, overrides=overrides)
cls.vnm.do_genesis()
cls.vnm.launch()
# the url of the initial validator
cls.url = cls.vnm.urls()[0] + '/'
os.environ['CURRENCYHOME'] = os.path.join(
os.path.dirname(__file__), "cp_scenarios")
cls.scenarios_path = os.path.join(os.path.dirname(__file__),
'cp_scenarios')
client_cli.main(args=["--name", "mkt",
"--script",
os.path.join(cls.scenarios_path,
"scenario_setup_1_mkt"),
"--echo",
"--url",
cls.url])
client_cli.main(
args=["--name", "BANK-trader",
"--script",
os.path.join(os.path.dirname(__file__),
"cp_scenarios",
"scenario_setup_2_trader"),
"--echo",
"--url",
cls.url])
client_cli.main(args=["--name", "BANK-agent",
"--script",
os.path.join(cls.scenarios_path,
"scenario_setup_3_agent"),
"--echo",
"--url",
cls.url])
client_cli.main(
args=["--name", "BANK-dealer",
"--script",
os.path.join(cls.scenarios_path,
"scenario_setup_4_dealer"),
"--echo",
"--url",
cls.url])
state = mktplace_state.MarketPlaceState(cls.url)
state.fetch()
        except Exception:
if cls.vnm is not None:
cls.vnm.shutdown()
raise
def test_scenario_setup(self):
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//marketplace/asset/token",
'Asset'))
self.assertIsNotNone(state.n2i("//mkt", 'Participant'))
self.assertIsNotNone(state.n2i("//mkt/market/account", 'Account'))
self.assertIsNotNone(state.n2i("//mkt/asset-type/currency",
'AssetType'))
self.assertIsNotNone(state.n2i("//mkt/asset-type/commercialpaper",
'AssetType'))
self.assertIsNotNone(state.n2i("//mkt/asset/currency/USD", 'Asset'))
self.assertIsNotNone(state.n2i("//mkt/asset/commercialpaper/note",
'Asset'))
self.assertIsNotNone(state.n2i("//mkt/market/holding/currency/USD",
'Holding'))
self.assertIsNotNone(state.n2i("//mkt/market/holding/token",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-trader", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-trader/USD", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-trader/paper", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-trader/holding/token",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-agent/USD", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent/paper", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent/holding/token",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-dealer", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-dealer/USD",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-dealer/paper",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-dealer/holding/token",
'Holding'))
def test_scenario_a(self):
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-trader/USD", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-trader/paper", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-agent/USD", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent/paper", 'Holding'))
self.assertEquals(state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"], 1000000)
self.assertEquals(state.State[state.n2i("//BANK-agent/USD",
'Holding')]["count"], 1000000)
self.assertEquals(
state.State[state.n2i("//BANK-trader/paper",
'Holding')]["count"], 10)
self.assertEquals(
state.State[state.n2i("//BANK-agent/paper",
'Holding')]["count"], 10)
client_cli.main(
args=["--name", "BANK-trader",
"--script", os.path.join(os.path.dirname(__file__),
"cp_scenarios",
"scenario_a_1_trader"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader/offer-a",
'ExchangeOffer'))
client_cli.main(args=["--name", "BANK-agent",
"--script",
os.path.join(self.scenarios_path,
"scenario_a_2_agent"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertEquals(state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"], 900612)
self.assertEquals(state.State[state.n2i("//BANK-agent/USD",
'Holding')]["count"], 1099388)
self.assertEquals(
state.State[state.n2i("//BANK-trader/paper",
'Holding')]["count"], 11)
self.assertEquals(
state.State[state.n2i("//BANK-agent/paper",
"Holding")]["count"], 9)
def test_scenario_b(self):
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-trader/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-trader/paper", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-dealer", "Participant"))
self.assertIsNotNone(state.n2i("//BANK-dealer/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-dealer/paper", "Holding"))
self.assertIn("count", state.State[state.n2i("//BANK-trader/USD",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-trader/paper",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-dealer/USD",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-dealer/paper",
"Holding")])
trader_usd = state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"]
trader_paper = state.State[state.n2i("//BANK-trader/paper",
"Holding")]["count"]
dealer_usd = state.State[state.n2i("//BANK-dealer/USD",
"Holding")]["count"]
dealer_paper = state.State[state.n2i("//BANK-dealer/paper",
"Holding")]["count"]
client_cli.main(
args=["--name", "BANK-trader",
"--script",
os.path.join(self.scenarios_path,
"scenario_b_1_trader"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader/offer-b",
'ExchangeOffer'))
client_cli.main(
args=["--name", "BANK-dealer",
"--script",
os.path.join(self.scenarios_path,
"scenario_b_2_dealer"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertEquals(state.State[state.n2i("//BANK-trader/USD",
"Holding")]["count"],
trader_usd - 99388)
self.assertEquals(state.State[state.n2i("//BANK-dealer/USD",
"Holding")]["count"],
dealer_usd + 99388)
self.assertEquals(
state.State[state.n2i("//BANK-trader/paper", "Holding")]["count"],
trader_paper + 1)
self.assertEquals(
state.State[state.n2i("//BANK-dealer/paper",
"Holding")]["count"],
dealer_paper - 1)
def test_scenario_c(self):
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader", "Participant"))
self.assertIsNotNone(state.n2i("//BANK-trader/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-trader/paper", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-dealer", "Participant"))
self.assertIsNotNone(state.n2i("//BANK-dealer/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-dealer/paper", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-agent", "Participant"))
self.assertIsNotNone(state.n2i("//BANK-agent/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-agent/paper", "Holding"))
self.assertIn("count", state.State[state.n2i("//BANK-trader/USD",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-trader/paper",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-dealer/USD",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-agent/USD",
"Holding")])
trader_usd = state.State[state.n2i("//BANK-trader/USD",
"Holding")]["count"]
trader_paper = state.State[state.n2i("//BANK-trader/paper",
"Holding")]["count"]
dealer_usd = state.State[state.n2i("//BANK-dealer/USD",
"Holding")]["count"]
agent_usd = state.State[state.n2i("//BANK-agent/USD",
"Holding")]["count"]
client_cli.main(
args=["--name", "BANK-trader",
"--script",
os.path.join(self.scenarios_path,
"scenario_c_1_trader"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader/offer-c-trader",
'ExchangeOffer'))
client_cli.main(
args=["--name", "BANK-dealer",
"--script",
os.path.join(self.scenarios_path,
"scenario_c_2_dealer"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-dealer/paper-scenario-c",
'Holding'))
self.assertIsNotNone(state.n2i("//BANK-dealer/offer-c-dealer",
'ExchangeOffer'))
self.assertEquals(
state.State[state.n2i("//BANK-dealer/paper-scenario-c",
'Holding')]["count"], 0)
client_cli.main(args=["--name", "BANK-agent",
"--script",
os.path.join(self.scenarios_path,
"scenario_c_3_agent"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(
state.n2i("//BANK-agent/paper-scenario-c", 'Holding'))
self.assertEquals(state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"],
trader_usd)
self.assertEquals(
state.State[state.n2i("//BANK-dealer/USD",
"Holding")]["count"], dealer_usd - 99388)
self.assertEquals(state.State[state.n2i("//BANK-agent/USD",
"Holding")]["count"],
agent_usd + 99388)
self.assertEquals(
state.State[state.n2i("//BANK-dealer/paper-scenario-c",
"Holding")]["count"], 1)
self.assertEquals(
state.State[state.n2i("//BANK-agent/paper-scenario-c",
"Holding")]["count"], 0)
client_cli.main(
args=["--name", "BANK-dealer",
"--script",
os.path.join(self.scenarios_path,
"scenario_c_4_dealer"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-agent/paper-scenario-c",
'Holding'))
self.assertEquals(
state.State[state.n2i("//BANK-trader/USD",
"Holding")]["count"], trader_usd - 99388)
self.assertEquals(state.State[state.n2i("//BANK-dealer/USD",
"Holding")]["count"],
dealer_usd)
self.assertEquals(
state.State[state.n2i("//BANK-agent/USD", "Holding")]["count"],
agent_usd + 99388)
self.assertEquals(
state.State[state.n2i("//BANK-trader/paper", "Holding")]["count"],
trader_paper + 1)
self.assertEquals(
state.State[state.n2i("//BANK-dealer/paper-scenario-c",
"Holding")]["count"], 0)
self.assertEquals(
state.State[state.n2i("//BANK-agent/paper-scenario-c",
"Holding")]["count"], 0)
def test_scenario_d(self):
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-trader/USD", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-trader/paper", 'Holding'))
self.assertIsNotNone(state.n2i("//BANK-agent", 'Participant'))
self.assertIsNotNone(state.n2i("//BANK-agent/USD", "Holding"))
self.assertIsNotNone(state.n2i("//BANK-agent/paper", 'Holding'))
self.assertIn("count", state.State[state.n2i("//BANK-trader/USD",
'Holding')])
self.assertIn("count", state.State[state.n2i("//BANK-trader/paper",
"Holding")])
self.assertIn("count", state.State[state.n2i("//BANK-agent/USD",
'Holding')])
self.assertIn("count", state.State[state.n2i("//BANK-agent/paper",
"Holding")])
trader_usd = state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"]
trader_paper = state.State[state.n2i("//BANK-trader/paper",
'Holding')]["count"]
agent_usd = state.State[state.n2i("//BANK-agent/USD",
'Holding')]["count"]
agent_paper = state.State[state.n2i("//BANK-agent/paper",
'Holding')]["count"]
client_cli.main(
args=["--name", "BANK-trader",
"--script",
os.path.join(self.scenarios_path,
"scenario_d_1_trader"),
"--echo",
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertIsNotNone(state.n2i("//BANK-trader/offer-d",
'ExchangeOffer'))
client_cli.main(args=["--name", "BANK-agent",
"--script",
os.path.join(self.scenarios_path,
"scenario_d_2_agent"),
"--url",
self.url])
state = mktplace_state.MarketPlaceState(self.url)
state.fetch()
self.assertEquals(state.State[state.n2i("//BANK-trader/USD",
'Holding')]["count"],
trader_usd + 100000)
self.assertEquals(state.State[state.n2i("//BANK-agent/USD",
'Holding')]["count"],
agent_usd - 100000)
self.assertEquals(
state.State[state.n2i("//BANK-trader/paper", 'Holding')]["count"],
trader_paper - 1)
self.assertEquals(
state.State[state.n2i("//BANK-agent/paper", "Holding")]["count"],
agent_paper + 1)
@classmethod
def tearDownClass(cls):
if cls.vnm is not None:
cls.vnm.shutdown(archive_name="TestCommercialPaperScenarios")
else:
            print("No Validator data and logs to preserve.")
# pgAdmin/pgadmin4/web/pgadmin/tools/sqleditor/utils/tests/test_save_changed_data.py — WeilerWebServices/PostgreSQL (PostgreSQL licence)
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2020, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
import json
import random
from pgadmin.browser.server_groups.servers.databases.tests import utils as \
database_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from pgadmin.tools.sqleditor.tests.execute_query_test_utils \
import execute_query
class TestSaveChangedData(BaseTestGenerator):
""" This class tests saving data changes to updatable query resultsets """
scenarios = [
('When inserting new valid row', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "3",
"__temp_PK": "2",
"normal_col": "three",
"char_col": "char",
"bit_col": "10101"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=True,
check_sql='SELECT * FROM %s WHERE pk_col = 3',
check_result=[[3, "three", "char", "10101"]]
)),
('When inserting row with long value for character varying data type',
dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "3",
"__temp_PK": "2",
"normal_col": "invalid-log-string"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 3',
check_result='SELECT 0')),
('When inserting row with long value for character data type', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "3",
"__temp_PK": "2",
"char_col": "invalid long string"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 3',
check_result='SELECT 0'
)),
('When inserting row with long value for bit data type', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "3",
"__temp_PK": "2",
"bit_col": "1010101010"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 3',
check_result='SELECT 0'
)),
('When inserting new invalid row', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "1",
"__temp_PK": "2",
"normal_col": "four"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql="SELECT * FROM %s "
"WHERE pk_col = 1 AND normal_col = 'four'",
check_result='SELECT 0'
)),
('When updating a row in a valid way', dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"normal_col": "ONE"},
"primary_keys":
{"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=True,
check_sql='SELECT * FROM %s WHERE pk_col = 1',
check_result=[[1, "ONE", 'ch1 ', '00000']]
)),
('When updating a row with long data for character varying data type',
dict(
save_payload={
"updated": {
"1": {"err": False,
"data": {"normal_col": "INVALID-COL-LENGTH"},
"primary_keys": {"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 1',
check_result=[[1, "one", 'ch1 ', '00000']]
)),
('When updating a row with long data for character data type',
dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"char_col": "INVALID-COL-LENGTH"},
"primary_keys":
{"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "normal_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 1',
check_result=[[1, "one", 'ch1 ', '00000']]
)),
('When updating a row with long data for bit data type',
dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"bit_col": "1010110101"},
"primary_keys":
{"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "char_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql='SELECT * FROM %s WHERE pk_col = 1',
check_result=[[1, "one", 'ch1 ', '00000']]
)),
('When updating a row in an invalid way', dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"pk_col": "1"},
"primary_keys":
{"pk_col": 2}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "char_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=False,
check_sql="SELECT * FROM %s "
"WHERE pk_col = 1 AND normal_col = 'two'",
check_result='SELECT 0'
)),
('When deleting a row', dict(
save_payload={
"updated": {},
"added": {},
"staged_rows": {"1": {"pk_col": 2}},
"deleted": {"1": {"pk_col": 2}},
"updated_index": {},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False
}, {
"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "char_col",
"display_name": "char_col",
"column_type": "character",
"column_type_internal": "character",
"pos": 2,
"label": "char_col<br>character",
"cell": "string",
"can_edit": True,
"type": "character",
"not_null": False,
"has_default_val": False,
"is_array": False
}, {
"name": "bit_col",
"display_name": "bit_col",
"column_type": "bit",
"column_type_internal": "bit",
"pos": 3,
"label": "bit_col<br>bit",
"cell": "string",
"can_edit": True,
"type": "bit",
"not_null": False,
"has_default_val": False,
"is_array": False
}
]
},
save_status=True,
check_sql='SELECT * FROM %s WHERE pk_col = 2',
check_result='SELECT 0'
)),
]
def setUp(self):
self._initialize_database_connection()
self._initialize_query_tool()
self._initialize_urls_and_select_sql()
def runTest(self):
self._create_test_table()
self._execute_sql_query(self.select_sql)
self._save_changed_data()
self._check_saved_data()
def tearDown(self):
# Disconnect the database
database_utils.disconnect_database(self, self.server_id, self.db_id)
def _execute_sql_query(self, query):
is_success, response_data = \
execute_query(tester=self.tester,
query=query,
start_query_tool_url=self.start_query_tool_url,
poll_url=self.poll_url)
self.assertEqual(is_success, True)
return response_data
def _save_changed_data(self):
# Send a request to save changed data
response = self.tester.post(self.save_url,
data=json.dumps(self.save_payload),
                                    content_type='application/json')
self.assertEqual(response.status_code, 200)
# Check that the save is successful
response_data = json.loads(response.data.decode('utf-8'))
save_status = response_data['data']['status']
self.assertEqual(save_status, self.save_status)
def _check_saved_data(self):
check_sql = self.check_sql % self.test_table_name
response_data = self._execute_sql_query(check_sql)
# Check table for updates
result = response_data['data']['result']
self.assertEqual(result, self.check_result)
def _initialize_database_connection(self):
database_info = parent_node_dict["database"][-1]
self.db_name = database_info["db_name"]
self.server_id = database_info["server_id"]
self.db_id = database_info["db_id"]
db_con = database_utils.connect_database(self,
utils.SERVER_GROUP,
self.server_id,
self.db_id)
driver_version = utils.get_driver_version()
driver_version = float('.'.join(driver_version.split('.')[:2]))
if driver_version < 2.8:
            self.skipTest('Updatable resultsets require psycopg 2.8 or later')
if not db_con["info"] == "Database connected.":
raise Exception("Could not connect to the database.")
def _initialize_query_tool(self):
self.trans_id = str(random.randint(1, 9999999))
url = '/datagrid/initialize/query_tool/{0}/{1}/{2}/{3}'.format(
self.trans_id, utils.SERVER_GROUP, self.server_id, self.db_id)
response = self.tester.post(url)
self.assertEqual(response.status_code, 200)
def _initialize_urls_and_select_sql(self):
self.start_query_tool_url = \
'/sqleditor/query_tool/start/{0}'.format(self.trans_id)
self.save_url = '/sqleditor/save/{0}'.format(self.trans_id)
self.poll_url = '/sqleditor/poll/{0}'.format(self.trans_id)
def _create_test_table(self):
self.test_table_name = "test_for_save_data" + \
str(random.randint(1000, 9999))
create_sql = """
DROP TABLE IF EXISTS "%s";
CREATE TABLE "%s"(
pk_col INT PRIMARY KEY,
normal_col character varying(5),
char_col character(4),
bit_col bit(5));
INSERT INTO "%s" VALUES
(1, 'one', 'ch1', '00000'),
(2, 'two', 'ch2', '11111');
""" % (self.test_table_name,
self.test_table_name,
self.test_table_name)
self.select_sql = 'SELECT * FROM %s;' % self.test_table_name
utils.create_table_with_query(self.server, self.db_name, create_sql)
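A note on the version gate in `_initialize_database_connection` above: it reduces a dotted driver version string to a major.minor float before comparing against 2.8. A minimal standalone sketch of that reduction, with its main pitfall called out:

```python
def major_minor(version):
    """Reduce a dotted version string such as '2.8.6' to a major.minor float."""
    return float('.'.join(version.split('.')[:2]))

# Caveat: float comparison misorders two-digit minor versions,
# e.g. '2.10.1' reduces to 2.1, which compares as less than 2.8.
```

Comparing integer tuples instead (`tuple(int(p) for p in version.split('.')[:2])`) avoids that pitfall.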
| 41.4413 | 78 | 0.349159 | 2,889 | 39,535 | 4.493596 | 0.071997 | 0.067786 | 0.04745 | 0.05084 | 0.803035 | 0.787321 | 0.773995 | 0.773995 | 0.770605 | 0.755662 | 0 | 0.012544 | 0.532187 | 39,535 | 953 | 79 | 41.484785 | 0.689376 | 0.008246 | 0 | 0.799346 | 0 | 0 | 0.273659 | 0.014394 | 0 | 0 | 0 | 0 | 0.005453 | 1 | 0.010905 | false | 0 | 0.007634 | 0 | 0.02181 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
464a9070506ec04fc41a92c1e2bec63020bb820b | 203 | py | Python | user/apps.py | abdukhashimov/django-rest-blog-2 | ae12c24f95b3a8f216c85a8f32c47e215118ce07 | [
"MIT"
] | null | null | null | user/apps.py | abdukhashimov/django-rest-blog-2 | ae12c24f95b3a8f216c85a8f32c47e215118ce07 | [
"MIT"
] | null | null | null | user/apps.py | abdukhashimov/django-rest-blog-2 | ae12c24f95b3a8f216c85a8f32c47e215118ce07 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
from user.signals import password_reset_token_created
class UserConfig(AppConfig):
name = 'user'
def ready(self):
return password_reset_token_created
| 20.3 | 53 | 0.768473 | 26 | 203 | 5.769231 | 0.692308 | 0.173333 | 0.24 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17734 | 203 | 9 | 54 | 22.555556 | 0.898204 | 0 | 0 | 0 | 0 | 0 | 0.019704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.333333 | 0.333333 | 0.166667 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 8 |
d3cd33766e3e4a3b55c345351fc8d03bab155471 | 112 | py | Python | bin/ChainerWing.py | fukatani/ChainerWing | 37a1435635cbc610dc86d15c8baca67622355757 | [
"MIT",
"BSD-3-Clause"
] | 26 | 2017-07-03T13:50:28.000Z | 2021-02-06T08:43:42.000Z | bin/ChainerWing.py | fukatani/CW_gui | 37a1435635cbc610dc86d15c8baca67622355757 | [
"MIT",
"BSD-3-Clause"
] | 10 | 2017-07-03T14:30:00.000Z | 2017-12-21T13:26:43.000Z | bin/ChainerWing.py | fukatani/CW_gui | 37a1435635cbc610dc86d15c8baca67622355757 | [
"MIT",
"BSD-3-Clause"
] | 6 | 2017-03-15T13:48:09.000Z | 2019-04-15T19:28:02.000Z | #!python3
if __name__ == '__main__':
import chainer_wing.gui_main.main
chainer_wing.gui_main.main.run()
| 22.4 | 37 | 0.732143 | 16 | 112 | 4.375 | 0.5625 | 0.314286 | 0.4 | 0.514286 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.142857 | 112 | 4 | 38 | 28 | 0.71875 | 0.071429 | 0 | 0 | 0 | 0 | 0.07767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
d3d91f9897831b4a334f6cff4f38e65cfbe75481 | 28,341 | py | Python | tests/test_encode.py | frodrigo/vector-tile-base | 591234374f521967c311dc023475b1e37187e7f9 | [
"BSD-3-Clause"
] | 42 | 2017-08-09T17:21:06.000Z | 2022-02-14T22:05:47.000Z | tests/test_encode.py | frodrigo/vector-tile-base | 591234374f521967c311dc023475b1e37187e7f9 | [
"BSD-3-Clause"
] | 13 | 2017-08-16T14:01:02.000Z | 2021-12-17T10:20:47.000Z | tests/test_encode.py | frodrigo/vector-tile-base | 591234374f521967c311dc023475b1e37187e7f9 | [
"BSD-3-Clause"
] | 13 | 2017-11-18T11:09:13.000Z | 2021-12-16T12:46:21.000Z | import pytest
from vector_tile_base import vector_tile_pb2
from vector_tile_base import VectorTile, SplineFeature, PointFeature, PolygonFeature, LineStringFeature, Layer, FeatureAttributes, Float, FloatList
def test_no_layers():
vt = VectorTile()
assert len(vt.serialize()) == 0
def test_create_layer():
vt = VectorTile()
layer = vt.add_layer('point')
assert layer.name == 'point'
assert layer.version == 2
assert isinstance(layer, Layer)
layer = vt.add_layer('point_3', 3)
assert layer.name == 'point_3'
assert layer.version == 3
assert isinstance(layer, Layer)
layer = vt.add_layer('point_4', 4)
assert layer.name == 'point_4'
assert layer.version == 4
assert isinstance(layer, Layer)
def test_layer_extent():
vt = VectorTile()
layer = vt.add_layer('test')
assert layer.extent == 4096
layer.extent = 8000
assert layer.extent == 8000
def test_layer_zoom_x_y():
vt = VectorTile()
layer = vt.add_layer('test0', version=2)
assert layer.x == None
assert layer.y == None
assert layer.zoom == None
with pytest.raises(Exception):
layer.set_tile_location(zoom=3, x=4, y=2)
layer = vt.add_layer('test1', version=3)
assert layer.x == None
assert layer.y == None
assert layer.zoom == None
layer.set_tile_location(zoom=3, x=4, y=2)
assert layer.x == 4
assert layer.y == 2
assert layer.zoom == 3
layer = vt.add_layer('test2', version=3, x=0, y=0, zoom=0)
assert layer.x == 0
assert layer.y == 0
assert layer.zoom == 0
layer.set_tile_location(zoom=3, x=4, y=2)
assert layer.x == 4
assert layer.y == 2
assert layer.zoom == 3
with pytest.raises(Exception):
layer.x = 5
with pytest.raises(Exception):
layer.y = 5
with pytest.raises(Exception):
layer.zoom = 4
with pytest.raises(Exception):
layer = vt.add_layer('test2', version=2, x=0, y=0, zoom=0)
with pytest.raises(Exception):
vt.add_layer('test3', version=3, x=-1, y=0, zoom=0)
with pytest.raises(Exception):
vt.add_layer('test4', version=3, x=1, y=0, zoom=0)
with pytest.raises(Exception):
vt.add_layer('test5', version=3, x=2, y=0, zoom=1)
with pytest.raises(Exception):
vt.add_layer('test3', version=3, x=0, y=-1, zoom=0)
with pytest.raises(Exception):
vt.add_layer('test4', version=3, x=0, y=1, zoom=0)
with pytest.raises(Exception):
vt.add_layer('test5', version=3, x=0, y=2, zoom=1)
with pytest.raises(Exception):
vt.add_layer('test3', version=3, x=0, y=0, zoom=-1)
with pytest.raises(Exception):
vt.add_layer('test3', version=3, x=0, y=0, zoom=51)
def test_layer_name():
vt = VectorTile()
layer = vt.add_layer('test')
assert layer.name == 'test'
layer.name = 'foo'
assert layer.name == 'foo'
def test_layer_features():
vt = VectorTile()
layer = vt.add_layer('test')
assert len(layer.features) == 0
assert isinstance(layer.features, list)
with pytest.raises(AttributeError):
layer.features = [1,2]
assert len(layer.features) == 0
def test_feature_id():
vt = VectorTile()
layer = vt.add_layer('test', version=2)
feature = layer.add_point_feature()
assert feature.id == None
feature.id = 12
assert feature.id == 12
# Fails for a version 2 layer
with pytest.raises(Exception):
feature.id = "FeatureName"
layer = vt.add_layer('test2', version=3)
feature = layer.add_point_feature()
assert feature.id == None
feature.id = 12
assert feature.id == 12
feature.id = "FeatureName"
assert feature.id == "FeatureName"
data = vt.serialize()
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.id == 12
feature = vt.layers[1].features[0]
assert feature.id == "FeatureName"
def test_feature_attributes_version_2():
vt = VectorTile()
layer = vt.add_layer('test', version=2)
assert layer.version == 2
feature = layer.add_point_feature()
assert isinstance(feature, PointFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
prop = feature.attributes
assert isinstance(prop, FeatureAttributes)
assert feature.attributes == {}
assert prop == {}
prop['fun'] = 'stuff'
assert 'fun' in prop
assert prop['fun'] == 'stuff'
assert feature.attributes['fun'] == 'stuff'
assert feature.attributes == {'fun':'stuff'}
    # Can be set from an external dictionary
prop_dict = { 'number': 1, 'bool': True, 'string': 'foo', 'float': 4.1 }
feature.attributes = prop_dict
assert feature.attributes == prop_dict
    # KeyError on a nonexistent property
with pytest.raises(KeyError):
foo = feature.attributes['doesnotexist']
# Type errors on invalid key types
with pytest.raises(TypeError):
feature.attributes[1.234] = True
with pytest.raises(TypeError):
feature.attributes[1] = True
with pytest.raises(TypeError):
foo = feature.attributes[1.234]
with pytest.raises(TypeError):
foo = feature.attributes[1]
    # When setting attributes, entries with invalid keys or value types are silently dropped
    prop_dict = {
        'foo': [1, 2, 3],
        'fee': [{'a': 'b'}, {'a': ['c', 'd']}],
        1.2341: 'stuff',
        1: 'fish',
        'go': False,
        'double': 2.32432,
        'float': Float(23432.3222)
    }
    prop_dict2 = {'go': False, 'double': 2.32432, 'float': Float(23432.3222)}
feature.attributes = prop_dict
assert feature.attributes != prop_dict
assert feature.attributes == prop_dict2
# Show that geometric attributes don't work with version 2
with pytest.raises(Exception):
feature.geometric_attributes = {'hmm': [1,2,3,4,5]}
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.attributes['go'] == prop_dict2['go']
assert feature.attributes['double'] == prop_dict2['double']
    # note: the change is expected due to 32-bit float encoding
assert feature.attributes['float'] == 23432.322265625
# Show that geometric attributes don't work with version 2
assert feature.geometric_attributes == {}
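The `23432.322265625` expectation above is IEEE-754 binary32 rounding: `Float` values are stored as 32-bit floats, and 23432.322265625 is the nearest 32-bit value to 23432.3222. The round trip can be reproduced with the standard library alone:

```python
import struct

def to_float32(value):
    # Pack to IEEE-754 binary32 and unpack again, yielding the nearest
    # representable 32-bit value as a Python float.
    return struct.unpack('<f', struct.pack('<f', value))[0]

assert to_float32(23432.3222) == 23432.322265625
```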
def test_feature_attributes_version_3_legacy():
vt = VectorTile()
layer = vt.add_layer('test', version=3, legacy_attributes=True)
assert layer.version == 3
feature = layer.add_point_feature()
assert isinstance(feature, PointFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
prop = feature.attributes
assert isinstance(prop, FeatureAttributes)
assert feature.attributes == {}
assert prop == {}
prop['fun'] = 'stuff'
assert 'fun' in prop
assert prop['fun'] == 'stuff'
assert feature.attributes['fun'] == 'stuff'
assert feature.attributes == {'fun':'stuff'}
    # Can be set from an external dictionary
prop_dict = { 'number': 1, 'bool': True, 'string': 'foo', 'float': 4.1 }
feature.attributes = prop_dict
assert feature.attributes == prop_dict
    # KeyError on a nonexistent property
with pytest.raises(KeyError):
foo = feature.attributes['doesnotexist']
# Type errors on invalid key types
with pytest.raises(TypeError):
feature.attributes[1.234] = True
with pytest.raises(TypeError):
feature.attributes[1] = True
with pytest.raises(TypeError):
foo = feature.attributes[1.234]
with pytest.raises(TypeError):
foo = feature.attributes[1]
assert len(layer._values) == 5
    # When setting attributes, entries with invalid keys or value types are silently dropped
prop_dict = {
'foo': [1,2,3],
'fee': [{'a':'b'}, {'a':['c','d']}],
1.2341: 'stuff',
1: 'fish',
'go': False,
'double': 2.32432,
'float': Float(23432.3222),
'double2': 23432.3222,
'double3': 23432.3222
}
prop_dict2 = {
'go': False,
'double': 2.32432,
'float': Float(23432.3222),
'double2': 23432.3222,
'double3': 23432.3222
}
feature.attributes = prop_dict
assert feature.attributes != prop_dict
assert feature.attributes == prop_dict2
assert len(layer._values) == 9
# Show that geometric attributes don't work with version 3 legacy attributes
with pytest.raises(Exception):
feature.geometric_attributes = {'hmm': [1,2,3,4,5]}
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.attributes['go'] == prop_dict2['go']
assert feature.attributes['double'] == prop_dict2['double']
    # note: the change is expected due to 32-bit float encoding
assert feature.attributes['float'] == 23432.322265625
# Show that geometric attributes don't work with version 3 with legacy attributes
assert feature.geometric_attributes == {}
def test_feature_attributes_version_3():
vt = VectorTile()
layer = vt.add_layer('test', version=3)
assert layer.version == 3
feature = layer.add_point_feature()
assert isinstance(feature, PointFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
prop = feature.attributes
assert isinstance(prop, FeatureAttributes)
assert feature.attributes == {}
assert prop == {}
prop['fun'] = 'stuff'
assert 'fun' in prop
assert prop['fun'] == 'stuff'
assert feature.attributes['fun'] == 'stuff'
assert feature.attributes == {'fun':'stuff'}
    # Can be set from an external dictionary
prop_dict = { 'number': 1, 'bool': True, 'string': 'foo', 'float': 4.1 }
feature.attributes = prop_dict
assert feature.attributes == prop_dict
    # KeyError on a nonexistent property
with pytest.raises(KeyError):
foo = feature.attributes['doesnotexist']
# Type errors on invalid key types
with pytest.raises(TypeError):
feature.attributes[1.234] = True
with pytest.raises(TypeError):
feature.attributes[1] = True
with pytest.raises(TypeError):
foo = feature.attributes[1.234]
with pytest.raises(TypeError):
foo = feature.attributes[1]
dvalues = [1.0, 2.0, None, 2.3, 4.3, None, None, 15.0, 22.5]
scaling1 = layer.add_attribute_scaling(precision=10.0**-6, min_value=1.0, max_value=25.0)
assert scaling1.index == 0
assert scaling1.offset == 0
assert scaling1.base == 1.0
assert scaling1.multiplier == 9.5367431640625e-07
# roughly 10**-8 precision
scaling2 = layer.add_attribute_scaling(offset=10, base=8.0, multiplier=7.450580596923828e-09)
assert scaling2.index == 1
assert scaling2.offset == 10
assert scaling2.base == 8.0
assert scaling2.multiplier == 7.450580596923828e-09
flist1 = FloatList(scaling1, dvalues)
flist2 = FloatList(scaling2, dvalues)
    # When setting attributes, entries with invalid keys or value types are silently dropped
prop_dict = {
'foo': [1,2,3],
'fee': [{'a':'b'}, {'a':['c','d']}],
1.2341: 'stuff',
1: 'fish',
'go': False,
'double': 2.32432,
'float': Float(23432.3222),
'doubleList': flist1,
'otherDoubleList': flist2
}
prop_dict2 = {
'foo': [1,2,3],
'fee': [{'a':'b'}, {'a':['c','d']}],
'go': False,
'double': 2.32432,
'float': Float(23432.3222),
'doubleList': flist1,
'otherDoubleList': flist2
}
geometric_dict = {
'now': [1,2,4,5,234],
'stuff': [{'x':12}, None, None, None, {'y':13}],
'dlist': flist2,
'not': None,
'this': 8,
'or': True,
'that': { 'lost':'values'}
}
geometric_dict2 = {
'now': [1,2,4,5,234],
'stuff': [{'x':12}, None, None, None, {'y':13}],
'dlist': flist2
}
feature.attributes = prop_dict
assert feature.attributes != prop_dict
assert feature.attributes == prop_dict2
feature.geometric_attributes = geometric_dict
assert feature.geometric_attributes == geometric_dict2
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.attributes['foo'] == prop_dict2['foo']
assert feature.attributes['fee'] == prop_dict2['fee']
assert feature.attributes['go'] == prop_dict2['go']
assert feature.attributes['double'] == prop_dict2['double']
    # note: the change is expected due to 32-bit float encoding
assert feature.attributes['float'] == 23432.322265625
# specialized double encoding with scaling should result in approximate equality
assert isinstance(feature.attributes['doubleList'], list)
dlist = feature.attributes['doubleList']
for i in range(len(dlist)):
if dvalues[i] is None:
assert dlist[i] is None
else:
assert abs(dvalues[i] - dlist[i]) < 10.0**-6
assert isinstance(feature.attributes['otherDoubleList'], list)
dlist = feature.attributes['otherDoubleList']
for i in range(len(dlist)):
if dvalues[i] is None:
assert dlist[i] is None
else:
assert abs(dvalues[i] - dlist[i]) < 10.0**-8
assert feature.geometric_attributes['now'] == geometric_dict2['now']
assert feature.geometric_attributes['stuff'] == geometric_dict2['stuff']
dlist = feature.geometric_attributes['dlist']
for i in range(len(dlist)):
if dvalues[i] is None:
assert dlist[i] is None
else:
assert abs(dvalues[i] - dlist[i]) < 10.0**-8
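About the expected scaling constants above: 9.5367431640625e-07 is exactly 2**-20 and 7.450580596923828e-09 is exactly 2**-27, consistent with picking the largest power of two not exceeding the requested precision. That rule is an inference from the expected values in these tests, not a documented guarantee of the library:

```python
import math

def pow2_multiplier(precision):
    # Largest power of two that does not exceed the requested precision
    # (assumed rule, inferred from the multipliers the tests expect).
    return 2.0 ** math.floor(math.log2(precision))

assert pow2_multiplier(1e-6) == 9.5367431640625e-07   # 2 ** -20
assert pow2_multiplier(1e-8) == 7.450580596923828e-09  # 2 ** -27
```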
def test_create_point_feature():
vt = VectorTile()
layer = vt.add_layer('test')
feature = layer.add_point_feature()
assert isinstance(feature, PointFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert not feature.has_elevation
feature.add_points([10,11])
geometry = feature.get_points()
assert geometry[0] == [10,11]
    # add_points simply appends to the end
feature.add_points([10,12])
feature.add_points([10,13])
geometry = feature.get_points()
assert geometry[1] == [10,12]
assert geometry[2] == [10,13]
# clear current geometry
feature.clear_geometry()
assert feature.get_points() == []
    # This is the proper way to add multiple points
feature.add_points([[10,11],[10,12],[10,13]])
assert feature.get_points() == [[10,11],[10,12],[10,13]]
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
feature.add_points([10,14])
assert feature.get_points() == [[10,11],[10,12],[10,13],[10,14]]
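For context on the geometry round trips above: the MVT wire format stores coordinates as zigzag-encoded deltas, so small negative and positive moves both become small varints. This is the spec's transform as a standalone sketch, not a function exposed by this library:

```python
def zigzag(n):
    # Interleave signed values onto the non-negative integers:
    # 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return (n << 1) ^ (n >> 63)

def unzigzag(z):
    return (z >> 1) ^ (-(z & 1))

assert [zigzag(v) for v in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
assert all(unzigzag(zigzag(v)) == v for v in range(-1000, 1000))
```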
def test_create_point_feature_3d():
vt = VectorTile()
layer = vt.add_layer('test')
    # Should fail because this is a version 2 layer
with pytest.raises(Exception):
feature = layer.add_point_feature(has_elevation=True)
vt = VectorTile()
layer = vt.add_layer('test', version=3)
feature = layer.add_point_feature(has_elevation=True)
assert isinstance(feature, PointFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert feature.has_elevation
    # Should fail when adding a 2-coordinate point to a 3D feature
with pytest.raises(IndexError):
feature.add_points([10,11])
feature.add_points([10,11,12])
geometry = feature.get_points()
assert geometry[0] == [10,11,12]
    # add_points simply appends to the end
feature.add_points([10,12,13])
feature.add_points([10,13,14])
geometry = feature.get_points()
assert geometry[1] == [10,12,13]
assert geometry[2] == [10,13,14]
# clear current geometry
feature.clear_geometry()
assert feature.get_points() == []
    # This is the proper way to add multiple points
feature.add_points([[10,11,12],[10,12,13],[10,13,14]])
assert feature.get_points() == [[10,11,12],[10,12,13],[10,13,14]]
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
feature.add_points([10,14,15])
assert feature.get_points() == [[10,11,12],[10,12,13],[10,13,14],[10,14,15]]
def test_create_line_feature():
vt = VectorTile()
layer = vt.add_layer('test')
feature = layer.add_line_string_feature()
assert isinstance(feature, LineStringFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert not feature.has_elevation
line_string = [[10,11],[10,12],[10,13],[10,14]]
feature.add_line_string(line_string)
    # note that get_line_strings returns a list of (possibly multiple) line strings
assert feature.get_line_strings() == [line_string]
bad_line_string = [[1,1]]
with pytest.raises(Exception):
feature.add_line_string(bad_line_string)
assert feature.get_line_strings() == [line_string]
bad_line_string2 = [[1,1], [0]]
with pytest.raises(IndexError):
feature.add_line_string(bad_line_string2)
assert feature.get_line_strings() == [line_string]
line_string2 = [[9,9],[30,5]]
feature.add_line_string(line_string2)
assert feature.get_line_strings() == [line_string, line_string2]
# clear current geometry
feature.clear_geometry()
assert feature.get_line_strings() == []
feature.add_line_string(line_string)
assert feature.get_line_strings() == [line_string]
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
feature.add_line_string(line_string2)
assert feature.get_line_strings() == [line_string, line_string2]
def test_create_line_feature_3d():
vt = VectorTile()
layer = vt.add_layer('test')
    # Should raise because this is a version 2 layer
with pytest.raises(Exception):
feature = layer.add_line_string_feature(has_elevation=True)
vt = VectorTile()
layer = vt.add_layer('test', version=3)
feature = layer.add_line_string_feature(has_elevation=True)
assert isinstance(feature, LineStringFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert feature.has_elevation
line_string = [[10,11,12],[10,12,13],[10,13,14],[10,14,15]]
feature.add_line_string(line_string)
    # note that get_line_strings returns a list of (possibly multiple) line strings
assert feature.get_line_strings() == [line_string]
bad_line_string = [[1,1,1]]
with pytest.raises(Exception):
feature.add_line_string(bad_line_string)
assert feature.get_line_strings() == [line_string]
bad_line_string2 = [[1,1],[2,2]]
with pytest.raises(IndexError):
feature.add_line_string(bad_line_string2)
assert feature.get_line_strings() == [line_string]
line_string2 = [[9,9,9],[30,5,9]]
feature.add_line_string(line_string2)
assert feature.get_line_strings() == [line_string, line_string2]
# clear current geometry
feature.clear_geometry()
assert feature.get_line_strings() == []
feature.add_line_string(line_string)
assert feature.get_line_strings() == [line_string]
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
feature.add_line_string(line_string2)
assert feature.get_line_strings() == [line_string, line_string2]
def test_create_polygon_feature():
vt = VectorTile()
layer = vt.add_layer('test')
feature = layer.add_polygon_feature()
assert isinstance(feature, PolygonFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert not feature.has_elevation
polygon = [[[0,0],[10,0],[10,10],[0,10],[0,0]],[[3,3],[3,5],[5,5],[3,3]]]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
bad_ring1 = [[0,0],[1,0]]
with pytest.raises(Exception):
        feature.add_ring(bad_ring1)
assert feature.get_polygons() == [polygon]
bad_ring2 = [[0,0],[1,0],[0,0]]
with pytest.raises(Exception):
feature.add_ring(bad_ring2)
assert feature.get_polygons() == [polygon]
bad_ring3 = [[0,0],[1,0],[1,1],[1],[0,0]]
with pytest.raises(IndexError):
feature.add_ring(bad_ring3)
assert feature.get_polygons() == [polygon]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_polygons() == [polygon, polygon]
# clear current geometry
feature.clear_geometry()
assert feature.get_rings() == []
assert feature.get_polygons() == []
# Add in opposite order
feature.add_ring(polygon[1])
feature.add_ring(polygon[0])
assert feature.get_rings() == [polygon[1], polygon[0]]
    # First ring is in the wrong winding order, so it is dropped from the polygon output
assert feature.get_polygons() == [[polygon[0]]]
# clear current geometry
feature.clear_geometry()
assert feature.get_rings() == []
assert feature.get_polygons() == []
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
# Now serialize the tile
data = vt.serialize()
    # Reload as a new tile to check that the cursor resumes at the proper position
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_polygons() == [polygon, polygon]
def test_create_polygon_feature_3d():
vt = VectorTile()
layer = vt.add_layer('test')
    # Should not be allowed with a version 2 layer
with pytest.raises(Exception):
feature = layer.add_polygon_feature(has_elevation=True)
vt = VectorTile()
layer = vt.add_layer('test', version=3)
feature = layer.add_polygon_feature(has_elevation=True)
assert isinstance(feature, PolygonFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert feature.has_elevation
polygon = [[[0,0,1],[10,0,1],[10,10,1],[0,10,1],[0,0,1]],[[3,3,1],[3,5,1],[5,5,1],[3,3,1]]]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
bad_ring1 = [[0,0,1],[1,0,1]]
with pytest.raises(Exception):
        feature.add_ring(bad_ring1)
assert feature.get_polygons() == [polygon]
bad_ring2 = [[0,0,1],[1,0,1],[0,0,1]]
with pytest.raises(Exception):
feature.add_ring(bad_ring2)
assert feature.get_polygons() == [polygon]
bad_ring3 = [[0,0,1],[1,0,1],[1,1,1],[1,1],[0,0,1]]
with pytest.raises(IndexError):
feature.add_ring(bad_ring3)
assert feature.get_polygons() == [polygon]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_polygons() == [polygon, polygon]
# clear current geometry
feature.clear_geometry()
assert feature.get_rings() == []
assert feature.get_polygons() == []
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
# Now serialize the tile
data = vt.serialize()
# Reload as a new tile to check that the cursor moves to the proper position for another add
vt = VectorTile(data)
feature = vt.layers[0].features[0]
assert feature.get_rings() == polygon
assert feature.get_polygons() == [polygon]
feature.add_ring(polygon[0])
feature.add_ring(polygon[1])
assert feature.get_polygons() == [polygon, polygon]
def test_create_spline_feature_fail_v2():
vt = VectorTile()
layer = vt.add_layer('test')
with pytest.raises(Exception):
feature = layer.add_spline_feature()
def test_create_spline_feature():
vt = VectorTile()
layer = vt.add_layer('test', version=3)
feature = layer.add_spline_feature()
assert isinstance(feature, SplineFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert not feature.has_elevation
assert feature.degree == 2
scaling = layer.add_attribute_scaling(precision=10.0**-8, min_value=0.0, max_value=25.0)
bad_control_points1 = [[8,10]]
knot_values = [0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0]
knots = FloatList(scaling, knot_values)
with pytest.raises(Exception):
feature.add_spline(bad_control_points1, knots)
bad_control_points2 = [[8,10],[9,11],[9],[12,10]]
with pytest.raises(IndexError):
feature.add_spline(bad_control_points2, knots)
control_points = [[8,10],[9,11],[11,9],[12,10]]
feature.add_spline(control_points, knots)
assert feature.get_splines() == [[control_points, knot_values]]
def test_create_spline_feature_3d():
vt = VectorTile()
layer = vt.add_layer('test', version=3)
feature = layer.add_spline_feature(has_elevation=True)
assert isinstance(feature, SplineFeature)
assert len(layer.features) == 1
assert feature == layer.features[0]
assert feature.has_elevation
assert feature.degree == 2
scaling = layer.add_attribute_scaling(precision=10.0**-8, min_value=0.0, max_value=25.0)
bad_control_points1 = [[8,10,1]]
knot_values = [0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0]
knots = FloatList(scaling, knot_values)
with pytest.raises(Exception):
feature.add_spline(bad_control_points1, knots)
bad_control_points2 = [[8,10,1],[9,11,1],[9,1],[12,10,1]]
with pytest.raises(IndexError):
feature.add_spline(bad_control_points2, knots)
control_points = [[8,10,1],[9,11,1],[11,9,1],[12,10,1]]
feature.add_spline(control_points, knots)
assert feature.get_splines() == [[control_points, knot_values]]
def test_create_spline_feature_3d_with_elevation_scaling():
vt = VectorTile()
layer = vt.add_layer('test', version=3)
scaling = layer.add_attribute_scaling(precision=10.0**-8, min_value=0.0, max_value=25.0)
knot_values = [0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0]
knots = FloatList(scaling, knot_values)
control_points = [[8,10,-11000.0],[9,11,9000.0],[11,9,85.1],[12,10,1500.74]]
layer.add_elevation_scaling(precision=10.0**-8, min_value=-11000.0, max_value=9000.0)
with pytest.raises(Exception):
feature = layer.add_spline_feature(has_elevation=True)
feature.add_spline(control_points, knots)
feature = layer.add_spline_feature(has_elevation=True)
layer.add_elevation_scaling(precision=10.0**-5, min_value=-11000.0, max_value=9000.0)
feature.add_spline(control_points, knots)
out = feature.get_splines()
assert len(out) == 1
assert len(out[0]) == 2
out_control_points = out[0][0]
out_knot_values = out[0][1]
assert out_knot_values == knot_values
assert len(control_points) == len(out_control_points)
for i in range(len(control_points)):
assert len(control_points[i]) == len(out_control_points[i])
assert out_control_points[i][0] == control_points[i][0]
assert out_control_points[i][1] == control_points[i][1]
assert out_control_points[i][2] == pytest.approx(control_points[i][2])
def test_float():
x1 = Float(293.045998633665)
x2 = Float(293.045998634)
x3 = Float(293.045998633748)
vm = vector_tile_pb2.Tile.Value()
vm.float_value = 293.045998633665
x1_encoded = vm.float_value
vm.float_value = 293.045998634
x2_encoded = vm.float_value
vm.float_value = 293.045998633748
x3_encoded = vm.float_value
assert x1 == x2
assert x1 == x3
assert x1_encoded == x2_encoded
assert x1_encoded == x3_encoded
assert x1 == x1_encoded
assert x2 == x2_encoded
assert x3 == x3_encoded
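The equalities asserted above hold because a protobuf `float` field stores single precision: all three doubles fall inside the same float32 ULP near 293.046, so they encode to the same value. A standalone sketch of that rounding using only the standard library (`to_float32` is a hypothetical helper, not part of this suite):

```python
import struct

def to_float32(x):
    # Round-trip a Python float (64-bit) through IEEE-754 single precision.
    return struct.unpack('<f', struct.pack('<f', x))[0]

# The three doubles differ by ~1e-10, far below the float32 ULP (~3e-5 here),
# so they all round to the same single-precision value.
assert to_float32(293.045998633665) == to_float32(293.045998634)
assert to_float32(293.045998633665) == to_float32(293.045998633748)
```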
from genmod.annotate_models.models import check_X_dominant
from genmod.vcf_tools import Genotype
from ped_parser import FamilyParser
FAMILY_FILE = "tests/fixtures/recessive_trio.ped"
def get_family(family_file = None, family_lines = None):
"""Return a family object
"""
family = None
if family_file:
family = FamilyParser(open(family_file, 'r'))
elif family_lines:
family = FamilyParser(family_lines)
return family
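The fixtures below are single-sample families written as PED lines. Following the PED convention (inferred here from the column header and the test names, not from `ped_parser` itself): Sex is 1 = male, 2 = female; Phenotype is 1 = unaffected, 2 = affected; a `0` in the Father/Mother columns means that parent is not in the pedigree. A minimal standalone illustration of the line format:

```python
# PED columns: FamilyID  SampleID  Father  Mother  Sex  Phenotype
line = "1\tproband\t0\t0\t1\t2\n"
fam_id, sample, father, mother, sex, phenotype = line.strip().split("\t")
assert sample == "proband"
assert (sex, phenotype) == ("1", "2")  # an affected male with unknown parents
```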
################# Test affected ###############
def test_x_affected_recessive_male():
"""Test a sick male
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t2\n"
]
family = get_family(family_lines=family_lines)
recessive_variant = {'genotypes': {}}
recessive_variant['genotypes']['proband'] = Genotype(**{'GT':'0/1'})
assert check_X_dominant(
variant = recessive_variant,
family = family
) == True
def test_x_affected_recessive_female():
"""Test a sick heterozygote female
Females need to be hom alt to follow the pattern
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t2\n"
]
family = get_family(family_lines=family_lines)
recessive_variant = {'genotypes': {}}
recessive_variant['genotypes']['proband'] = Genotype(**{'GT':'0/1'})
assert check_X_dominant(
variant = recessive_variant,
family = family
) == True
def test_x_affected_homozygote_male():
"""Test an affected homozygote male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t2\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'1/1'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == True
def test_x_affected_homozygote_female():
"""Test an affected homozygote male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t2\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'1/1'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == True
def test_x_affected_male_ref_call():
"""Test an affected ref call male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t2\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'0/0'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == False
def test_x_affected_female_ref_call():
"""Test an affected ref call male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t2\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'0/0'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == False
def test_x_affected_no_call_male():
"""Test a sick male with no gt call
This should be true since there is no information that contradicts the model
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t2\n"
]
family = get_family(family_lines=family_lines)
no_call_variant = {'genotypes': {}}
no_call_variant['genotypes']['proband'] = Genotype(**{'GT':'./.'})
assert check_X_dominant(
variant = no_call_variant,
family = family
) == True
def test_x_affected_no_call_male_strict():
"""Test a sick male with no gt call
This should not be true since we always need 'proof'
for an inheritance pattern in strict mode.
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t2\n"
]
family = get_family(family_lines=family_lines)
no_call_variant = {'genotypes': {}}
no_call_variant['genotypes']['proband'] = Genotype(**{'GT':'./.'})
assert check_X_dominant(
variant = no_call_variant,
family = family,
strict = True
) == False
############### Test healthy ##############
def test_x_healthy_recessive_male():
"""Test a healthy recessive male
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t1\n"
]
family = get_family(family_lines=family_lines)
recessive_variant = {'genotypes': {}}
recessive_variant['genotypes']['proband'] = Genotype(**{'GT':'0/1'})
assert check_X_dominant(
variant = recessive_variant,
family = family
) == False
def test_x_healthy_recessive_female():
"""Test a healthy heterozygote female
Females need to be hom alt to follow the pattern
"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t1\n"
]
family = get_family(family_lines=family_lines)
recessive_variant = {'genotypes': {}}
recessive_variant['genotypes']['proband'] = Genotype(**{'GT':'0/1'})
assert check_X_dominant(
variant = recessive_variant,
family = family
) == True
def test_x_healthy_homozygote_male():
"""Test an healthy homozygote male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t1\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'1/1'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == False
def test_x_healthy_homozygote_female():
"""Test an healthy homozygote female"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t1\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'1/1'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == False
def test_x_healthy_male_ref_call():
"""Test an healthy ref call male"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t1\t1\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'0/0'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == True
def test_x_healthy_female_ref_call():
"""Test an healthy female ref call"""
family_lines = [
"#FamilyID\tSampleID\tFather\tMother\tSex\tPhenotype\n",
"1\tproband\t0\t0\t2\t1\n"
]
family = get_family(family_lines=family_lines)
homozygote_variant = {'genotypes': {}}
homozygote_variant['genotypes']['proband'] = Genotype(**{'GT':'0/0'})
assert check_X_dominant(
variant = homozygote_variant,
family = family
) == True
from getratings.models.ratings import Ratings
class NA_Olaf_Sup_Aatrox(Ratings):
pass
class NA_Olaf_Sup_Ahri(Ratings):
pass
class NA_Olaf_Sup_Akali(Ratings):
pass
class NA_Olaf_Sup_Alistar(Ratings):
pass
class NA_Olaf_Sup_Amumu(Ratings):
pass
class NA_Olaf_Sup_Anivia(Ratings):
pass
class NA_Olaf_Sup_Annie(Ratings):
pass
class NA_Olaf_Sup_Ashe(Ratings):
pass
class NA_Olaf_Sup_AurelionSol(Ratings):
pass
class NA_Olaf_Sup_Azir(Ratings):
pass
class NA_Olaf_Sup_Bard(Ratings):
pass
class NA_Olaf_Sup_Blitzcrank(Ratings):
pass
class NA_Olaf_Sup_Brand(Ratings):
pass
class NA_Olaf_Sup_Braum(Ratings):
pass
class NA_Olaf_Sup_Caitlyn(Ratings):
pass
class NA_Olaf_Sup_Camille(Ratings):
pass
class NA_Olaf_Sup_Cassiopeia(Ratings):
pass
class NA_Olaf_Sup_Chogath(Ratings):
pass
class NA_Olaf_Sup_Corki(Ratings):
pass
class NA_Olaf_Sup_Darius(Ratings):
pass
class NA_Olaf_Sup_Diana(Ratings):
pass
class NA_Olaf_Sup_Draven(Ratings):
pass
class NA_Olaf_Sup_DrMundo(Ratings):
pass
class NA_Olaf_Sup_Ekko(Ratings):
pass
class NA_Olaf_Sup_Elise(Ratings):
pass
class NA_Olaf_Sup_Evelynn(Ratings):
pass
class NA_Olaf_Sup_Ezreal(Ratings):
pass
class NA_Olaf_Sup_Fiddlesticks(Ratings):
pass
class NA_Olaf_Sup_Fiora(Ratings):
pass
class NA_Olaf_Sup_Fizz(Ratings):
pass
class NA_Olaf_Sup_Galio(Ratings):
pass
class NA_Olaf_Sup_Gangplank(Ratings):
pass
class NA_Olaf_Sup_Garen(Ratings):
pass
class NA_Olaf_Sup_Gnar(Ratings):
pass
class NA_Olaf_Sup_Gragas(Ratings):
pass
class NA_Olaf_Sup_Graves(Ratings):
pass
class NA_Olaf_Sup_Hecarim(Ratings):
pass
class NA_Olaf_Sup_Heimerdinger(Ratings):
pass
class NA_Olaf_Sup_Illaoi(Ratings):
pass
class NA_Olaf_Sup_Irelia(Ratings):
pass
class NA_Olaf_Sup_Ivern(Ratings):
pass
class NA_Olaf_Sup_Janna(Ratings):
pass
class NA_Olaf_Sup_JarvanIV(Ratings):
pass
class NA_Olaf_Sup_Jax(Ratings):
pass
class NA_Olaf_Sup_Jayce(Ratings):
pass
class NA_Olaf_Sup_Jhin(Ratings):
pass
class NA_Olaf_Sup_Jinx(Ratings):
pass
class NA_Olaf_Sup_Kalista(Ratings):
pass
class NA_Olaf_Sup_Karma(Ratings):
pass
class NA_Olaf_Sup_Karthus(Ratings):
pass
class NA_Olaf_Sup_Kassadin(Ratings):
pass
class NA_Olaf_Sup_Katarina(Ratings):
pass
class NA_Olaf_Sup_Kayle(Ratings):
pass
class NA_Olaf_Sup_Kayn(Ratings):
pass
class NA_Olaf_Sup_Kennen(Ratings):
pass
class NA_Olaf_Sup_Khazix(Ratings):
pass
class NA_Olaf_Sup_Kindred(Ratings):
pass
class NA_Olaf_Sup_Kled(Ratings):
pass
class NA_Olaf_Sup_KogMaw(Ratings):
pass
class NA_Olaf_Sup_Leblanc(Ratings):
pass
class NA_Olaf_Sup_LeeSin(Ratings):
pass
class NA_Olaf_Sup_Leona(Ratings):
pass
class NA_Olaf_Sup_Lissandra(Ratings):
pass
class NA_Olaf_Sup_Lucian(Ratings):
pass
class NA_Olaf_Sup_Lulu(Ratings):
pass
class NA_Olaf_Sup_Lux(Ratings):
pass
class NA_Olaf_Sup_Malphite(Ratings):
pass
class NA_Olaf_Sup_Malzahar(Ratings):
pass
class NA_Olaf_Sup_Maokai(Ratings):
pass
class NA_Olaf_Sup_MasterYi(Ratings):
pass
class NA_Olaf_Sup_MissFortune(Ratings):
pass
class NA_Olaf_Sup_MonkeyKing(Ratings):
pass
class NA_Olaf_Sup_Mordekaiser(Ratings):
pass
class NA_Olaf_Sup_Morgana(Ratings):
pass
class NA_Olaf_Sup_Nami(Ratings):
pass
class NA_Olaf_Sup_Nasus(Ratings):
pass
class NA_Olaf_Sup_Nautilus(Ratings):
pass
class NA_Olaf_Sup_Nidalee(Ratings):
pass
class NA_Olaf_Sup_Nocturne(Ratings):
pass
class NA_Olaf_Sup_Nunu(Ratings):
pass
class NA_Olaf_Sup_Olaf(Ratings):
pass
class NA_Olaf_Sup_Orianna(Ratings):
pass
class NA_Olaf_Sup_Ornn(Ratings):
pass
class NA_Olaf_Sup_Pantheon(Ratings):
pass
class NA_Olaf_Sup_Poppy(Ratings):
pass
class NA_Olaf_Sup_Quinn(Ratings):
pass
class NA_Olaf_Sup_Rakan(Ratings):
pass
class NA_Olaf_Sup_Rammus(Ratings):
pass
class NA_Olaf_Sup_RekSai(Ratings):
pass
class NA_Olaf_Sup_Renekton(Ratings):
pass
class NA_Olaf_Sup_Rengar(Ratings):
pass
class NA_Olaf_Sup_Riven(Ratings):
pass
class NA_Olaf_Sup_Rumble(Ratings):
pass
class NA_Olaf_Sup_Ryze(Ratings):
pass
class NA_Olaf_Sup_Sejuani(Ratings):
pass
class NA_Olaf_Sup_Shaco(Ratings):
pass
class NA_Olaf_Sup_Shen(Ratings):
pass
class NA_Olaf_Sup_Shyvana(Ratings):
pass
class NA_Olaf_Sup_Singed(Ratings):
pass
class NA_Olaf_Sup_Sion(Ratings):
pass
class NA_Olaf_Sup_Sivir(Ratings):
pass
class NA_Olaf_Sup_Skarner(Ratings):
pass
class NA_Olaf_Sup_Sona(Ratings):
pass
class NA_Olaf_Sup_Soraka(Ratings):
pass
class NA_Olaf_Sup_Swain(Ratings):
pass
class NA_Olaf_Sup_Syndra(Ratings):
pass
class NA_Olaf_Sup_TahmKench(Ratings):
pass
class NA_Olaf_Sup_Taliyah(Ratings):
pass
class NA_Olaf_Sup_Talon(Ratings):
pass
class NA_Olaf_Sup_Taric(Ratings):
pass
class NA_Olaf_Sup_Teemo(Ratings):
pass
class NA_Olaf_Sup_Thresh(Ratings):
pass
class NA_Olaf_Sup_Tristana(Ratings):
pass
class NA_Olaf_Sup_Trundle(Ratings):
pass
class NA_Olaf_Sup_Tryndamere(Ratings):
pass
class NA_Olaf_Sup_TwistedFate(Ratings):
pass
class NA_Olaf_Sup_Twitch(Ratings):
pass
class NA_Olaf_Sup_Udyr(Ratings):
pass
class NA_Olaf_Sup_Urgot(Ratings):
pass
class NA_Olaf_Sup_Varus(Ratings):
pass
class NA_Olaf_Sup_Vayne(Ratings):
pass
class NA_Olaf_Sup_Veigar(Ratings):
pass
class NA_Olaf_Sup_Velkoz(Ratings):
pass
class NA_Olaf_Sup_Vi(Ratings):
pass
class NA_Olaf_Sup_Viktor(Ratings):
pass
class NA_Olaf_Sup_Vladimir(Ratings):
pass
class NA_Olaf_Sup_Volibear(Ratings):
pass
class NA_Olaf_Sup_Warwick(Ratings):
pass
class NA_Olaf_Sup_Xayah(Ratings):
pass
class NA_Olaf_Sup_Xerath(Ratings):
pass
class NA_Olaf_Sup_XinZhao(Ratings):
pass
class NA_Olaf_Sup_Yasuo(Ratings):
pass
class NA_Olaf_Sup_Yorick(Ratings):
pass
class NA_Olaf_Sup_Zac(Ratings):
pass
class NA_Olaf_Sup_Zed(Ratings):
pass
class NA_Olaf_Sup_Ziggs(Ratings):
pass
class NA_Olaf_Sup_Zilean(Ratings):
pass
class NA_Olaf_Sup_Zyra(Ratings):
pass
from dearpygui.dearpygui import *
from editor.ui import Buttons
def load():
pass
# -*- coding: utf-8 -*-
"""
@Remark: Custom image upload
"""
import os
import datetime
from django.conf import settings
from utils.common import renameuploadimg,getfulldomian
from config import DOMAIN_HOST
def ImageUpload(request,dirs):
"""
request: the incoming request
dirs: target directory to upload into
"""
image = request.data.getlist('file')
msg = {}
if not image:
msg['code'] = 400
msg['msg'] = "The uploaded image cannot be empty"
return msg
notimg_file = []
img_file = []
try:
# Multi-image upload; a single image also works
for img in image:
img_name = img.name
# Check the image content-type
if not img.content_type.startswith('image/'):
msg['code'] = 400
msg['msg'] = "Please upload a valid image format"
return msg
if not img_name.endswith(
('.jpg', '.jpeg', '.png', 'gif', '.bmp', '.JPG', '.JPEG', '.PNG', 'GIF', '.BMP')):
notimg_file.append(img_name)
if img.size > 1024 * 50000:
msg['code'] = 400
msg['msg'] = "Image size cannot exceed 50M"
return msg
else:
curr_time = datetime.datetime.now()
image_name = renameuploadimg(img_name)
time_path = curr_time.strftime("%Y-%m-%d")
img_task_dir = dirs # upload path matching the one in models
sub_path = os.path.join(settings.MEDIA_ROOT, img_task_dir, time_path)
if not os.path.exists(sub_path):
os.makedirs(sub_path)
image_path = os.path.join(sub_path, image_name)
# web_img_url = settings.MEDIA_URL + img_task_dir + "/" + time_path + "/" + image_name # relative path /media/xxx/xxxx/xxx.png
web_img_url = DOMAIN_HOST+settings.MEDIA_URL + img_task_dir + "/" + time_path + "/" + image_name # absolute path http://xxx.xxx.com/media/xxx/xxxx/xxx.png
f = open(image_path, 'wb')
for i in img.chunks():
f.write(i)
f.close()
img_file.append(web_img_url)
if notimg_file:
msg['code'] = 400
msg['msg'] = 'Please check for unsupported images; failed files include: {0}'.format(','.join(notimg_file[:10]))
return msg
msg['code'] = 200
msg['img'] = img_file#['/media/xxx/xxx/xxx.png']
msg['msg'] = 'Upload succeeded'
return msg
except Exception as e:
msg['code'] = 400
msg['msg'] = 'Image upload failed'
return msg
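The path layout the helper produces is `MEDIA_ROOT/<dirs>/<YYYY-MM-DD>/<renamed name>`, exposed under the absolute URL `DOMAIN_HOST + MEDIA_URL + ...`. A standalone sketch of that layout (the `MEDIA_ROOT`/`MEDIA_URL`/`DOMAIN_HOST` values and the uuid-based rename below are made-up stand-ins for the project settings and `renameuploadimg`, not this module's real code):

```python
import datetime
import os
import uuid

MEDIA_ROOT = "/var/www/media"        # stand-in for settings.MEDIA_ROOT
MEDIA_URL = "/media/"                # stand-in for settings.MEDIA_URL
DOMAIN_HOST = "http://example.com"   # stand-in for config.DOMAIN_HOST

def build_upload_paths(dirs, original_name, now=None):
    # Mirror the helper: a date-stamped subdirectory plus a unique file name.
    now = now or datetime.datetime.now()
    ext = os.path.splitext(original_name)[1]
    new_name = uuid.uuid4().hex + ext          # stand-in for renameuploadimg()
    time_path = now.strftime("%Y-%m-%d")
    disk_path = os.path.join(MEDIA_ROOT, dirs, time_path, new_name)
    web_url = DOMAIN_HOST + MEDIA_URL + dirs + "/" + time_path + "/" + new_name
    return disk_path, web_url

disk, url = build_upload_paths("avatar", "photo.png",
                               now=datetime.datetime(2023, 1, 2))
assert disk.startswith("/var/www/media/avatar/2023-01-02/")
assert url.startswith("http://example.com/media/avatar/2023-01-02/")
assert disk.endswith(".png") and url.endswith(".png")
```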
def ImageUpload2(request,paramsname,dirs):
"""
Fetch the uploaded files by the given parameter name
request: the incoming request
paramsname: the field name of the submitted formData
dirs: target directory to upload into
"""
image = request.data.getlist(paramsname)
msg = {}
if not image:
msg['code'] = 400
msg['msg'] = "The uploaded image cannot be empty"
return msg
notimg_file = []
img_file = []
try:
# Multi-image upload; a single image also works
for img in image:
img_name = img.name
# Check the image content-type
if not img.content_type.startswith('image/'):
msg['code'] = 400
msg['msg'] = "Please upload a valid image format"
return msg
if not img_name.endswith(
('.jpg', '.jpeg', '.png', 'gif', '.bmp', '.JPG', '.JPEG', '.PNG', 'GIF', '.BMP')):
notimg_file.append(img_name)
if img.size > 1024 * 50000:
msg['code'] = 400
msg['msg'] = "Image size cannot exceed 50M"
return msg
else:
curr_time = datetime.datetime.now()
image_name = renameuploadimg(img_name)
time_path = curr_time.strftime("%Y-%m-%d")
img_task_dir = dirs # upload path matching the one in models
sub_path = os.path.join(settings.MEDIA_ROOT, img_task_dir, time_path)
if not os.path.exists(sub_path):
os.makedirs(sub_path)
image_path = os.path.join(sub_path, image_name)
# web_img_url = settings.MEDIA_URL + img_task_dir + "/" + time_path + "/" + image_name # relative path /media/xxx/xxxx/xxx.png
web_img_url = DOMAIN_HOST+settings.MEDIA_URL + img_task_dir + "/" + time_path + "/" + image_name # absolute path http://xxx.xxx.com/media/xxx/xxxx/xxx.png
f = open(image_path, 'wb')
for i in img.chunks():
f.write(i)
f.close()
img_file.append(web_img_url)
if notimg_file:
msg['code'] = 400
msg['msg'] = 'Please check for unsupported images; failed files include: {0}'.format(','.join(notimg_file[:10]))
return msg
msg['code'] = 200
msg['img'] = img_file#['/media/xxx/xxx/xxx.png']
msg['msg'] = 'Upload succeeded'
return msg
except Exception as e:
msg['code'] = 400
msg['msg'] = 'Image upload failed'
return msg
# -*- python -*-
# This software was produced by NIST, an agency of the U.S. government,
# and by statute is not subject to copyright in the United States.
# Recipients of this software assume all responsibilities associated
# with its operation, modification and maintenance. However, to
# facilitate maintenance we ask that before distributing modified
# versions of this software, you first contact the authors at
# oof_manager@nist.gov.
## The "skeleton selection page grouplist" checkpoint was added by
## hand after this test was recorded, and is probably not present
## everywhere it ought to be.
import tests
# This is the basic skeleton test case, in which we check the skeleton
# selection group manipulations.
# This test also exercises everything related to the Element selection mode case.
findWidget('OOF3D').resize(550, 350)
findMenu(findWidget('OOF3D:MenuBar'), 'Windows:Graphics:New').activate()
checkpoint Move Node toolbox info updated
checkpoint toplevel widget mapped OOF3D Graphics 1
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame').size_allocate(gtk.gdk.Rectangle(0, 29, 380, 705))
checkpoint OOF.Windows.Graphics.New
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame').size_allocate(gtk.gdk.Rectangle(0, 29, 380, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame').size_allocate(gtk.gdk.Rectangle(0, 29, 380, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1').resize(1000, 800)
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Microstructure')
checkpoint page installed Microstructure
findWidget('OOF3D:Microstructure Page:Pane').set_position(225)
findWidget('OOF3D:Microstructure Page:New').clicked()
checkpoint toplevel widget mapped Dialog-Create Microstructure
findWidget('Dialog-Create Microstructure').resize(315, 199)
checkpoint meshable button set
findWidget('Dialog-Create Microstructure:gtk-ok').clicked()
findWidget('OOF3D:Microstructure Page:Pane').set_position(159)
checkpoint microstructure page sensitized
checkpoint meshable button set
checkpoint microstructure page sensitized
checkpoint meshable button set
checkpoint meshable button set
checkpoint microstructure page sensitized
checkpoint pixel page updated
checkpoint active area status updated
checkpoint microstructure page sensitized
checkpoint meshable button set
checkpoint Field page sensitized
checkpoint Materials page updated
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page sensitized
checkpoint mesh page sensitized
checkpoint pinnodes page sensitized
checkpoint boundary page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint Solver page sensitized
checkpoint OOF.Microstructure.New
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Skeleton')
checkpoint page installed Skeleton
findWidget('OOF3D').resize(601, 357)
findWidget('OOF3D:Skeleton Page:Pane').set_position(250)
checkpoint skeleton page sensitized
findWidget('OOF3D:Skeleton Page:New').clicked()
checkpoint toplevel widget mapped Dialog-New skeleton
findWidget('Dialog-New skeleton').resize(380, 191)
checkpoint skeleton page sensitized
findWidget('Dialog-New skeleton:gtk-ok').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint Graphics_1 Pin Nodes updated
checkpoint skeleton page sensitized
checkpoint Move Node toolbox writable changed
checkpoint Move Node toolbox info updated
checkpoint Graphics_1 Move Nodes sensitized
checkpoint Move Node toolbox writable changed
checkpoint Move Node toolbox info updated
checkpoint Graphics_1 Move Nodes sensitized
checkpoint Graphics_1 Voxel Info updated
checkpoint Graphics_1 Pin Nodes updated
checkpoint Field page sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page sensitized
checkpoint mesh page sensitized
checkpoint pinnodes page sensitized
checkpoint boundary page updated
checkpoint skeleton page info updated
checkpoint skeleton page info updated
checkpoint skeleton page info updated
checkpoint skeleton page info updated
checkpoint skeleton page sensitized
checkpoint skeleton page sensitized
checkpoint skeleton selection page selection sensitized
checkpoint Solver page sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Skeleton.New
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame').size_allocate(gtk.gdk.Rectangle(0, 29, 380, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
setComboBox(findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBChooser'), 'Skeleton Selection')
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame').size_allocate(gtk.gdk.Rectangle(0, 29, 380, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Skeleton Selection')
checkpoint page installed Skeleton Selection
findWidget('OOF3D').resize(601, 376)
findWidget('OOF3D:Skeleton Selection Page:Pane').set_position(278)
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:New').clicked()
checkpoint toplevel widget mapped Dialog-Create a new Element group
findWidget('Dialog-Create a new Element group').resize(246, 67)
findWidget('Dialog-Create a new Element group:name:Auto').clicked()
findWidget('Dialog-Create a new Element group:name:Text').set_text('e')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_g')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_gr')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_gro')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_grou')
findWidget('Dialog-Create a new Element group:name:Text').set_text('el_group')
findWidget('Dialog-Create a new Element group:gtk-ok').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.New_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:New').clicked()
checkpoint toplevel widget mapped Dialog-Create a new Face group
findWidget('Dialog-Create a new Face group').resize(246, 67)
findWidget('Dialog-Create a new Face group:name:Auto').clicked()
findWidget('Dialog-Create a new Face group:name:Text').set_text('f')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_g')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_gr')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_gro')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_grou')
findWidget('Dialog-Create a new Face group:name:Text').set_text('fa_group')
findWidget('Dialog-Create a new Face group:gtk-ok').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.New_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:New').clicked()
checkpoint toplevel widget mapped Dialog-Create a new Segment group
findWidget('Dialog-Create a new Segment group').resize(246, 67)
findWidget('Dialog-Create a new Segment group:name:Auto').clicked()
findWidget('Dialog-Create a new Segment group:name:Text').set_text('s')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_g')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_gr')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_gro')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_grou')
findWidget('Dialog-Create a new Segment group:name:Text').set_text('se_group')
findWidget('Dialog-Create a new Segment group:gtk-ok').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.New_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:New').clicked()
checkpoint toplevel widget mapped Dialog-Create a new Node group
findWidget('Dialog-Create a new Node group').resize(246, 67)
findWidget('Dialog-Create a new Node group:name:Auto').clicked()
findWidget('Dialog-Create a new Node group:name:Text').set_text('b')
findWidget('Dialog-Create a new Node group:name:Text').set_text('')
findWidget('Dialog-Create a new Node group:name:Text').set_text('n')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_g')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_gr')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_gro')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_grou')
findWidget('Dialog-Create a new Node group:name:Text').set_text('no_group')
findWidget('Dialog-Create a new Node group:gtk-ok').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.New_Group
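# NOTE: hypothetical helper, not part of the recorded session. The recorder
# above logs one set_text() call per keystroke (e.g. 'e', 'el', ... 'el_group'
# for the Element group, and likewise for the Face, Segment, and Node groups).
# This pure sketch reproduces that cumulative-prefix sequence for a given
# group name; the name keystroke_prefixes is an assumption, not an OOF3D API.
def keystroke_prefixes(name):
    """Return the cumulative prefixes of `name`, one per simulated keystroke."""
    return [name[:i] for i in range(1, len(name) + 1)]
# e.g. keystroke_prefixes('el_group') yields the eight strings passed to
# set_text() in the Element-group dialog above (a deliberate typo and its
# correction, as in the Node-group dialog, would add extra entries).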
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.5000000000000e+01,y= 1.2200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.5000000000000e+01,y= 1.2200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Messages 1').resize(543, 200)
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.6300000000000e+02,y= 1.1100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.6300000000000e+02,y= 1.1100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.0600000000000e+02,y= 2.4600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.0600000000000e+02,y= 2.4600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.2300000000000e+02,y= 2.3400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.2300000000000e+02,y= 2.3400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
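# NOTE: hypothetical helper, not part of the recorded session. Every recorded
# canvas click above is a fixed four-step pattern: resize the canvas to the
# recording size, emit a BUTTON_PRESS, emit a matching BUTTON_RELEASE, then
# restore the old size. This pure sketch captures just the event payloads of
# that pattern (state=17 on press / 273 on release are the recorded GDK
# modifier masks for a shift-click); the name click_events is an assumption.
def click_events(x, y, press_state=17, release_state=273):
    """Return (event_type, kwargs) pairs for one recorded single-click."""
    return [
        ('BUTTON_PRESS', dict(x=x, y=y, button=1, state=press_state)),
        ('BUTTON_RELEASE', dict(x=x, y=y, button=1, state=release_state)),
    ]
# Each pair would be passed to event(...) and emitted on the canvas drawing
# area between the setCanvasSize(614, 671) / restore calls seen above.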
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Add_to_Group
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.3300000000000e+02,y= 2.1000000000000e+01,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.3300000000000e+02,y= 2.1000000000000e+01,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Face').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.1700000000000e+02,y= 1.1600000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.1700000000000e+02,y= 1.1600000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.0000000000000e+01,y= 2.3700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.0000000000000e+01,y= 2.3700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:tumble').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.4800000000000e+02,y= 2.0200000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 2.0500000000000e+02,y= 2.2500000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0500000000000e+02,y= 2.2500000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:select').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1900000000000e+02,y= 2.3300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1900000000000e+02,y= 2.3300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1100000000000e+02,y= 3.9900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1100000000000e+02,y= 3.9900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Add_to_Group
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:tumble').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.9300000000000e+02,y= 2.5600000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 3.6300000000000e+02,y= 2.8200000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.6300000000000e+02,y= 2.8200000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:select').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.4000000000000e+01,y= 1.0000000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.4000000000000e+01,y= 1.0000000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Segment').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.1800000000000e+02,y= 6.4800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.1800000000000e+02,y= 6.4800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.3000000000000e+01,y= 6.2400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.3000000000000e+01,y= 6.2400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.5000000000000e+01,y= 6.0700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.5000000000000e+01,y= 6.0700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.8000000000000e+01,y= 6.0900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.8000000000000e+01,y= 6.0900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.2800000000000e+02,y= 6.0400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.2800000000000e+02,y= 6.0400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.6600000000000e+02,y= 5.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.6600000000000e+02,y= 5.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.9000000000000e+01,y= 5.2200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.9000000000000e+01,y= 5.2200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.6000000000000e+02,y= 5.9700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.6000000000000e+02,y= 5.9700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.0800000000000e+02,y= 6.2900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.0800000000000e+02,y= 6.2900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Add_to_Group
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.2000000000000e+01,y= 9.0000000000000e+01,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.2000000000000e+01,y= 9.0000000000000e+01,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Node').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.8000000000000e+01,y= 6.5000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.8000000000000e+01,y= 6.5000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.7000000000000e+01,y= 5.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.7000000000000e+01,y= 5.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.1000000000000e+02,y= 6.4600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.1000000000000e+02,y= 6.4600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.8500000000000e+02,y= 5.6900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 1.8400000000000e+02,y= 5.6900000000000e+02,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.8400000000000e+02,y= 5.6900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.8700000000000e+02,y= 5.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.8700000000000e+02,y= 5.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1500000000000e+02,y= 6.4800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1500000000000e+02,y= 6.4800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.9000000000000e+01,y= 2.9000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.9000000000000e+01,y= 2.9000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1500000000000e+02,y= 6.4100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1500000000000e+02,y= 6.4100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.4600000000000e+02,y= 5.5500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.4600000000000e+02,y= 5.5500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Add_to_Group
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.9000000000000e+01,y= 1.8500000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.9000000000000e+01,y= 1.8500000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
findWidget('OOF3D Graphics 1').resize(1000, 800)
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Select_Group
findWidget('OOF3D').resize(601, 376)
findWidget('OOF3D Graphics 1').resize(1000, 800)
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Undo').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Undo
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceSelection.Clear
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceSelection.Select_Group
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Undo').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceSelection.Undo
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentSelection.Select_Group
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Undo').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Undo
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Undo').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeSelection.Undo
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
findWidget('OOF3D Graphics 1').resize(1000, 800)
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Select_Group
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.4400000000000e+02,y= 4.5500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.4400000000000e+02,y= 4.5500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Element').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.6500000000000e+02,y= 4.2700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.6500000000000e+02,y= 4.2700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.3100000000000e+02,y= 4.8900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.3100000000000e+02,y= 4.8900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.0100000000000e+02,y= 5.7300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.0100000000000e+02,y= 5.7300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementAction:Chooser'), 'Unselect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Unselect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceSelection.Select_Group
checkpoint skeleton selection page updated
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Face').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.7600000000000e+02,y= 3.8100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.7600000000000e+02,y= 3.8100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1600000000000e+02,y= 3.0700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1600000000000e+02,y= 3.0700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1400000000000e+02,y= 1.6000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1400000000000e+02,y= 1.6000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.6600000000000e+02,y= 1.3300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.6600000000000e+02,y= 1.3300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.2200000000000e+02,y= 1.3700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.2200000000000e+02,y= 1.3700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.2200000000000e+02,y= 1.0100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.2200000000000e+02,y= 1.0100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceAction:Chooser'), 'Unselect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Unselect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentSelection.Select_Group
checkpoint skeleton selection page updated
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Segment').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.1000000000000e+01,y= 2.4400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.1000000000000e+01,y= 2.4400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.4000000000000e+01,y= 2.4900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.4000000000000e+01,y= 2.4900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.3000000000000e+01,y= 2.4500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.3000000000000e+01,y= 2.4500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.8000000000000e+01,y= 2.0500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.8000000000000e+01,y= 2.0500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.4000000000000e+01,y= 1.8500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.4000000000000e+01,y= 1.8500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.2000000000000e+02,y= 1.4200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.2000000000000e+02,y= 1.4200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.9800000000000e+02,y= 1.7600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.9800000000000e+02,y= 1.7600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.4400000000000e+02,y= 2.0900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.4400000000000e+02,y= 2.0900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.7500000000000e+02,y= 6.0400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.7500000000000e+02,y= 6.0400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.7700000000000e+02,y= 6.3500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.7700000000000e+02,y= 6.3500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentAction:Chooser'), 'Unselect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Unselect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeSelection.Select_Group
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Node').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1900000000000e+02,y= 6.4200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1900000000000e+02,y= 6.4200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.1300000000000e+02,y= 6.3600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.1300000000000e+02,y= 6.3600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.7500000000000e+02,y= 5.6900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.7500000000000e+02,y= 5.6900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.8200000000000e+02,y= 4.6800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.8200000000000e+02,y= 4.6800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.7000000000000e+01,y= 4.9400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.7000000000000e+01,y= 4.9400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.3000000000000e+01,y= 4.9700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.3000000000000e+01,y= 4.9700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.1200000000000e+02,y= 6.8000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.1200000000000e+02,y= 6.8000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.0200000000000e+02,y= 4.1000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0200000000000e+02,y= 4.1000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.2400000000000e+02,y= 8.3000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.2400000000000e+02,y= 8.3000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.9700000000000e+02,y= 4.6000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.9700000000000e+02,y= 4.6000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.0500000000000e+02,y= 4.6000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0500000000000e+02,y= 4.6000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.2000000000000e+01,y= 2.6000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.2000000000000e+01,y= 2.6000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.4000000000000e+01,y= 2.7100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.4000000000000e+01,y= 2.7100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.1500000000000e+02,y= 1.2000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.1500000000000e+02,y= 1.2000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeAction:Chooser'), 'Unselect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Unselect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Clear
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeAction:Chooser'), 'Add Group')
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1500000000000e+02,y= 6.4400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1500000000000e+02,y= 6.4400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.8600000000000e+02,y= 5.6800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.8600000000000e+02,y= 5.6800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.1900000000000e+02,y= 6.3600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.1900000000000e+02,y= 6.3600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.1200000000000e+02,y= 6.4000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.1200000000000e+02,y= 6.4000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.8000000000000e+02,y= 5.6700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.8000000000000e+02,y= 5.6700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Add_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Segment').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.7000000000000e+01,y= 6.0500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.7000000000000e+01,y= 6.0500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.3900000000000e+02,y= 6.4600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.3900000000000e+02,y= 6.4600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.5800000000000e+02,y= 6.2000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.5800000000000e+02,y= 6.2000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.9000000000000e+02,y= 5.9000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.9000000000000e+02,y= 5.9000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.6000000000000e+01,y= 3.4900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.6000000000000e+01,y= 3.4900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.8000000000000e+01,y= 3.5400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.8000000000000e+01,y= 3.5400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.2000000000000e+01,y= 2.7600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.2000000000000e+01,y= 2.7600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.1000000000000e+01,y= 2.7800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.1000000000000e+01,y= 2.7800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.6200000000000e+02,y= 5.0600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.6200000000000e+02,y= 5.0600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.5900000000000e+02,y= 4.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.5900000000000e+02,y= 4.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.7500000000000e+02,y= 4.5800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.7500000000000e+02,y= 4.5800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentAction:Chooser'), 'Add Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Add_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Face').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.8200000000000e+02,y= 6.1100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.8200000000000e+02,y= 6.1100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.6700000000000e+02,y= 6.1600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.6700000000000e+02,y= 6.1600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1000000000000e+02,y= 6.2800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1000000000000e+02,y= 6.2800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.0400000000000e+02,y= 6.2800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.0400000000000e+02,y= 6.2800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.9000000000000e+01,y= 5.3200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.9000000000000e+01,y= 5.3200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.8000000000000e+01,y= 5.4000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.8000000000000e+01,y= 5.4000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.1000000000000e+01,y= 4.5800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.1000000000000e+01,y= 4.5800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.7000000000000e+01,y= 1.8300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.7000000000000e+01,y= 1.8300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.8000000000000e+01,y= 1.7800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.8000000000000e+01,y= 1.7800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.7000000000000e+01,y= 3.0400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.7000000000000e+01,y= 3.0400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.8000000000000e+01,y= 3.4400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.8000000000000e+01,y= 3.4400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceAction:Chooser'), 'Add Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Add_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Element').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.2600000000000e+02,y= 5.7100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.2600000000000e+02,y= 5.7100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.8400000000000e+02,y= 5.7500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.8400000000000e+02,y= 5.7500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.5800000000000e+02,y= 6.2900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.5800000000000e+02,y= 6.2900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.0000000000000e+02,y= 6.2500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.0000000000000e+02,y= 6.2500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.2000000000000e+01,y= 5.0100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.2000000000000e+01,y= 5.0100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.9000000000000e+01,y= 2.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.9000000000000e+01,y= 2.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementAction:Chooser'), 'Add Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Add_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Clear
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementAction:Chooser'), 'Intersect Group')
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.2500000000000e+02,y= 2.7200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.2500000000000e+02,y= 2.7200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.3100000000000e+02,y= 2.9200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.3100000000000e+02,y= 2.9200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.0200000000000e+02,y= 4.0200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.0200000000000e+02,y= 4.0200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.9300000000000e+02,y= 3.7700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.9300000000000e+02,y= 3.7700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.8700000000000e+02,y= 5.6700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.8700000000000e+02,y= 5.6700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.4300000000000e+02,y= 5.2900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.4300000000000e+02,y= 5.2900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.9200000000000e+02,y= 3.9700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.9200000000000e+02,y= 3.9700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Single_Element
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Intersect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Face').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.0000000000000e+02,y= 2.9900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0000000000000e+02,y= 2.9900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.0600000000000e+02,y= 3.1300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.0600000000000e+02,y= 3.1300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.2400000000000e+02,y= 2.3900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.2400000000000e+02,y= 2.3900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.3700000000000e+02,y= 2.9300000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.3700000000000e+02,y= 2.9300000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.6100000000000e+02,y= 3.6700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.6100000000000e+02,y= 3.6700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.1500000000000e+02,y= 4.2200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.1500000000000e+02,y= 4.2200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Single_Face
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceAction:Chooser'), 'Intersect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Intersect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceSelection.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Segment').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.5200000000000e+02,y= 6.4800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.5200000000000e+02,y= 6.4800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.4400000000000e+02,y= 6.1700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.4400000000000e+02,y= 6.1700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.0100000000000e+02,y= 5.7700000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.0100000000000e+02,y= 5.7700000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.4000000000000e+02,y= 5.9100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.4000000000000e+02,y= 5.9100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.2300000000000e+02,y= 6.2100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 3.2300000000000e+02,y= 6.2100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.8000000000000e+01,y= 4.5800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.8000000000000e+01,y= 4.5800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.5000000000000e+01,y= 3.9900000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.5000000000000e+01,y= 3.9900000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.6000000000000e+01,y= 3.9100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.6000000000000e+01,y= 3.9100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.8000000000000e+01,y= 3.6000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.8000000000000e+01,y= 3.6000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.4000000000000e+01,y= 3.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.4000000000000e+01,y= 3.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Single_Segment
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentAction:Chooser'), 'Intersect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Intersect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Node').clicked()
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.0100000000000e+02,y= 6.3200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0100000000000e+02,y= 6.3200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.0700000000000e+02,y= 6.4500000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.0700000000000e+02,y= 6.4500000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.8700000000000e+02,y= 5.6800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.8700000000000e+02,y= 5.6800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.7300000000000e+02,y= 2.2600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.7300000000000e+02,y= 2.2600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.1400000000000e+02,y= 6.9000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.1400000000000e+02,y= 6.9000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.0100000000000e+02,y= 3.1000000000000e+01,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.0100000000000e+02,y= 3.1000000000000e+01,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1500000000000e+02,y= 6.3600000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1500000000000e+02,y= 6.3600000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1800000000000e+02,y= 6.4400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1800000000000e+02,y= 6.4400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 4.1800000000000e+02,y= 6.4100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.1800000000000e+02,y= 6.4100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 8.7000000000000e+01,y= 6.5100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 8.7000000000000e+01,y= 6.5100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 9.1000000000000e+01,y= 6.5400000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 9.1000000000000e+01,y= 6.5400000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.3000000000000e+01,y= 5.7200000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.3000000000000e+01,y= 5.7200000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.4000000000000e+01,y= 5.7000000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.4000000000000e+01,y= 5.7000000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 7.7000000000000e+01,y= 5.6800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 7.7000000000000e+01,y= 5.6800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 1.0600000000000e+02,y= 5.6100000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 1.0600000000000e+02,y= 5.6100000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 6.3000000000000e+01,y= 4.9800000000000e+02,button=1,state=17,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.3000000000000e+01,y= 4.9800000000000e+02,button=1,state=273,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Single_Node
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeAction:Chooser'), 'Intersect Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Intersect_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Clear
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementAction:Chooser'), 'Select Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Clear').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Clear_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceAction:Chooser'), 'Select Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Select_Group
checkpoint skeleton selection page groups sensitized
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Clear').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Clear_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentAction:Chooser'), 'Select Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Clear').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Clear_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
setComboBox(findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeAction:Chooser'), 'Select Group')
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Clear').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Clear_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Add_to_Group
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:tumble').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.9400000000000e+02,y= 3.1400000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 4.0400000000000e+02,y= 3.0000000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.0400000000000e+02,y= 3.0000000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Remove').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Remove_from_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Remove').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Remove_from_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Remove').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Remove_from_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Add_to_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Remove').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Remove_from_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Add').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Add_to_Group
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Node:Clear').clicked()
checkpoint OOF.Graphics_1.Toolbox.Select_Node.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:select').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Segment').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Segment:Clear').clicked()
checkpoint OOF.Graphics_1.Toolbox.Select_Segment.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Face').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Face:Clear').clicked()
checkpoint OOF.Graphics_1.Toolbox.Select_Face.Clear
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Select:Element').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2:ToolboxFrame:TBScroll:Skeleton Selection:Element:Clear').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.Graphics_1.Toolbox.Select_Element.Clear
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:ElementHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.ElementSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:FaceHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.FaceSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:SegmentHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.SegmentSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Node').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Selection:NodeHistory:OK').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint OOF.NodeSelection.Select_Group
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.NodeGroup.Delete_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Segment').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.SegmentGroup.Delete_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Face').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.FaceGroup.Delete_Group
findWidget('OOF3D:Skeleton Selection Page:Mode:Element').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
findWidget('OOF3D:Skeleton Selection Page:Pane:Groups:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(300, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint OOF.ElementGroup.Delete_Group
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Skeleton')
checkpoint page installed Skeleton
findWidget('OOF3D:Skeleton Page:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(204, 91)
checkpoint skeleton page sensitized
findWidget('Questioner:gtk-ok').clicked()
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page updated
checkpoint pinnodes page sensitized
checkpoint skeleton page sensitized
checkpoint skeleton selection page selection sensitized
checkpoint Graphics_1 Move Nodes sensitized
checkpoint Move Node toolbox writable changed
checkpoint skeleton selection page groups sensitized
checkpoint Move Node toolbox writable changed
checkpoint Move Node toolbox info updated
checkpoint Graphics_1 Move Nodes sensitized
checkpoint Graphics_1 Voxel Info updated
checkpoint Graphics_1 Pin Nodes updated
checkpoint Field page sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page sensitized
checkpoint mesh page sensitized
checkpoint pinnodes page sensitized
checkpoint boundary page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint Solver page sensitized
checkpoint OOF.Skeleton.Delete
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton page sensitized
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2:tumble').clicked()
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.7600000000000e+02,y= 2.3300000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 2.9300000000000e+02,y= 2.2800000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.9300000000000e+02,y= 2.2800000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 2.9700000000000e+02,y= 2.4000000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 5.2200000000000e+02,y= 2.5900000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 5.2200000000000e+02,y= 2.5900000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.9500000000000e+02,y= 2.9400000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 6.2200000000000e+02,y= 3.5500000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 6.2300000000000e+02,y= 3.5500000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 5.0500000000000e+02,y= 3.4500000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 4.6600000000000e+02,y= 3.5500000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 4.6700000000000e+02,y= 3.5500000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_PRESS,x= 3.7200000000000e+02,y= 4.0700000000000e+02,button=1,state=16,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.MOTION_NOTIFY,x= 2.9200000000000e+02,y= 3.8500000000000e+02,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
window = findOOFWindow('Graphics_1')
oldsize = window.setCanvasSize(614, 671)
canvasobj = findCanvasDrawingArea(findWidget('OOF3D Graphics 1:Pane0:Pane2:Canvas'), windowname='Graphics_1')
canvasobj.emit('event', event(gtk.gdk.BUTTON_RELEASE,x= 2.9200000000000e+02,y= 3.8500000000000e+02,button=1,state=272,window=findCanvasGdkWindow('Graphics_1')))
window.setCanvasSize(oldsize[0], oldsize[1])
findWidget('OOF3D Graphics 1:Pane0:Pane2').size_allocate(gtk.gdk.Rectangle(0, 29, 1000, 705))
checkpoint OOF.Graphics_1.Settings.Camera.View
findMenu(findWidget('OOF3D Graphics 1:MenuBar'), 'File:Redraw').activate()
checkpoint OOF.Graphics_1.File.Redraw
findMenu(findWidget('OOF3D Graphics 1:MenuBar'), 'File:Redraw').activate()
checkpoint OOF.Graphics_1.File.Redraw
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Skeleton Selection')
checkpoint page installed Skeleton Selection
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Microstructure')
checkpoint page installed Microstructure
findWidget('OOF3D:Microstructure Page:Pane').set_position(174)
findWidget('OOF3D:Microstructure Page:Delete').clicked()
checkpoint toplevel widget mapped Questioner
findWidget('Questioner').resize(225, 89)
findWidget('Questioner:gtk-yes').clicked()
checkpoint meshable button set
checkpoint microstructure page sensitized
checkpoint meshable button set
checkpoint microstructure page sensitized
checkpoint Graphics_1 Voxel Info updated
checkpoint Graphics_1 Pin Nodes updated
checkpoint Graphics_1 Voxel Info updated
checkpoint Graphics_1 Pin Nodes updated
checkpoint Graphics_1 Voxel Info updated
checkpoint Graphics_1 Pin Nodes updated
checkpoint pixel page updated
checkpoint active area status updated
checkpoint Field page sensitized
checkpoint Materials page updated
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page subproblems sensitized
checkpoint mesh page sensitized
checkpoint mesh page sensitized
checkpoint pinnodes page sensitized
checkpoint boundary page updated
checkpoint skeleton selection page selection sensitized
checkpoint skeleton selection page updated
checkpoint Solver page sensitized
checkpoint skeleton selection page groups sensitized
checkpoint skeleton selection page groups sensitized
checkpoint meshable button set
checkpoint meshable button set
checkpoint microstructure page sensitized
checkpoint Field page sensitized
checkpoint microstructure page sensitized
checkpoint meshable button set
checkpoint Solver page sensitized
checkpoint OOF.Microstructure.Delete
findWidget('OOF3D:Microstructure Page:Pane').set_position(170)
setComboBox(findWidget('OOF3D:Navigation:PageMenu'), 'Skeleton Selection')
checkpoint page installed Skeleton Selection
findMenu(findWidget('OOF3D:MenuBar'), 'File:Save:Python_Log').activate()
checkpoint toplevel widget mapped Dialog-Python_Log
findWidget('Dialog-Python_Log').resize(190, 95)
findWidget('Dialog-Python_Log:filename').set_text('')
findWidget('Dialog-Python_Log:filename').set_text('skelselseg.log')
findWidget('Dialog-Python_Log:gtk-ok').clicked()
checkpoint OOF.File.Save.Python_Log
assert tests.filediff('skelselseg.log')
widget_0=findWidget('OOF3D')
handled_0=widget_0.event(event(gtk.gdk.DELETE,window=widget_0.window))
postpone if not handled_0: widget_0.destroy()
checkpoint OOF.Graphics_1.File.Close
| 66.630857 | 160 | 0.825237 | 25,305 | 199,093 | 6.420707 | 0.014305 | 0.07899 | 0.121366 | 0.153974 | 0.988909 | 0.987069 | 0.986066 | 0.982668 | 0.976458 | 0.973941 | 0 | 0.086488 | 0.063192 | 199,093 | 2,987 | 161 | 66.653164 | 0.784641 | 0.003873 | 0 | 0.84973 | 0 | 0 | 0.171893 | 0.021482 | 0 | 0 | 0 | 0 | 0.000337 | 0 | null | null | 0 | 0.000337 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
734525e037bae94ac7e4db2641b55397c0296f5b | 30,348 | py | Python | tests/v2_validation/cattlevalidationtest/core/test_service_dns_discovery.py | rancherio/validation-tests | 795733982209edc801d5db624a3fb55043987ed0 | [
"Apache-2.0"
] | 7 | 2015-11-18T17:43:08.000Z | 2021-07-14T09:48:18.000Z | tests/v2_validation/cattlevalidationtest/core/test_service_dns_discovery.py | rancher/validation-tests | 795733982209edc801d5db624a3fb55043987ed0 | [
"Apache-2.0"
] | 175 | 2015-07-09T18:41:24.000Z | 2021-06-10T21:23:27.000Z | tests/v2_validation/cattlevalidationtest/core/test_service_dns_discovery.py | rancher/validation-tests | 795733982209edc801d5db624a3fb55043987ed0 | [
"Apache-2.0"
] | 25 | 2015-08-08T04:54:24.000Z | 2021-05-25T21:10:37.000Z |
from common_fixtures import *  # NOQA
from test_services_sidekick \
import create_env_with_sidekick, validate_sidekick, validate_dns
logger = logging.getLogger(__name__)
def create_environment_with_services(
client, service_scale, consumed_service_scale, port,
ssh_port="22", isnetworkModeHost_svc=False,
isnetworkModeHost_consumed_svc=False):
if not isnetworkModeHost_svc and not isnetworkModeHost_consumed_svc:
env, service, consumed_service = create_env_with_2_svc(
client, service_scale, consumed_service_scale, port)
else:
env, service, consumed_service = create_env_with_2_svc_hostnetwork(
client, service_scale, consumed_service_scale, port, ssh_port,
isnetworkModeHost_svc, isnetworkModeHost_consumed_svc)
service.activate()
consumed_service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
assert consumed_service.state == "active"
return env, service, consumed_service
def test_dns_discovery_activate_svc_activate_consumed_svc_link(
client):
port = "401"
service_scale = 1
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_service_scale_up(client):
port = "402"
service_scale = 1
consumed_service_scale = 2
final_service_scale = 3
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
service = client.update(service, scale=final_service_scale,
name=service.name)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
assert service.scale == final_service_scale
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_services_scale_down(client):
port = "403"
service_scale = 3
consumed_service_scale = 2
final_service_scale = 1
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
service = client.update(service, scale=final_service_scale,
name=service.name)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
assert service.scale == final_service_scale
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_scale_up(client):
port = "404"
service_scale = 1
consumed_service_scale = 2
final_consumed_service_scale = 4
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
consumed_service = client.update(consumed_service,
scale=final_consumed_service_scale,
name=consumed_service.name)
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "active"
assert consumed_service.scale == final_consumed_service_scale
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_scale_down(client):
port = "405"
service_scale = 2
consumed_service_scale = 3
final_consumed_service_scale = 1
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
consumed_service = client.update(consumed_service,
scale=final_consumed_service_scale,
name=consumed_service.name)
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "active"
assert consumed_service.scale == final_consumed_service_scale
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_stop_start_instance(
client, socat_containers):
port = "406"
service_scale = 1
consumed_service_scale = 3
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
container_name = get_container_name(env, consumed_service, 2)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
container = containers[0]
# Stop instance
stop_container_from_host(client, container)
service = wait_state(client, service, "active")
wait_for_scale_to_adjust(client, consumed_service)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_restart_instance(
client):
port = "407"
service_scale = 1
consumed_service_scale = 3
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
container_name = get_container_name(env, consumed_service, 2)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
container = containers[0]
# Restart instance
container = client.wait_success(container.restart(), SERVICE_WAIT_TIMEOUT)
assert container.state == 'running'
time.sleep(10)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_delete_instance(client):
port = "408"
service_scale = 1
consumed_service_scale = 3
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
container_name = get_container_name(env, consumed_service, 1)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
container = containers[0]
# Delete instance
container = client.wait_success(client.delete(container))
assert container.state == 'removed'
wait_for_scale_to_adjust(client, consumed_service)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_consumed_services_deactivate_activate(
client):
port = "409"
service_scale = 1
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
consumed_service = consumed_service.deactivate()
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "inactive"
wait_until_instances_get_stopped(client, consumed_service)
consumed_service = consumed_service.activate()
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "active"
time.sleep(10)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_service_deactivate_activate(client):
port = "410"
service_scale = 1
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
service = service.deactivate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "inactive"
wait_until_instances_get_stopped(client, service)
service = service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
time.sleep(restart_sleep_interval)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_deactivate_activate_environment(client):
port = "411"
service_scale = 1
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port,
)
env = env.deactivateservices()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "inactive"
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "inactive"
wait_until_instances_get_stopped(client, consumed_service)
env = env.activateservices()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
consumed_service = client.wait_success(
consumed_service, SERVICE_WAIT_TIMEOUT)
assert consumed_service.state == "active"
time.sleep(restart_sleep_interval)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_services_stop_start_instance(client,
socat_containers):
port = "416"
service_scale = 2
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port,
)
container_name = get_container_name(env, consumed_service, 2)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
service_instance = containers[0]
# Stop service instance
stop_container_from_host(client, service_instance)
service = client.wait_success(service)
wait_for_scale_to_adjust(client, service)
time.sleep(restart_sleep_interval)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discovery_services_restart_instance(client):
port = "417"
service_scale = 2
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
container_name = get_container_name(env, service, 2)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
service_instance = containers[0]
# Restart consumed instance
service_instance = client.wait_success(
service_instance.restart(), SERVICE_WAIT_TIMEOUT)
assert service_instance.state == 'running'
time.sleep(restart_sleep_interval)
validate_linked_service(client, service, [consumed_service], port,
)
delete_all(client, [env])
def test_dns_discovery_services_delete_instance(client):
port = "418"
service_scale = 2
consumed_service_scale = 2
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port)
validate_linked_service(client, service, [consumed_service], port)
container_name = get_container_name(env, service, 2)
containers = client.list_container(name=container_name).data
assert len(containers) == 1
service_instance = containers[0]
# Delete instance
container = client.wait_success(client.delete(service_instance))
assert container.state == 'removed'
wait_for_scale_to_adjust(client, service)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discoverys_with_hostnetwork_1(client):
    # Verify that containers of a managed-network service can resolve
    # the containers of another service running in the host network.
port = "419"
service_scale = 1
consumed_service_scale = 2
ssh_port = "33"
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port,
ssh_port, isnetworkModeHost_svc=False,
isnetworkModeHost_consumed_svc=True)
validate_linked_service(client, service, [consumed_service], port)
delete_all(client, [env])
def test_dns_discoverys_with_hostnetwork_2(client):
    # Verify that containers of a host-network service can resolve
    # the containers of another host-network service in the same stack.
port = "420"
service_scale = 1
consumed_service_scale = 2
ssh_port = "33"
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port,
ssh_port, isnetworkModeHost_svc=True,
isnetworkModeHost_consumed_svc=True)
validate_linked_service(
client, service, [consumed_service], ssh_port,
linkName=consumed_service.name + "." + env.name + ".rancher.internal")
delete_all(client, [env])
def test_dns_discoverys_with_hostnetwork_3(client):
    # Verify that containers of a host-network service can resolve
    # the containers of a service running in the managed network.
port = "421"
service_scale = 1
consumed_service_scale = 2
ssh_port = "33"
env, service, consumed_service = create_environment_with_services(
client, service_scale, consumed_service_scale, port,
ssh_port, isnetworkModeHost_svc=True,
isnetworkModeHost_consumed_svc=False)
validate_linked_service(
client, service, [consumed_service], ssh_port,
linkName=consumed_service.name + "." + env.name + ".rancher.internal")
delete_all(client, [env])
def test_dns_discoverys_with_hostnetwork_externalService(client):
    # Verify that containers of a host-network service can resolve
    # external services.
port = "422"
env, service, ext_service, con_list = \
create_env_with_ext_svc(client, 1, port)
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID_HOSTNET,
"networkMode": "host",
"labels": dns_labels}
random_name = random_str()
service_name = random_name.replace("-", "")
host_service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=1)
host_service = client.wait_success(host_service)
host_service.activate()
ext_service.activate()
host_service = client.wait_success(host_service, SERVICE_WAIT_TIMEOUT)
ext_service = client.wait_success(ext_service, SERVICE_WAIT_TIMEOUT)
assert host_service.state == "active"
assert ext_service.state == "active"
validate_external_service(
client, host_service, [ext_service], 33, con_list,
fqdn="." + env.name + ".rancher.internal")
con_list.append(env)
delete_all(client, con_list)
def test_dns_discoverys_with_hostnetwork_externalService_cname(
client):
    # Verify that containers of a host-network service can resolve
    # external services registered by hostname (CNAME).
port = "423"
env, service, ext_service, con_list = \
create_env_with_ext_svc(client, 1, port, True)
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID_HOSTNET,
"networkMode": "host",
"labels": dns_labels}
random_name = random_str()
service_name = random_name.replace("-", "")
host_service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=1)
host_service = client.wait_success(host_service)
host_service.activate()
ext_service.activate()
host_service = client.wait_success(host_service, SERVICE_WAIT_TIMEOUT)
ext_service = client.wait_success(ext_service, SERVICE_WAIT_TIMEOUT)
assert host_service.state == "active"
assert ext_service.state == "active"
validate_external_service_for_hostname(client, host_service,
[ext_service], 33)
delete_all(client, [env])
def test_dns_discoverys_coss_stack_service(
client):
env = create_env(client)
launch_config_svc = {"imageUuid": WEB_IMAGE_UUID}
service_name = "test1"
service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
port = "424"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
env1 = create_env(client)
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env1.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
validate_linked_service(client, service1, [service],
port,
linkName=service.name+"."+env.name)
linkName = service.name+"."+env.name+"."+RANCHER_FQDN
validate_linked_service(client, service1, [service],
port,
linkName=linkName)
delete_all(client, [env, env1])
def test_dns_discoverys_coss_stack_service_uppercase(
client):
env = create_env(client)
launch_config_svc = {"imageUuid": WEB_IMAGE_UUID}
service_name = "TEST"
service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
port = "425"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
env1 = create_env(client)
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env1.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
validate_linked_service(client, service1, [service],
port,
linkName=service.name+"."+env.name)
linkName = service.name+"."+env.name+"."+RANCHER_FQDN
validate_linked_service(client, service1, [service],
port,
linkName=linkName)
delete_all(client, [env, env1])
def test_dns_discoverys_for_containers_by_name_and_fqdn(
client):
env = create_env(client)
launch_config_svc = {"imageUuid": WEB_IMAGE_UUID}
service_name = "TEST"
service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
port = "426"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
containers = get_service_container_list(client, service)
assert len(containers) == service.scale
for container in containers:
validate_for_container_dns_resolution(
client, service1, port, container, container.name)
validate_for_container_dns_resolution(
client, service1, port, container,
container.name+"."+RANCHER_FQDN)
delete_all(client, [env])
def test_dns_discoverys_for_containers_by_name_and_fqdn_cross_stack(
client):
env = create_env(client)
launch_config_svc = {"imageUuid": WEB_IMAGE_UUID}
service_name = "TEST"
service = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
service.activate()
service = client.wait_success(service, SERVICE_WAIT_TIMEOUT)
assert service.state == "active"
# Deploy client service
port = "427"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
env1 = create_env(client)
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env1.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
containers = get_service_container_list(client, service)
assert len(containers) == service.scale
for container in containers:
validate_for_container_dns_resolution(
client, service1, port, container, container.name)
validate_for_container_dns_resolution(
client, service1, port, container,
container.name+"."+RANCHER_FQDN)
delete_all(client, [env, env1])
def test_dns_discovery_for_sidekick_containers_by_name_and_fqdn_cross_stack(
client):
port = "428"
service_scale = 2
env, service, service_name, consumed_service_name = \
create_env_with_sidekick(client, service_scale, port)
env = env.activateservices()
env = client.wait_success(env, 120)
assert env.state == "active"
service = client.wait_success(service, 120)
assert service.state == "active"
validate_sidekick(client, service, service_name,
consumed_service_name, port)
secondary_cons = get_service_containers_with_name(
client, service, consumed_service_name)
assert len(secondary_cons) == service.scale
# Deploy client service in another environment
port = "429"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
env1 = create_env(client)
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env1.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
for container in secondary_cons:
validate_for_container_dns_resolution(
client, service1, port, container, container.name)
validate_for_container_dns_resolution(
client, service1, port, container,
container.name+"."+RANCHER_FQDN)
delete_all(client, [env, env1])
def test_dns_discovery_for_service_with_sidekick(client):
port = "430"
service_scale = 2
env, service, service_name, consumed_service_name = \
create_env_with_sidekick(client, service_scale, port)
env = env.activateservices()
env = client.wait_success(env, 120)
assert env.state == "active"
service = client.wait_success(service, 120)
assert service.state == "active"
dnsname = service.secondaryLaunchConfigs[0].name
validate_sidekick(client, service, service_name,
consumed_service_name, port, dnsname)
secondary_cons = get_service_containers_with_name(
client, service, consumed_service_name)
# Deploy client service in same environment
port = "431"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
service_name = random_str()
service1 = client.create_service(name=service_name,
stackId=env.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
client_containers = get_service_container_list(client, service1)
dnsname = service.secondaryLaunchConfigs[0].name + "." + service.name
validate_dns(
client, client_containers, secondary_cons, port, dnsname)
delete_all(client, [env])
def test_dns_discovery_for_service_with_sidekick_cross_stack(
client):
port = "432"
service_scale = 2
env, service, service_name, consumed_service_name = \
create_env_with_sidekick(client, service_scale, port)
env = env.activateservices()
env = client.wait_success(env, 120)
assert env.state == "active"
service = client.wait_success(service, 120)
assert service.state == "active"
dnsname = service.secondaryLaunchConfigs[0].name
validate_sidekick(client, service, service_name,
consumed_service_name, port, dnsname)
secondary_cons = get_service_containers_with_name(
client, service, consumed_service_name)
# Deploy client service in a different environment
port = "433"
launch_config_svc = {"imageUuid": SSH_IMAGE_UUID, }
launch_config_svc["ports"] = [port+":"+"22/tcp"]
service_name = random_str()
env1 = create_env(client)
service1 = client.create_service(name=service_name,
stackId=env1.id,
launchConfig=launch_config_svc,
scale=2)
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
service1.activate()
service1 = client.wait_success(service1, SERVICE_WAIT_TIMEOUT)
assert service1.state == "active"
client_containers = get_service_container_list(client, service1)
dnsname = \
service.secondaryLaunchConfigs[0].name + "." + service.name + \
"." + env.name + "." + RANCHER_FQDN
validate_dns(
client, client_containers, secondary_cons, port, dnsname)
delete_all(client, [env, env1])
def validate_for_container_dns_resolution(
client, service, sshport, container, dns_name):
time.sleep(sleep_interval)
client_containers = get_service_container_list(client, service)
assert len(client_containers) == service.scale
for con in client_containers:
host = client.by_id('host', con.hosts[0].id)
# Validate port mapping
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host.ipAddresses().data[0].address, username="root",
password="root", port=int(sshport))
# Validate container name resolution
cmd = "wget -O result.txt --timeout=20 --tries=1 http://" + \
dns_name + ":80/name.html;cat result.txt"
logger.info(cmd)
print(cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
response = stdout.readlines()
assert len(response) == 1
resp = response[0].strip("\n")
logger.info(resp)
print(resp)
assert resp in (container.externalId[:12])
# Validate DNS resolution using dig
cmd = "dig " + dns_name + " +short"
logger.info(cmd)
print(cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
response = stdout.readlines()
logger.info("Actual dig Response" + str(response))
assert len(response) == 1
resp = response[0].strip("\n")
logger.info(resp)
print(resp)
assert resp == container.primaryIpAddress
return
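The SSH loop above shells out to `wget` and `dig` and compares raw stdout lines against the container's ID and IP. That comparison can be made explicit with a small helper that normalizes `dig +short` output before asserting on it (an illustrative sketch only — `parse_dig_short` and `resolves_to` are hypothetical names, not part of this test suite):

```python
def parse_dig_short(stdout_lines):
    """Normalize `dig +short` output: strip trailing newlines and
    drop blank lines, returning the list of answer records."""
    return [line.strip() for line in stdout_lines if line.strip()]


def resolves_to(stdout_lines, expected_ip):
    """True when the name resolved to exactly one record,
    and that record is the expected IP address."""
    records = parse_dig_short(stdout_lines)
    return len(records) == 1 and records[0] == expected_ip
```

With a helper like this, the assertion in the loop body reads as `assert resolves_to(stdout.readlines(), container.primaryIpAddress)` rather than manual `strip("\n")` bookkeeping.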
| 34.604333 | 78 | 0.678595 | 3,380 | 30,348 | 5.75355 | 0.065089 | 0.112614 | 0.063352 | 0.047205 | 0.901681 | 0.887489 | 0.873091 | 0.860184 | 0.844295 | 0.834165 | 0 | 0.013312 | 0.23761 | 30,348 | 876 | 79 | 34.643836 | 0.827203 | 0.032786 | 0 | 0.782609 | 0 | 0 | 0.027245 | 0 | 0 | 0 | 0 | 0 | 0.091787 | 1 | 0.045089 | false | 0.00161 | 0.003221 | 0 | 0.05153 | 0.006441 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
734574dc20e398bda3076f4438339fc031fd1fad | 15,647 | py | Python | testsmainnet/staking/test_4-StakingUIUserCases.py | DryptoBZX/contractsV2 | 3ee0b7669902ff6b9422440289ddc52f679e636b | [
"Apache-2.0"
] | 177 | 2020-06-13T01:41:04.000Z | 2022-03-28T06:26:53.000Z | testsmainnet/staking/test_4-StakingUIUserCases.py | DryptoBZX/contractsV2 | 3ee0b7669902ff6b9422440289ddc52f679e636b | [
"Apache-2.0"
] | 31 | 2020-08-14T14:30:37.000Z | 2022-03-15T15:36:25.000Z | testsmainnet/staking/test_4-StakingUIUserCases.py | DryptoBZX/contractsV2 | 3ee0b7669902ff6b9422440289ddc52f679e636b | [
"Apache-2.0"
] | 38 | 2020-06-24T22:24:40.000Z | 2022-03-26T00:27:14.000Z | #!/usr/bin/python3
import pytest
from brownie import network, Contract, Wei, chain
@pytest.fixture(scope="module")
def requireMainnetFork():
assert (network.show_active() == "mainnet-fork" or network.show_active() == "mainnet-fork-alchemy")
def testStake_UserStory1_StakedFirstTime(requireMainnetFork, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, LPT, accounts, iUSDC, USDC, WETH, ):
# mint some for testing
BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    LPT.transfer(accounts[1], 1e18, {'from': accounts[9]})
balanceOfBZRX = BZRX.balanceOf(accounts[1])
balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
balanceOfLPT = LPT.balanceOf(accounts[1])
BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})
tokens = [BZRX, vBZRX, iBZRX, LPT]
amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX, balanceOfLPT]
tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})
balances = stakingV1_1.balanceOfByAssets(accounts[1])
assert(balances[0] == balanceOfBZRX)
assert(balances[1] == balanceOfiBZRX)
assert(balances[2] == balanceOfvBZRX)
assert(balances[3] == balanceOfLPT)
assert True
def testStake_UserStory2_StakedMoreTokens(requireMainnetFork, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, accounts, iUSDC, USDC, WETH, ):
# mint some for testing
BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
# LPT.transferFrom("0xe95ebce2b02ee07def5ed6b53289801f7fc137a4", accounts[1], 100e18, {
# 'from': "0xe95ebce2b02ee07def5ed6b53289801f7fc137a4"})
balanceOfBZRX = BZRX.balanceOf(accounts[1])
balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
# balanceOfLPT = LPT.balanceOf(accounts[1])
BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
# LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})
tokens = [BZRX, vBZRX, iBZRX]
amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX]
tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})
balances = stakingV1_1.balanceOfByAssets(accounts[1])
assert(balances[0] == balanceOfBZRX)
assert(balances[1] == balanceOfiBZRX)
assert(balances[2] == balanceOfvBZRX)
assert(balances[3] == 0)
# mint some for testing
BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
# LPT.transferFrom("0xe95ebce2b02ee07def5ed6b53289801f7fc137a4", accounts[1], 100e18, {
# 'from': "0xe95ebce2b02ee07def5ed6b53289801f7fc137a4"})
balanceOfBZRXAfter = BZRX.balanceOf(accounts[1])
balanceOfvBZRXAfter = vBZRX.balanceOf(accounts[1])
balanceOfiBZRXAfter = iBZRX.balanceOf(accounts[1])
# balanceOfLPT = LPT.balanceOf(accounts[1])
BZRX.approve(stakingV1_1, balanceOfBZRXAfter, {'from': accounts[1]})
vBZRX.approve(stakingV1_1, balanceOfvBZRXAfter, {'from': accounts[1]})
iBZRX.approve(stakingV1_1, balanceOfiBZRXAfter, {'from': accounts[1]})
# LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})
tokens = [BZRX, vBZRX, iBZRX]
amounts = [balanceOfBZRXAfter, balanceOfvBZRXAfter, balanceOfiBZRXAfter]
tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})
balances = stakingV1_1.balanceOfByAssets(accounts[1])
assert(balances[0] == balanceOfBZRX + balanceOfBZRXAfter) # some has vested
assert(balances[1] == balanceOfiBZRX + balanceOfiBZRXAfter)
assert(balances[2] == balanceOfvBZRX + balanceOfvBZRXAfter)
assert(balances[3] == 0)
assert True
def testStake_UserStory3_IClaimMyIncentiveRewards(requireMainnetFork, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, accounts, iUSDC, USDC, WETH,):
    # Incentive rewards are claimed directly from the protocol, not through
    # the staking contract, so there is nothing to assert here.
assert True


def testStake_UserStory4_IClaimMyStakingRewards(requireMainnetFork, fees_extractor, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, POOL3, accounts, iUSDC, USDC, WETH):
    # mint some for testing
    BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
    BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
    iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
    vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    # LPT.transferFrom("0xe95ebce2b02ee07def5ed6b53289801f7fc137a4", accounts[1], 100e18, {
    #     'from': "0xe95ebce2b02ee07def5ed6b53289801f7fc137a4"})

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    # balanceOfLPT = LPT.balanceOf(accounts[1])

    BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
    vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
    iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
    # LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})

    tokens = [BZRX, vBZRX, iBZRX]
    amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX]
    tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})

    balances = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balances[0] == balanceOfBZRX)
    assert(balances[1] == balanceOfiBZRX)
    assert(balances[2] == balanceOfvBZRX)
    assert(balances[3] == 0)

    # create some fees
    borrowAmount = 100*10**6
    borrowTime = 7884000
    collateralAmount = 1*10**18
    collateralAddress = "0x0000000000000000000000000000000000000000"
    txBorrow = iUSDC.borrow("", borrowAmount, borrowTime, collateralAmount, collateralAddress,
                            accounts[0], accounts[0], b"", {'from': accounts[0], 'value': Wei(collateralAmount)})
    txSweep = fees_extractor.sweepFees({'from': accounts[9]})

    earnings = stakingV1_1.earned.call(accounts[1])
    assert(earnings[0] > 0)
    assert(earnings[1] > 0)
    assert(earnings[2] > 0)
    assert(earnings[3] > 0)

    stakingV1_1.claim(False, {'from': accounts[1]})
    assert(earnings[0] <= BZRX.balanceOf(accounts[1]))
    assert(earnings[1] <= POOL3.balanceOf(accounts[1]))

    earningsAfterClaim = stakingV1_1.earned.call(accounts[1])
    assert(earningsAfterClaim[0] == 0)
    assert(earningsAfterClaim[1] == 0)
    assert(earningsAfterClaim[2] <= earnings[2])
    assert(earningsAfterClaim[3] <= earnings[3])

    assert True


def testStake_UserStory5_IClaimAndRestakeMyStakingRewards(requireMainnetFork, fees_extractor, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, POOL3, LPT, accounts, iUSDC, USDC, WETH):
    # mint some for testing
    BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
    BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
    iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
    vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    LPT.transfer(accounts[1], 1e18, {'from': accounts[9]})

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    balanceOfLPT = LPT.balanceOf(accounts[1])

    BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
    vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
    iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
    LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})

    tokens = [BZRX, vBZRX, iBZRX, LPT]
    amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX, balanceOfLPT]
    tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})

    balances = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balances[0] == balanceOfBZRX)
    assert(balances[1] == balanceOfiBZRX)
    assert(balances[2] == balanceOfvBZRX)
    assert(balances[3] == balanceOfLPT)

    # create some fees
    borrowAmount = 100*10**6
    borrowTime = 7884000
    collateralAmount = 1*10**18
    collateralAddress = "0x0000000000000000000000000000000000000000"
    txBorrow = iUSDC.borrow("", borrowAmount, borrowTime, collateralAmount, collateralAddress,
                            accounts[0], accounts[0], b"", {'from': accounts[0], 'value': Wei(collateralAmount)})
    txSweep = fees_extractor.sweepFees({'from': accounts[9]})

    balance = stakingV1_1.balanceOfByAssets.call(accounts[1])
    earnings = stakingV1_1.earned.call(accounts[1])
    assert(earnings[0] > 0)
    assert(earnings[1] > 0)
    assert(earnings[2] > 0)
    assert(earnings[3] > 0)

    stakingV1_1.claim(True, {'from': accounts[1]})
    assert(0 <= BZRX.balanceOf(accounts[1]))
    assert(earnings[1] <= POOL3.balanceOf(accounts[1]))

    balanceAfterClaim = stakingV1_1.balanceOfByAssets.call(accounts[1])
    earningsAfterClaim = stakingV1_1.earned.call(accounts[1])
    assert(earningsAfterClaim[0] == 0)
    assert(earningsAfterClaim[1] == 0)
    assert(earningsAfterClaim[2] <= earnings[2])
    assert(earningsAfterClaim[3] <= earnings[3])
    assert(balanceAfterClaim[0] >= balance[0] + earnings[0])  # claimed BZRX rewards were restaked
    assert(balanceAfterClaim[1] == balance[1])
    assert(balanceAfterClaim[2] == balance[2])
    assert(balanceAfterClaim[3] == balance[3])

    assert True


def testStake_IWantToUnstakeMyTokens(requireMainnetFork, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, LPT, accounts, iUSDC, USDC, WETH):
    # mint some for testing
    BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
    BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
    iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
    vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    LPT.transfer(accounts[1], 1e18, {'from': accounts[9]})

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    balanceOfLPT = LPT.balanceOf(accounts[1])

    BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
    vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
    iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
    LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})

    tokens = [BZRX, vBZRX, iBZRX, LPT]
    amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX, balanceOfLPT]
    tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})

    balances = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balances[0] == balanceOfBZRX)
    assert(balances[1] == balanceOfiBZRX)
    assert(balances[2] == balanceOfvBZRX)
    assert(balances[3] == balanceOfLPT)

    # unstake everything except 10 wei of each token
    amounts = [balanceOfBZRX - 10, balanceOfvBZRX - 10, balanceOfiBZRX - 10, balanceOfLPT - 10]
    tx = stakingV1_1.unstake(tokens, amounts, {'from': accounts[1]})

    balanceOfBZRXAfter = BZRX.balanceOf(accounts[1])
    balanceOfvBZRXAfter = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRXAfter = iBZRX.balanceOf(accounts[1])
    balanceOfLPTAfter = LPT.balanceOf(accounts[1])

    stakedBalance = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balanceOfBZRXAfter >= balanceOfBZRX - stakedBalance[0])
    assert(balanceOfvBZRXAfter == balanceOfvBZRX - stakedBalance[1])
    assert(balanceOfiBZRXAfter == balanceOfiBZRX - stakedBalance[2])
    assert(balanceOfLPTAfter == balanceOfLPT - stakedBalance[3])

    assert True


def testStake_IWantToUnstakeAllMyStakedTokens(requireMainnetFork, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, LPT, accounts, iUSDC, USDC, WETH):
    # mint some for testing
    BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
    BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
    iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
    vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    LPT.transfer(accounts[1], 1e18, {'from': accounts[9]})

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    balanceOfLPT = LPT.balanceOf(accounts[1])

    BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
    vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
    iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
    LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})

    tokens = [BZRX, vBZRX, iBZRX, LPT]
    amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX, balanceOfLPT]
    tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})

    balances = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balances[0] == balanceOfBZRX)
    assert(balances[1] == balanceOfiBZRX)
    assert(balances[2] == balanceOfvBZRX)
    assert(balances[3] == balanceOfLPT)

    stakingV1_1.exit({'from': accounts[1]})

    balancesAfter = stakingV1_1.balanceOfByAssets(accounts[1])
    assert(balancesAfter[0] == 0)
    assert(balancesAfter[1] == 0)
    assert(balancesAfter[2] == 0)
    assert(balancesAfter[3] == 0)

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    balanceOfLPT = LPT.balanceOf(accounts[1])
    assert(balanceOfBZRX >= balances[0])
    assert(balanceOfvBZRX == balances[2])
    assert(balanceOfiBZRX == balances[1])
    assert(balanceOfLPT == balances[3])

    assert True


def testStake_IShouldBeAbleToUpdateStakingRewards(requireMainnetFork, fees_extractor, stakingV1_1, bzx, BZRX, vBZRX, iBZRX, LPT, accounts, iUSDC, USDC, WETH):
    # mint some for testing
    BZRX.transfer(accounts[1], 200e18, {'from': BZRX})
    BZRX.approve(iBZRX, 100e18, {'from': accounts[1]})
    iBZRX.mint(accounts[1], 100e18, {'from': accounts[1]})
    vBZRX.transfer(accounts[1], 100e18, {'from': vBZRX})
    LPT.transfer(accounts[1], 1e18, {'from': accounts[9]})

    balanceOfBZRX = BZRX.balanceOf(accounts[1])
    balanceOfvBZRX = vBZRX.balanceOf(accounts[1])
    balanceOfiBZRX = iBZRX.balanceOf(accounts[1])
    balanceOfLPT = LPT.balanceOf(accounts[1])

    BZRX.approve(stakingV1_1, balanceOfBZRX, {'from': accounts[1]})
    vBZRX.approve(stakingV1_1, balanceOfvBZRX, {'from': accounts[1]})
    iBZRX.approve(stakingV1_1, balanceOfiBZRX, {'from': accounts[1]})
    LPT.approve(stakingV1_1, balanceOfLPT, {'from': accounts[1]})

    tokens = [BZRX, vBZRX, iBZRX, LPT]
    amounts = [balanceOfBZRX, balanceOfvBZRX, balanceOfiBZRX, balanceOfLPT]
    tx = stakingV1_1.stake(tokens, amounts, {'from': accounts[1]})

    # create some fees
    borrowAmount = 100*10**6
    borrowTime = 7884000
    collateralAmount = 1*10**18
    collateralAddress = "0x0000000000000000000000000000000000000000"
    txBorrow = iUSDC.borrow("", borrowAmount, borrowTime, collateralAmount, collateralAddress,
                            accounts[0], accounts[0], b"", {'from': accounts[0], 'value': Wei(collateralAmount)})
    txSweep = fees_extractor.sweepFees({'from': accounts[9]})

    earnings = stakingV1_1.earned.call(accounts[1])
    assert(earnings[0] > 0)
    assert(earnings[1] > 0)
    assert(earnings[2] > 0)
    assert(earnings[3] > 0)
| 40.431525 | 175 | 0.696619 | 1,679 | 15,647 | 6.438952 | 0.071471 | 0.126538 | 0.072149 | 0.033392 | 0.859772 | 0.842198 | 0.827861 | 0.821386 | 0.821386 | 0.818518 | 0 | 0.070415 | 0.15773 | 15,647 | 387 | 176 | 40.431525 | 0.749905 | 0.07126 | 0 | 0.81323 | 0 | 0 | 0.035503 | 0.008686 | 0 | 0 | 0.008686 | 0 | 0.29572 | 1 | 0.035019 | false | 0 | 0.007782 | 0 | 0.042802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
73498fff7360a8f6b453fba3a54f9d55fe1f2480 | 831 | py | Python | test_tree.py | ddrake/clue | 902d0c998d06c517114f68b0de882e4fc6471ddb | [
"MIT"
] | null | null | null | test_tree.py | ddrake/clue | 902d0c998d06c517114f68b0de882e4fc6471ddb | [
"MIT"
] | null | null | null | test_tree.py | ddrake/clue | 902d0c998d06c517114f68b0de882e4fc6471ddb | [
"MIT"
] | null | null | null | from logic_tree import *


def test_add_both():
    t = tree()
    t.add_pos((1, 2, 3))
    t.add_pos((4, 5, 6))
    t.add_neg((1, 5, 6))
    common = t.common_elements()
    possibles = t.possibles()
    assert atom(1, False) in common
    assert atom(4, True) in common
    assert atom(5, False) in common
    assert atom(6, False) in common
    assert [3] in possibles
    assert [2] in possibles


def test_more_complicated():
    t = tree()
    t.add_pos((1, 2, 3))
    t.add_pos((4, 5, 6))
    t.add_neg((1, 5, 6))
    t.add_pos((2, 8, 4))
    common = t.common_elements()
    possibles = t.possibles()
    assert atom(1, False) in common
    assert atom(4, True) in common
    assert atom(5, False) in common
    assert atom(6, False) in common
    assert [2, 8] in possibles
    assert [3, 8] in possibles
    assert [2, 3] in possibles
| 24.441176 | 35 | 0.616125 | 143 | 831 | 3.482517 | 0.195804 | 0.160643 | 0.2249 | 0.228916 | 0.706827 | 0.706827 | 0.706827 | 0.706827 | 0.706827 | 0.706827 | 0 | 0.059011 | 0.245487 | 831 | 33 | 36 | 25.181818 | 0.735247 | 0 | 0 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.448276 | 1 | 0.068966 | false | 0 | 0.034483 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
735b3d08064675ad6b30af2c3a9b6f3573abae85 | 123 | py | Python | eg_mcts/utils/__init__.py | jjljkjljk/EG-MCTS | 53e822d2648368c644bf25f5f82ea37e4c77f249 | [
"MIT"
] | 2 | 2022-03-09T09:42:07.000Z | 2022-03-15T08:00:47.000Z | eg_mcts/utils/__init__.py | jjljkjljk/EG-MCTS | 53e822d2648368c644bf25f5f82ea37e4c77f249 | [
"MIT"
] | null | null | null | eg_mcts/utils/__init__.py | jjljkjljk/EG-MCTS | 53e822d2648368c644bf25f5f82ea37e4c77f249 | [
"MIT"
] | null | null | null | from .logger import setup_logger
from eg_mcts.utils.prepare_methods import *
from eg_mcts.utils.smiles_process import *
| 30.75 | 44 | 0.821138 | 19 | 123 | 5.052632 | 0.578947 | 0.125 | 0.208333 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 123 | 3 | 45 | 41 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7dfeb3f4f7c1d389612e800109c9ba39c35ceb93 | 361 | py | Python | tests/parser/grounding.backjump.5.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/grounding.backjump.5.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/grounding.backjump.5.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | input = """
%#maxint=10.
%number(0..10).
q(C) :- number(C).
a | b :- p(A,B), q(C), r(X,A), q(X), not s(X), not t(C).
p(1,1).
r(3,1).
s(5).
s(0).
t(0).
t(2).
"""
output = """
%#maxint=10.
%number(0..10).
q(C) :- number(C).
a | b :- p(A,B), q(C), r(X,A), q(X), not s(X), not t(C).
p(1,1).
r(3,1).
s(5).
s(0).
t(0).
t(2).
"""
| 10.939394 | 58 | 0.373961 | 84 | 361 | 1.607143 | 0.238095 | 0.059259 | 0.207407 | 0.222222 | 0.918519 | 0.918519 | 0.918519 | 0.918519 | 0.918519 | 0.918519 | 0 | 0.096654 | 0.254848 | 361 | 32 | 59 | 11.28125 | 0.405204 | 0 | 0 | 0.916667 | 0 | 0.083333 | 0.906907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
b4249c305225f31c6319ac58544c0d60987d64b4 | 1,645 | py | Python | tests/oldtests/testvectorsummary.py | karimbahgat/pythongis | 1042ea14de4e2aafd55de4e01d86b7d379d55999 | [
"MIT"
] | 4 | 2015-12-05T14:31:55.000Z | 2018-02-09T05:54:36.000Z | tests/oldtests/testvectorsummary.py | karimbahgat/pythongis | 1042ea14de4e2aafd55de4e01d86b7d379d55999 | [
"MIT"
] | null | null | null | tests/oldtests/testvectorsummary.py | karimbahgat/pythongis | 1042ea14de4e2aafd55de4e01d86b7d379d55999 | [
"MIT"
] | 1 | 2018-10-24T01:08:11.000Z | 2018-10-24T01:08:11.000Z |
import pythongis as pg

# overlap
group = pg.vector.data.VectorData(r"C:\Users\kimo\Dropbox\Work\Workplace\Geobook15\pygeo book 2\code\(raw sandbox,incl abondoned ideas)\test_files\country_convexes.shp")
values = pg.vector.data.VectorData(r"C:\Users\kimo\Dropbox\Work\Workplace\Geobook15\pygeo book 2\code\(raw sandbox,incl abondoned ideas)\test_files\country_centroids.shp")
print group.fields
print values.fields
summ = pg.vector.analyzer.overlap_summary(group, values, fieldmapping=[("countagg", lambda f: f["GWEYEAR"], "count")])
print summ
for f in summ:
    print f.row[-1]
summ.view(1000, 500,
          fillcolor={"breaks": "natural",
                     "key": lambda f: f["countagg"],
                     "valuestops": [(255, 0, 0), (255, 255, 0), (0, 255, 0)]})

# within distance
group = pg.vector.data.VectorData(r"C:\Users\kimo\Dropbox\Work\Workplace\Geobook15\pygeo book 2\code\(raw sandbox,incl abondoned ideas)\test_files\country_convexes.shp")
values = pg.vector.data.VectorData(r"C:\Users\kimo\Dropbox\Work\Workplace\Geobook15\pygeo book 2\code\(raw sandbox,incl abondoned ideas)\test_files\country_centroids.shp")
print group.fields
print values.fields
summ = pg.vector.analyzer.near_summary(group, values, radius=1.5, n=10,
                                       fieldmapping=[("countagg", lambda f: f["GWEYEAR"], "count")])
print summ
for f in summ:
    print f.row[-1]
summ.view(1000, 500,
          fillcolor={"breaks": "natural",
                     "key": lambda f: f["countagg"],
                     "valuestops": [(255, 0, 0), (255, 255, 0), (0, 255, 0)]})
| 35.76087 | 171 | 0.648024 | 226 | 1,645 | 4.672566 | 0.287611 | 0.045455 | 0.045455 | 0.083333 | 0.907197 | 0.907197 | 0.907197 | 0.907197 | 0.907197 | 0.907197 | 0 | 0.05019 | 0.200608 | 1,645 | 45 | 172 | 36.555556 | 0.752852 | 0.013982 | 0 | 0.733333 | 0 | 0.133333 | 0.391842 | 0.223733 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.033333 | null | null | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c3028aed519aa390db6b5da386231e48cd8f1b5a | 159 | py | Python | login_i18n/views.py | AndreJackBia/django-tutorial | cfc3be71f603237f86a7333620033782c289cf10 | [
"Apache-2.0"
] | null | null | null | login_i18n/views.py | AndreJackBia/django-tutorial | cfc3be71f603237f86a7333620033782c289cf10 | [
"Apache-2.0"
] | 5 | 2021-03-19T03:07:54.000Z | 2022-02-10T11:50:42.000Z | login_i18n/views.py | AndreJackBia/django-tutorial | cfc3be71f603237f86a7333620033782c289cf10 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render


def index(request):
    return render(request, 'index.html')

def login(request):
return render(request, 'login.html') | 22.714286 | 40 | 0.735849 | 21 | 159 | 5.571429 | 0.52381 | 0.222222 | 0.324786 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144654 | 159 | 7 | 41 | 22.714286 | 0.860294 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |